
Deepfake threats: A cyber security dialogue among experts

Deepfakes have become a major cyber security threat.

Deepfakes – AI-generated synthetic media – have become a major cyber security threat, affecting individuals, businesses and governments worldwide. Fraud, misinformation and privacy violations are causing significant financial losses and reputational damage. But how severe is the threat and what can be done about it?

To explore this issue, we bring together three cyber security experts to hold this important dialogue:

  • Dr Bright Gameli Mawudor, Cyber Security Specialist, Kenya
  • Caesar Tonkin, Managing Director, Armata Cyber Security, South Africa
  • Craig du Plooy, Director, Cysec, South Africa

Moderator: Lesley Rencontre, Director, DUO Marketing

The nature of deepfake threats

Rencontre: Dr Mawudor, let’s start with you. How do you define deepfakes, and why are they such a serious cyber security risk?

Dr Mawudor: Deepfakes use AI to create realistic but entirely fabricated videos, images and audio recordings. While the technology has legitimate uses in media and entertainment, it’s being weaponised for fraud, disinformation and cyber crime. The risk is particularly high because deepfake content is becoming nearly indistinguishable from real footage.

Tonkin: The accessibility of deepfake technology is a major issue. Previously, it was confined to AI researchers, but now, freely available tools allow anyone to create highly convincing fakes. A recent iProov report found that 47% of organisations have encountered deepfake attacks, yet 62% admit they aren’t adequately prepared to counter them.

Du Plooy: Deepfakes have revolutionised cyber fraud. In 2024 in Hong Kong, a finance employee was tricked into paying over $25 million during a deepfake video conference, where criminals impersonated senior executives. This wasn’t just a phishing e-mail – it was an AI-powered con that mimicked facial expressions and voices. That was a wake-up call to the cyber security community.

Deepfake threats in the US, Australia, Kenya and Hong Kong

Rencontre: Craig, you mentioned Hong Kong. How are deepfake threats emerging in different regions?

Du Plooy: In the United States, AI-generated deepfakes have been used for election misinformation, stock market manipulation and even celebrity scams. A recent case saw deepfake videos of Taylor Swift used to promote fraudulent crypto-currency schemes.

Tonkin: Australia has also experienced deepfake-related crimes. There have been reports of voice deepfakes being used in corporate fraud schemes, where attackers clone an executive's voice to authorise fraudulent financial transactions.

Dr Mawudor: Kenya has seen deepfake-driven disinformation campaigns aimed at manipulating public opinion during elections. The government flagged instances where AI-generated videos were used to spread false narratives about political candidates.

The role of digital forensics in deepfake detection

Rencontre: Caesar, detecting deepfakes is becoming harder. How is digital forensics helping in the fight?

Tonkin: Digital forensics is now a critical tool in deepfake detection. Traditional cyber security tools aren’t enough – we need AI-driven forensic analysis to identify manipulated content. Forensic teams use techniques like:

  • Reverse image searches to track the origins of suspected deepfakes.
  • Frame-by-frame analysis to detect irregularities in facial movements.
  • Metadata examination to identify discrepancies in timestamps or file origins.
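To make the metadata-examination step concrete, here is a minimal, illustrative sketch of the kind of timestamp consistency check a forensic analyst might automate. The field names and the `encoder` heuristic are assumptions for the example, not a real container format or a tool named in this dialogue.

```python
from datetime import datetime, timezone

def flag_metadata_inconsistencies(meta):
    """Flag simple timestamp discrepancies in file metadata.

    `meta` is a dict with ISO-8601 strings for 'created' and 'modified'
    plus an optional 'encoder' tag. The schema is illustrative only.
    """
    flags = []
    created = datetime.fromisoformat(meta["created"])
    modified = datetime.fromisoformat(meta["modified"])
    if modified < created:
        flags.append("modified timestamp predates creation")
    if created > datetime.now(timezone.utc):
        flags.append("creation timestamp is in the future")
    if "encoder" in meta and "ai" in meta["encoder"].lower():
        flags.append("encoder tag suggests synthetic origin: " + meta["encoder"])
    return flags

suspect = {
    "created": "2024-06-01T12:00:00+00:00",
    "modified": "2024-05-30T09:00:00+00:00",  # earlier than creation: a red flag
    "encoder": "SomeAI-Video-Synth 2.1",      # hypothetical tool name
}
print(flag_metadata_inconsistencies(suspect))
```

Real investigations would of course parse actual container metadata (e.g. MP4 atoms or EXIF) and correlate it with filesystem and network evidence; this only shows the discrepancy-flagging idea.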

Du Plooy: One of the biggest breakthroughs has been deepfake forensics AI. Leading AI companies with deepfake threat detection capabilities are using neural network-based detectors that analyse pixel-level inconsistencies in videos. But cyber criminals are evolving – some deepfake software now strips these digital fingerprints, making digital forensic analysis even more challenging.

Dr Mawudor: There’s also progress in audio forensics. AI-generated voices often struggle with breath control and emotional nuance. Forensic specialists can use spectrogram analysis to detect unnatural sound patterns. However, real-time detection remains difficult – by the time a deepfake spreads online, it may have already caused damage.
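The breath-control cue Dr Mawudor describes can be illustrated with a toy heuristic: natural speech alternates bursts and pauses, so its frame-energy profile varies a lot, whereas a near-constant profile is one weak cue of synthesis. This is a simplified sketch of that idea, not real spectrogram forensics; the frame size and threshold are illustrative guesses.

```python
import math

def frame_energies(samples, frame_len=400):
    """Mean squared energy of consecutive audio frames."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def looks_unnaturally_uniform(samples, threshold=0.1):
    """Heuristic cue: flag audio whose energy profile barely varies,
    i.e. no pauses or breaths. Threshold is an illustrative guess."""
    energies = frame_energies(samples)
    mean = sum(energies) / len(energies)
    var = sum((e - mean) ** 2 for e in energies) / len(energies)
    # coefficient of variation: spread of frame energy relative to its mean
    return math.sqrt(var) / mean < threshold

# Toy signals: a constant-amplitude tone vs. a tone with quiet "pauses"
tone = [math.sin(0.1 * n) for n in range(8000)]
speechy = [s if (n // 2000) % 2 == 0 else 0.01 * s for n, s in enumerate(tone)]
print(looks_unnaturally_uniform(tone))     # True: energy never dips
print(looks_unnaturally_uniform(speechy))  # False: energy rises and falls
```

Production audio forensics would work on actual spectrograms (time-frequency analysis) and trained models rather than a single energy statistic, but the example captures why "too uniform" is a signal worth checking.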

Misuse of deepfake technology

Rencontre: Beyond fraud and misinformation, how else are deepfakes being misused?

Tonkin: One growing concern is deepfake-enhanced phishing attacks. Imagine receiving a video call from your "boss", instructing you to process a financial transaction. Without advanced deepfake threat detection and forensic tools, it’s almost impossible to detect if it’s fake.

Du Plooy: Another misuse is nation-state propaganda. Governments and intelligence agencies are using deepfakes to manipulate public sentiment and destabilise adversaries. AI-generated fake speeches, news clips and political leaders' impersonations are being deployed in these influencing campaigns.

Dr Mawudor: AI-generated terrorism content is also on the rise. Google flagged over 250 cases globally where AI was used to create deepfake propaganda for radicalisation. In some cases, security agencies are now integrating forensic AI tools to identify manipulated extremist content.

Who is most at risk?

Rencontre: Individuals, businesses and governments all seem to be at risk. Who is the most vulnerable?

Tonkin: Right now, corporations are the top target. According to Entrust's 2024 Identity Fraud Report, a deepfake attack occurred every five minutes in 2024, and digital document forgeries increased by 244% year over year. Financial services – crypto-currency platforms, lending, mortgages and traditional banks – were the industries most targeted by these sophisticated, AI-powered fraud techniques, with criminals using deepfakes to bypass traditional fraud detection systems. Gartner predicted in 2024 that by 2026, 30% of enterprises may no longer consider identity verification and authentication solutions reliable when used in isolation. This scepticism stems from the growing use of AI-generated deepfakes to compromise facial biometric systems, undermining the effectiveness of traditional identity verification methods.

Du Plooy: Governments are equally vulnerable. Deepfake-driven political misinformation is already influencing elections, and nation-state actors are using deepfakes for diplomatic sabotage.

Dr Mawudor: But we can’t overlook individuals. AI-generated scams in which criminals clone the voice of a family member to ask for money are increasing. People need to be aware that even voice calls can be faked. Gartner predicted in 2025 that by 2027, AI agents, including those using deepfake techniques, will cut the time required to exploit account exposures by 50%. These AI-driven tools will automate the account takeover process end-to-end, from social engineering with deepfake voices to the exploitation of stolen user credentials.

Emerging trends in deepfake threats

Rencontre: Where do you see deepfake threats evolving?

Tonkin: Cyber crime as a service (CaaS) is a major concern. Criminal groups are selling deepfake toolkits, making it easier for anyone to create convincing scams.

Du Plooy: AI-generated news anchors spreading fake news will be a major issue. Imagine deepfake videos of trusted reporters spreading fabricated stories.

Dr Mawudor: The good news is that deepfake detection AI is improving. Companies like Microsoft are developing forensic tools to analyse facial micro expressions and voice intonations. However, legal frameworks need to catch up – many countries still lack laws addressing deepfake fraud.

Mitigating the deepfake threat

Rencontre: What steps should be taken to combat deepfakes?

Tonkin: Companies need real-time forensic detection tools and multifactor authentication that goes beyond facial recognition. Employees should be trained to verify unusual requests through secondary channels. It is crucial, therefore, that the deepfake threat detection and prevention industry evolves and matures rapidly to rein in these fast-evolving threats and minimise their impact on communities, governments and industries.

Du Plooy: Individuals should remain sceptical – if something seems off, verify it. Financial institutions should strengthen their authentication processes, as the crypto industry has seen a 50% rise in deepfake-related fraud. According to Bitget, a leading crypto-currency exchange and Web3 company, the sharp increase in criminal use of deepfakes has caused total losses of over $79.1 billion since the beginning of 2022.

Dr Mawudor: Governments must enforce AI content labelling and require social media platforms to flag AI-generated videos. Legal consequences for deepfake abuse need to be strengthened, especially for fraud and digital harassment.

Rencontre: This certainly has been a most informative dialogue by our three cyber security experts.
