The year 2020 will see cyber-attacks reach a new level: fake audio and video are forecast to be increasingly used by criminals as a social engineering tactic to extort money from companies and individuals.
This is according to security experts, who warn that deepfakes will transition from being used predominantly to create fake celebrity pornographic videos to becoming a new threat used to sabotage enterprises for financial gain.
Anna Collard, founder and MD of KnowBe4 company Popcorn Training, says deepfakes are spilling beyond the world of celebrity cyber bullying and into company systems.
“Most deepfake technologies use existing media, typically video or audio, to train an AI to create a virtual model of the subject they want to change. While they can be used for entertainment in applications such as those on phones that swap faces, they can also be used for deception and fraud, to deceive employees into transferring funds or making critical decisions.
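The training process Collard describes broadly resembles the shared-encoder, per-identity decoder design widely associated with early face-swap tools. The PyTorch sketch below is illustrative only: the placeholder tensors stand in for aligned face crops, and the layer sizes and training loop are assumptions, not details of any specific tool mentioned in this article.

```python
# Minimal sketch of a face-swap autoencoder, assuming the classic
# shared-encoder / dual-decoder design. Placeholder data throughout.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns a compact representation of any face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: rebuilds one specific person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # reconstructs person A
decoder_b = Decoder()  # reconstructs person B

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batches standing in for aligned 64x64 face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    # Each decoder only ever learns to rebuild its own identity,
    # while the encoder is shared across both.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The "swap": encode person A's frames, decode them as person B.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The swap itself is the final step: frames of one person are passed through the shared encoder and then through the other person's decoder, which is why attackers need a supply of source footage of their target to train on.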
“Imagine getting a phone call or voicemail from an executive asking you to transfer money into a bank account. If you believe the person calling is the executive, why would you question the request?” Collard asks.
Electoral blackmail
According to experts, only one social media profile picture is sufficient to create a deepfake video.
Deepfakes can also be used for anything from traditional blackmail of politicians, to influencing elections through the release of fake videos of candidates, to the cyber bullying of victims.
According to Forrester’s 2020 Predictions, deepfakes alone will cost businesses over a quarter of a billion dollars, as attackers use AI, machine learning and natural language tools to generate fake audio and video designed to deceive employees into releasing company funds.
Jonathan Miles, head of strategic intelligence and security research at Mimecast, says that in the same way threat actors impersonate e-mail addresses, domains, subdomains, landing pages, Web sites, mobile apps and social media profiles, deepfakes are emerging as a new threat targeting enterprises.
“Deepfake attacks, or voice phishing attacks, are an extension of business e-mail compromise (BEC) and have introduced a new dimension to the attacker’s arsenal. This methodology is becoming more prevalent as an additional vector used for eliciting fraudulent fund transfers.
“Many people are aware of fake videos of politicians, carefully crafted to convey false messages and statements that call their integrity into question. But with companies becoming more vocal and visible on social media, and CEOs speaking out about purpose-driven brand strategies using videos and images, there is a risk that influential business leaders will provide source material for kicking off possible deepfake attacks,” he explains.
According to security firm Trend Micro, deepfake ransomware is among the top ten security trends to watch out for in 2020.
Deepfake audio fraud, often used alongside BEC scams, is a new cyber-attack tool that further highlights how AI can be abused by cyber criminals to make scams harder to detect, the report notes.
“For years, e-mail-based scams have been largely perpetrated by fraudsters in West Africa, and we do not expect this to change. We do foresee fraud advancing in 2020, with AI technology being used to create highly believable counterfeits in image, video or audio format that depict individuals saying or doing things that did not occur.
“The rise of deepfakes raises concern, as the technology inevitably moves from creating fake celebrity pornographic videos to manipulating company employees and procedures.”
This was exemplified when a fake, AI-generated voice of an energy firm’s CEO was used to defraud the company of $243 000.
Experts also believe newsrooms, and journalists in particular, could become prime targets for deepfake creators.
Preventing deepfakes
According to Reuters, China has introduced a new law governing video and audio content online, banning the publishing and distribution of “fake news” created with technologies such as AI and virtual reality, effective from January 2020.
Google, in partnership with Jigsaw, has released a vast dataset of deepfake videos to help researchers in detecting forgeries. It includes 3 000 AI-generated videos that were made using various publicly available algorithms.
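Datasets like this are typically used to train forgery classifiers. As a rough illustration, assuming frames have already been extracted from the real and AI-generated clips into a hypothetical frames/real and frames/fake directory layout, a baseline detector might look like the sketch below; the architecture, paths and hyperparameters are assumptions for illustration, not part of Google's release.

```python
# Illustrative baseline: a binary real-vs-fake frame classifier.
# The frames/ directory layout is a hypothetical convention.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# ImageFolder infers labels from subfolders: frames/real/, frames/fake/.
dataset = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),  # two classes: real vs fake
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, published detectors use far deeper networks and aggregate predictions across many frames of a clip, but the overall pipeline, labelled real and generated footage feeding a supervised classifier, is the reason releases like this dataset matter to researchers.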
Collard believes that, as scammers constantly seek new ways of earning their victims' trust, deepfakes are also expected to be increasingly used as part of online dating scams.
“Fraudsters working in the romance scam industry can use deepfakes to further perpetrate scams that extort money from victims. In instances where the romantic interest is made out to be a celebrity, the voice or face of a person known to the potential victim can be used, creating a situation that could damage that person's reputation.”
Ilonka Badenhorst, GM and lobbying committee chair of the Wireless Application Service Providers’ Association, says SA’s Cyber Crimes Bill, which is in the final stages of becoming law, is expected to protect victims of deepfakes.
“While the Bill does not specifically provide for protection from the spectre of deepfakes, it is a step in the right direction, introducing into legislation the concept of all forms of cyber bullying. This means anyone convicted of the offences outlined in the Bill is likely to be fined and/or imprisoned for up to 15 years.”