
Top three deepfake threats in 2023

By Staff Writer, ITWeb
Johannesburg, 09 Jun 2023

Cyber security firm Kaspersky says that, according to the World Economic Forum (WEF), the number of deepfake videos online is increasing at an annual rate of 900%.

The security company defines a deepfake as content created with neural networks and deep learning (hence ‘deepfake’): images, video and audio materials are used to produce realistic videos of a person whose face or body has been digitally altered so that they appear to be someone else.

These manipulated videos and images are frequently used for malicious purposes, such as harassment, revenge and crypto scams, or to spread false information.

Kaspersky research sheds light on the top three fraud schemes users should be aware of: 

  • Financial fraud

    Deepfakes can be used for social engineering, where criminals use enhanced images to impersonate celebrities to bait victims into falling for their scams. For example, an artificially created video of Elon Musk promising high returns from a dubious cryptocurrency investment scheme went viral last year, causing users to lose their money. To create deepfakes like this one, scammers use footage of celebrities or splice together old videos, and launch live streams on social media platforms, promising to double any cryptocurrency payment sent to them.
  • Pornographic deepfakes

    Another use for deepfakes is to violate an individual's privacy. Deepfake videos can be created by superimposing a person’s face onto a pornographic video, causing harm and distress. In one case, deepfake videos of several celebrities surfaced online, showing their faces superimposed onto the bodies of pornographic actresses in explicit scenes. In such cases, victims suffer reputational harm and violations of their rights.
  • Business risks

    Deepfakes are often used to target businesses for crimes such as extortion of company managers, blackmail and industrial espionage.

    For instance, in one known case, cyber criminals deceived a bank manager in the UAE and stole US$35 million using a voice deepfake – a short recording of an employee's boss’s voice was enough to generate a convincing fake.

    In another case, scammers tried to fool the largest cryptocurrency platform, Binance. A Binance executive was surprised when he started receiving 'thank you!' messages about a Zoom meeting he never attended. Using his publicly available images, the attackers managed to generate a deepfake and successfully use it in an online meeting, speaking on the executive's behalf.

On high alert

In general, the aims of scammers who exploit deepfakes include disinformation and manipulation of public opinion, blackmail or even espionage.

Kaspersky says HR managers are already on alert regarding the use of deepfakes by candidates who apply for remote work, following an FBI warning on the subject. As in the Binance case, attackers used images of people found on the Internet to create deepfakes, and were even able to add these people's photos to resumes. If they manage to trick HR managers in this way and later receive an offer, they can steal employer data.

While the number of deepfakes is increasing, it remains an expensive type of fraud that requires a big budget. Earlier research by Kaspersky revealed the cost of deepfakes on the darknet. If an ordinary user finds software on the Internet and tries to make a deepfake, the result will be unrealistic and obvious to the human eye. Few people will be fooled by a poor-quality deepfake: they will notice lags in facial expression or a blurring of the shape of the chin.

Therefore, when cyber criminals are preparing for an attack, they need a large amount of data: photos, videos and audio of the person they want to impersonate. Different angles, lighting conditions and facial expressions all play a big role in the final quality. For a realistic result, up-to-date computing power and software are necessary.

All this demands a huge amount of resources, Kaspersky adds, and is available only to a small number of cyber criminals.

“Despite the dangers that a deepfake can provide, it is still an extremely rare threat and only a small number of buyers will be able to afford it – after all, the price for one minute of a deepfake can start from US$20,000,” Kaspersky states.

Risk to reputation

Dmitry Anikin, a senior security expert at Kaspersky, says, “One of the most serious threats that deepfake poses to business is not always the theft of corporate data. Sometimes reputational risks can have very severe consequences. Imagine a video is published in which your executive (apparently) makes polarising statements on sensitive issues. For corporations, this can quickly lead to a crash in share prices. However, despite the fact that the risks of such a threat are extremely high, the chance that you will be attacked in this way remains extremely low due to the cost of creating deepfakes and the fact that few attackers are able to create a high-quality deepfake.”

Anikin says users should be aware of the key characteristics of deepfake videos and keep a sceptical attitude towards voicemail and videos they receive. "Also, ensure your employees understand what deepfake is and how they can recognise it: for instance, jerky movement, shifts in skin tone, strange blinking or no blinking at all, and so on," he adds.

Kaspersky advises that continuous monitoring of darknet resources provides valuable insights into the deepfake industry, allowing researchers to track the latest trends and activities of threat actors in this space. By monitoring the darknet, researchers can uncover new tools, services, and marketplaces used for the creation and distribution of deepfakes.
