Beware of deepfakes, catfishing, social engineering this Valentine’s Day

By Doros Hadjizenonos, Regional Director at Fortinet, Southern Africa

Finding love online can be a risky prospect thanks to scammers and catfishing, and the arena has grown even more dangerous with the emergence of deepfakes. According to FortiGuard Labs’ Cyber Threat Predictions for 2022, deepfakes are a growing concern because they use AI to mimic humans and can be used to enhance social engineering attacks.

Doros Hadjizenonos, Regional Director at Fortinet, Southern Africa, says confidence tricksters have been defrauding victims for generations, but ever more sophisticated technology is enabling them to do so faster, in greater numbers and at lower risk to themselves. “Attackers are even more likely to strike at romance-focused times like Valentine’s Day,” he says.

From romance scams to social engineering attacks

Romance scams usually involve a cyber criminal building a relationship with a victim to gain their affection and trust, then exploiting that closeness to manipulate and steal from them. Some scammers also request intimate photos and videos and later use these to extort money.

The scams are rife around the world, and the US Federal Trade Commission (FTC) reports that individuals lose more money on romance scams than on any other fraud type. Indeed, in 2020, reported losses to romance scams in the US reached a record $304 million, up about 50% from 2019. No country is immune: just a few months ago, eight people were arrested in South Africa in connection with romance scams in which over 100 victims around the world lost over R100 million.

With the Valentine’s Day season in full swing and digital activity still elevated by the pandemic, this is also the perfect opportunity for hackers to craft enticing, well-timed lures. Relying on social engineering tactics such as phishing, smishing or even vishing, cyber criminals will try to fool people online and steal sensitive data.

Cyber criminals use AI to master deepfakes

Artificial intelligence (AI) is already used defensively in many ways, such as detecting unusual behaviour that may indicate an attack, typically by botnets. Attackers are using AI too, and going forward that offensive use will evolve into deepfakes, a form of AI that can be used to create convincing hoax images, audio and video.
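As a purely illustrative sketch of the defensive use described above (not a description of any vendor’s product), behavioural anomaly detection can be prototyped with an unsupervised model such as an isolation forest trained on normal traffic features. The feature names, values and thresholds below are assumptions made up for the example.

```python
# Illustrative sketch only: unsupervised anomaly detection on network
# behaviour, loosely mirroring the defensive use of AI described above.
# Feature names and simulated values are assumptions for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated per-host features: [requests_per_minute, bytes_out_mb, failed_logins]
normal_traffic = rng.normal(loc=[60, 5, 1], scale=[10, 1, 1], size=(500, 3))
botnet_like = rng.normal(loc=[600, 80, 30], scale=[50, 10, 5], size=(5, 3))

# Train only on behaviour considered normal, then score new observations.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# -1 marks behaviour the model considers anomalous (possible bot activity).
print(model.predict(botnet_like))         # expected: mostly -1
print(model.predict(normal_traffic[:5]))  # expected: mostly 1
```

In practice the flagged hosts would feed an alerting or response workflow rather than a print statement; the point is simply that a model of “normal” makes botnet-scale deviations easy to surface.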

Deepfake technology can be used in social engineering scams, with audio that fools people into believing a trusted individual has said something they did not. It can also be used to launch automated disinformation attacks, or even to create new identities and steal the identities of real people.

Hadjizenonos adds: “Changing your face to look like someone else is now so easy that there are free apps for it. Most of these are simply for fun, but the next step is to change a speaker’s face and voice to look convincingly like someone else and chat to a victim in real time. This could eventually lead to impersonations over voice and video applications that pass biometric analysis, posing challenges for secure forms of authentication such as voiceprints or facial recognition.”

Even as deepfake technology continues to evolve, fakes can often be spotted by looking for unnatural behaviour: a lack of blinking or normal eye movement, unusual facial expressions or body shape, unrealistic-looking hair, abnormal skin tones, bad lip-syncing, and jerky movements or distorted images when people move or turn their heads. “As this technology becomes mainstream, we will need to change how we detect and mitigate threats, including using AI to detect voice and video anomalies,” concludes Hadjizenonos.
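The blink cue mentioned above can be illustrated with a simple heuristic: given a per-frame eye aspect ratio (EAR) produced by any facial-landmark detector, a video whose subject almost never blinks over a long window is suspicious. The function names, thresholds and frame rate below are assumptions for illustration, not a production deepfake detector.

```python
# Illustrative heuristic only: flag video where the subject blinks far less
# often than a real person, one of the deepfake cues mentioned above.
# A real pipeline would compute the eye aspect ratio (EAR) per frame from
# facial landmarks; here the values are supplied directly as toy input.
from typing import Sequence

EAR_BLINK_THRESHOLD = 0.21   # EAR below this is treated as a closed eye
MIN_BLINKS_PER_MINUTE = 5    # humans typically blink around 15-20 times a minute

def count_blinks(ear_per_frame: Sequence[float]) -> int:
    """Count open-to-closed eye transitions across the frame sequence."""
    blinks, eyes_closed = 0, False
    for ear in ear_per_frame:
        if ear < EAR_BLINK_THRESHOLD and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= EAR_BLINK_THRESHOLD:
            eyes_closed = False
    return blinks

def looks_suspicious(ear_per_frame: Sequence[float], fps: float = 30.0) -> bool:
    """Return True if the blink rate is implausibly low for a live human."""
    minutes = len(ear_per_frame) / (fps * 60.0)
    if minutes == 0:
        return False
    return count_blinks(ear_per_frame) / minutes < MIN_BLINKS_PER_MINUTE

# Toy usage: one minute of footage (30 fps) containing a single blink is flagged.
frames = [0.3] * 1795 + [0.15] * 5
print(looks_suspicious(frames))  # True
```

On its own a blink-rate check is easy to defeat, which is why the article’s closing point stands: detection will increasingly rely on AI models trained to spot voice and video anomalies rather than any single hand-picked cue.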
