
Beware of AI-powered virtual kidnapping

By Christopher Tredger, Portals editor
Johannesburg, 09 Oct 2023
Zaheer Ebrahim, solutions architect, Middle East and Africa, at Trend Micro.

Trend Micro has warned that threat actors are using artificial intelligence (AI) to unleash more sophisticated and terrifying attacks, particularly exemplified by the rise of virtual kidnapping.

The security company recently released findings of its 2023 Midyear Cybersecurity Threat Report, which showed a global increase in imposter scams like virtual kidnapping.

Virtual kidnapping refers to incidents in which malicious actors create a deepfake voice of their victim’s child and use it as proof that they have the child in their possession to pressure the victim into paying a large ransom.

In a recently published blog, the company explains: “Malicious actors who are able to create a deepfake voice of someone’s child can use an input script (possibly one that’s pulled from a movie script) to make the child appear to be crying, screaming, and in deep distress. The malicious actors could then use this deepfake voice as proof that they have the targeted victim’s child in their possession to pressure the victim into sending large ransom amounts.”

While the security company's research is based on analysis of international markets, Zaheer Ebrahim, solutions architect, Middle East and Africa, at Trend Micro, says it's inevitable that the tactic will head to this region too.

“With the proliferation of generative AI in recent months, we’ve started to see this technology being used in virtual kidnapping cases across the globe. It’s a relatively new tactic and one we expect will make its way to local shores in time.”

Virtual kidnapping, harpoon whaling and pig butchering (a cryptocurrency investment scam) are just some examples of how AI technologies are being used in cyber attacks, adds Ebrahim.

Emerging malicious AI tools, including WormGPT and FraudGPT, are already being built on top of open-source generative AI platforms to democratise cybercrime, the company claims, making hackers more productive and attacks more likely to succeed.

‘Big fish’

AI is also being used in harpoon whaling attacks to target high-profile individuals, aiming to extract valuable information or significant sums of money. This social engineering scam involves meticulous research on the target, with e-mails crafted to create a sense of urgency and containing highly personalised details.

“With AI tools becoming increasingly adept at creating text that can seem human-crafted, the effort needed to attack executives has been drastically reduced, making the targeting of hundreds of thousands of executives easier than ever before,” notes Trend Micro.

Smart home networks (SHN) have also come under attack, and during the first six months of the year, Trend Micro detected more than one-and-a-half million inbound SHN attacks in South Africa alone.

Threat actors have also been casting a wider net by leveraging vulnerabilities in smaller platforms for more specific targets, such as file transfer service MOVEit, business communications software 3CX, and print management software solution PaperCut.

Ransomware threat

The report highlights the persistent threat of ransomware for local companies, citing almost 2 500 attempts detected by Trend Micro in June alone. 

Earlier this year, Trend Micro researchers discovered a new ransomware family named ‘Mimic’, linked to the notorious Conti group. Researchers suspect collaboration among criminal groups, aimed at reducing costs and increasing market presence.

Moreover, the report notes a shift in ransomware tactics toward cryptocurrency theft and business e-mail compromise (BEC). 

Ebrahim says it’s critical for defenders to be aware of these threats so they can make informed decisions and take proactive measures “to stay ahead in the increasingly convoluted cat-and-mouse game of cyber security.”
