[CAPTION] Lisa Flynn, founder, Catalysts & Canaries Research Institute & Training Academy.
AI-driven cyber threats are increasing in volume and sophistication, and the tools behind them are now easily accessible, enabling anyone to use the technology irrespective of their level of skill. This has prompted the cyber security market to adopt the term ‘cheapfakes’ to describe another weapon in the expanding arsenal of cyber criminals.
As a result, organisations must strengthen their defence capabilities to combat AI-driven social engineering attacks and deepfakes.
This is part of what Lisa Flynn, founder, Catalysts & Canaries Research Institute & Training Academy, plans to discuss at the 20th annual ITWeb Security Summit 2025, on 3 and 4 June at the Sandton Convention Centre.
Speaking to ITWeb ahead of the summit, Flynn said she will provide context for the theme by relaying details of an AI social engineering competition run at DEFCON 32 in 2024, an annual cyber security and hacker-focused event held in Las Vegas, Nevada.
The competition showcased the rising danger of AI-driven social engineering and deepfake attacks.
Flynn added: “My partner and I created AI bots – synthetic voices – built on a decent prompt. I used several different models, and these agents carried out vishing (voice phishing) attacks on an unsuspecting target company. Our bots competed against two of the world’s best social engineers. At the end of the competition, the humans won – but not by much.”
Emerging technology is advancing very quickly, said Flynn, and the market must keep up. Creating and using deepfakes, for example, once required specialised skills, experience and knowledge, but today anyone connected to the internet can easily access the requisite tools.
“Now we have consumer-grade AI available online, which makes it so easy for pretty much anyone to create very realistic deepfakes,” said Flynn, adding that attackers are initiating scam calls in which targets, on hearing a familiar voice, fall easily into the trap.
According to Flynn, the number of deepfakes created and shared on social media each year is growing at an extreme rate. She said projections suggest that by the end of 2025, some 8 million deepfakes could be shared, and that the number is doubling every six months.
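Taken at face value, that trend compounds quickly. The short sketch below is purely illustrative arithmetic based on the figures Flynn cites (roughly 8 million deepfakes by the end of 2025, doubling every six months); the three-year horizon and the Python framing are assumptions for illustration, not part of her talk.

```python
# Back-of-the-envelope projection of deepfake volumes, using the figures
# cited in the article: ~8 million shared by end of 2025, doubling every
# six months. The 36-month horizon is an illustrative assumption.

START_MILLIONS = 8      # projected deepfakes shared by end of 2025 (millions)
DOUBLING_MONTHS = 6     # cited doubling period
HORIZON_MONTHS = 36     # illustrative projection window

for month in range(0, HORIZON_MONTHS + 1, DOUBLING_MONTHS):
    projected = START_MILLIONS * 2 ** (month // DOUBLING_MONTHS)
    print(f"Month {month:2d} after end-2025: ~{projected} million deepfakes")
```

On those assumptions, the cited 8 million would grow past 500 million within three years, which is the scale of the problem the talk is framed around.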
Drawing on this and other real-world cases, her talk will examine how attackers leverage large language models (LLMs) and emerging technologies to deceive and compromise systems at scale.
Flynn will highlight critical vulnerabilities and actionable strategies to combat these threats. Attendees will gain practical insights into defending against adversarial AI, safeguarding systems and staying ahead of the rapidly evolving landscape of AI-driven cyber threats.
The focus is on practical application, Flynn explained, because understanding these issues conceptually is only one aspect; the other vital component is what happens in practice.