AI, generative AI and AI agents are playing a growing role in cyber security – both in the hands of cyber criminals and defenders.
This is according to Anna Collard, SVP content strategy and evangelist at KnowBe4 Africa and Global Ambassador for South Africa in the Global Council for Responsible AI, who was addressing a webinar on AI in cyber risk.
A poll of attendees found that 95% are concerned about AI’s impact on cyber crime, which Collard said aligned with global trends.
Collard said: “Cyber criminals are weaponising AI – their tactics are not particularly new, but AI allows them to scale and automate their existing tactics, such as social engineering. Criminals now have easy access to generative AI tools: Egress, a company we recently acquired, published research showing that most phishing toolkits now offer AI features, and 82% of the criminal toolkits they examined have deepfake capabilities.”
She said: “Currently, the way criminals are using AI is to augment and automate their existing attack vectors. We also see growth in cognitive and narrative attacks and disinformation campaigns that manipulate public perceptions and erode trust in democratic institutions. Targeted disinformation campaigns on social media alone have grown fourfold on the African continent since 2022.”
Collard noted, however, that AI risk is not only due to bad actors using AI: “There are also risks when legitimate companies adopt AI in their processes. Depending on how we implement it, AI can be abused, put data privacy at risk and could be susceptible to data poisoning from outside the organisation. There are user risks too – our surveys find that 63% of users are comfortable or very comfortable sharing very sensitive or personal information with generative AI as it becomes part of their daily lives.”
Collard also highlighted AI agents, a more autonomous evolution of generative AI.
“There is the risk that AI agents could autonomously plan, execute, adapt and automate an entire cyber crime operation,” she said. “For example, an AI agent could look through divorce databases and personalise romance scam campaigns. There are also concerns about the risk of AI agents going rogue – researchers have found AI systems taking actions to enhance their own survivability.
“As defenders, we need to prepare for this. We can use AI for good to augment our security teams. We need to ensure it's properly configured, and we need to address the human risk around AI.
"AI agents will be helpful in threat monitoring, vulnerability management, SOAR, predictive threat intelligence, SOC support and to automate phishing and fraud prevention. AI also supports security awareness and training,” she said.
“To protect our own AI systems, we need to sanitise and validate inputs, run regular red team assessments, track the components of the AI bill of materials, apply zero trust principles, train system engineers and stay informed.”
Collard emphasised that to address the human risk, it is necessary to counter psychological vulnerability and over-trust, drive a zero trust mindset, educate people about the latest scams and implement multi-literacy – teaching people to think critically about any incoming information.
AIDA to enhance human risk mitigation
Deshen Padayachee, account manager at KnowBe4 Africa, demonstrated KnowBe4’s new AI defence agents – AIDA – which create data-driven automations, including phishing template generators. The generator can personalise mails, spoof e-mail addresses and URLs, compose phishing content in various styles, and include red flags and common tactics.
AIDA agents can also automate remedial training, creating assignments, scores and reports. They can also create knowledge refresher content to keep security top of mind.
In KnowBe4’s policy tools, AIDA will immediately test a user’s knowledge after they have accepted the company’s policy.
“This ensures that users have retained the information in your policies,” he said. “If they fail, they have to review the policy again and redo the quiz.”