
AI and cyber security: Do we need to panic?

While AI allows attackers to increase their speed and accuracy, using AI within cyber security prevention processes removes the panic element.
By Luke Cifarelli, South Africa country manager, Cymulate.
Johannesburg, 24 Mar 2025

Artificial intelligence (AI) and its adoption by businesses in the digital economy, eager to capture its potential benefits, is a hot topic. Right on the heels of that discussion is concern about the impact its introduction can have on any business's cyber security.

Let's take a minute to recap AI's capabilities and the primary cyber security concerns around embracing it in business.

The National Cyber Security Centre (NCSC) in the UK notes that business managers don't need to be technical experts, but they should at least know enough about the potential risks attached to AI to be able to discuss the issues with key staff.

The NCSC defines AI as: “Any computer system that can perform tasks usually requiring human intelligence. This could include visual perception, text generation, speech recognition or translation between languages.”

Of course, arguably the biggest development in AI came with the introduction of generative AI (GenAI) in 2022. This involves AI tools that can produce different types of content: text, images, video, or combinations of these in the case of 'multimodal' tools. Most GenAI tools are geared towards a specific task or domain.

ChatGPT effectively allows users to 'ask a question' as they would when holding a conversation with a chatbot, whereas tools such as DALL-E can create digital images from natural language descriptions.
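
For illustration, here is a minimal sketch of what 'asking a question' looks like programmatically, assuming the openai Python SDK is installed and an API key is configured in the environment; the model name and prompt are placeholders:

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    # 'Asking a question' is simply sending a conversation-style message.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Explain phishing in one sentence."}],
    )
    print(response.choices[0].message.content)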

According to Gartner, GenAI technologies will evolve quickly over the next four years.

Technologies underpinning GenAI have been progressing at an unprecedented pace, thanks, in large part, to enormous investments from large technology companies and research labs.

In fact, Gartner notes that GenAI seems to be immune to the overall slowdown in venture capital investment, while well-funded start-ups continue to emerge and mature.

However, the speed at which GenAI technologies are emerging poses significant challenges for IT leaders tasked with staying abreast of industry developments.

It appears likely that future models will be capable of producing content for a broader range of situations, with OpenAI and Google reporting success across a range of benchmarks for their respective GPT-4 and Gemini models.

However, the NCSC emphasises that despite this broader applicability, there remains no consensus on whether the dystopian vision of the future − where an autonomous system surpasses human capabilities − will ever become a reality.

Understanding how AI works is key to grasping where the security risks come in. Most AI tools are built using machine learning techniques, whereby computer systems find patterns in data (or automatically solve problems) without having to be explicitly programmed by a human.

Machine learning enables a system to 'learn' for itself about how to derive information from data, with minimal supervision from a human developer.
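
As a toy illustration, here is a minimal sketch assuming the open-source scikit-learn library: the system below learns to separate spam from legitimate e-mail subjects purely from labelled examples, with no hand-written rules.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny, invented training set; the model is never told what makes spam 'spam'.
    subjects = ["win a free prize now", "claim your reward today",
                "meeting moved to 3pm", "quarterly report attached"]
    labels = ["spam", "spam", "ham", "ham"]

    # The 'learning' step: patterns are found in the data, not programmed by hand.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(subjects, labels)

    print(model.predict(["free reward inside"]))  # expected: ['spam']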

For example, large language models (LLMs) are a type of GenAI that can generate different styles of text that mimic content created by a human. To enable this, an LLM is 'trained' on a large amount of text-based data, typically scraped from the internet. Depending on the LLM, this potentially includes web pages and other open-source content, such as scientific research, books and social media posts.

The process of training the LLM covers such a large volume of data that it's not possible to filter all of this content, and so 'controversial' (or simply incorrect) material is likely to be included.
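
To make this concrete, here is a minimal sketch of generating human-like text with a small, freely available pre-trained model, assuming the Hugging Face transformers library is installed. GPT-2 itself was trained on text scraped from the web, which is exactly why its fluent output can still be factually unreliable:

    from transformers import pipeline

    # Load a small pre-trained LLM and generate a continuation of a prompt.
    generator = pipeline("text-generation", model="gpt2")
    result = generator("Cyber security teams should", max_new_tokens=30)
    print(result[0]["generated_text"])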

Attackers using AI tools are not, in essence, creating new tactics or techniques; AI has, however, increased their speed and accuracy. Criminals using AI are able to iterate on malware, craft more believable phishing e-mails and scale up their attacks, increasing their reach.

So, what's the solution?

Most organisations have invested in tools capable of preventing these known tactics. However, continuously validating these controls against the latest threats will significantly reduce the chance of a security slip, regardless of the increase in scale fuelled by AI.
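
As a simple illustration of the principle (a minimal sketch, not any particular vendor's product), the industry-standard, harmless EICAR test string can be used to check whether an endpoint's anti-malware control actually intervenes when a 'malicious' file appears:

    import os

    # The harmless, industry-standard EICAR anti-malware test string,
    # split in two so this script is not itself flagged while being edited.
    EICAR = ("X5O!P%@AP[4\\PZX54(P^)7CC)7}$"
             "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

    path = "eicar_test.txt"
    with open(path, "w") as f:
        f.write(EICAR)

    # A working control should block or quarantine the file almost immediately;
    # validation platforms perform checks like this continuously and at scale.
    print("File still present:", os.path.exists(path))

    if os.path.exists(path):
        os.remove(path)  # clean up if the control did not act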

There is a growing concern that employees may unintentionally upload sensitive data into an AI platform. Traditionally, organisations have used data loss prevention (DLP) solutions to prevent data leaving the network via various channels.

The next question is surely: is this just another web-based channel that needs protecting? The answer is yes and no.

At its core, it is just another channel that needs a DLP policy capable of recognising sensitive data before allowing it to leave. It should be noted that, to achieve this, many DLP solutions now integrate with popular AI platforms.
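
As a deliberately simplified sketch of that core idea (the patterns below are illustrative placeholders; real DLP policies are far more sophisticated):

    import re

    # Illustrative placeholder patterns; a real policy would cover many more.
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def dlp_allows(text):
        # Allow the text to leave only if no sensitive pattern is found.
        return not any(p.search(text) for p in SENSITIVE_PATTERNS.values())

    prompt = "Summarise: card 4111 1111 1111 1111, contact jane@example.com"
    if dlp_allows(prompt):
        print("Allowed: prompt may be sent to the AI platform")
    else:
        print("Blocked: sensitive data detected before leaving the network")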

But AI demands DLP policy maturity, as the fragmented and skewed manner in which data can be transferred can magnify policy gaps.

What about AI output within the workplace? From a security point of view, this is less of a concern: AI platforms govern their own results, and users have, in any case, always had access to private devices.

The upside is that with AI, defences are strengthening, time to prevention has decreased, and the ability to create bespoke attack simulation scenarios incorporating AI is a considerable win for organisations.

Criminals using AI is obviously a concern, and one that may well lead to the development of new tactics in the future; however, cyber security defence benefits more.

Ensuring security controls are performing through validation, utilising AI within cyber security prevention processes, and keeping policies such as DLP in line with AI data modelling removes the panic element from the situation.
