
Beating cybercriminals at their own game

By Tiana Cline, Contributor
Johannesburg, 10 May 2023
Sergey Shykevich, Check Point.

The cybersecurity landscape is evolving at a rapid pace. Where threats once meant your email password wasn’t strong enough, they can now put an entire country into a national state of emergency. But for cybersecurity companies, AI is a double-edged sword in an ever-evolving contest of skill and intelligence. According to McKinsey, AI, ML and automation will be used to accelerate cyberattacks over the next several years, and companies will need to adapt and use the same AI techniques to defend against them.

Feeding the phish

“The advancement of AI, when used efficiently, is going to make the world a lot better, but cybercriminals will also start leveraging things like AI to make their attacks better,” says Nomalizo Hlazo, the head of security governance at Investec. Phishing, for example, is one area where bad actors can take advantage of AI. “They could imitate an executive’s voice and get staff to respond. Conversational AI makes it a lot harder to identify when you’re being phished.” And there’s a reason for that: phishing emails created with AI often look better than the real thing. Hackers no longer need to intentionally misspell words to slip past spam filters – AI can produce fluent, convincing copy that evades them.

“With AI, phishing emails will become more in tune with the user and make it a lot harder to figure out if it’s an attack or not,” says Hlazo. “When you combine AI with automation and machine learning, it also becomes easier to deploy a phishing attack. Gone are the days of manually trawling for information when AI can be used to collate information in order to make the attack a bit more targeted.”

Implementing AI

According to Mimecast’s ‘State of Email Security 2023’ report, 55% of South African organisations are using some combination of AI and ML, with a further 38% saying they have plans to do so. “We’re using AI to combat attacks,” says Hlazo. “As AI progresses in the attacker’s sphere, it’s being integrated into security tools to help companies take a more defensive position. Using AI, they’re able to correlate and check the data, and get more information to respond even quicker and prevent attacks. The more data we have, the more it empowers us to strengthen our security.” Kaspersky Lab, for example, relies on multi-layered, next-generation protection that uses AI and ML methods extensively at all stages of the detection pipeline.

“This ranges from scalable clustering methods used for pre-processing the incoming file stream in our infrastructure to robust and compact deep neural network models for behavioural detection that work directly on users’ machines,” says Bethwel Opil, Kaspersky in Africa’s enterprise lead.
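Neither step is published in detail, but the pre-processing idea Opil describes can be shown with a toy sketch: cluster incoming files by a cheap feature such as a byte-frequency histogram, so near-duplicates land together and only one representative per cluster needs deeper analysis. Everything below (the feature, the files, the cluster count) is invented for illustration.

```python
# Toy sketch of clustering as a pre-processing step (not Kaspersky's
# pipeline): group incoming files by byte-frequency histograms so that
# only one representative per cluster needs deeper analysis.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def byte_histogram(data: bytes) -> np.ndarray:
    """256-bin byte-frequency histogram, normalised by file length."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Hypothetical incoming file stream: two near-identical PE-like blobs
# and one shell script
files = [
    b"MZ\x90\x00" * 100,
    b"MZ\x90\x00" * 99 + b"\xcc",
    b"#!/bin/sh\necho hi\n" * 50,
]
X = np.vstack([byte_histogram(f) for f in files])

labels = MiniBatchKMeans(n_clusters=2, n_init=3, random_state=0).fit_predict(X)
print(labels)  # the two near-duplicates share a cluster label
```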

Gone are the days of manually trawling for information when AI can be used to collate information in order to make the attack a bit more targeted.

Nomalizo Hlazo, Investec

For Sergey Shykevich, Check Point’s Threat Intelligence Group lead, there’s another area where AI can play a role: helping in the day-to-day life of a cybersecurity researcher. “Researchers are reverse engineers who work on malware families to try to analyse how they work. They’re using a system called IDA that allows them to do the reverse engineering,” he explains. “There are now many cool AI-enabled plug-ins for IDA that make the work much faster for reverse engineers. They highlight different functions in the code and show what the researcher should look at.” Shykevich emphasises that using these plug-ins doesn’t replace what a researcher is trying to achieve; it saves them time.
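None of these plug-ins is named in the article, but the kind of triage they automate can be sketched in IDAPython, IDA’s built-in scripting interface. A minimal, hypothetical example that flags functions calling APIs commonly abused by malware, giving the reverse engineer a shortlist of places to look first:

```python
# Toy IDAPython triage script (hypothetical, not a specific plug-in).
# Run inside IDA, where the idaapi/idautils/idc modules are available.
import idaapi
import idautils
import idc

SUSPICIOUS_APIS = ["VirtualAlloc", "WriteProcessMemory", "CreateRemoteThread"]

for api in SUSPICIOUS_APIS:
    addr = idc.get_name_ea_simple(api)  # resolve the imported name
    if addr == idc.BADADDR:
        continue  # this sample doesn't import the API
    for xref in idautils.CodeRefsTo(addr, 0):  # every call site
        caller = idaapi.get_func(xref)
        if caller:
            print(f"{idc.get_func_name(caller.start_ea)} calls {api} at {xref:#x}")
```

This is also an area where a tool like OpenAI’s ChatGPT can be used effectively.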

“When a script is implemented by a malicious actor, instead of doing a full analysis of the script, we just wrote in ChatGPT: ‘Could you please provide us a summary of what these specific scripts from this specific threat actor are doing’, and it gave us an answer.” The research team still checked the answer, but ChatGPT provided answers ‘very similar to what a researcher can do’, he adds.

ChatGPT may be versatile, but such a powerful tool also introduces new risks and vulnerabilities in the cybersecurity domain. “It’s still relatively easy to build a full infection chain using ChatGPT,” says Shykevich. Another issue with free-to-use conversational AI tools is privacy. “There is no clear policy on the site on how it stores or uses data,” he says. “It’s not only about sharing proprietary code; it could be an email to a customer with sensitive information on which I want to improve the English. It could later give this information to a competitor, or, even worse, a cybercriminal.”
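As a rough sketch of what the query Shykevich describes might look like, here is a hypothetical call written against the openai Python package’s 2023-era ChatCompletion interface; the model, file and prompt are illustrative, and, given the privacy caveat above, nothing sensitive should ever be pasted into a third-party service.

```python
# Hypothetical sketch: asking ChatGPT to summarise a suspicious script.
# Uses the pre-v1 openai package (pip install "openai<1").
import openai

openai.api_key = "sk-..."  # placeholder; never hard-code real keys

# A script pulled from a sandbox, already stripped of anything sensitive
suspicious_script = open("sample_script.ps1").read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Summarise, step by step, what this script does:\n\n"
                   + suspicious_script,
    }],
)
print(response.choices[0].message.content)  # verify against your own analysis
```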

AI's influence in the realm of cybersecurity is undeniable. With malicious actors harnessing emerging technologies, organisations must maintain their vigilance and adopt proactive defence strategies. This approach is important not just for catching the elusive cyber mouse, but also for putting an end to its menacing activities for good.

Malicious chatter

Beneath the internet we know lies a marketplace of hackers, drug dealers and criminals, where anything goes and anonymity is a top priority. Everyday web users don’t see the dark web because its pages are encrypted and won’t come up in common search engines – you need a network like Tor to browse its secrets. According to NordVPN, ChatGPT is becoming the dark web’s hottest topic. In the first few months of 2023, threads about ChatGPT increased by 145%. “We went into the underground forums to try to see what cybercriminals are doing,” says Check Point’s Sergey Shykevich. “We saw already in December (2022) that cybercriminals were starting to create malware families using ChatGPT. But it’s not only ChatGPT, there’s also OpenAI Codex, which is focussed on code generation.”

A few weeks after the launch of ChatGPT, Shykevich’s team spotted a cybercriminal on the dark web who had managed to build his first tool. “We’re yet to see something actionable or someone generating malware, but there are a lot of discussions,” he adds. One of the ways cybercriminals are getting around ChatGPT’s restrictions is to use its API instead. “The API has fewer restrictions and you can use it to generate everything,” says Shykevich. “We did analyse different code snippets on the dark web and the output is 80/20 – I call it the Google Translate approach.” Shykevich compares the AI output to translating between two common languages like English and Spanish: the output will be good, but it usually isn’t perfect and requires human input to make adjustments. “The same can be said for the code outputted by ChatGPT – it’s not perfect. For example, you cannot ask it to write the whole architecture of a new firewall to replace your current one. The output is not there – it’s not even close,” he says. “It’s an exciting period to see what cybercriminals are thinking about, what they’re looking at… it’s a cat and mouse game.”

It’s still relatively easy to build a full infection chain using ChatGPT.

Sergey Shykevich, Check Point

On the other side, there are companies and law enforcement agencies using AI to solve one of the dark web’s biggest issues: data – approximately 75 000 terabytes and counting. It’s simply too much to go through manually, which means that, until now, only a small percentage of crimes could be properly investigated. The good news is that companies such as Cobwebs Technologies are developing AI tools that can search for information about crimes before they happen. There are also law enforcement agencies using AI to link child abuse databases to data on the dark web and tackle sexual exploitation online.

Never trust, verify

When an organisation decides to implement a zero-trust strategy, it is essentially transitioning from a security approach centred on compliance to one that is risk-driven. Zero trust makes sense in a world where the security perimeter is expanding and every endpoint, human or machine, needs to be authenticated. But there is a downside: continuous re-authentication is time-consuming, and the amount of data required (since every endpoint needs to be visible) is daunting for already overloaded IT teams, many of which are still using spreadsheets to manually track digital certificates.
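The spreadsheet problem, at least, is easy to automate away even before AI enters the picture. A minimal sketch using only Python’s standard library (the hostnames are hypothetical; a real inventory would come from a CMDB or scanner):

```python
# Minimal sketch of automating the certificate checks that are often
# still tracked by hand in spreadsheets. Standard library only.
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect over TLS and return days until the certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # ssl formats 'notAfter' like 'Jun  1 12:00:00 2024 GMT'
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

for host in ("example.com", "example.org"):  # hypothetical estate
    days = days_until_expiry(host)
    print(f"{host}: {days} days left" + (" - RENEW SOON" if days < 30 else ""))
```

That’s where AI comes in.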

“Zero trust is rooted in the principle of never trust, always verify,” says Jen Lombardo, a solutions architect at Nvidia. “There are petabytes of data created daily, mostly unstructured and unlabelled. It varies in type, size, granularity and importance… it becomes overwhelming very quickly, to the point that many organisations limit the amount of data they’re capturing and analysing – even though the data may be available to them.” Lombardo believes cybersecurity is a data problem. “There may be a global shortage of security analysts, but even if there were enough, there’s still too much data to investigate and remediate,” she adds. To achieve zero trust, Nvidia created Morpheus, a cloud-native cybersecurity framework that uses machine learning to identify, capture and take action on threats and anomalies that were previously impossible to identify.
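Morpheus itself is a GPU-accelerated framework, but the underlying idea, learning what normal telemetry looks like and flagging deviations, can be shown with a toy scikit-learn model; the features and numbers below are invented for illustration.

```python
# Toy anomaly detection on made-up endpoint telemetry (not Morpheus).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: logins per hour, MB uploaded, distinct hosts contacted
normal = rng.normal(loc=[5, 20, 3], scale=[2, 8, 1], size=(1000, 3))

# Learn the shape of "normal" behaviour from historical telemetry
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspicious = np.array([[40, 900, 60]])  # exfiltration-like behaviour
print(model.predict(suspicious))        # [-1] means flagged as an anomaly
```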

“Zero trust meets the challenge of protecting the perimeter by accommodating any number of devices and users with equally robust security,” says Bethwel Opil, from Kaspersky. “With every process being checked and rechecked continuously, it has become the ideal cybersecurity environment for modern organisations.”

A detection revolution

“AI is transforming cybersecurity by providing faster, more accurate threat detection and response, and enabling organisations to better protect their systems and data from cyberattacks,” says Wayne Olsen, managing executive, Cyber Security, BCX.

These are the four key areas where Olsen sees AI playing a role:

1. Detection

AI is identifying potential cyber threats before they can cause significant damage. AI algorithms can analyse vast amounts of data, including network traffic and user behaviour, to detect anomalies and suspicious activity that might go unnoticed by humans.

2. Identification

AI is helping identify vulnerabilities in software and systems, allowing organisations to patch them before they are exploited by cybercriminals. AI can also prioritise vulnerabilities based on their severity and potential impact, enabling organisations to focus their resources on the most critical ones.

3. Automation

AI is improving how we automate responses to cyber threats, including isolating infected devices, blocking malicious traffic and initiating incident response procedures. This can help organisations respond more quickly to cyberattacks: machines decide far faster than humans and can put remedial actions in place sooner than a person could, as the sketch after this list illustrates.

4. Prediction

AI can help predict future cyberattacks based on past patterns and trends. This can help organisations prepare for potential threats and take proactive measures to mitigate the risks, thus ensuring teams are better prepared.
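As a hedged illustration of the automation in point 3, the sketch below blocks a flagged source address the moment a detector raises an alert. The alert format and the direct iptables call are invented for the example; a production responder would go through a firewall API, with logging, review and rollback.

```python
# Hypothetical auto-responder: block a source IP flagged by a detector.
# The alert shape is invented; the iptables call needs root and is only
# one possible containment action (isolating a device and opening an
# incident ticket are others mentioned above).
import subprocess

def block_ip(ip: str) -> None:
    """Append a DROP rule for the offending source address."""
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"],
                   check=True)

def handle_alert(alert: dict) -> None:
    if alert.get("type") == "malicious_traffic":
        block_ip(alert["src_ip"])
        print(f"Blocked {alert['src_ip']}, incident response initiated")

# 203.0.113.0/24 is a documentation range, safe for examples
handle_alert({"type": "malicious_traffic", "src_ip": "203.0.113.7"})
```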

* Article first published on brainstorm.itweb.co.za
