
Emerging cyber security threats in 2023: ChatGPT and beyond

GPT-driven phishing campaigns are a real threat.

In cyber security, it is widely acknowledged that humans are often the weakest link. We tend to click on links without due caution, open e-mail attachments without verifying the sender's authenticity, and download programs from questionable sources. Phishing remains a persistent and favoured weapon among threat actors, serving both as a prelude to attacks such as ransomware and as a means to steal credentials, infiltrate infrastructure and disrupt operations.

Even if you are unfamiliar with the term "phishing", you have likely come across stories of its unfortunate victims, which frequently make headlines. Concurrently, the generative pre-trained transformer (GPT), a prominent family of large language models (LLMs) and a leading framework for generative artificial intelligence, is making waves in the tech world. While GPT is not a technology one would readily associate with digital compromise, it may play an unforeseen role.

GPT models, driven by neural networks and machine learning, are currently the darlings of the technology landscape. Nearly everyone has something to say about them, and their influence is widespread. For instance, Microsoft holds a substantial stake in OpenAI, the company behind ChatGPT. And while Google's counterpart, Bard, had a less-than-ideal introduction, it is safe to assume that any technology backed by Alphabet is unlikely to be a passing trend.

The dangers of GPT

One of GPT's standout features is its ability to generate convincingly human-like text on a massive scale. This capability raises red flags for security professionals, as it has profound implications for phishing and other e-mail-based threats, including business e-mail compromise (BEC). Phishing is a numbers game: target enough users and some will inevitably be "caught", so the ability to scale campaigns translates directly into more victims. The potential fallout for regional businesses is serious. In regions like the UAE, agencies such as the TDRA and the Central Bank issue phishing warnings on an almost monthly basis.
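To put the numbers game in perspective, a rough back-of-the-envelope calculation shows how even small success rates translate into real victims at scale. The rates below are illustrative assumptions, not measured figures:

```python
# Illustrative numbers only: real click and compromise rates vary widely.
emails_sent = 1_000_000        # messages in a mass phishing campaign
click_rate = 0.03              # assumed: 3% of recipients click the link
compromise_rate = 0.10         # assumed: 10% of clickers enter credentials

victims = emails_sent * click_rate * compromise_rate
print(f"Expected compromised accounts: {victims:,.0f}")  # 3,000
```

Automating message generation with an LLM changes neither rate; it simply makes the first number cheap to multiply.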

In phishing, the objective is to deceive the end-user into believing that the message, whether via e-mail, SMS, WhatsApp or another platform, originates from a legitimate entity, such as a known or trusted brand. With BEC, the e-mails are meticulously crafted to appear as if they come from an IT help desk agent or a superior. GPT can facilitate this process by imitating a target's style and language, making it appear as if the recipient is interacting with a colleague rather than a malicious actor.

Although tools currently exist to differentiate between machine-generated and human-written text, we must prepare for a future in which GPT evolves to a point where these tools become ineffective. Additionally, while restrictions are currently in place to prevent the misuse of GPT technology, the darker corners of the internet may eventually find ways to bypass these safeguards. We should also anticipate cyber threat actors, who are already leveraging AI in their campaigns, figuring out how to employ GPT-like models to generate images and videos, or to target specific industries.
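Defenders do not rely on text-origin detection alone; simpler structural heuristics also catch many BEC attempts. As one hedged illustration (the executive list, trusted domains and rule are assumptions for this sketch, not a production rule set), a filter might flag messages whose display name impersonates a known executive while the sending address belongs to an outside domain:

```python
from email.utils import parseaddr

# Assumed inputs for this sketch: names the organisation wants to
# protect and the domains it legitimately sends from.
PROTECTED_NAMES = {"jane doe", "ian parker"}
TRUSTED_DOMAINS = {"example.com"}

def looks_like_bec(from_header: str) -> bool:
    """Flag display-name impersonation: a protected name paired
    with an address from an untrusted domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return (display_name.strip().lower() in PROTECTED_NAMES
            and domain not in TRUSTED_DOMAINS)

print(looks_like_bec('"Jane Doe" <jane.doe@example.com>'))   # False
print(looks_like_bec('"Jane Doe" <helpdesk@evil-mail.net>')) # True
```

Heuristics like this do not depend on whether the body text was written by a human or a model, which is precisely why they remain useful as generated text improves.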

Mitigating the risks

Phishing and other e-mail-based attacks consistently yield results for cyber criminals. In this underground industry, where the only bottom line is financial gain, results speak for themselves. If the cyber criminal community can find ways to make their e-mails appear even more genuine, they will seize the opportunity. However, legitimate companies are equally resourceful and can enhance their resilience by implementing several cyber security measures.

On the technological front, they can deploy AI-powered e-mail protection solutions. Artificial intelligence is increasingly becoming a standard defence tool in the security market, effectively countering modern cyber criminal tactics such as account hijacking and BEC. AI is exceptionally adept at identifying and monitoring suspicious activity, including logins from unusual locations and abnormal IP addresses. By cross-referencing this data with the approximate geographic locations of employees, it becomes feasible to block access attempts from foreign countries.

Additionally, multi-factor authentication (MFA) can provide an extra layer of defence against infiltration when the only other identity factors are basic usernames and passwords. Although SMS-delivered authentication codes are not ideal, biometric methods such as thumbprints, retinal scans and facial recognition can significantly enhance security. Automated incident-response solutions complement these measures by facilitating timely inbox purges and gathering valuable intelligence to streamline future remediation efforts.
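As a simplified illustration of the geolocation cross-check described above (the country lookup stub and per-user baselines are assumptions for this sketch; commercial products combine many more signals):

```python
# Minimal sketch, not a product: assumes a GeoIP lookup is available.
GEOIP_STUB = {"94.200.0.10": "AE", "203.0.113.7": "RU"}  # toy data

def ip_to_country(ip: str) -> str:
    # In practice this would query a real GeoIP database.
    return GEOIP_STUB.get(ip, "UNKNOWN")

# Assumed per-user baseline of countries each employee works from.
EXPECTED_COUNTRIES = {"a.hassan": {"AE"}}

def should_block(username: str, ip: str) -> bool:
    """Flag logins from countries outside the user's usual set;
    unknown users and unknown countries are blocked by default."""
    return ip_to_country(ip) not in EXPECTED_COUNTRIES.get(username, set())

print(should_block("a.hassan", "94.200.0.10"))  # False: expected location
print(should_block("a.hassan", "203.0.113.7"))  # True: unusual country
```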

"Even as we make strides in minimising the threats that reach our inboxes and enhance our authentication processes, it remains evident that the individual is the most vulnerable point," remarks Ian Parker, Executive Product Manager at LOOPHOLD Security Distribution. "While adopting essential AI-powered e-mail protection solutions like Barracuda Email Protection (https://www.loophold.com/technology-partners/barracuda-networks) is paramount, we must also invest in security awareness training within our organisations. This ensures that our employees are well-informed and vigilant against suspicious e-mails and other potential threats."

Addressing the human factor

From a human perspective, organisations should prioritise regular training. Recognising and responding to phishing and spear-phishing attacks is straightforward in principle, but it requires vigilant employees. The concept of the "employee experience" is often discussed, and in cyber security the aim is to give employees the experience they need to resist phishing attempts effectively. Gamification and simulations can make users more conscious of their actions once they understand the potential impact. Training sessions of this nature also help security teams identify users at higher risk of falling victim to social engineering, as sketched below.
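To illustrate how simulation results can surface those higher-risk users (the record format and the 50% threshold are assumptions for this sketch):

```python
from collections import Counter

# Assumed format: one (username, clicked) record per simulated
# phishing e-mail, as collected by a simulation platform.
results = [
    ("a.hassan", True), ("a.hassan", False), ("a.hassan", True),
    ("j.smith", False), ("j.smith", False), ("j.smith", True),
]

clicks = Counter(user for user, clicked in results if clicked)
totals = Counter(user for user, _ in results)

# Flag users whose click rate exceeds an (assumed) 50% threshold
# so the security team can target follow-up training.
for user in totals:
    rate = clicks[user] / totals[user]
    if rate > 0.5:
        print(f"{user}: {rate:.0%} click rate -> schedule extra training")
```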

Furthermore, internal policies should undergo thorough review, and employees must be educated about what information can and cannot be communicated via e-mail. Robust controls can then enforce those policies by preventing sensitive information from leaving the organisation via e-mail.
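As a hedged illustration of such a control (the patterns below are deliberately simplistic examples; real data-loss-prevention tools use far richer detection tuned to the organisation's data):

```python
import re

# Toy patterns for this sketch only.
SENSITIVE_PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_outbound(body: str) -> list[str]:
    """Return the names of sensitive patterns found in an e-mail body."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(body)]

hits = scan_outbound("Please wire funds to AE070331234567890123456.")
if hits:
    print(f"Blocked: message contains {', '.join(hits)}")  # IBAN
```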

Preparation for potential threats

Cyber security is fundamentally about anticipation. GPT could soon become a significant threat, so we must act now to understand it fully and to equip ourselves with the skills and tools required to withstand GPT-driven phishing campaigns should they become widespread.
