AI-generated texts could increase people’s exposure to threats

Research finds that large language models are susceptible to abuse through creative prompt engineering, giving people reason to be even more skeptical about what they read.

Nearly universal access to models that deliver human-sounding text in seconds presents a turning point in human history, according to new research from WithSecure (formerly known as F-Secure Business).

The research details a series of experiments conducted using GPT-3 (Generative Pre-trained Transformer 3) – a family of language models that use machine learning to generate text.

The experiments used prompt engineering – a concept related to large language models that involves discovering inputs that yield desirable or useful results – to produce a variety of content the researchers deemed harmful.
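
To make the concept concrete, the sketch below shows what prompt engineering looks like in practice: the same request phrased two ways and sent to a GPT-3-era completions endpoint. It is a minimal illustration using the legacy OpenAI Python library (pre-1.0); the prompts and parameters are assumptions for demonstration, not the prompts used in the research.

    # A minimal sketch of prompt engineering against a GPT-3-era model,
    # using the legacy OpenAI Python library (openai < 1.0). The prompts
    # and parameters are illustrative assumptions, not those from the study.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    # Two phrasings of the same request: small changes in wording,
    # persona, and tone can yield very different synthetic text.
    prompts = [
        "Write a short email reminding a colleague about a meeting.",
        "Write a short, urgent email from the CEO reminding a colleague "
        "about a mandatory meeting, in a formal tone.",
    ]

    for prompt in prompts:
        response = openai.Completion.create(
            engine="text-davinci-003",  # a GPT-3 model of the period
            prompt=prompt,
            max_tokens=120,
            temperature=0.7,
        )
        print(response.choices[0].text.strip())
        print("---")

Comparing such outputs side by side is the kind of input-to-output study the experiments below describe.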

Numerous experiments assessed how changes in the inputs to currently available models affected the synthetic text output. The goal was to identify how AI language generation can be misused through malicious and creative prompt engineering, in the hope that the research could help direct the creation of safer large language models in the future.

The experiments covered phishing and spear-phishing, harassment, social validation for scams, the appropriation of a written style, the creation of deliberately divisive opinions, using the models to create prompts for malicious text, and fake news.

“The fact that anyone with an internet connection can now access powerful large language models has one very practical consequence: it’s now reasonable to assume any new communication you receive may have been written with the help of a robot,” said WithSecure Intelligence Researcher Andy Patel, who spearheaded the research. “Going forward, AI’s use to generate both harmful and useful content will require detection strategies capable of understanding the meaning and purpose of written content.”
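
One naive baseline hints at why detection is hard: measuring how predictable a text is to a reference language model (its perplexity). Machine-generated text tends to be more predictable, but the signal is weak and easily defeated, which is why Patel calls for strategies that understand meaning and purpose rather than surface statistics. The sketch below, using the open GPT-2 model via the Hugging Face transformers library, is an illustrative assumption, not the detection approach proposed in the research.

    # A naive detection baseline: score how predictable a text is to a
    # reference language model. Lower perplexity hints the text may be
    # machine-generated. Illustrative only; not the strategy the research
    # proposes, which requires understanding meaning and intent.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(enc.input_ids, labels=enc.input_ids)
        return torch.exp(out.loss).item()  # exp(mean cross-entropy)

    # A weak, gameable signal: thresholds must be calibrated per domain.
    print(perplexity("The meeting has been rescheduled to Friday at 10am."))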

The responses from the models in these use cases, along with the general development of GPT-3 models, led the researchers to several conclusions, including (but not limited to):

  • Prompt engineering will develop as a discipline, as will malicious prompt creation;
  • Adversaries will develop capabilities enabled by large language models in unpredictable ways;
  • Identifying malicious or abusive content will become more difficult for platform providers; and
  • Large language models already give criminals the ability to make any targeted communication as part of an attack more effective.

“We began this research before ChatGPT made GPT-3 technology available to everyone,” Patel said. “This development increased our urgency and efforts. Because, to some degree, we are all Blade Runners now, trying to figure out if the intelligence we’re dealing with is ‘real’ or artificial.”

The full research is now available at https://labs.withsecure.com/publications/creatively-malicious-prompt-engineering.

This work was supported by CC-DRIVER, a project funded by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 883543.



WithSecure

WithSecure™, formerly F-Secure Business, is cyber security's reliable partner. IT service providers, MSSPs and businesses – along with the largest financial institutions, manufacturers, and thousands of the world's most advanced communications and technology providers – trust us for outcome-based cyber security that protects and enables their operations. Our AI-driven protection secures endpoints and cloud collaboration, and our intelligent detection and response are powered by experts who identify business risks by proactively hunting for threats and confronting live attacks. Our consultants partner with enterprises and tech challengers to build resilience through evidence-based security advice. With more than 30 years of experience in building technology that meets business objectives, we've built our portfolio to grow with our partners through flexible commercial models.

WithSecure™ Corporation was founded in 1988 and is listed on NASDAQ OMX Helsinki Ltd.

Editorial contacts

Adam Pilkey
Media relations
+358 44 343 4274