More and more businesses are restricting the use of GenAI, or simply banning it, on data privacy and security grounds. A global survey by Cisco found that one in four organisations has now banned its use. Employees may inadvertently leak sensitive company information, which is exactly what happened at Samsung. It was reported that an engineer fed glitchy source code from a semiconductor database into ChatGPT and prompted the chatbot to fix the error. Later, another Samsung employee converted a smartphone recording of an internal company meeting to a document file and asked ChatGPT to generate the minutes. Both incidents are cause for concern because ChatGPT retains user inputs and can use them to train its models, meaning Samsung’s trade secrets ended up in the hands of OpenAI, which owns the AI service.
The South Korean tech giant isn’t against AI – it’s said to be developing its own AI software for internal use – but the incident highlights a growing challenge: how can businesses harness the power of GenAI while also protecting their data? As organisations rush to adopt GenAI tools, the security risks are multiplying, creating a complex landscape where innovation and security must coexist.
“AI is not a creator of problems, but, rather, a multiplier of existing ones,” says Ferdinand Steenkamp, a co-founder of Tregter, a data management company.
Real attackers don’t break in, they log in.
Ravi Govender, Momentum
“Big data AI doesn’t necessarily introduce new issues. Instead, it amplifies and highlights the challenges already present within a company’s systems and processes.” This means organisations should scrutinise their existing security frameworks carefully. While no single framework can address every challenge, Steenkamp suggests focusing on the fundamentals, like workplace training.
A recent survey by the National Cybersecurity Alliance and CybSafe found that 38% of employees admitted to sharing sensitive data with AI tools. In the same survey, 52% of employed participants said they hadn’t received any training on safe AI use. “I will always recommend that a company start with user awareness on how these tools work, and how they should use them responsibly,” says Neda Smith, founder and CEO of Agile Advisory Services. If a company is using public GenAI tools, Smith says it’s important to implement a monitoring solution to track the usage and flow of data and ensure compliance with the company’s AI policies. Tools such as Varonis, which monitors employee usage of Copilot, or Snow Software, which was acquired by Flexera and tracks ChatGPT usage, can provide valuable insights. “But if a company wants to eliminate the risk of data leakage, the most secure approach would be to develop their own GenAI tools, using a base LLM, like OpenAI GPT or Stability AI,” she says.
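What such monitoring can look like in practice is easier to see with a sketch. The fragment below is not Varonis, Snow Software or any other vendor’s product; it is a minimal, hypothetical gateway in Python that screens prompts for patterns the business considers sensitive, forwards the rest to a public model via the OpenAI SDK, and writes an audit log that can be reviewed against AI policy. The regex rules, log format and model name are all assumptions made for illustration.

```python
"""Hypothetical monitored gateway for a public GenAI API – a sketch, not a product."""
import json
import re
import time

from openai import OpenAI  # assumes the OpenAI Python SDK v1+ and an OPENAI_API_KEY

# Crude, illustrative patterns for data that should never leave the company.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[A-Z]{2}\d{8,12}\b"),      # hypothetical internal reference numbers
    re.compile(r"\b\d{13,16}\b"),             # card/account-style numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
]

client = OpenAI()


def log_event(user_id: str, prompt: str, blocked: bool) -> None:
    """Append an audit record so AI usage can later be reviewed against policy."""
    record = {"ts": time.time(), "user": user_id, "prompt_chars": len(prompt), "blocked": blocked}
    with open("genai_audit.log", "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")


def monitored_prompt(user_id: str, prompt: str, model: str = "gpt-4o-mini") -> str:
    """Block prompts that look sensitive; log everything else before forwarding it."""
    if any(p.search(prompt) for p in SENSITIVE_PATTERNS):
        log_event(user_id, prompt, blocked=True)
        raise ValueError("Prompt appears to contain sensitive data and was not sent.")

    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    log_event(user_id, prompt, blocked=False)
    return response.choices[0].message.content
```

In a real deployment this kind of screening usually sits in a network proxy or browser plugin rather than application code, but the principle is the same: every prompt is checked and logged before it leaves the company.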
Safety first
Ravi Govender, CIO, Momentum, says this approach has proved transformative. By building secure AI tools trained on its internal data, the company says it has found a way to give employees the benefits of GenAI while protecting its sensitive information. The approach has been particularly successful with broker consultants, who need accurate information quickly. Rather than having them turn to ChatGPT or other external AI tools, Momentum developed its own internal tool. “We’ve looked at how we take key product information and policies, train a combination of language models and make it available to our broker consultants,” says Govender. When consultants need specific information, instead of searching through a library of documents, they can query the AI and get immediate responses. The broker is also supplied with a source list, so they can double-check the information and confirm the validity of what’s been found. For underwriters and claims agents, Momentum is now rolling out similar capabilities. Each tool is trained on Momentum’s internal data, giving staff the same AI capabilities they’d get from public tools, but in a secure environment. “The faster you can enable the assessor to be able to get the information they need, the faster you’re going to be able to go back to the client with a more accurate answer,” says Govender.
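To make the pattern concrete, here is a minimal sketch of the retrieval step behind such a source-cited internal assistant. It is not Momentum’s implementation, which is not public: the in-memory document store, sample policy snippets, scoring rule and prompt template are all hypothetical, and the grounded prompt would be sent to a privately hosted model rather than a public chatbot.

```python
"""Sketch of retrieval with a source list for an internal, source-cited assistant."""
from dataclasses import dataclass


@dataclass
class Document:
    source: str  # a reference the consultant can check, e.g. a page in a product guide
    text: str


# Hypothetical stand-ins for internal product and policy documents.
# In practice this would be a vector index built over the company's own content.
DOCS = [
    Document("Product guide v3, p.12", "Funeral cover claims are paid within 48 hours of validation."),
    Document("Underwriting manual, s.4", "Applicants over 65 must complete a medical questionnaire."),
]


def retrieve(query: str, k: int = 2) -> list:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(DOCS, key=lambda d: -len(terms & set(d.text.lower().split())))[:k]


def build_prompt(query: str):
    """Return a grounded prompt for the internal model, plus the source list shown to the consultant."""
    hits = retrieve(query)
    context = "\n".join(f"[{i + 1}] {d.text}" for i, d in enumerate(hits))
    prompt = (
        "Answer using only the numbered context below and cite the numbers you rely on.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return prompt, [d.source for d in hits]


prompt, sources = build_prompt("How quickly are funeral cover claims paid?")
print(prompt)
print("Sources:", sources)  # returned with the answer so the information can be double-checked
```

Because the model only answers from retrieved internal content, and every answer arrives with its sources, staff get the speed of a chatbot without sending company data to a public service.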
GenAI can be a time-saving productivity tool, but Govender says its use must complement open communication and clear security protocols. “Culture is a very important part of ensuring that you don’t have to have eyes everywhere, all the time, because every single colleague understands the risks and responsibilities. The easier you make communication and engagement, the safer the environment is,” he says. “People feel they can ask and engage first around potential opportunities and risks and don’t feel somebody’s just standing in their way and being a killjoy.”
A culture of communication
Recognising the new security challenges around GenAI, Momentum also established an “AI Centre of Excellence” to set standards, provide support and ensure consistency in how AI is adopted across the organisation. “Real hackers don’t break in, they log in,” says Govender. “And there’s an interesting corollary to that on AI. Real attackers don’t write code, they now prompt-engineer.”
AI is not a creator of problems, but, rather, a multiplier of existing ones.
Ferdinand Steenkamp, Tregter
To address this threat, the hub brings together experts in data, cybersecurity and risk and compliance. Their role is to set guardrails and enable innovation, as AI becomes increasingly prevalent not just within Momentum’s systems, but also through third-party applications. Govender hopes this structured, collaborative approach to managing the risks and opportunities will help Momentum strike the right balance between the pace of AI adoption and the necessary safety measures.
SECURING AI FROM WITHIN
Every enterprise rushing to adopt GenAI faces the same question: how do you use it safely? For Chris Betz, CISO of Amazon Web Services (AWS), the answer lies not in a single solution, but in a structured approach that protects data at every level of interaction. “It’s now one of the most frequent conversations I have with CIOs – how to secure generative AI workloads,” he says. Simply having a secure GenAI model is not enough; organisations also need to carefully manage and protect the data used to power those models. Betz compares it to a layer cake. The first layer is protecting data during fine-tuning or training. Making an AI model work for a business involves processing sensitive data: a financial institution, for example, will need to provide customer information, the most important and sensitive data in its possession. “You have to make sure that you have appropriate protection around that data to begin with,” he says.

The next layer, says Betz, is inference. Once a model is trained and fine-tuned, prompting can begin, and the responses you get back to those prompts now include customer data. “But at the same time, you don’t want to store everyone’s information in that model. That wouldn’t be smart,” he says, “because the model learns from it and this information might be compromised.”
A financial institution needs to store personalised private data securely, and this data should only be “unwrapped” in order to run inference. “You have to make sure that you have the right guardrails in that environment,” he says. “You want to make sure the data coming in and the data going out is well protected.” He says it’s important that queries only cover the things a business intends, because when a model is unconstrained, there may be unintended consequences.
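A rough sketch of what those guardrails can look like at the inference layer is shown below. It is illustrative only: the field names, redaction patterns and the stand-in `call_model` function are assumptions, not AWS’s or any bank’s actual implementation. The idea is simply that customer data is unwrapped field by field, identifiers are redacted before the prompt leaves the application, and responses are screened before they are returned.

```python
"""Sketch of input and output guardrails around an inference call."""
import re

ACCOUNT_RE = re.compile(r"\b\d{10,16}\b")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def redact(text: str) -> str:
    """Strip direct identifiers before anything reaches the model."""
    return EMAIL_RE.sub("[EMAIL]", ACCOUNT_RE.sub("[ACCOUNT]", text))


def guarded_inference(customer_record: dict, question: str, call_model) -> str:
    """Unwrap only the fields the query needs, redact them, and screen the answer."""
    # Input guardrail: pass a minimal, redacted view of the record – never the whole thing.
    context = redact(f"Product: {customer_record['product']}. Status: {customer_record['status']}.")
    prompt = f"{context}\nQuestion: {redact(question)}"

    answer = call_model(prompt)  # call_model stands in for a privately hosted model endpoint

    # Output guardrail: refuse to return anything that still looks like an identifier.
    if ACCOUNT_RE.search(answer) or EMAIL_RE.search(answer):
        return "Response withheld: possible customer identifier detected."
    return answer


# Stand-in model so the sketch runs without any external service.
print(guarded_inference(
    {"product": "Home loan", "status": "In arrears", "account": "4098123412341234"},
    "Draft a polite payment reminder for account 4098123412341234.",
    call_model=lambda p: f"(model output based on: {p})",
))
```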
Secure foundation
The third layer focuses on how models operate in the real world. No model exists in isolation; it must be integrated into applications to provide specific functionality and capabilities. “It’s a fun experiment to send a prompt into a model and get a response. We can all play around with that. But at an enterprise scale, models need to exist within applications,” says Betz, who adds that this leads to a common mistake. Companies will focus on securing the data and the GenAI models themselves, but they often forget about securing the applications that use those models. “It’s where we see many companies falling – they think about protecting data in layer one. They think about the model in layer two. But they forget to make sure that the application is secure,” he says. Neglecting the application layer can undermine all the good security work done at the other levels.
Betz believes that a secure foundation for a GenAI platform is important because it provides customers with the trust and confidence to build their applications on top of these services. Amazon Bedrock, for example, includes features such as customer-managed encryption keys and Nitro-based isolation to ensure the data remains protected. Nitro is a set of hardware and software systems for EC2 instances. It also restricts access to the data and models, allowing only authorised users and processes to interact with them. “When you have security in your DNA as an organisation, it helps you to think about security at every step. I’ve seen companies take what appears to be ‘shortcuts’ in order to get generative AI solutions to market quickly,” says Betz. “But building good security tools is hard. Building simple, good security tools is art.”
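For a sense of how this looks from a builder’s perspective, the snippet below sketches a call to a model hosted on Amazon Bedrock with a pre-configured guardrail attached, using the AWS SDK for Python. The model ID, region and guardrail identifier are placeholders, and the call shape follows the Bedrock Converse API as documented at the time of writing, so it should be checked against the current SDK; the customer-managed encryption keys and Nitro-based isolation Betz mentions are platform-level protections configured on the underlying resources rather than in this call.

```python
"""Sketch of invoking a Bedrock-hosted model with a guardrail attached."""
import boto3  # assumes AWS credentials are configured in the environment

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")  # placeholder region

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarise our leave policy for new staff."}]}],
    # A guardrail configured in Bedrock filters prompts and responses (e.g. for PII)
    # before anything reaches the application layer.
    guardrailConfig={
        "guardrailIdentifier": "example-guardrail-id",  # placeholder
        "guardrailVersion": "1",
    },
)

print(response["output"]["message"]["content"][0]["text"])
```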
* Article first published on brainstorm.itweb.co.za