“Harnessing technological progress means balancing potential risks and rewards. In terms of technology, 2023 has been the year of generative artificial intelligence (AI), with ChatGPT, Bard and DALL-E becoming household names at a remarkable speed and scale of adoption,” says Spiros Fatouros, CEO of Marsh McLennan Africa. For example, ChatGPT acquired 57 million monthly active users in its first month; in comparison, TikTok took nine months to achieve the same user base.
Generative AI raises new questions for risk management
Generative AI represents a type of artificial intelligence that is capable of creating new and believable content, such as highly technical text, realistic audio and lifelike images. Some estimates say that by 2025, 10% of all data will be the result of generative AI creations.
With generative AI’s rise come a number of questions for risk and insurance professionals and their organisations. What unique risks and opportunities, if any, does generative AI pose? Why is there heightened attention around generative AI when other forms of AI have been around for decades? What risks will insureds, brokers and insurers need to navigate as generative AI evolves and interacts with other emerging technologies?
While headlines tout both generative AI’s extreme risks and opportunities, a balanced perspective is critical to making informed decisions and managing risks responsibly. To stay relevant and competitive, companies will need to learn how and when to leverage generative AI to optimally achieve objectives, such as realising operational efficiencies, increasing customer satisfaction and developing new products and services.
“Companies will need to strategically assess how and when to adopt generative AI systems, partner with vendors, implement appropriate governance and risk management protocols and train employees with new skillsets (such as prompt engineering),” continues Fatouros.
Navigating new and familiar risks
Many risks associated with generative AI are extensions of existing, familiar risks such as data privacy, which has been a concern for decades. Misuse of technology to generate harmful content has long been associated with social media platforms. Potential intellectual property rights infringement from content generation is a familiar risk that many industries, from music and publishing to software development, have historically grappled with. And technological errors and malfunctions are as old as technology itself.
These risks may become more concentrated or surface in new circumstances as generative AI is applied to increasing and diverse use cases, but they remain extensions of existing, familiar risks, which may generally be addressed by existing casualty, media, cyber and first party insurance products, among others.
However, new risks may emerge from generative AI in two primary ways:
Emergent capabilities
Advanced generative AI models can develop emergent capabilities, which are capabilities not originally intended or expected by the model creators. For example, a generative AI system trained on text-based datasets may develop the capability to write code even though it was not explicitly programmed to do so.
While emergent capabilities can provide new benefits to users of generative AI systems, they also portend a lack of predictability. As generative AI systems increasingly interact with complex human-made systems such as financial markets, healthcare ecosystems and social networks, we may confront unforeseen risks such as new types of vulnerabilities and attack vectors.
Technological convergence
The convergence of generative AI and other emerging technologies may give rise to new risks. For example, the combination of generative AI and mixed reality technologies is further blurring the line between physical and digital worlds, making it challenging (if not impossible) to differentiate between artificial/digital and human/physical entities and creations.
Technological convergence may also pose new challenges for legal and regulatory domains. For example, as generative AI systems increasingly produce believable, realistic outputs, how will we distinguish between AI-generated and human-generated outputs, and how will the corresponding intellectual property rights be determined?
As generative AI continues to develop, its creators, service providers and users need to determine how and when to use the technology, and to proactively anticipate and manage its risks. As with all technologies, companies using generative AI should continue to verify its outputs. For instance, when generative AI models produce nonsensical or erroneous outputs – popularly referred to as “hallucinations” – the burden remains, as ever, on human users to verify the accuracy and contextual relevance of those outputs before using them.
How the insurance sector can help enhance responsible risk taking
As new categories of risks emerge from evolving capabilities and technological convergence, the insurance sector should take a thoughtful, methodical approach to underwriting, pricing and developing products with the end customer in mind. The lack of historical claims data and legal precedent creates a need to develop proxies to inform product development. It will also be important to develop feedback loops to monitor and anticipate the risks as they emerge and evolve.
The insurance sector will play an indispensable role in shaping how companies balance the unique risks and rewards of generative AI. This includes providing companies with coverage analysis to help understand what risks associated with generative AI may be covered under their current insurance policies, or where coverage may be limited.
“Generative AI’s opportunities and risks, while complex, are within our control. Our human agency to understand and navigate should be at the heart of all discussions about generative AI's future, including how to manage its risks and benefit from its opportunities,” concludes Fatouros.