
Summary

  • Generative AI is growing fast, but insurers are catching up slowly.
  • In 2024, expect insurers to address AI-related coverage gaps and new security risks.
  • Businesses should be ready for questions about AI usage, privacy measures, and possible premium increases.
  • To lower premiums, deploy platforms like Polymer DLP to reduce the likelihood of generative AI data leakage.

Generative AI has taken the world by storm. But, for all the benefits of this revolutionary technology, cybersecurity and risk leaders are right to have a few concerns. Currently, just 20% of companies have risk policies in place for generative AI, yet three-quarters believe it will introduce new security risks. 

It’s not just businesses concerned about the risks of generative AI. The insurance industry is too. While the technology has the potential to transform underwriting processes, it also makes cyber policies more complex. After all, with any new technology comes new risks. 

With that in mind, here’s what you need to know about how generative AI may impact your insurance premiums in the next year. 

What are the cybersecurity risks of ChatGPT? 

The cybersecurity risks of ChatGPT come in two major forms: new breeds of cyber attack and accidental data leakage. In the case of the former, hackers can misuse generative AI as a coding assistant to effortlessly create the software needed to launch an attack.

In this sense, generative AI tools could effectively democratize hacking, enabling amateurs who previously lacked the capability to conduct successful attacks to launch sophisticated campaigns at speed and scale. 

Accidental data leakage is another potent risk with generative AI tools. Employees might unknowingly share sensitive information like proprietary code or personally identifiable data while using these platforms. 

What many people don’t realize is that these tools utilize user data to enhance their models, and in some cases, this data can eventually become accessible to the public. This means that well-intentioned employees may inadvertently put their organizations at risk by using these AI tools without fully grasping the potential consequences.

In fact, Samsung placed a ban on the use of generative AI apps in May 2023 after several employees unintentionally leaked sensitive source code via ChatGPT. 

Does my insurance cover these risks?

As companies increasingly look to generative AI to boost productivity and efficiency, being mindful of insurance implications is paramount. While some commercial insurance policies may already cover AI-related risks, the lack of specific language surrounding AI in many of today’s policies could create ambiguity. 

To reduce the likelihood of disputes with your insurer, carefully analyzing your current policy is essential. Here are the types of policies most likely to cover generative AI risks. 

Cyber policies

It’s only natural for businesses to turn to their cyber insurance policies when considering coverage for AI-related exposures. Cyber policies come in various shapes and sizes, but they typically provide protection against a broad spectrum of risks.

These can range from losses related to your digital assets (first-party) to liability for data breaches affecting third parties. This coverage becomes particularly crucial when you consider the potential fallout of a generative AI-powered system getting hacked.

However, as mentioned, policy language gaps may mean your insurer refuses to cover some instances. For example, if your AI system inadvertently uses training data that infringes on copyrights or other intellectual property rights, this might not neatly fit into the standard coverages typically outlined in basic cyber insurance forms, which mainly focus on security breaches, extortion threats, or data restoration.

Moreover, because the free version of ChatGPT is a public platform, if you suffer a data breach stemming from its use, your insurer will likely refuse to pay out for the incident. 

Technology errors and omissions policies

E&O policies cover organizations that create software for other organizations. In the age of generative AI, with many companies building their own large language models (LLMs), E&O policies could extend to risks like hallucinated outputs from generative AI. 

However, this is not a given. When your coverage comes up for renewal, carefully review your policy and negotiate for generative AI activities to be explicitly included within its scope.

Emerging AI policies 

Some forward-thinking insurers are capitalizing on the generative AI boom by developing unique policies solely for AI. Munich Re, for example, has announced its “aiSelf” policy, designed to mitigate the risks associated with “the underperformance, unreliability and drift of machine learning models.”

While Munich Re is currently in the minority of companies offering generative AI-based policies, it’s likely we will see more insurers enter the space in the coming months and years. 

What to do 

Generative AI has soared to popularity incredibly quickly, and insurers are still catching up. Chances are, in 2024, many will take steps to address potential gaps in coverage. After all, generative AI presents many unknown risks, and no insurer wants to be left in a costly dispute over policy ambiguity. 

With that in mind, in the coming months, organizations should be prepared to answer questions about how they use AI, particularly concerning privacy provisions. Cyber insurers will set premiums based on how organizations are employing generative AI, whether those uses are properly contracted, and if the necessary terms and conditions are in place to safeguard data privacy. Failing to meet these requirements will no doubt lead to higher insurance premiums.

To that end, policyholders will need to bolster their cybersecurity capabilities to make cyber insurance financially viable. First and foremost, consider opting for paid enterprise versions of tools like ChatGPT. These business licenses provide greater control over data usage, storage, and deletion; OpenAI states, for instance, that data submitted through ChatGPT Enterprise and its API is not used to train its models by default, improving transparency.
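For organizations that route generative AI traffic through the API rather than the consumer web app, the pattern is straightforward. Here is a minimal sketch, assuming the official OpenAI Python SDK (v1+) and an API key in the environment; the model name and prompt are placeholders, not recommendations:

```python
from openai import OpenAI

# Assumes openai>=1.0 and an OPENAI_API_KEY environment variable;
# the model name and prompt below are illustrative placeholders.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "user", "content": "Summarize our public release notes."},
    ],
)

print(response.choices[0].message.content)
```

Keeping the key in an environment variable, rather than hard-coding it, is the kind of basic hygiene insurers increasingly expect to see.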

Secondly, organizations should deploy tools that minimize the risk of data exfiltration when using generative AI like ChatGPT. Platforms like Polymer data loss prevention (DLP) for AI offer a robust solution. 

With natural language processing (NLP) at its core, Polymer DLP for AI intelligently redacts personally identifiable information (PII) and intellectual property (IP) in real time, across major generative AI platforms and cloud applications such as Slack, Teams, and Dropbox.
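To make the redaction concept concrete, here is a deliberately simplified, hypothetical sketch in Python. It uses a handful of regex patterns where a platform like Polymer applies NLP-based entity detection, so treat it as an illustration of pre-prompt redaction, not a description of how the product actually works:

```python
import re

# Hypothetical, simplified illustration: a real DLP platform such as Polymer
# relies on NLP-based entity detection, not just regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Swap detected PII for placeholder tokens before the prompt
    leaves the organization's boundary."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

print(redact("Email jane.doe@acme.com (SSN 123-45-6789) about the renewal."))
# Output: Email [REDACTED_EMAIL] (SSN [REDACTED_SSN]) about the renewal.
```

Regex-only redaction misses context-dependent data like names, addresses, and proprietary code, which is exactly the gap NLP-based detection is designed to close.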

For a deeper understanding of Polymer DLP for AI, check out our recent whitepaper. And, if you’re eager to experience the power of Polymer DLP for AI firsthand, request a demo.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.
