Polymer

Download free DLP for AI whitepaper

Summary

  • Sharing confidential data with generative AI poses significant security risks.
  • Data leakage, half-baked applications, application vulnerabilities, and supply chain issues are common concerns.
  • To mitigate risks, organizations should assemble a dedicated team, create AI usage policies, invest in real-time training, and employ data loss prevention tools like Polymer data loss prevention (DLP).

The robots are coming, and they’re set to change how we live and work forever. According to McKinsey research, half of today’s workplace tasks could be automated between 2030 and 2060, with generative AI set to add $2.6 trillion to $4.4 trillion to the global economy each year.

But as Einstein once wisely said, “’with every action there’s an equal opposite reaction.” In other words, for all the positives brought about by AI, there are also some notable negatives–especially when it comes to data security and privacy. 

The security risks of generative AI 

According to recent research, the average data breach costs a huge USD 4.45 million per company. From incident response to reputational damage and legal fees, failing to adequately protect sensitive information is undeniably costly. 

While employees might be tempted to share sensitive information with generative AI tools in the name of speed and productivity, we advise all individuals to exercise caution.

Here’s a look at why.

Data leakage 

As Gartner recently noted, “There are currently no verifiable data governance and protection assurances regarding confidential enterprise information. Users should assume that any data or queries they enter into the ChatGPT and its competitors will become public information, and we advise enterprises to put in place controls to avoid inadvertently exposing IP.” 

Indeed, when a user shares data with a generative AI platform, it’s crucial to note that the tool, depending on its terms of use, may retain and reuse that data in future interactions. This raises significant concerns for businesses regarding any confidential information that might find its way onto a generative AI platform, as it could be processed and shared with third parties.

This actually happened to Samsung earlier in the year, after an engineer accidentally uploaded sensitive code to ChatGPT, leading to the unintended exposure of sensitive information. 

Half-baked applications 

The speed at which companies can roll out generative AI applications is unparalleled to anything we’ve ever seen before, and this rapid pace introduces a significant challenge: the potential for half-baked AI applications to masquerade as genuine products or services. 

In reality, some of these applications may be hastily assembled within a single afternoon, often with minimal oversight or consideration for user privacy and data security. As a result, confidential information entered into these apps could be more vulnerable to exposure or theft.

Application vulnerabilities

Introducing any new application into a network introduces fresh vulnerabilities–ones that malicious actors could potentially exploit to gain access to other areas within the network. 

Generative AI applications, in particular, introduce distinctive risks due to their opaque underlying algorithms, which often make it challenging for developers to pinpoint security flaws effectively.

For example, recent security research has highlighted the vulnerability of AI platforms to indirect prompt injection attacks. In a noteworthy experiment conducted in February, security researchers conducted an exercise in which they manipulated Microsoft’s Bing chatbot to mimic the behavior of a scammer. Should the same happen to ChatGPT or Bard, any sensitive information shared with these apps would be at risk.

Supply chain fallout 

When it comes to using generative AI for work, there are two key areas of contractual risk that companies should be aware of. Firstly, there might be constraints on the company’s ability to share confidential information relating to customers or clients with third parties. 

Secondly, the sharing of specific client data with these tools could potentially breach contractual agreements with those clients, especially concerning the approved purposes for utilizing their data.

During the evaluation of these risks, it’s essential to consider that the usage rights for generative AI are outlined across multiple documents. These documents encompass the Terms of Use, Sharing & Publication Policy, Content Policy, and Usage Policies.

Four steps to reduce the risks of generative AI data exposure 

While it’s undeniably unsafe to share confidential information with generative AI platforms, that’s not stopping employees, with research showing they are regularly sharing sensitive data with these tools. 

For the most part, employees don’t have malicious intentions. They just want to get their work done as swiftly and efficiently as possible, and don’t fully comprehend the data security consequences.  

Despite the risks, banning generative AI isn’t the way forward. As we know from the past, employees will only circumvent policies that keep them from doing their jobs effectively. Turning a blind eye to generative AI and sensitive data sharing isn’t wise either. It will likely only lead to a data breach–and compliance fine–later down the line.

So, what’s a business to do? Here’s four steps to take to reduce the risks of generative AI data exposure. 

Assemble a team

To ensure a smooth and secure implementation of generative AI within your organization, it’s essential to build a capable team well-versed in data security. This team will be responsible for identifying any potential legal issues, strategizing ways to address them, and keeping up-to-date with emerging regulations that might affect your existing compliance framework.

Ideally, this team should consist of:

  • Legal experts: These professionals provide invaluable legal insights, helping you navigate the compliance landscape and ensuring your AI implementation complies with all relevant regulations.
  • Data protection officer (DPO): A designated DPO focuses on safeguarding your data, making certain that all data processing activities align seamlessly with applicable regulations.
  • Security specialists: These experts bring their knowledge to the table, ensuring your data is managed and secured effectively, reducing the risk of breaches and ensuring compliance.
  • Privacy officer: This role manages privacy-related policies and procedures, acting as a liaison between your organization and regulatory authorities.
  • IT personnel: Your IT professionals are vital for implementing technical data security measures and integrating privacy-focused practices into your organization’s IT infrastructure.

Create a generative AI policy 

Your team will be responsible for designing and implementing policies around the use of generative AI, giving your employees guardrails within which to operate. We advise the following usage policies: 

  • Prohibited uses: This category encompasses activities that are strictly forbidden. Examples include employing ChatGPT to scrutinize confidential company or client documents or to assess sensitive company code.
  • Authorized uses needing approval: Certain applications of ChatGPT may be permitted, but only with authorization from a designated authority. For instance, generating code using ChatGPT may be allowed, provided that an expert reviews and approves it before implementation.
  • Permitted uses: This category includes activities that are generally allowed without the need for prior authorization. Examples here might involve using ChatGPT to create administrative internal content, such as generating ideas for icebreakers for new hires.

Invest in real-time training 

Creating policies is one thing, but getting employees to follow them is another. While one-off training sessions rarely have the desired impact, newer forms of AI-based employee training can be extremely effective. 

Our tool, Polymer data loss prevention (DLP) for AI, for example, harnesses the power of AI and automation to deliver real-time security training nudges that prompt employees to think twice before sharing sensitive information with generative AI tools. 

And should they attempt to proceed, our tool blocks risky actions altogether, explaining the reasoning in a language your employees understand. 

Protect against the inevitable 

While policies and training are crucial in reducing the likelihood of generative AI data leakage, you can’t depend solely on your people to uphold data security. Employees are human, after all, and they will make mistakes at some point or another.

With that in mind, it’s essential to backup your policies with the right tools to prevent data leakage and theft in AI platforms. And that’s where we come in. 

How Polymer DLP can help 

At Polymer, we believe in the transformative power of generative AI, but we know organizations need help to use it securely, responsibly and compliantly. Here’s how we support organizations in using apps like Chat GPT and Bard securely: 

  • Granular visibility and monitoring: Using our advanced monitoring system, Polymer DLP for AI is designed to discover and monitor the use of generative AI apps across your entire ecosystem. Harnessing the power of natural language processing (NLP) and automation, our tool provides granular insights on user behavior, data movement and policy violations, all which upholding data security and compliance. 
  • No more data leakage: Polymer DLP seamlessly and accurately discovers, classifies and protects sensitive information bidirectionally with ChatGPT and other generative AI apps, ensuring that sensitive data is always protected from exposure and theft.
  • Real-time education: Our platform supports point-of-violation training, providing real-time nudges to users when violations occur. This approach has proven to reduce repeat violations by over 40% within days. Additionally, Polymer offers workflows that allow users to accept responsibility for sharing sensitive data externally when it aligns with business needs. 

Read our whitepaper to find out more about Polymer DLP for AI.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.

SHARE

Get Polymer blog posts delivered to your inbox.