Summary

  • Banning ChatGPT and generative AI apps can slow down productivity and cause your organization to lose competitiveness.
  • Like all new technologies, generative AI risks must be managed.
  • Next-generation DLP solutions prevent data exposure in generative AI apps and deliver real-time, micro-training to employees.

If you’re considering banning ChatGPT in the workplace, you’re not alone. Samsung, Apple, and Goldman Sachs have publicly announced that they’ve prohibited ChatGPT use in their organizations.

However, according to recent research from Glassdoor, 80% of employees are against their companies banning ChatGPT. It’s easy to see why. As McKinsey data shows, generative AI has immense potential to boost employee productivity and efficiency. 

To ban ChatGPT, and tools like it, would mean losing a competitive edge. Rather than prohibiting generative AI use, it’s much better to mitigate risks.

The risks of generative AI use in the workplace

Organizations must systematically manage the risks that arise from using tools like ChatGPT, Bard, and Grammarly. That means conducting a risk analysis, investing in governance, deploying appropriate tools, crafting acceptable use policies, and monitoring these applications continuously.

Risks of generative AI use include data leakage, compliance issues, data theft, half-baked applications, and supply chain fallout.

Data leakage

Most generative AI tools operate by analyzing user queries, spanning text, image, and audio inputs. While queries containing public information pose minimal risk, those with sensitive data, like personally identifiable information (PII) or confidential source code, could lead to the leakage of proprietary information.  

This is a consequence of how tools like ChatGPT operate. To become more accurate, useful, and precise, the model learns from the data it's fed. Essentially, once entered, confidential information becomes part of the neural network's DNA, putting it beyond the reach of conventional security controls.

Compliance issues

Beyond the immediate fallout from a data leak, inputting sensitive information into generative AI models also creates compliance complexity, potentially putting organizations at odds with the GDPR, CCPA, and other privacy regulations.

The reason for this is the opaque nature of generative AI applications. Once data is shared with them, it's nearly impossible for an organization to trace where that data has gone or who it has been shared with, making obligations like data minimization, anonymization, and erasure practically impossible to fulfill.

Data theft

The accessibility of third-party generative AI platforms also creates concerns about data theft. With many employees reusing passwords across multiple accounts, it's all too easy for threat actors to hijack employee generative AI accounts, especially if monitoring capabilities aren't in place.

Half-baked applications

It seems like a new enterprise-focused generative AI app appears on the market every day. This leads to the possibility of employees engaging with what’s known as shadow AI: installing third-party platforms that have poor security controls and are especially vulnerable to cyber-attacks. 

Supply chain fallout

Supplier contracts have become more stringent recently, with many organizations now demanding certifications and proof of security controls before starting a partnership. 

Unfortunately, ChatGPT and similar tools can undermine supply chain risk management for both the supplier and the customer. If an employee unwittingly shares sensitive client data with these tools, it could violate the contract in place and lead to legal repercussions.

How to mitigate generative AI security risks 

If you don’t already have an enterprise risk management framework in place for AI, we advise looking at NIST’s AI Risk Management Framework (AI RMF). It is designed to help organizations identify, govern, and mitigate the risks of AI systems, including generative AI applications.

In terms of control implementation, organizations should look for tools that are easy to deploy and highly effective from the outset. Next-generation data loss prevention (DLP) tools built for generative AI platforms are a fast, reliable measure to mitigate the risks of data exposure in generative AI applications. They offer capabilities like:

  • Automated data redaction: Scan your generative AI applications for sensitive data in real time. When sensitive data is detected in prompts or responses, you can redact or block it based on contextual use policies (see the simplified sketch after this list).
  • Audit efficiency: Swiftly search for and retrieve relevant generative AI interactions when faced with e-discovery requests, audits, or compliance reviews.
  • User training & nudges: Improve security awareness among employees through real-time security nudges. When a user violates a compliance or security policy, deliver a point-of-violation notification that gives them more context about the violation, so they can learn for next time.
  • Insider visibility: Gain granular visibility into employee behavior with robust logging and audit features. This helps you spot repeat offenders, compromised accounts, and malicious insiders before a data breach. 
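
To make the redaction and nudge ideas above concrete, here is a minimal, hypothetical sketch in Python. It is not Polymer's implementation: it simply scans a prompt for a few common sensitive patterns, redacts them before the text would be forwarded to a generative AI API, and surfaces a point-of-violation nudge to the user. A production DLP engine would rely on far richer detection (ML classifiers, keyword dictionaries, contextual policies) than these regexes.

```python
import re

# Hypothetical, deliberately simplified detectors for illustration only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace detected sensitive values with placeholders and report what was found."""
    findings = []
    redacted = prompt
    for label, pattern in PATTERNS.items():
        if pattern.search(redacted):
            findings.append(label)
            redacted = pattern.sub(f"[REDACTED {label}]", redacted)
    return redacted, findings

def nudge_user(findings: list[str]) -> None:
    """Point-of-violation notification: tell the employee what was redacted and why."""
    labels = ", ".join(findings)
    print(f"Heads up: your prompt contained {labels}. "
          "It was redacted before reaching the AI tool, per company policy.")

if __name__ == "__main__":
    prompt = "Summarize this complaint from jane.doe@example.com, SSN 123-45-6789."
    safe_prompt, findings = redact_prompt(prompt)
    if findings:
        nudge_user(findings)
    # safe_prompt is what would be forwarded to the generative AI API.
    print(safe_prompt)
```

The design point is that redaction and user coaching happen at the same moment: the sensitive value never leaves the organization, and the employee learns why in real time rather than in a quarterly training session.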

Learn more about how to leverage DLP for generative AI.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.
