Imagine this scenario: Your sales team, in the midst of a hectic period, rushes to finalize a proposal for a crucial prospect. Strapped for time, they turn to ChatGPT for assistance, feeding it sensitive contract and company data to speed up the process.
Seems harmless, right?
Actually, the data your sales team entrusted to ChatGPT could inadvertently surface as a response to an entirely different user’s query—a user from a completely different company, located in a different corner of the world.
In essence, simply by trying to boost their productivity, your sales team has triggered a data breach, and you don’t even know about it.
This is a prime example of the insider threat in the age of generative AI.
Types of insider threat
The insider threat refers to the risk that employees, partners, freelancers, and anyone else with access to company resources will steal, leak, or compromise sensitive information.
Not all insider threats are created equal. Generally speaking, they can be grouped into three types:
- Accidental insider – These are insiders who inadvertently expose sensitive company data through negligence or human error (like the sales team mentioned above). According to IBM, this type of insider threat is responsible for a staggering 90% of data breaches.
- Malicious insider – This is an individual who intentionally sets out to leak or exfiltrate company data. At the extreme end, you’ve got whistleblowers like Edward Snowden or disgruntled employees who want to harm their company’s reputation. More commonly, malicious insiders are individuals who steal sensitive data just before they depart an organization for use in their next job.
- Compromised insider – This is an individual who isn’t actually an insider at all. In these instances, malicious actors compromise employee cloud accounts, giving them access to sensitive company information. In the work-from-anywhere world, compromised insiders are a huge issue. Without the right verification mechanisms in place, it’s nearly impossible for cybersecurity teams to determine whether a user is legitimate or whether their login credentials have been stolen.
Top insider risks of generative AI
The insider threat is not new, but generative AI usage has catapulted this risk into a new dimension. It is now incredibly easy for employees, partners, and the like to inadvertently share sensitive information outside their organization, simply by entering a query into ChatGPT, Bard, or another platform.
With that in mind, the top risks to be aware of are as follows.
Data leakage
Most generative AI tools operate by digesting a user query: text, image, audio, and so on. These prompts feed the development of the language model that underpins the platform, shaping its future content and expanding its knowledge.
When queries only contain public information, the risk is minimal. But, going back to the sales team example above, when the user query contains sensitive information, like PII or confidential source code, there’s a problem. Confidential information is now out of the security team’s control and could reappear as an answer to another user.
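One pragmatic mitigation, short of a full DLP product, is to scrub obviously sensitive strings from prompts before they ever leave the organization. Here is a minimal sketch of the idea, using a few illustrative regex patterns; a production solution would rely on NLP-based detection and far broader pattern coverage than this:

```python
import re

# Illustrative patterns only -- real DLP engines cover many more data types
# and use contextual NLP detection rather than bare regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt):
    """Replace likely-sensitive substrings with placeholders before the
    prompt is sent to a third-party AI service. Returns the scrubbed
    prompt and the list of pattern names that matched."""
    findings = []
    for name, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, findings

scrubbed, hits = redact_prompt(
    "Draft a proposal for jane.doe@acme.com, SSN 123-45-6789."
)
print(scrubbed)  # -> Draft a proposal for [REDACTED-EMAIL], SSN [REDACTED-SSN].
print(hits)      # -> ['email', 'ssn']
```

Even a crude pre-submission filter like this shifts the failure mode: a pattern miss leaks one field, rather than the whole document leaving the security team’s control.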
Compliance concerns
Beyond the obvious fallout of a generative AI-triggered data breach, inputting sensitive information into generative AI models creates further compliance concerns. This is because generative AI systems are excellent at retaining the information they digest.
If Bard or ChatGPT ingests sensitive information from your company, that information becomes part of the DNA of its neural networks and will be remembered even if the initial query is later deleted.
Naturally, this is concerning when we consider the right to delete, as outlined in the GDPR and CCPA. Since it’s impossible to truly delete sensitive information from generative AI models, organizations that share sensitive data with these platforms will find it increasingly difficult to meet their compliance obligations.
Account takeover
Most third-party generative AI platforms are accessible through a simple combination: a username and a password. Given that 70% of employees reuse their passwords across multiple cloud accounts and the dark web is awash with leaked and stolen login details, it is highly plausible that a threat actor could obtain the login credentials for employees’ generative AI accounts.
Should that threat actor get into an employee’s account, they would be able to view, copy, and steal all the sensitive information ever entered as a query.
Moreover, because these AI tools are so new, many cybersecurity teams have yet to deploy monitoring and logging capabilities to analyze user behavior. As a result, cybercriminals may be able to break into these accounts and steal data completely undetected.
A new technology requires new security controls
While banning generative AI would be the easiest solution to the insider threat, it’s not the most sensible. As we saw with the rise of SaaS apps, when organizations ban helpful tools in the workplace, employees simply find a way to circumvent their policies.
This could lead to what’s known as shadow AI: employees using generative AI tools completely outside of the security team’s knowledge, which dramatically heightens the risk of data leakage.
The smarter option is to employ tools that limit the chances of data exposure in tools like ChatGPT and Bard. And that’s where Polymer comes in.
Polymer data loss prevention (DLP) for AI is a cutting-edge cybersecurity solution, uniquely created to uphold data privacy and prevent sensitive data breaches in generative AI tools.
Here’s how the plug-and-play tool supports you in combating the insider threat in generative AI applications:
- Bidirectional monitoring: Using natural language processing and automation, Polymer scans both user prompts and AI-generated responses in your generative AI applications for sensitive data in real time. When sensitive data is detected, Polymer can take automatic remediation action based on contextual use policies set by your security team.
- E-discovery for GenAI interactions: Polymer’s solution empowers your compliance team to swiftly conduct searches and retrieve relevant generative AI interactions when faced with e-discovery requests, audits, or compliance reviews.
- User training & nudges: Polymer reduces instances of accidental data leakage through real-time user training. When a user violates a compliance or security policy, Polymer delivers a point-of-violation notification giving the user more context about what went wrong.
- Insider visibility: Polymer is equipped with robust logging and audit features, giving your security team granular visibility into employee behavior. This helps you spot repeat offenders, compromised accounts, and malicious insiders before a data breach occurs.
Getting started with Polymer DLP for AI is simple and fast. Our low-code platform takes just minutes to install and comes equipped with pre-built compliance templates for GDPR, HIPAA, and more. Minimize the time you spend configuring your DLP and maximize cloud data protection quickly with Polymer.