
Summary

  • Generative AI is rapidly changing the landscape of cybersecurity, but many organizations are failing to deploy it securely.
  • Data leakage, model poisoning, and intellectual property theft are significant risks companies must address.
  • A deep understanding of AI security risks and compliance is crucial to avoid costly mistakes and data breaches.
  • Human Risk Management (HRM) is vital: using AI to provide real-time, in-workflow training that reduces human error.
  • Third-party AI security apps, leveraging cloud’s shared responsibility model, can ease the burden while ensuring robust data protection.

Artificial intelligence is set to be the defining technology of 2025. As AI tools become an integral part of everyday life, both at work and at home, the pressure is mounting for organizations to innovate and adapt—while securing their systems and data.

Despite this growing urgency, many companies are still grappling with how to implement generative AI effectively. Gartner forecasts that at least 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, with inadequate risk controls among the primary reasons.

To safeguard your investments, protect sensitive data, and preserve your reputation, a deep understanding of generative AI security is critical. 

In this guide, we’ll equip you with the knowledge you need to move your deployments forward securely. 

Core security risks in generative AI

To deploy generative AI securely, a thorough understanding of the associated risks is essential. Without it, organizations expose themselves to potential vulnerabilities, data leaks, and compliance challenges that could jeopardize both their reputation and operations. With that in mind, here are the major risk types companies need to address. 

Data leakage & privacy concerns

Generative AI models like ChatGPT rely on user input—whether text, code, images, or audio—to build their capabilities. When everyday or non-sensitive data is processed, security concerns are low. But when sensitive information, like personally identifiable information (PII) or proprietary code, is entered, the risk of data leakage escalates significantly.

The core risk here is that generative AI models, by design, learn and improve by absorbing the data they process. Sensitive information, once entered, can become embedded in the model’s knowledge base. In practical terms, data once thought secure may be retained and inadvertently surfaced by the model later on, exposing it to other users, possibly in other organizations.

Moreover, because of the way models like ChatGPT are structured, isolating and removing sensitive information from the training dataset is extremely challenging. Unlike traditional data storage systems, generative AI doesn’t store data in files that can be readily isolated or deleted. As such, preventing data leakage isn’t as simple as applying encryption or access controls; it requires a more fundamental rethink of how data is handled.
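To make this concrete, here is a minimal sketch of one preventive control: screening prompts for likely PII before they ever leave the organization. The patterns and the `redact` helper below are illustrative assumptions, not any specific product’s API; production DLP engines rely on far richer detection than a handful of regexes.

```python
import re

# Illustrative patterns only; real DLP detection combines validation,
# context, and ML classifiers rather than a few simple regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders before the prompt is sent to an LLM."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

safe_prompt, findings = redact("Email jane.doe@acme.com about SSN 123-45-6789")
print(safe_prompt)  # Email [EMAIL REDACTED] about SSN [SSN REDACTED]
print(findings)     # ['EMAIL', 'SSN']
```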

Adversarial attacks & model poisoning

As generative AI technology advances, it has become a high-value tool for malicious actors looking to evolve their tactics. By mid-2023, dark web discussions about generative AI surged, with hackers openly sharing and boasting about their use of tools like ChatGPT to support cybercrime. For instance, one hacker claimed to have used generative AI to recreate malware strains by feeding it public research. The result: a Python-based stealer capable of scanning systems for common file types such as Word documents, PDFs, and images—formats often containing sensitive data.

But malware generation is only part of the threat. Cybercriminals are also manipulating these models through “model poisoning”—the insertion of malicious data designed to subtly alter a model’s responses or behaviors. As an example, a hacker might introduce modified malware samples disguised as harmless data into the training environment of an AI-powered cybersecurity tool. Gradually, this teaches the model to disregard specific malware characteristics, creating blind spots. Later, when this disguised malware is deployed in the real world, the AI model fails to recognize it as a threat, allowing attackers to bypass detection systems.
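One common defense, sketched below under stated assumptions, is to verify the provenance of every file before it enters the training pipeline. The manifest of approved SHA-256 hashes and the `vetted_files` helper are hypothetical constructs for illustration; real pipelines pair integrity checks with data validation and anomaly detection.

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file so its contents can be checked against a vetted manifest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def vetted_files(data_dir: str, manifest_path: str) -> list[Path]:
    """Admit only training files whose hashes appear in the approved manifest."""
    manifest = set(json.loads(Path(manifest_path).read_text()))  # list of hex digests
    approved = []
    for path in Path(data_dir).rglob("*"):
        if not path.is_file():
            continue
        if sha256(path) in manifest:
            approved.append(path)
        else:
            print(f"REJECTED (not in manifest, possible poisoning): {path}")
    return approved
```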

Beyond model poisoning, there is also the risk of AI-enhanced cyber attacks—namely, polymorphic malware. These self-evolving strains modify their own code, generating unique variations that enable highly targeted attacks, slipping past traditional security mechanisms.

IP theft and model theft

Model theft is a generative AI-specific attack in which a threat actor attempts to replicate a machine learning model without internal access to its systems or data. To achieve this, they use reverse engineering: making repeated queries through a public API or interface and analyzing the responses. Over time and with persistence, attackers can use these answers to build a duplicate model that closely mimics the original’s functionality, undermining proprietary technology and intellectual property.
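Because extraction depends on sustained, high-volume querying, one mitigation is to watch for it at the API boundary. The sketch below is a simplified, assumed design: a per-client sliding-window rate limiter with thresholds chosen arbitrarily for illustration. Real defenses also examine query diversity and how thoroughly a client is probing the input space.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds; tune to your traffic patterns and risk appetite.
WINDOW_SECONDS = 3600
MAX_QUERIES_PER_WINDOW = 500

_history: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str, now: float | None = None) -> bool:
    """Throttle clients whose query volume looks like model extraction."""
    now = time.time() if now is None else now
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop requests that fell outside the window
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False  # block, or escalate to security review
    window.append(now)
    return True
```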

Implications of generative AI on the cybersecurity landscape

Generative AI presents both tremendous opportunities and significant risks in the cybersecurity space. On the one hand, it has the potential to revolutionize how organizations defend against cyber threats, making cybersecurity systems faster, more accurate, and more reliable. For example, AI can help automate threat detection, generate real-time security insights, and improve response times, enabling organizations to stay ahead of ever-evolving cyberattacks.

In today’s cybersecurity climate, these capabilities are especially valuable. Research from ISC2 highlights a significant shortage of cybersecurity talent, with over 3.4 million professionals needed globally. In addition, 43% of security experts report burnout and high attrition rates. This talent gap is putting immense pressure on Security Operations Centers (SOCs), making it harder to manage and respond to threats effectively. Generative AI can provide critical support by enhancing automation, helping overwhelmed teams manage the workload, and even generating new solutions for emerging threats.

However, the potential benefits of generative AI in cybersecurity come with considerable risks. While it can strengthen defenses, it also creates new vulnerabilities. As we’ve noted, attackers can exploit this novel technology to launch sophisticated cyberattacks, and the risks of data leakage cannot be overstated.

Because of this, organizations weighing an investment in generative AI for cybersecurity cannot afford to wait. In essence, to remain secure and resilient, they must fight fire with fire: deploying generative AI cybersecurity tools will be the only way to stop generative AI-enhanced threats.

Regulation and compliance for generative AI

The regulatory landscape for generative AI is evolving rapidly, yet it remains complex and fragmented. In the U.S., various state-level regulations have been introduced to address the growing use of AI, but there is no unified federal framework at this time. The U.S. Federal AI Governance and Transparency Act proposes the creation of a standardized federal policy by consolidating existing AI-related legislation. However, as of now, this remains a proposal, not law.

While federal regulations are still taking shape, U.S. organizations can turn to existing frameworks for guidance on implementing responsible AI practices:

  • Health Insurance Portability and Accountability Act (HIPAA): For AI tools in healthcare, compliance with HIPAA is essential to protect patient privacy and data security. HIPAA mandates strict controls over personal health information (PHI), including robust security measures, encryption, and access management. Failure to comply can result in heavy legal penalties and damage to reputation.
  • Cybersecurity Maturity Model Certification (CMMC) 2.0: Managed by the U.S. Department of Defense (DoD), the CMMC framework sets specific cybersecurity standards for defense contractors. CMMC 2.0 requires continuous monitoring of cloud-hosted applications.
  • ISO/IEC 27001:2022: The updated ISO 27001 standard for Information Security Management Systems (ISMS) offers a comprehensive framework for securing data, with a focus on risk assessment, security controls, and ongoing monitoring.
  • Gramm-Leach-Bliley Act (GLBA): The GLBA mandates that financial institutions protect customer privacy and data security. The 2023 updates to the GLBA introduce additional security controls, particularly for AI systems used in the financial sector. Institutions must ensure strict retention, usage, and disclosure practices for customer data to mitigate unauthorized access or misuse.

For organizations operating in the EU, the European Union’s Artificial Intelligence Act is another critical regulation to consider. The EU AI Act provides clear guidelines for AI developers, focusing on risk-level management to mitigate harm and ensure the safe deployment of AI technologies.

Other frameworks to consider

We strongly advise organizations to not wait for formal legislation to take shape before adopting AI best practices. The evolving nature of generative AI presents both immense opportunity and significant risk, making it imperative for businesses to take a proactive approach to AI cybersecurity. By implementing robust security measures now, organizations can mitigate potential risks, safeguard customer trust, and secure long-term competitiveness in an increasingly AI-driven world.

To build a strong foundation for responsible AI deployment, security leaders must familiarize themselves with reputable best practice frameworks that guide AI risk management and cybersecurity strategies. Below are three essential frameworks every security leader should prioritize:

  1. OWASP LLM Top 10: The Open Worldwide Application Security Project (OWASP) has curated the Top 10 for Large Language Model Applications. This framework highlights the unique risks associated with deploying large language models, such as prompt injection, training data poisoning, and sensitive information disclosure. Understanding these vulnerabilities is crucial for identifying potential attack vectors and implementing safeguards to protect sensitive data and AI systems from compromise.
  2. Google Secure AI Framework: Google’s Secure AI Framework offers a set of principles and best practices designed to secure AI systems throughout their lifecycle. This framework emphasizes the importance of model validation, secure coding practices, robust testing, and ongoing monitoring to mitigate security risks. By adopting these guidelines, organizations can strengthen their AI deployments, ensuring that generative models are both effective and resilient against malicious threats.
  3. NIST AI Risk Management Framework: The National Institute of Standards and Technology (NIST) AI Risk Management Framework provides comprehensive guidelines for managing risks associated with AI development and deployment. The framework encourages organizations to assess risks across various stages of AI development, from data collection to model training and deployment. It focuses on creating a structured approach to AI risk management, emphasizing transparency, fairness, and accountability. Security leaders can apply NIST’s principles to establish trust in AI systems and ensure that AI technologies align with ethical and security standards.

Best practices for secure use of generative AI

Whether your organization is adopting third-party generative AI tools or building custom AI models, securing these solutions is critical. Follow these steps to ensure a robust, secure implementation:

1. Governance

Develop clear policies, procedures, and reporting structures to support the business while minimizing risk. Governance ensures responsible management of generative AI technologies and helps meet regulatory requirements.

2. Compliance

Understand and comply with the specific legal, regulatory, and privacy requirements for using or developing generative AI solutions. This includes ensuring the proper handling of personal data and maintaining compliance with laws like GDPR, HIPAA, and others.

3. Risk management

Identify potential threats to generative AI solutions, from data breaches to misuse, and implement appropriate mitigations. Regular risk assessments will help you stay ahead of emerging threats and vulnerabilities.

4. Controls

Implement strong security controls that mitigate risks and protect sensitive data. This includes technical safeguards, such as encryption, as well as organizational policies around data access and usage.
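As one example of a technical safeguard, the sketch below encrypts AI prompt or audit records before they are written to storage. It assumes the third-party `cryptography` package and deliberately leaves key management (rotation, KMS, HSM) out of scope.

```python
# Minimal sketch of encrypting AI audit records at rest.
# Assumes: pip install cryptography. Key management is out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, load this from a secrets manager
fernet = Fernet(key)

record = b'{"user": "u123", "prompt": "summarize the draft earnings report"}'
token = fernet.encrypt(record)   # ciphertext safe to write to disk or object storage
assert fernet.decrypt(token) == record  # only key holders can recover the record
```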

5. Resilience

Design generative AI systems with resilience in mind to ensure continuous availability and meet business requirements, even during incidents. Redundant systems, backups, and failover strategies should be in place.

Simplifying the process with existing frameworks

Implementing these security principles can seem overwhelming, but it doesn’t have to be. The best approach is to leverage existing governance frameworks to safeguard your generative AI use. These frameworks provide a clear structure for data protection, compliance, and risk management.

An effective generative AI governance program mirrors a solid data governance policy and includes key principles such as data quality, stewardship, protection, compliance, and management. Here’s how this applies to generative AI:

  • Discover, classify, and monitor: Implement bi-directional monitoring for sensitive data within generative AI applications to maintain visibility and control.
  • Granular access controls: Define user roles and access permissions to restrict who can interact with sensitive data in generative AI tools.
  • Support security with policies: Reinforce security controls with acceptable usage policies and continuous training to foster a culture of security awareness.
  • Monitor user behavior: Regularly track user activities to detect signs of data misuse or improper sharing, ensuring accountability and compliance at all levels.
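To illustrate the second and fourth points above, here is a minimal, assumed sketch of a granular access check that also writes an audit trail of every decision. The role map and permission names are hypothetical; in practice they would come from your identity provider or IAM system.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

# Hypothetical role map; real deployments pull this from the IdP / IAM system.
ROLE_PERMISSIONS = {
    "analyst":    {"use_genai": True,  "share_sensitive": False},
    "admin":      {"use_genai": True,  "share_sensitive": True},
    "contractor": {"use_genai": False, "share_sensitive": False},
}

def authorize(user: str, role: str, action: str) -> bool:
    """Granular access check that leaves an audit trail for every decision."""
    allowed = ROLE_PERMISSIONS.get(role, {}).get(action, False)
    audit_log.info(
        "%s user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    return allowed

authorize("jane", "analyst", "share_sensitive")  # returns False and logs the attempt
```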

Future of generative AI security

AI is set to revolutionize security operations, bringing organizations to new levels of cybersecurity capability. The pace of development is accelerating, unlocking opportunities for companies to strengthen their defenses like never before.

One of the most promising innovations is Human Risk Management (HRM), a next-generation approach to cybersecurity awareness and training. By integrating generative AI, HRM delivers real-time, active learning nudges to employees within their workflows, dramatically reducing the risk of human error. With this approach, employees are continuously trained to recognize and mitigate security threats, transforming them into active defenders against cyber risks.

While the adoption of generative AI requires a careful, strategic approach, organizations don’t have to go it alone. Third-party AI-powered security apps offer an efficient way to integrate generative AI into an organization’s cybersecurity framework, all while reducing the operational burden. 

These apps operate under the cloud’s shared responsibility model, similar to tools like Slack or Microsoft Teams. Organizations remain responsible for areas such as identity and access management (IAM) and authentication, while providers manage the deeper, more complex security measures that protect sensitive data and systems. This allows organizations to leverage advanced AI security capabilities without the need for significant in-house development.

Polymer DLP enables secure use of tools like Slack, ChatGPT, and Microsoft Teams by integrating AI and NLP for smart, adaptive data protection. With real-time risk management and context-aware security nudges, it prevents data leaks seamlessly, empowering users to safeguard data without disrupting their workflow. 

Find out more in our DLP for AI whitepaper.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.
