Low-code, no-code AI is the future. Don’t fall behind.

Download whitepaper

Polymer

Download free DLP for AI whitepaper

Summary

  • MCP is an open standard that allows large language models (LLMs) to access and interact with external tools and data sources.

  • It eliminates the need for one-off integrations by providing a shared protocol for connecting LLMs to systems like CRMs, databases, and knowledge bases.

  • Security risks are substantial, including the threat of malicious MCP servers, prompt injection, and hidden tool manipulation.

  • The protocol currently lacks standard authentication, encryption, and verification mechanisms.

  • Organizations need strong AI governance, access controls, context management, and auditability before deploying MCP.

Large language models (LLMs) like ChatGPT and Claude have become go-to tools in businesses. But they can’t do that much on their own. They’re siloed interfaces—unable to interact with all of the business applications (think: Slack, Google Workspace, Microsoft Teams) that employees use everyday.

Enter the model context protocol (MCP). This new standard has been heralded as a USB-C port for AI applications, giving them the all-important capability to plug into the data and tools that companies rely on day in, day out.  

What is MCP? 

The Model context protocol (MCP) is an open standard introduced by the AI company, Anthropic. It allows large language models (LLMs) to connect and interact with external systems—without the usual integration headaches.

At its core, MCP is designed to solve a fundamental challenge in agentic AI development: how to give models access to real-time, relevant data from the tools businesses already use. CRMs, knowledge bases, databases, analytics platforms—these are all critical to delivering useful AI-driven outcomes. But until recently, integrating AI with these systems required custom code, API juggling, and ongoing maintenance.

Before MCP, each and every integration had to be manually configured. Want your AI to pull customer data from Salesforce? Write a custom connector. Need it to reference a knowledge base in Confluence? Another connector. This approach was hard to scale and a nightmare to maintain—especially for fast-moving teams juggling dozens of tools.

MCP standardizes how LLMs request and use external data. It provides a common protocol—essentially a shared language—that AI models and tools can use to communicate. Instead of building one-off connections, developers implement MCP once, and it becomes a gateway for models to access many tools through the same interface.

In essence, MCP lets AI assistants act omnisciently. They’re no longer stuck within one or two interfaces. As long as a tool has MCP enabled, your business can integrate AI with it, opening the door to the next phase of truly agentic AI. 

MCP use cases 

That’s MCP at a high-level. Now, let’s contextualize things with a few use cases.

Healthcare

A hospital deploys an AI assistant for doctors. Through MCP, the assistant can access the Electronic Health Record (EHR) system, the medical imaging database, drug interaction tools, and scheduling software all at once. 

For example, a doctor might ask, “Do I have any critical lab results for patient Jane Doe and what are the next steps?” The LLM could query the lab database, retrieve the results via an MCP server, then cross-reference drug interaction guidelines before answering—all with one natural-language prompt. It could even update patient notes or scheduling if authorized. 

Finance

At an investment firm, an MCP agent might tie together CRM data, market feeds, and risk models. A portfolio manager could say, “Summarize client X’s financial health and recommend an action.” The agent would use MCP servers to fetch the latest account history from the core banking system, pull current stock and market data, and run a risk analysis tool to generate an up-to-date report. 

Tech

In a technology company, an MCP-based virtual assistant could span customer support, documentation, and operations. A support rep might ask an internal AI agent, “What are the outstanding tickets for client X, and what do we owe them in billable hours?” The agent would query the ticketing system, check the time-tracking database, and then produce a concise summary and next actions. 

MCP security risks 

These scenarios highlight MCP’s quite astounding potential to streamline workflows. But, they also underscore just how important sensitive data is to MCP’s success. Patient records, financial ledgers, and proprietary docs could all pass through this layer. 

And that brings us to the critical security issues with MCP at present. 

Malicious MCPs

By design, MCP encourages installing many small servers (often open-source) to handle different tasks. But what if one of those servers is malicious or compromised? Unlike traditional software libraries, MCP tools run in the same context as your LLM and can access your data. A bad MCP server (one downloaded from a public registry) could quietly exfiltrate data or hijack actions.

For example, a malicious “Database Server” might silently copy patient records to an external URL every time it is invoked. Worse, some attacks let a server remain hidden: programming the MCP tool to mutate its own definition after install (a so-called “rug pull”)  to start rerouting API keys or injecting malicious commands days later. 

Prompt injection attacks

MCP also widens the scope for AI agent hijacking. Why? Because an LLM trusts anything in its context. If a malicious actor can sneak instructions into that context (whether via the user’s query, a malicious document, or the tool descriptions) the AI can be tricked into unintended behavior. 

Here are two major concerns: 

  • Direct prompt injection: If an attacker can get the AI to read their text, they might sneak malicious instructions into it. For instance, someone could email a doctored document that says “the assistant should send a payment to [insert email address],” but disguised as normal text. When the LLM processes the email, it obeys the hidden directive and triggers the wrong MCP tool. 
  • Indirect/tool poisoning attacks: Even without external malicious text, the tools themselves can carry poison. The “line jumping” exploit above is one example: injecting hidden instructions via the server’s tool description. 

Protocol hardening gaps

Beyond malicious code, MCP today lacks many built-in security safeguards. It’s a young standard, and much of the heavy lifting is left to implementations. For example, the official spec currently makes authentication optional and does not mandate any identity verification between client and server. In practice, many MCP connections run over plain HTTP, which means anyone on the network could snoop or tamper with the exchanged data. 

There is also no standard mechanism for ensuring tool integrity. Unlike mainstream package managers, MCP has no built-in signing or repository vetting. You could download an MCP server from a random GitHub repo and run it with no checks. In essence, the protocol trusts that you only install vetted servers – but there’s no technical guarantee. 

MCP: What now?

MCP is a bold leap forward in how AI agents interact with enterprise systems. But like any new protocol, it’s still finding its footing—and right now, it lacks many of the security guardrails enterprises rely on.

There’s no standard for authentication, no requirement for context encryption, and no mechanism to verify tool integrity. In most implementations, trust is assumed rather than enforced. That leaves developers exposed to a wide surface of potential attacks, from prompt injection to rogue tool servers.

But this isn’t a reason to walk away from MCP. It has truly enormous potential. The key is understanding where it fits within your AI readiness journey. 

Before you start wiring up agents with access to sensitive systems, you need a solid AI foundation in place. That means:

  • Tight governance: Clear rules for how your AI tools access and process data.
  • Model security: Guardrails like prompt injection protection, input sanitization, and output monitoring.
  • Access control: Least-privilege design for both users and agents, plus visibility into what tools your models can invoke.
  • Auditability: Logs of what your agents do, what data they accessed, and why.
  • Context management: Controls to prevent leaking sensitive information into prompts or model memory.

Afterall, MCP doesn’t just expand what your LLMs can do—it expands the blast radius if something goes wrong. If your current AI setup isn’t ready, MCP will only make the problem worse.

Curious how close you are to safe MCP adoption? Take our 2-minute AI readiness quiz.

Polymer is a human-centric data loss prevention (DLP) platform that holistically reduces the risk of data exposure in your SaaS apps and AI tools. In addition to automatically detecting and remediating violations, Polymer coaches your employees to become better data stewards. Try Polymer for free.

SHARE

Get Polymer blog posts delivered to your inbox.