A threat actor claims to have breached OmniGPT, a widely used AI-powered chatbot and productivity platform. The leaked data reportedly includes 30,000 user email addresses, phone numbers, and more than 34 million threads of user conversations.
OmniGPT is a popular ChatGPT alternative that leverages top AI models like Claude 3, GPT-4o, Gemini, and Perplexity. It integrates with workplace apps such as WhatsApp, Slack, Google Workspace, and Notion, making it a valuable tool for employee productivity.
Here’s what we know about the incident so far.
OmniGPT breach: The story so far
On Monday, a hacker going by the name of Gloomer published a post on BreachForums, claiming to have a trove of data from OmniGPT. The post stated:
“This leak contains all messages between the users and the chatbot of this site, as well as all links to the files uploaded by users and also 30k user emails. You can find a lot of useful information in the messages, such as API keys and credentials. Many of the files uploaded to this site are very interesting because sometimes they contain credentials/billing information.”
Analysis of the sample data so far suggests the hacker's claims are credible. The downloaded files contain thousands of messages exchanged between users and chatbots, along with links to uploaded files such as WhatsApp screenshots, work documents, and reports, some of which include credentials, billing details, and API keys.
At this stage, it remains unclear why or how the threat actor specifically targeted OmniGPT. It’s possible that the attacker sought financial gain, or wanted to demonstrate that generative AI tools are not as secure as many users assume.
If the latter was the goal, they have certainly made their point. The breach highlights the risks of storing sensitive data within AI chatbots and reinforces the need for stricter security measures—both from service providers and users who rely on these platforms for productivity and communication.
Consequences
OmniGPT has yet to release an official statement regarding the reported data breach.
However, if the claims are true, this incident presents serious cybersecurity and privacy risks for users. Exposed email addresses, credit card information, and phone numbers lay the groundwork for highly convincing phishing scams and identity fraud.
For enterprises, the stakes are higher still: where employees shared API keys or credentials in conversations, attackers could use that information to compromise workplace accounts and move laterally within the organization.
Lessons learned
Employees rarely share sensitive data intentionally, but in the pursuit of efficiency they may unknowingly expose confidential information to AI agents. As generative AI becomes more embedded in workplace applications, organizations must adopt a two-fold security approach: preventing sensitive data from reaching AI tools in the first place, and ensuring AI-generated responses do not leak it back out.
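To make the first half of that approach concrete, here is a minimal sketch of an outbound prompt filter that redacts likely secrets before a message ever leaves the organization. The patterns and the redact_sensitive helper are illustrative assumptions for this post, not part of any particular product; a production deployment would rely on a maintained classification engine rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- a real deployment would use a maintained
# classification engine, not a short list of regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_sensitive(prompt: str) -> str:
    """Replace likely secrets with typed placeholders before the prompt
    is sent to a third-party chatbot."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Here is our key sk-abc123def456ghi789jkl012, billing email finance@acme.example"
    print(redact_sensitive(raw))
    # -> "Here is our key [REDACTED_API_KEY], billing email [REDACTED_EMAIL]"
```

The same filter can be run over model responses before they are displayed or logged, which covers the second half of the approach.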
This is where solutions like PolymerHQ come in. Our data exposure prevention platform is built for cloud applications, generative AI tools, and retrieval-augmented generation (RAG), keeping AI interactions secure by design without compromising user efficiency.
Our solution uses centralized access controls and smart classification to prevent AI tools from accessing unauthorized information, while real-time monitoring provides visibility into AI activity, helping detect risks before they become breaches. More importantly, PolymerHQ actively nudges users toward secure practices—preventing them from unintentionally sharing sensitive data with AI chatbots and reinforcing compliance without disrupting workflows.
With built-in encryption, tokenization, and scalable security measures, PolymerHQ safeguards AI-driven productivity at every step—so your business can embrace third-party AI tools with confidence.
Request a demo to see our solution in action.