A hacktivist group has leaked over one terabyte of data from Disney’s internal Slack channels.
Here’s everything we know so far.
Who’s responsible?
The group behind the attack calls itself NullBulge. It claims to have stolen 1.1 TiB of data from 10,000 internal Disney Slack channels.
“1.1TiB of files and chat messages. Anything we could get our hands on, we downloaded and packaged up. Want to see what goes on behind the doors?” attackers said in an X post.
According to the attackers’ statement on the dark web, the stolen data includes unreleased projects, raw images, code, login details, links to internal web pages, and other sensitive information.
In addition to a trove of intellectual property, the leaked data contains personal information of hundreds of Disney employees, including phone numbers, email addresses, and names.
NullBulge, the group claiming responsibility for the breach, describes itself on its website as a “hacktivist group protecting artists’ rights and ensuring fair compensation for their work.”
The group reportedly targets companies that commit one of three “sins”: promoting cryptocurrency, using AI-generated artwork, or stealing from Patreon creators, other artist-support platforms, or artists in general.
“Disney was our target due to how it handles artist contracts, its approach to AI, and its pretty blatant disregard for the consumer,” the hacking group said in a statement.
How did the Disney data breach happen?
It appears NullBulge had help from a malicious insider in obtaining Disney’s internal communications data.
We know this because NullBulge also leaked the alleged insider’s personally identifiable information, including their name, medical records, and a screenshot of their 1Password dashboard.
According to the hacking group, it exposed the individual as punishment for ceasing to communicate and share information. However, Disney has yet to confirm whether an insider was involved.
Lessons learned
This data breach highlights the dangers of the disgruntled insider threat: an employee who misuses their access privileges to cause harm to their employer.
Unfortunately, the insider threat is all too common, with Verizon research showing that privilege misuse is one of the top causes of data breaches in 2024.
In a SaaS-first world, it’s all too easy for employees to log in to Slack, Microsoft Teams, or Google Workspace and download sensitive information from any device, as long as they have valid credentials.
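To see why credentials alone are such a thin control, consider a minimal sketch (ours, not drawn from the breach reports) using Slack’s Web API via the official `slack_sdk` Python package: any valid token with the right scopes can page through a channel’s entire history from any machine. The token and channel ID below are placeholders.

```python
# Illustrative sketch: a single valid Slack token is all it takes to pull
# channel history. Token and channel ID are placeholders, not real values.
from slack_sdk import WebClient

client = WebClient(token="xoxp-REDACTED")  # any token with history scopes

cursor = None
while True:
    kwargs = {"channel": "C0123456789", "limit": 200}
    if cursor:
        kwargs["cursor"] = cursor
    resp = client.conversations_history(**kwargs)

    # Each page contains messages an insider could quietly copy out.
    for message in resp["messages"]:
        print(message.get("user"), message.get("text"))

    cursor = resp.get("response_metadata", {}).get("next_cursor")
    if not cursor:
        break
```

Nothing in this flow checks where the request comes from or why, which is exactly the gap that least privilege and DLP controls are meant to close.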
The good news is that, with the right tools, you can stop malicious insiders from successfully stealing data.
Here are the steps to take:
- Embrace the principle of least privilege: Ensure that your employees only have access to the information they need to do their jobs, and nothing more.
- Lean on smart data loss prevention (DLP): Smart data loss prevention tools use a combination of user behavior analytics, artificial intelligence, and encryption to monitor how users interact with sensitive data. They autonomously notice when users are acting suspiciously and block their actions. This is exactly the kind of technology that could have caught the Disney breach before data left the building (see the simplified sketch after this list).
- Deploy active learning: Active learning solutions work alongside your DLP tool to nudge users towards security-conscious decisions. For example, if a user attempts to download sensitive information they shouldn’t, your DLP tool blocks the action, while your active learning solution educates the user on why it was unsafe.
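To make the DLP idea concrete, here is a deliberately simplified, hypothetical sketch of pattern-based scanning of outbound messages. The patterns, function names, and redaction behavior are illustrative only; production DLP tools layer behavioral analytics and far richer detectors on top of this.

```python
import re

# Illustrative detectors only; a real DLP engine combines many detectors
# with user-behavior context rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "slack_token": re.compile(r"\bxox[baprs]-[\w-]+\b"),
}

def scan_message(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def handle_outbound(text: str) -> str:
    """Block (here: redact) a message that trips any detector; otherwise pass it through."""
    hits = scan_message(text)
    if hits:
        # In a real deployment this is where you would block the action,
        # alert the security team, and nudge the user with training content.
        return f"[REDACTED: matched {', '.join(hits)}]"
    return text

if __name__ == "__main__":
    print(handle_outbound("Ping me at jane.doe@example.com, the prod token is xoxb-123-abc"))
    print(handle_outbound("Lunch at noon?"))
```

The point of the sketch is the workflow, not the patterns: detect sensitive content at the moment a user acts on it, block or redact automatically, and feed the event back into user education.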
How Polymer can help you beat insider threats
Polymer’s SaaS DLP and active learning tool is designed to effectively neutralize insider threats. Here’s how our solution can help you manage malicious insiders:
- Monitor user behavior: Polymer leverages machine learning to automatically monitor user access and activities across SaaS applications, providing a comprehensive analysis of behavior. When users engage in risky actions, our engine can automatically redact or block the user, notifying your IT team for further investigation if needed.
- Prevent SaaS app exposures: Our solution identifies and safeguards sensitive data, ensuring that only authorized users can access and modify it. It discovers both structured and unstructured data within your cloud applications, detecting sensitive information in documents, chats, databases, and more. Once identified, our automated, self-learning engine employs zero-trust principles to take the most secure actions to protect your data as it is accessed.
- Mitigate insider threats: Polymer tracks the types of data an employee handles and how it is shared. We compute metrics from various dimensions across SaaS platforms to create a Data Exposure Risk Score, reflecting the frequency and severity of individual user actions, enabling you to take appropriate measures.
- Provide active learning: While security awareness and periodic training are important, they often fall short in changing behavior. Polymer’s SaaS DLP nudges users when sensitive data is shared insecurely, offering continuous training and driving real results. Automatic remediation adds an extra layer of security.