Data loss prevention (DLP) products were once hailed as the panacea of security programs. Now they are more likely to be deemed an expensive investment with low ROI and a high time cost for security teams that barely use them. DLPs are supposed to flag sensitive data exfiltration and give insights into which applications are trafficking key information. Their primary function is to detect data leakage and exfiltration by monitoring sensitive data in use, in motion, and at rest.
Data loss is a big problem. Worse still, it is growing exponentially with the adoption of cloud-hosted solutions. In fact, a staggering 878.17 million data records were compromised around the globe in January 2021 alone.
That’s more than the whole of 2017, which means 2021 will likely be a record-breaking year for data breach volumes.
While only big data loss incidents like the EA data breach make headlines, it’s a risk faced by organizations of all sizes, industries, and types.
With the magnitude and frequency of data breaches, you’d expect an emergence of foolproof DLPs to tackle the problem, right? Wrong!
Why DLP tools don’t cut it
Even though DLPs are understandably popular, many miss the mark when it comes to preventing actual data loss.
Sad as it may sound, a significant number of DLP solutions perform poorly in detecting and investigating incidents while and after they happen.
Here’s why current DLPs don’t fit the needs of security teams anymore, and how your business can rethink its approach towards data loss.
Reason #1: Low ROI from investments in existing DLPs
Many companies and businesses that invest in DLPs quickly find out that these tools are difficult and time-consuming to deploy.
The hard truth is that it requires a significant amount of work to set up these tools before you can maximize their potential. On top of that, DLPs are inherently high maintenance.
And then there’s the matter of cost. According to a ComputerWeekly survey, the top challenge in adopting DLP was that it is too expensive.
Now, imagine a scenario where you’ve invested in a DLP solution only for it to fail to detect data breaches, or to cause glitches in your systems. Unfortunately, such incidents are far too common with many DLPs.
Just look at this customer review of Symantec on TrustPilot:
“Not long ago they fraudulently minded over 30 thousands HTTPS certificates for websites without validating to whom they really belong. They don’t have the security of their customers in mind, they don’t care. Avoid using the services of this company.“
With such a “solution” being part of your security system, you could very well be staring at possible data leaks. And, a single data leak can lead to ongoing costs and lost opportunities.
Reason #2: Lack of actionable insights
DLPs are typically designed to recognize patterns in structured, regulated data. The pattern-matching methods they rely on (typically regular expressions) are poor approximations of how large amounts of sensitive data are actually shared.
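To see why regex-based detection approximates poorly, consider a sketch of a typical 16-digit card-number pattern (illustrative only; commercial products use more elaborate but conceptually similar expressions). It flags everything that merely looks like a card number:

```python
import re

# Illustrative DLP-style pattern for 16-digit card numbers:
# 16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

messages = [
    "Customer card: 4242 4242 4242 4242",   # genuinely sensitive
    "Tracking number 1234 5678 9012 3456",  # harmless shipping ID
    "Invoice ref 9999-8888-7777-6666",      # harmless invoice number
]

for msg in messages:
    flagged = bool(CARD_PATTERN.search(msg))
    print(f"flagged={flagged}  {msg}")
```

All three messages are flagged, but only the first is sensitive: the pattern captures the shape of the data, not its meaning or context, which is exactly the gap described above.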
“Our DLP solution provides 60% accuracy, in the best of times.” – CIO of a 50k+ employee organization
When a DLP tool cannot separate irrelevant data from what matters most, the ripple effect is a lack of accurate, actionable insights, and security analysts cannot prioritize and respond to threats efficiently and intelligently.
Moreover, traditional DLP tools cannot decipher suspicious exchanges of information over communication platforms such as Slack, HipChat, SharePoint, or Gchat.
Suppose, for instance, that two employees are colluding to export customer names out of Salesforce to sell to a competitor, and they coordinate their plan over email. Traditional DLP tools cannot detect and block this loss of intellectual property.
Reason #3: High false signals
Data usage patterns are dynamic and complex, requiring effort to adjust and fine-tune DLP policies.
Due to this complexity, existing DLP tools tend to generate many false positives – alerts for perfectly normal and safe activity.
Worse still, a single false signal can trigger dozens of notifications, prompting the security team or admin to waste time on unnecessary investigation. The result is a drain on IT and security productivity, commonly known as “alert fatigue.”
Reviewing Forcepoint DLP, Websense, an IT company, says,
“We had many false positives with the fingerprinting technology.”
Reason #4: Low efficacy
DLPs are notoriously ineffective at preventing data loss caused by insider threats. These tools are often trivial for technical users to bypass.
In essence, this means that if an individual really wants to exfiltrate data, they’ll most likely find a way to do so.
That helps explain why Microsoft Endpoint DLP is unable to prevent data loss instigated by insiders. According to Egress research, 85 percent of organizations that use Microsoft 365 DLP tools suffer email leaks – despite the company’s claim that its “traditional static rules” prevent email breaches.
Reason #5: Low sensitivity
A DLP tool by design is supposed to catch every incident of data loss before it wreaks havoc. However, existing DLPs don’t do this.
With such a fatal shortcoming, breaches and attacks can go undetected for long periods, aggravating the extent of the damage.
In addition, a DLP tool may fail to keep pace with changes in an organization’s IT infrastructure, processes, and business units, rendering previously configured DLP controls ineffective.
What does a next-generation DLP look like?
Given the large volume of traffic driven by cloud adoption and an ever-wider range of data collection endpoints, any next-gen DLP should at least be capable of the following:
- Natural Language Processing (NLP) capability
- Machine learning feedback loop to learn new patterns
- Allow feature settings that can incorporate on-the-ground heuristics
- Be able to operate without the luxury of large training data sets
- Easy to install and configure
- Separate metadata from real data to allow a wider audience access to the dashboard
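To make the second and third capabilities above concrete, here is a toy sketch of a detector with an analyst feedback loop. The `FeedbackDLP` class and its token-weight scoring are hypothetical stand-ins for whatever model a real next-gen DLP would use (NLP embeddings, trained classifiers, and so on); the point is only to show how analyst verdicts can update the system instead of leaving it on static rules:

```python
from collections import defaultdict

class FeedbackDLP:
    """Toy DLP scorer with an analyst feedback loop (illustrative only)."""

    def __init__(self, threshold=1.0):
        self.weights = defaultdict(float)  # learned token weights
        self.threshold = threshold

    def score(self, message):
        # Sum the learned weight of every token in the message.
        return sum(self.weights[tok] for tok in message.lower().split())

    def is_sensitive(self, message):
        return self.score(message) >= self.threshold

    def feedback(self, message, was_sensitive, lr=0.5):
        # Analyst verdicts nudge token weights up or down, so the detector
        # adapts to on-the-ground heuristics rather than fixed patterns.
        direction = 1.0 if was_sensitive else -1.0
        for tok in message.lower().split():
            self.weights[tok] += direction * lr

dlp = FeedbackDLP()
dlp.feedback("export salesforce contact list", was_sensitive=True)
dlp.feedback("weekly salesforce demo invite", was_sensitive=False)

print(dlp.is_sensitive("export the contact list tonight"))  # → True
print(dlp.is_sensitive("weekly demo invite"))                # → False
```

Even with two analyst verdicts, the scorer starts separating exfiltration-like language from routine chatter – and, unlike a static regex, it keeps improving as feedback accumulates, without requiring a large upfront training corpus.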