It’s no secret that the healthcare sector is a prime target for cyber-attacks. Research shows healthcare organizations in the US experienced 1,426 attacks per week in 2022 – up 60% from the year before.
But while healthcare companies spend their limited time and resources shielding protected health information (PHI) from external threats, they often forget that the most dangerous risk lies within the company’s proverbial walls. We’re talking, of course, about insider threats: employees, contractors and third parties who unwittingly or, worse, intentionally steal or leak PHI.
What is the insider threat?
An insider threat is an individual with legitimate access to your company’s resources, such as EMRs, cloud applications or documents containing PHI. While the insider threat is often associated with employees, insiders can also be contractors, researchers, volunteers and other third parties with access to IT infrastructure.
Regardless of the source, the ramifications of an insider breach can be grave. Under HIPAA, healthcare organizations must comply with rigorous data protection rules. Any insider breach puts a company in violation of the regulation, which can result in huge compliance fines, reputational damage, class-action lawsuits and a loss of patient trust.
In fact, research shows that insider breaches often cost twice as much as external attacks, in part because they are likely to go undetected for longer periods of time, meaning more damage can be done.
The insider threat: a real and potent risk
Far from being an abstract issue, the insider threat is very much a real and ever-present risk in today’s healthcare institutions. Verizon’s Protected Health Information Data Breach Report, for example, found that 58% of all healthcare data breaches and security incidents are the result of insiders.
To put the problem in context, here are some recent examples of insider threat incidents within the healthcare industry.
Example one: One Florida hospital discovered two employees had printed out sensitive files containing a wealth of PHI, including social security numbers, names, addresses and more. This wasn’t an isolated incident. The employees had been printing these files for an estimated two years, using the information to make fraudulent benefit claims with the patients’ health insurers.
Example two: In 2017, a Bupa employee with legitimate access to the company customer relationship management system copied the sensitive data of over half a million customers and put the information up for sale on the Dark Web. The incident resulted in Bupa being fined £175,000 by the UK’s Information Commissioner’s Office for failing to safeguard personal data.
Example three: In July 2022, an employee at John Muir Health Walnut Creek Medical created a website designed to better enable employees to discuss medical device usage. However, in creating the website, the employee accidentally included links to spreadsheets containing confidential patient information. The error was discovered in March of this year, and the hospital is currently investigating whether unauthorized entities accessed the spreadsheets.
Types of insider threat
The examples above demonstrate that the insider threat can take many forms, each with varying motivations. While some insiders act with malice, others are well-intentioned employees who never mean to trigger a security incident. With that in mind, we tend to divide insiders into two subtypes: malicious and accidental insiders.
Malicious insider threats are individuals who deliberately set out to negatively impact their business. They may do this for financial gain or because they hold a grudge against the company.
While we would all like to think that our employees are committed to integrity, research shows that almost half of insider breaches are motivated by financial gain—and healthcare information is one of the most lucrative data types. In fact, Accenture research found that 20% of healthcare employees would be tempted to steal confidential information for a substantial sum of money.
More troublingly still, the same research found that 24% of employees know of a colleague or business associate that has either stolen data or sold their login credentials to a malicious entity.
While not malicious in nature, accidental insiders are just as much a risk to compliance and confidentiality as their malicious counterparts. For the most part, these breaches occur due to human error and negligence: the employee who accidentally shares a sensitive file with the wrong recipient, takes a peek at patient records out of curiosity or fails to practice good cyber hygiene.
The move to remote working and cloud-based software has further compounded the problem, creating an even more fertile environment for accidental insiders to leak sensitive data. Outside of the workplace, employees often slack on company data security practices. A 1Password study, for example, found that more than half of all employees with children (51%) allow their children to have access to their work accounts.
The challenges of insiders in healthcare
The insider threat is difficult for healthcare institutions to contend with because employees often require access to sensitive information in order to carry out their workplace duties. While the presence of an external actor within an application or network should, hopefully, trigger intrusion detection alarms, insiders already have legitimate access to these resources.
To make matters more complex, we must also remember that the proliferation of cloud apps like Slack, Teams and more has made user segmentation, data classification and HIPAA compliance more challenging, creating new endpoints that legacy cybersecurity products cannot monitor or secure properly.
This begs the question: how can organizations ensure insiders only access and use sensitive information in a compliant, secure way?
How to defend against insider threats in healthcare
Insider threats are a multi-faceted problem. The solution, however, is relatively straightforward: moving towards zero trust and engineering a security awareness program fit for the 21st century.
Zero trust for insider threats
Zero trust is a security approach in which organizations continuously authenticate users in real time as they interact with company resources. It’s built on the principle of “never trust, always verify”: in essence, you should assume every person within your infrastructure is hostile until they prove otherwise.
Zero trust achieves this through the use of user behavior analytics and contextual risk engines that assess users as they go about different activities. Based on the organization’s pre-defined levels of risk tolerance, compliance mandates and sensitive data policies, the engine then grants, prohibits or limits access to certain resources.
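To make the mechanism concrete, here is a minimal, hypothetical sketch of how a contextual risk engine might grant, limit or deny access. The roles, actions, weights and thresholds are all illustrative assumptions, not any vendor’s actual policy model; a real engine would tune these to the organization’s risk tolerance and compliance mandates.

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_role: str          # e.g. "nurse", "contractor" (illustrative)
    action: str             # e.g. "view", "export", "share_external"
    data_sensitivity: str   # e.g. "phi", "public"
    device_managed: bool    # is the request coming from a managed device?

# Illustrative risk weights per action type.
ACTION_RISK = {"view": 1, "export": 3, "share_external": 5}

def score(ctx: AccessContext) -> int:
    """Combine contextual signals into a single risk score."""
    risk = ACTION_RISK.get(ctx.action, 2)
    if ctx.data_sensitivity == "phi":
        risk *= 2               # PHI doubles the stakes
    if not ctx.device_managed:
        risk += 2               # unmanaged devices add risk
    if ctx.user_role == "contractor":
        risk += 1               # third parties get extra scrutiny
    return risk

def decide(ctx: AccessContext, tolerance: int = 6) -> str:
    """Grant, limit, or deny based on risk vs. the org's tolerance."""
    risk = score(ctx)
    if risk <= tolerance // 2:
        return "grant"
    if risk <= tolerance:
        return "limit"   # e.g. read-only access or step-up authentication
    return "deny"
```

Under these assumed weights, a nurse viewing PHI on a managed device is granted access, while a contractor sharing PHI externally from an unmanaged device is denied outright.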
All of this means that, in the event of a malicious or accidental insider threat, your sensitive information stays safe. With zero trust in place, it becomes extremely difficult for an employee with malicious intent to steal patient data, as your zero trust solution would block them in real time.
Similarly, zero-trust-powered tools can detect when well-intentioned users are about to accidentally share sensitive data, stopping the compliance violation in real time.
Better security awareness
Too often, healthcare institutions host one-off security awareness training sessions, treating their programs as a tick-box exercise. However, these sessions rarely improve security outcomes. In fact, research shows knowledge retention rates drop by 50% when training exceeds two minutes in length.
In order to tackle the insider threat, healthcare organizations need to reimagine how they approach training and security awareness. Instead of continuing with ineffective, archaic in-person training sessions, companies should opt for workflow nudges and prompts, which use positive, real-time reminders to steer people towards better security decisions.
How Polymer can help
Polymer data loss prevention (DLP) brings the principles of zero trust to your organization’s cloud apps, using contextual authentication factors to protect PHI from malicious actors and all forms of insider threat.
Our engine looks at factors such as the user’s identity, the activity being performed, the nature of the data, and the file’s type and location to make a risk-based judgment, stopping insider threats in their tracks.
Beyond enforcing security policies, we’ve embedded psychological nudge theory into our product, producing an educational bot that integrates directly into your cloud apps. When a user attempts to violate a policy, we explain why their action was blocked, in language they understand.
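The nudge pattern itself is simple: pair every block with a plain-language explanation rather than a bare denial. The sketch below is a hypothetical illustration of that pattern, not Polymer’s implementation; the policy names and messages are invented for the example.

```python
# Illustrative policy-to-explanation mapping (hypothetical names/messages).
NUDGES = {
    "phi_external_share": (
        "This file contains patient information. Sharing it outside the "
        "organization would violate HIPAA, so the share was blocked."
    ),
    "phi_unencrypted_download": (
        "Patient records must stay encrypted. Please use the secure "
        "portal to access this file instead."
    ),
}

def nudge(policy_id: str) -> str:
    """Return a block notice with a human-readable explanation."""
    explanation = NUDGES.get(
        policy_id, "This action violates a data protection policy."
    )
    return f"Action blocked: {explanation}"
```

The key design choice is that the user learns *why* the action was stopped at the moment they attempt it, which is what turns an enforcement event into a teaching moment.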
For accidental insiders, this addition helps to reduce repeat offenses while, for disgruntled employees, our nudges are proven to obstruct and deter malicious actions.
Ready to take the next step? Stop insider threats with a complimentary risk scan to discover exposed PHI in your cloud apps.