What is an Insider Threat?
An insider is any person with authorized access to systems or data that gives them the ability to take potentially harmful actions. Insiders range from business partners and third-party contractors to full- and part-time employees: essentially, all valid users with access to resources that you'd rather keep out of the wrong hands. People are just people, but when they mishandle data, intentionally or not, they become an insider threat.
Types of Insider Threats
The first two types of potential insider threats we'll discuss are by far the most common and the most difficult to identify, since they include the vast majority of employees in most organizations.
1. Unintentional Insiders
Unintentional, or accidental, insiders are well-meaning people who simply make mistakes while working. These are the employees or contractors who fall prey to human error: social engineering, phishing scams, or spear phishing. The vast majority of insider risk comes from unintentional insiders, which makes reducing your human attack surface particularly challenging, especially since most security awareness training differs significantly from real work activities. In other words: people aren't making the mental connections between security videos and daily tasks.
2. Negligent Insiders
Negligent insiders are different from well-meaning employees who make mistakes. Negligent insiders simply don't want to take the extra steps necessary to protect corporate assets. This may look like:
- Failing to apply a security patch because it might crash other systems
- Ignoring misconfigurations in system setups
- Storing user IDs and passwords in easy-to-access locations
- Dropping passwords into collaboration apps like Slack or Teams rather than using a password manager
A bit of an aside, but one often-overlooked form of negligence is a security or IT leader failing to create and test solid incident response plans, simply because doing so takes a lot of time.
3. Malicious Insiders
An insider who intends to do harm to your organization often has telltale signs and behavioral traits that can be used to identify them as a threat actor. Carnegie Mellon University, the leading research body focused on insider risk, offers a full ontology for spotting an insider threat. We have included some of those high points below.
Signs of Malicious Insider Threats
Key indicators of insiders who are likely to commit malicious activity include:
- Overall disgruntlement
- Unusual log-in times
- Working from unusual locations
- Declining job performance
- Excessive downloads
- Unusual absenteeism
Employees Can Be Your First Line of Defense
Basically, any suspicious behavior can be cause for concern when it comes to malicious insiders, which is why it's so vital to build trust with your employees. Coworkers and front-line managers are typically the first to notice when someone needs additional support. If your organization has put in the work to build a positive company culture, catching an emerging insider threat early means there is still time to deter negative outcomes. For example, if someone is struggling financially, they can be offered opportunities to work overtime or receive financial counseling. Whatever the issue, human support can de-escalate problems and reduce someone's risk of acting against their employer's interests.
Some insider risk leaders tell stories of not only de-escalating frustrated employees who showed emerging signs of risk, but coaching those employees to success, and even promotion, over time.
Motivations Behind Malicious Insider Threats
A malicious insider is often motivated by the same factors that would lead any person to do something wrong: financial gain, exacting revenge on a boss or employer (in the case of disgruntled employees), and even thrill seeking. Psychological predisposition may be a factor, but external life pressures are sometimes all it takes to push potential bad actors over the edge from normal activity to malicious behavior.
Financial Gain
Anyone with financial troubles could potentially be recruited by external threat actors to sell their legitimate credentials, give outsiders access to corporate assets, or even steal sensitive data themselves to sell on dark marketplaces. That doesn't mean an organization should suddenly be suspicious of every employee who may be struggling financially. What it does mean is that committing to fair pay scales will likely save an organization more than it costs in the end: employees who feel valued and can cover their bills are far harder to recruit into wrongdoing.
Revenge or Grievance
An insider who is disgruntled is usually fairly easy to spot, but most people who feel disgruntled simply move on. The kind of person who wants to exact revenge may have other psychological issues, may have endured prolonged mistreatment from a manager or coworker, or may feel they have been passed over for a promotion. Creating a healthy corporate culture where employees feel valued is a wise, proactive approach to mitigating this risk. To support this goal, human resources officers can implement open-door policies where employees can share frustrations and get support, as part of a key insider threat prevention strategy.
Ideological Motives
If your organization is at the forefront of controversial activities, or your business model has political implications, you are more likely to attract "hacktivists": people who are willing to become insider threat actors as an act of protest. Especially in today's charged political environment, it's essential to protect your digital assets and sensitive systems, because ideological threat actors may not show warning signs, so tools like user behavior analytics solutions likely won't pick up any unusual activity until it's too late.
Consequences of Insider Threats
Data Breaches
After Disney's recent breach, one has to wonder: would behavioral analytics have detected the external threat actor who used the credentials of a compromised insider to download sensitive data? Unlikely. The better approach would have been to identify and remediate sensitive data being uploaded to Slack when it happened, ideally with an automated next-gen SaaS DLP solution. These days, even the best security controls continue to be bested by determined threat actors. So maybe it's time to take a data-centric approach to ensure that there's nothing in your SaaS to steal. After all: no data, no theft.
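To make that data-centric idea concrete, here is a minimal, hypothetical sketch of scanning an outbound chat message for sensitive patterns before it lands in a shared channel. The regexes and the quarantine logic are illustrative assumptions only; production DLP engines (Nightfall's included) rely on trained detection models rather than a handful of regular expressions.

```python
import re

# Illustrative-only detectors; real DLP engines use ML-based detection with
# context, validation, and far broader coverage than these toy patterns.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_message(text: str) -> list[str]:
    """Return the types of sensitive data found in an outbound message."""
    return [label for label, pattern in PATTERNS.items() if pattern.search(text)]

def handle_outbound_message(text: str) -> str:
    """Decide whether a message may post, or should be quarantined for remediation."""
    findings = scan_message(text)
    if findings:
        # In a real workflow: redact the content, notify the sender, and open a
        # triage task so the author can remediate their own mistake.
        return "quarantined: " + ", ".join(findings)
    return "allowed"

print(handle_outbound_message("prod key is AKIAABCDEFGHIJKLMNOP"))  # quarantined: aws_access_key
print(handle_outbound_message("lunch at noon?"))                    # allowed
```

The point isn't the patterns themselves; it's that remediation happens at the data layer, at the moment of exposure, regardless of who or what moved the data.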
Financial Losses
Insider threats may steal actual cash from accounts, but more often, the financial cost is due to the hefty price tag on investigations, breach notifications, legal costs, and remediation. Even in cases of unintentional insider threats, the financial impact remains high.
Due to lower cybersecurity budgets, mid-market companies are also often hit harder than their enterprise counterparts by the financial impact of a breach. (This raises the question of which costs more: covering gaps proactively, or rolling the dice and hoping not to get hit by an insider attack.)
Reputational Damage
Reputational costs tend to be felt more deeply by organizations in the mid-market and small enterprise sector. Since these companies are still building a brand in the market, damaged relationships with existing clients can be catastrophic, especially if they have serious growth goals from investors.
That cost is even greater in regulated B2B spaces, where breaking business associate agreements governing HIPAA data protection can result in lawsuits and civil penalties. B2B startups in regulated spaces are particularly vulnerable to the negative impacts of an insider threat.
What's bizarre is that enterprise B2C giants tend to skate through these scandals, even when security protocols were clearly neglected. It would seem that the average consumer either cares less or has given up hope of ever keeping their personal and credit card data out of breaches.
Challenges of Insider Threats in Cybersecurity
Trust vs. Monitoring
Treating all employees as potential threats certainly follows the principle of least privilege. However, mitigating internal threats doesn't have to mean treating your people as if you expect them to act maliciously. This is why it's vital to evaluate your approach to building an effective insider threat mitigation program. Should you implement user activity monitoring, using behavioral analytics to track every action your employees and contractors take? Or should you simply do a better job of protecting your data and critical systems? Is there anything in between? Your approach to insider threat mitigation has real implications for whether or not your program will be effective.
Detection Difficulties
Detecting insider incidents is one of the most difficult tasks a cybersecurity response team has. That's why, year after year, the Verizon Data Breach Investigations Report names insider threats as the most expensive breach type. Insider threat activities are not always anomalous, and they don't always show up in detection models. In fact, predicting human behavior with any accuracy would require a tremendous amount of personal information and surveillance data for effective threat modeling.
Even then, it's often difficult to distinguish normal behaviors from risky ones. For example, what if an employee's job function requires privileged access, and they need to download a file to their personal device in order to work on it or take the file to a presentation? This would create a false positive in many insider threat solutions.
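To illustrate the false-positive problem, here is a deliberately naive behavioral rule (not any particular vendor's model) that flags a download when it sits a few standard deviations above a user's baseline. The privileged analyst pulling a large dataset for a presentation trips it just as surely as an exfiltration attempt would.

```python
from statistics import mean, stdev

def is_anomalous(history_mb: list[float], today_mb: float, threshold: float = 3.0) -> bool:
    """Flag today's download volume if it sits more than `threshold` standard
    deviations above the user's historical daily average."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return today_mb > mu
    return (today_mb - mu) / sigma > threshold

# A privileged employee who averages ~50 MB/day downloads a 900 MB file to prep
# an offline presentation. The rule fires even though the intent is benign:
baseline = [45, 52, 48, 55, 50, 47, 53]
print(is_anomalous(baseline, 900))  # True -> a false positive for the SOC to chase
```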
Or, what if malicious insider threats go undetected, because they are using a phone camera to take pictures of data on a screen? No amount of UEBA or behavior-based security measures is going to prevent that activity.
Detection and Mitigation Strategies
Implementing Robust Policies
Policies around strict access controls, data handling, and sharing protocols are step one. The truth is, your policies are only as effective as your ability to enforce them. So, part of having strong policies that mitigate unwanted access is finding a data-centric way to enforce what people can and can't do with that data.
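As a rough sketch of what data-centric enforcement can look like in practice (the data types, destinations, and actions below are hypothetical, not any product's actual policy schema), a policy can be written as code and evaluated at the moment data moves:

```python
# Hypothetical policy: which sensitive data types may land in which destinations,
# and what to do when a rule is violated.
POLICY = {
    "api_key":     {"allowed": {"secrets_manager"}, "on_violation": "block_and_notify"},
    "phi":         {"allowed": {"ehr_system"},      "on_violation": "redact_and_notify"},
    "source_code": {"allowed": {"private_repo"},    "on_violation": "alert_only"},
}

def evaluate(data_type: str, destination: str) -> str:
    """Return the enforcement action for moving `data_type` to `destination`."""
    rule = POLICY.get(data_type)
    if rule is None or destination in rule["allowed"]:
        return "allow"
    return rule["on_violation"]

print(evaluate("api_key", "public_slack_channel"))  # block_and_notify
print(evaluate("source_code", "private_repo"))      # allow
```

Whatever form the policy takes, the enforcement decision attaches to the data itself rather than to a judgment about the person moving it.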
Employee Training and Awareness
Most people approach employee training by purchasing a canned video series that walks individual users through best security practices, followed by a series of short quizzes. Unfortunately, research shows that these programs have little to no impact on employee behavior. In fact, it's not just negligent employees who miss the mark. Accidental insiders fall prey to common mistakes all the time—and are not helped by video-based training (even if it does check a compliance box).
On-the-job data security training is likely to be far more effective, supporting employees with awareness as well as the opportunity to proactively act on their own mistakes in the context of their daily work. For example, suppose an employee pushes code containing a hard-coded API key to GitHub. Next-gen cloud DLP solutions like Nightfall AI use highly advanced detection models to identify data handling errors like this. When an error is identified, the employee is invited to take action and triage the alert themselves. The result? They've experienced highly tailored, in-the-moment security training, and they've taken a policy violation off of their security team's plate.
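As a simplified illustration of catching a data handling error at the moment it happens, the sketch below is a hypothetical pre-commit hook that scans staged files for secret-like strings before they ever reach GitHub. The patterns and the hook itself are assumptions for illustration, not Nightfall's detection engine.

```python
#!/usr/bin/env python3
# Hypothetical pre-commit hook: block a commit that appears to contain secrets,
# so the author can remediate their own mistake before it reaches GitHub.
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                        # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]+['\"]"),  # hard-coded API key assignment
    re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]+['\"]"),     # hard-coded password assignment
]

def staged_files() -> list[str]:
    """List files staged for the current commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue
        findings.extend((path, p.pattern) for p in SECRET_PATTERNS if p.search(text))
    if findings:
        print("Possible secrets detected; commit blocked. Remove them and commit again:")
        for path, pattern in findings:
            print(f"  {path}  (matched {pattern})")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit and made executable, this gives the employee the same in-the-moment feedback loop described above: the mistake surfaces immediately, and the fix stays in their hands.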
Employee Behavioral Monitoring Solutions
Security professionals are beginning to treat continuous monitoring as non-optional for developing a strong security posture, and that doesn't just apply to one area of the environment. Mitigating insider risk requires continuous monitoring too. But this raises a question: which should you continuously monitor, data or employees?
Given the risk of pushing employees toward disgruntlement by creating an environment of heavy surveillance, perhaps the best approach is to monitor people lightly, monitor the data aggressively, and enforce policies for both.
Utilizing Technology for Detection
Again, the question is what you want to detect. At Nightfall AI, we invested heavily in building ground-breaking data detection models that are not only light and easy to implement, but are paired with a robust policy engine. In this way, we give you all the benefits of continuous monitoring with none of the CASB noise, and none of the employee monitoring negativity.
Best Practices for Insider Threat Management
Developing a Comprehensive Program
Proactive Risk Mitigation
Taking a comprehensive approach to mitigating insider risk is essential. Experts advise collaboration across cybersecurity, IT, HR, and legal teams. This enables organizations to proactively deter and mitigate risk through policy, healthy employee management, and a culture of both security and positivity.
Build a Culture of Trust
In an environment where employees and employers feel they are on the same team, rather than "us versus them," people are more likely to be positively motivated to engender and honor trust. Collaborating with employees by inviting them to remediate data handling errors themselves, before sending violations to security for investigation, reduces the likelihood of all three types of insider threats: unintentional, negligent, and malicious.
Reactive Risk Mitigation
While proactivity and building a culture of security significantly reduces your chances of emerging malicious threats, it's still important to have a backstop. After all, every organization needs a "goalie" to catch what slips through the defense. That's where Nightfall comes into play–we'll be your cloud data security MVP.
Comprehensive Risk Mitigation
To ensure your organization's ability to respond to insider threats immediately and prevent heavy losses due to a breach, you can:
- Build a human firewall to educate and empower employees to be part of the remediation process.
- Implement a tool capable of AI-powered data detection and automated response in your riskiest SaaS applications, like GitHub, Slack, Teams, Google Drive, Zendesk, Jira, and more.
- Funnel highest risk or trending alerts to your security analysts for further investigation.
- Reduce the number of false positive alerts your security operations team deals with, so they can stay focused on responding to real incidents.
Conduct Regular Audits and Assessments
There are a number of key ways to understand the specific risks that insiders could pose to sensitive data types like intellectual property (IP), API keys, personal data, financial information, and more. What's important is that as compliance regulations continue to move in the direction of "continuous monitoring" over point-in-time assessments, organizations don't remain dependent on annual testing or assessments alone. Rather, a variety of assessments and tests is needed to return an accurate picture of security gaps and how well they are being remediated over time. In each assessment type, the risk of insider threats should be factored into evaluation and findings.
- Risk assessments: High-level evaluations of policy, configurations, and vulnerability scanning.
- Compliance audits: Detailed deep-dive audits conducted by credentialed third party auditors; usually required annually in regulated industries.
- Penetration testing and red teaming: These range from automated tests to commissioned engagements with white hat hackers who simulate an attack. Ask any ethical hacker and they will tell you that cracking passwords and gaining what systems will read as legitimate access to secrets and sensitive data is one of their easiest tasks.
- Purple team exercises: Collaborations between "red" security testing teams and "blue" IT teams to identify and remediate security gaps together.
- Tabletop testing: Simulated attack scenarios in which every member of an incident response plan, including legal, security, IT, and executive staff, walks through their response steps. These exercises expose gaps that can cause an incident response plan to fail, providing an opportunity to remediate and update plans.
- Security posture audits: These are scans of your workspace environments to find security gaps like sensitive files that have been shared improperly and are now open to unauthorized access, or unprotected critical assets.
Monitor Your Data Continuously
Use an AI-powered DLP solution that accurately detects, alerts on, and automates remediation of sensitive data everywhere employees work. This is very different from behavioral monitoring, as it focuses on remediating the data and elevating your people as partners in cybersecurity. Security is, after all, a team sport.
Learn More About Nightfall AI
Nightfall AI provides next-gen DLP across your environment, with a special focus on your "hardest to reach" areas, namely your cloud-based SaaS applications and workspaces. Our philosophy is simple: create the most powerful AI detection engine on the market, and empower employees to be part of the solution.
See Nightfall in action by scheduling your own custom demo today.