Much is made of the multitude of outside security risks and vulnerabilities faced by enterprises, and rightly so.
That said, many organizations may be overlooking potentially their most harmful threat: their employees and other trusted insiders.
Whether intentional or unintentional, insider risk and insider threat come in many forms and have harmful consequences – and in this post-pandemic era marked by economic uncertainty and workforce churn, they are on the rise.
A 2022 cost of insider threat survey by Ponemon Institute [subscription required] found that insider-led cybersecurity incidents have increased by 44% over the last two years, with average annual costs of known insider-led incidents up more than a third to $15.38 million.
This is detrimental on many fronts because, ultimately, “a company’s intellectual property is a critical asset,” noted Paul Furtado, vice president analyst at Gartner. “The dissemination of that information to outside parties or competitors can have a material effect on a company’s revenue,” he said, “or can negatively impact their brand.”
Or potentially worse.
What constitutes insider risk and insider threat?
The terms “insider risk” and “insider threat” are often used interchangeably, but they are distinct.
Insider risk refers to everyone connected to a given company’s systems. Whether an employee, contractor or third party, if they have – or have had – authorized access, they pose a risk. They have the potential to act in a way that could negatively impact an organization, whether that be maliciously or unintentionally, according to Furtado.
“When looking at insider risk, we are looking at 100% of the connected employees/contractors that have access to the data of the organization,” he explained.
An insider threat, meanwhile, refers to intent – that is, specific users who commit isolated acts and are motivated by malicious goals. For instance, a departing employee taking proprietary company data with them when they leave, or a disgruntled employee deleting important or sensitive information from a company server or cloud account.
“The best way to describe it is that every insider threat started as an insider risk, but not every insider risk becomes an insider threat,” Furtado explained.
Examples of outright insider threat include espionage, fraud, theft of sensitive data, deliberate destruction, damage or obstruction (sabotage), or collusion with – or pressure from – third parties.
The leaking of a company’s sensitive data can also very often be unintentional – due to an accident (sending an email to the wrong recipient), carelessness or other negligence. Similarly, employee credentials could be compromised due to outside exploitation.
Don’t import insider risks or threats
Also, it goes both ways, Furtado pointed out. When a company hires an employee, any data that new hire brings with them could create a legal liability. For instance, if an engineer is hired from a competitor and brings in prototype information from that competitor, their new employer could be found liable for accepting and using proprietary data.
A data exposure report from cybersecurity software company Code42 [subscription required] indicates the frequency of such information transferal: 63% of employees say they brought data from their previous employer to use at their current job. Similarly, 71% of organizations said they were unaware of how much sensitive data their departing employees typically take with them.
Ultimately, it comes down to human nature, said Carolyn Duby, field CTO with hybrid data software platform company Cloudera. “No matter how much technology is applied to secure an infrastructure, there will always be risks associated with the way humans behave,” she said. “Human behavior is often the weakest security link.”
While insider risks and insider threats have posed significant issues for enterprises for some time – long before the current digital age – they have only become more prevalent amidst the so-called “digital revolution.” Data is accumulating faster by the day, and while it is invaluable to enterprises, it also increases their vulnerability.
This has been exacerbated in just the last year or so amidst the COVID-19 pandemic and the ensuing Great Resignation (or “Big Quit” or “Great Reshuffle”), a phenomenon that started in the U.S. and has since gone global.
It’s been widely reported that the largest exodus of employees on record occurred in 2021. In November 2021 alone, nearly 4.5 million people in the U.S. voluntarily resigned, setting an all-time monthly record.
This mobility of people has intermingled with the abrupt shift – in some cases overnight – to remote work. All this has created “a perfect storm for sensitive data to leave organizations,” said Furtado.
In many instances, organizations were not prepared to move to a remote workforce on the scale that they had to, he pointed out. Along with this, the security visibility afforded in an office setting is greatly diminished in the world of remote work environments.
“Feeling comfortable in their own spaces and knowing they don’t have someone looking over their shoulder or sitting next to them, (employees) may feel empowered to ‘explore’ their network looking for sensitive information,” said Furtado.
Duby agreed: “When you work remotely, you’re less connected, right? There’s less oversight.”
Also, employees may simply be looking to make their work life easier – thus unwittingly putting their organization at risk – by downloading sensitive data to non-corporate devices or non-corporate approved apps.
A similar contributing factor is the rise in BYOD (bring your own device). According to Tech Pro Research, 59% of organizations practice BYOD. And according to Microsoft research, 67% of employees use their own devices at work.
Experts point out that this has only broadened the attack surface for cybercriminals, while also creating a multitude of data silos that are far outside an organization’s control.
Then there’s the general drying up of fraud opportunity elsewhere. For instance, billions of dollars are estimated to have been stolen from relief programs amidst COVID-19. But with the pandemic abating and governments cracking down on fraudsters, “people reaping the harvest over in COVID-land are now starting to turn their attention to other areas,” Duby said.
Policy and procedure basics
No company is immune to insider threat – so it is essential that they do as much as they can to protect themselves, experts caution.
The first, most basic but essential layer of protection is the development of a formal insider risk management program, Furtado said. This should clearly establish and outline policies and rules around data, data handling and what employees, contractors and other insiders can – and more importantly, can’t – do with data. And, just as importantly, it should be transparent and communicated to everyone in the organization.
“This is not something that should be rushed,” Furtado emphasized. “You don’t have the luxury of getting this wrong – the negative impact of a poorly run insider-risk program can be devastating to the culture of an organization and actually cause more risk with people leaving.”
Other experts suggest performing regular threat and risk analysis, providing ongoing training and adopting “zero trust” or need-to-know access models.
Establishing stringent, meticulous offboarding solutions is also critical to reducing risks of insider threats and data breaches, according to Jony Fischbein, CISO with Check Point Software Technologies. As part of this, logs should be checked thoroughly before an employee leaves to ensure that no data has been transferred to an external source. Furthermore, companies should continue to regularly monitor accounts to ensure that all previously granted access has been revoked, he said.
“This is where a lot of organizations tend to fall down, particularly when they’re more focused on the new talent that’s coming in rather than the talent they’re letting go,” Fischbein wrote in a blog post on the World Economic Forum website. “It’s one of the rare instances in cybersecurity where looking back is just as important, if not more so, than looking forward.”
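The offboarding check Fischbein describes – reviewing a departing employee’s activity logs for transfers to external destinations – can be sketched in a few lines. This is an illustrative sketch only: the log format, field names and the domain allow-list are invented for the example, not any particular product’s schema.

```python
# Hypothetical sketch: scan a departing employee's activity log for
# data transfers to destinations outside the company's own domains.
# The event schema and the allow-list below are assumptions for
# illustration, not a real product's log format.

INTERNAL_DOMAINS = {"corp.example.com"}  # assumed internal allow-list


def flag_external_transfers(events, user):
    """Return transfer events where `user` sent data to an external domain."""
    flagged = []
    for e in events:
        if e["user"] != user or e["action"] != "transfer":
            continue
        dest_domain = e["destination"].split("@")[-1]
        if dest_domain not in INTERNAL_DOMAINS:
            flagged.append(e)
    return flagged


events = [
    {"user": "alice", "action": "transfer",
     "destination": "alice@corp.example.com"},
    {"user": "alice", "action": "transfer",
     "destination": "alice.personal@gmail.com"},
]
print(flag_external_transfers(events, "alice"))
```

In practice this kind of rule would run against real DLP or email-gateway logs as part of the offboarding checklist, alongside the access-revocation audits Fischbein recommends.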
Artificial intelligence and behavior change
Signature-based detection is great for already known threats. But a behavior-based, AI-powered approach can adapt to new threats by looking for anomalies such as changes in the behavior of a server or endpoint device, Duby said.
This approach can enable companies to develop “good cybersecurity hygiene” practices, such as evaluating system logs to identify misconfigurations before they become vulnerabilities in production environments, or uncovering anomalies such as employees having access to systems that none of their peers do.
In understanding all of their systems and applications and who has access to what and why, they should always keep an eye on the flow of data, data patterns and user behavior, Duby said. And geographically distributed organizations – particularly those with remote work models – must be able to manage policy differences across diverse teams, regions and specific locations.
“This requires more than technology changes,” she said. “It requires a new culture of security.”
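One of the anomalies Duby mentions – an employee holding access to a system that none of their peers do – lends itself to a simple peer-group comparison. The sketch below is a minimal illustration under assumed data: the team structure, access map and function names are invented, and a real deployment would draw these from an identity-governance system.

```python
# Hypothetical sketch of a peer-group access check: flag any user who
# can reach a system that no same-team peer can. The access map and
# team assignments are invented for illustration.

from collections import defaultdict


def peer_anomalies(access, teams):
    """access: user -> set of systems; teams: user -> team name.
    Returns (user, system) pairs where no same-team peer shares access."""
    by_team = defaultdict(list)
    for user, team in teams.items():
        by_team[team].append(user)

    anomalies = []
    for user, systems in access.items():
        peers = [p for p in by_team[teams[user]] if p != user]
        for s in systems:
            # Flag only if the user has peers and none of them hold access.
            if peers and not any(s in access.get(p, set()) for p in peers):
                anomalies.append((user, s))
    return anomalies


access = {
    "ann": {"crm", "payroll"},  # payroll access no teammate shares
    "bob": {"crm"},
    "cyd": {"crm"},
}
teams = {"ann": "sales", "bob": "sales", "cyd": "sales"}
print(peer_anomalies(access, teams))  # flags ("ann", "payroll")
```

A flagged pair is a prompt for review, not proof of wrongdoing – the access may be legitimate, which is why this check feeds human investigation rather than automatic revocation.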
Especially when working from home on personal devices, it is critical that employees be trained to avoid a range of simple security lapses. For instance, an employee being on a video call flanked by a whiteboard containing proprietary information, logging into work from a shared device and forgetting to log out, or “not shielding a laptop from prying eyes in a café,” Duby said.
“Creating a culture of security means building appropriate training and awareness campaigns into daily interactions,” Duby emphasized.
A ‘human-centric’ approach
But in combating insider risk and insider threat, companies tend to overlook the obvious: basic human and management skills.
Enterprises must take a “human-oriented approach to keeping up with employees, knowing how they are, what they need – because nowadays, things are tough, right?” Duby said. “You have to know the people in your organization and head these things off.”
As she put it, it’s about doing right by employees, listening to them, understanding and helping them, ensuring that they feel connected and understood. This should also be combined with a culture of open and honest communication.
This is “good, basic management,” she said. “Honestly, it just boils down to the basics of people skills.”
And while it should be a given, “when you get to a large enterprise, it can be very difficult.”
She underscored the fact that comprehensive training and one-on-one engagements shouldn’t come in the form of “repeated doomsday messaging that workers eventually tune out.”
Instead, they should be an integral part of a company’s inclusion and wellbeing activities. “Because by getting to know your employees better, you can identify potentially risky behaviors and address them before leaks occur,” Duby said.
Ultimately, though, even as organizations practice good security hygiene, insider risk and insider threat – and the methods by which they are intentionally or unintentionally carried out – will continue to evolve and grow ever more complex. This requires that companies be extra vigilant.
“I think the story isn’t completely written here,” Duby said. “Because we’re just starting to see the effects of the pandemic and the Great Resignation.”