How to Identify the Insider Threat

Exploiting a 30-Day Window of Opportunity
Although insider-threat incidents within organizations tend to be different case-by-case, says Carnegie Mellon University's Dawn Cappelli, there are similarities and patterns that organizations can look for when mitigating their risks. What are some of the common characteristics among insiders, and how can organizations respond?

Researchers from the CERT program at CMU's Software Engineering Institute have analyzed more than 700 cases of insider threat to develop behavior models of the insiders who could threaten an organization's IT. In their research, they identified four major categories of insider threat, along with who typically commits each type of crime and when. Those categories are: IT sabotage, theft of intellectual property, fraud and espionage.

When insiders steal intellectual property, they usually act within a 30-day window, says Mike Hanley, who coauthored the recently published CERT paper entitled "Insider Threat Control: Using Centralized Logging to Detect Data Exfiltration Near Insider Termination."

"IT sabotage is typically committed by system administrators, programmers, technically sophisticated users, privileged users who become very disgruntled and they actually typically set up their attack before termination ... and carry out their attack after termination," says Cappelli in an interview with Information Security Media Group's Eric Chabrow [transcript below].

Theft of intellectual property or industrial espionage involving trade secrets like scientific information and source code is typically committed by scientists, engineers and programmers, Cappelli explains. "In these cases, this is where we found that most of them steal the intellectual property within thirty days of resignation," she says.

Knowing this, organizations can look for certain patterns of anomalous activity and take action before the data is removed.

"Often, when somebody e-mails things off a network or is burning disks to remove information from the corporate network, that's usually preceded by some sort of data download internally to get the data onto the end-user system," Hanley says. "If we can look for that, we can potentially catch these folks [before] rather than at the point they've committed the crime."

In the interview, the CMU researchers discussed:

  • Common characteristics of insiders who threaten an organization's IT;
  • Organizational efforts to identify and catch disgruntled employees before they can do damage; and
  • Roles of different leaders within an enterprise to mitigate the insider threat.

Cappelli is technical manager of the Insider Threat Center and the enterprise threat and vulnerability management team at the Software Engineering Institute's CERT program. Previously, Cappelli served as director of engineering for the Information Technology Development Center of the Carnegie Mellon Research Institute. Earlier in her career, Cappelli worked as a software engineer for Westinghouse Electric Corp., developing nuclear power plant systems.

Hanley is a member of the technical staff in the CERT program, and has been testing and deploying new software, managing incidents and supporting systems across the globe. He holds a master of science in information security policy and management from Carnegie Mellon and a bachelor of arts in economics from Michigan State University.

The Insider Threat

ERIC CHABROW: Before we get to the findings of the paper, let's discuss the insider who poses this threat. Are insiders a greater threat today to organizations than in the past, and if so, why?

DAWN CAPPELLI: Well, it's kind of hard to tell if they're a bigger threat now than in the past, because we get our information from the media, law enforcement and organizations who tell us about incidents, but in many cases or in most cases ... there's a lag time between when the crime actually happens and when an arrest is made, and certainly when they go to trial. I can tell you that we're keeping very, very busy because we've been finding a lot of insider-threat cases. We have a lot of grad students who spend time capturing those cases in our database. Another interesting fact is the growth in unintentional insider threats. For the past ten years, the CERT Insider Threat Center has been looking at malicious insider threats, but we're now starting to include non-malicious insider threats, and we're about to start a new study of those types of insider threats.

CHABROW: When you say non-malicious insider threat, are you just talking about carelessness on the part of users and employees?

CAPPELLI: It ranges from carelessness to victims of devious outsiders who are trying to get in. It can be someone who loses a back-up tape, or it can be someone who has been sent a spear-phishing e-mail, and many of those spear-phishing e-mails are very well crafted. So no matter how much security awareness training you give to your employees, some of these attacks can be very difficult to really recognize.

Categories of Insider Threats

CHABROW: You've identified four categories of malicious insider threats: IT sabotage, theft of intellectual property, fraud and espionage. Are there common characteristics among the insiders who partake of these four areas?

CAPPELLI: Interestingly enough, the ... crimes are very different. Who does it, what they do, why they do it and when they do it are all very different. IT sabotage is typically committed by system administrators, programmers, DBAs, technically sophisticated users, privileged users who become very disgruntled. They typically set up their attack before termination, and most of them only actually carry out their attack after termination. That's IT sabotage. That's when an insider wants to cause harm. They want revenge. They want to bring down systems or wipe out data.

Theft of intellectual property, or industrial espionage involving theft of trade secrets like scientific information, engineering information, source code - that's typically committed by scientists, engineers, programmers, sales people, someone who steals what they've worked on. In these cases, this is where we found that most of them steal the intellectual property within thirty days of resignation. There may be a disgruntled factor there or they may just be leaving to start their own business, or in about a third of the cases they're stealing the IP to take outside of the United States.

CHABROW: Could there be arguments among some of these people that they own this information?

CAPPELLI: Some of the insiders claim that they just didn't realize that it wasn't theirs to take, but organizations have gotten much better in recent years about having employees sign intellectual-property agreements. In the cases in our database, they were prosecuted, so their argument wasn't successful. We're currently doing a study with the Secret Service, the Department of the Treasury and the financial sector, and that study is funded by DHS' Science and Technology Directorate. That study is specifically looking at insider fraud. In the next few months we'll be coming out with a report that will have a very detailed model of insider fraud.

CHABROW: When you talk about the fourth category of espionage, you're talking about national-security espionage?

CAPPELLI: Yes, now industrial espionage falls under theft of IP.

Catching the Insider

CHABROW: How does one catch insiders wanting to do harm, such as stealing intellectual property or better yet, prevent them from damaging the organization?

MIKE HANLEY: We've got a series of behavior models that we've created over time from all the case information that we've collected. We have several hundred cases; I believe we're up over 700 cases now, and when we look at those and step back and create these models, we can find patterns of behavior and interesting descriptive statistics about the types of things that the insiders engage in that we can start looking for on technical systems. We might look at things such as the 30-day window that you mentioned at the beginning of the talk, where most insiders steal IP within that window, and we can say, "Well, what might that look like?" If I know that most insiders steal information using e-mail to exfiltrate the information, I can start narrowing down and say, "Let's look at how we can instrument our logging server that captures that e-mail information, or how we can restrict messages that are outbound from our mail server, for example, to either detect, prevent or respond to those attacks more efficiently."
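The kind of narrowing Hanley describes can be sketched in a few lines. The following is a hypothetical illustration rather than CERT's actual tooling: it assumes mail logs have already been parsed into records of sender, recipient domain and date, and it flags messages a departing employee sent to addresses outside the corporate domain within the 30 days before their departure date.

```python
from datetime import date, timedelta

# Hypothetical parsed mail-log records: (sender, recipient_domain, sent_on).
MAIL_LOG = [
    ("jdoe", "example.com", date(2011, 5, 2)),   # internal mail, ignored
    ("jdoe", "gmail.com",   date(2011, 5, 20)),  # outbound, inside the window
    ("asmith", "gmail.com", date(2011, 5, 20)),  # not the departing employee
]

def flag_outbound_mail(log, employee, departure, corporate_domain, window_days=30):
    """Return messages `employee` sent outside `corporate_domain`
    within `window_days` before their departure date."""
    start = departure - timedelta(days=window_days)
    return [
        (sender, domain, sent_on)
        for sender, domain, sent_on in log
        if sender == employee
        and domain != corporate_domain
        and start <= sent_on <= departure
    ]

hits = flag_outbound_mail(MAIL_LOG, "jdoe", date(2011, 6, 1), "example.com")
for sender, domain, sent_on in hits:
    print(f"{sent_on}: {sender} -> {domain}")
```

In a real deployment this filter would run as a scheduled query inside a centralized logging platform rather than a standalone script, but the search-space reduction is the same: one employee, one time window, only outbound traffic.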

CHABROW: So there's a certain pattern that you tend to see, more activity or something else?

HANLEY: Yes, it's not necessarily that we look for specific patterns. We usually try to pull out the patterns that are most interesting in our data. That could be things that are most prevalent or specific requests that we get from our sponsors, and then we try to look at it in our labs. We actually have a physical lab, sponsored by the Department of Homeland Security, to instantiate these patterns and test variations of them with open-source tools and commercial tools, and try to find ways to develop good countermeasures and indicators for these types of attacks.

Preventing the Insider Threat

CHABROW: A lot of our listeners are people who run IT security organizations in businesses and government. What should they be doing to make sure this doesn't happen to them?

CAPPELLI: In the insider threat lab we have recently released two new controls. One of them is the one that you mentioned. It might be helpful, Mike, if you highlight what we have in those two new controls, and then we can talk about what we'll be doing in the future.

HANLEY: The document that Dawn mentioned is the first in a series. We're working on publishing controls based on some of the initial, interesting findings from our behavioral modeling and from taking a broader look at the data. The one that's out on our website now looks at that 30-day window and, as an example, uses Splunk, which is a centralized logging suite that many in your audience are probably familiar with. We know that people are stealing within that 30-day window and we know that they exfiltrate via e-mail. What's a good, efficient way for us to narrow the search space so that we can try to find that malicious behavior? We provide a rough framework of what the problem is and then go into examples of how you can instrument for that. We've also got a video that was released at the same time, a recreation of an insider attack using our lab. It displays the progression of an actual sabotage case and demonstrates a few points where the organization could have intervened using tools in our lab to prevent the attack.

A couple of things that we have forthcoming, that we already sort of talked about publicly but will be put into a similar document form to the control that's out there today, involve things like using network flow data. If we see things involving data exfiltration, where there are large amounts of data movement on the network, can we instrument appropriately to detect that type of activity before the information gets off the network? Often, when somebody e-mails things off a network or is burning disks to remove information from the corporate network, that's usually preceded by some sort of data download internally to get the data onto the end-user system. If we can look for that, we can potentially catch these folks rather than at the point they've committed the crime. We can catch them one step before.
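Hanley's point about network flow data is that internal staging, a burst of downloads onto an end-user machine, often precedes the actual exfiltration. A minimal sketch of that idea, with made-up hosts and thresholds (real flow records would come from a NetFlow/IPFIX collector, not a hard-coded list):

```python
from collections import defaultdict

# Hypothetical per-day flow summaries: (host, bytes downloaded from
# internal file servers). Baselines are each host's typical daily volume.
TODAY = [("ws-17", 9_500_000_000), ("ws-04", 120_000_000)]
BASELINE = {"ws-17": 300_000_000, "ws-04": 150_000_000}

def flag_staging(today, baseline, multiplier=10):
    """Flag hosts that pulled more than `multiplier` times their usual
    daily volume -- a possible sign of data being staged for removal."""
    totals = defaultdict(int)
    for host, nbytes in today:
        totals[host] += nbytes
    return [h for h, b in totals.items() if b > multiplier * baseline.get(h, 0)]

print(flag_staging(TODAY, BASELINE))  # the host far above baseline stands out
```

Catching the internal download rather than the outbound e-mail or burned disk is what lets an organization intervene "one step before" the theft, as Hanley puts it.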

We've also been asked to look into what some of the capabilities of data-loss prevention are. Clearly, some of the DLP tools and other commercial information security tools have fairly robust sets of capabilities. I think you can argue that in a lot of those cases, the ability to collect the data is not the problem. It's a matter of, "How do I instrument the tool to look for the right things and create good indicators and rules?" We're trying to look at what types of rules and indicators are helpful in a DLP suite. That's just a couple of examples of what we've got coming out in terms of controls over the course of the next few months here, and you will also see some new video demonstrations, including one that demonstrates that Splunk rule using an actual insider case.

CHABROW: Is there a difference in behavior of the insider who's going to steal this intellectual property in whether they're quitting or being fired?

CAPPELLI: If they're being fired, they're a disgruntled employee who has been causing problems. If it's a system administrator, you need to be more concerned about sabotage. If it's an engineer, scientist or programmer, from what we've seen in our cases, their methods would be similar whether they're being fired or they're quitting.

30-Day Period

CHABROW: Also, you talk about this 30-day period. Is this something where you need to start being proactive 30 days before or that you should start going back and checking your records and logs for the previous 30 days?

HANLEY: I think the answer to that question is you can do both. It sort of depends on what the capabilities of your organization are and how many hours they can dedicate to looking at that type of data. Certainly you can be proactive about it. It's just a matter of instrumenting such that you say, "If I know someone is leaving, I can put them on some sort of targeted monitoring," or looking appropriately for folks that we know are going to be on their way out; that way, you can monitor in real time. But if you don't have that capability or don't have the personnel to look for that, the more you log, the more capability you have to at least go back and say, "I know something happened, and I can see what left and when, in case I want to prosecute or take some other course of action."

CAPPELLI: To be clear, that 30-day window can be 30 days before they turned in their resignation, as well as 30 days after they have turned in their resignation. You do need to be able to go back in time. That's the key.
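Cappelli's clarification matters when instrumenting a retrospective search: the window to query is centered on the resignation date, not anchored at it. A small sketch of that bookkeeping (illustrative only; the 30-day figure comes from CERT's case data, the rest is hypothetical):

```python
from datetime import date, timedelta

def monitoring_window(resignation, days=30):
    """Return the (start, end) dates covering `days` days before
    and after a resignation date, per the CERT 30-day observation."""
    return resignation - timedelta(days=days), resignation + timedelta(days=days)

def in_window(event_date, resignation, days=30):
    """True if a logged event falls inside the monitoring window."""
    start, end = monitoring_window(resignation, days)
    return start <= event_date <= end

resigned = date(2011, 6, 1)
print(monitoring_window(resigned))        # spans early May through early July
print(in_window(date(2011, 5, 15), resigned))  # an event before the notice counts
```

The practical consequence is retention: logs must reach back at least 30 days before any resignation is announced, or the front half of the window is unrecoverable.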

CHABROW: What are the kinds of skills organizations need to have to catch this kind of insider?

CAPPELLI: We have developed our labs so that we can develop new ways that organizations can use existing technology to catch insiders. We've gotten really good feedback on what we've put out there so far. The idea is that we don't want people to have to go out and spend more on new tools. People spend a lot of money on technology. We're trying to develop solutions where they can use open-source tools or whatever tools they already have. It's important that management understands insider threat as well as the technical staff. That's a key problem that we've seen: everyone believes that detecting insiders and preventing insider attacks is IT's problem, and IT really can't do it alone. There needs to be communication across the organization, because if no one tells IT that they're going to fire this disgruntled sys-admin, then IT doesn't know that they should be watching what this person's doing. If no one tells them that they're going to be laying off a lot of people and all of these people are going to be leaving, they don't know that they need to be watching for potential data exfiltration or sabotage. It's important that there's awareness across the organization, and that's part of the reason we decided to release these videos in addition to technical reports, so that hopefully other people in the organization can watch those videos and get a high-level understanding of what has really happened in these cases.

CHABROW: I guess it's just another example of how IT security, not just IT, is really integrated into the mission of the organization.

CAPPELLI: Exactly. That's what we're trying to do. We need to reach the upper management of organizations so that they understand that they need to work with IT and with information security to solve this problem.



