Why CISOs Must Care About Sony Breach

People Are the New Perimeter
"Even if you had completely secure systems, you could still have an incident because an individual shared too much information ... that then causes an issue for a company," Harkins says.
Harkins, who joined Intel in 1992, is vice president of Intel's Information Technology Group and the company's first CISO and general manager of information risk and security.
To address data breach concerns, companies need to focus on awareness training, explaining the behavior expected from all users, including the developers of the systems and the people who administer them. The most sensitive information assets also need to be highlighted, because not all companies have the ability to secure everything. Finally, organizations need to pay attention to technology solutions, including risk management models and encryption, and assess whether they are effective in preventing data breaches.
IT security professionals need to become eclectic about different scenarios, according to Harkins, in order to keep up in a changing online environment. "Let me go block users from using social media. Guess what, you haven't really solved the problem because they're going to pull out their handheld," he says.
Security teams need to look at the controls they have put in place and question whether they are shifting risky behavior to different areas and perpetuating the problem. "Are we driving people around us instead of trying to shape the usage, behavior and thinking about the user experience and guiding them to do the right thing?" Harkins asks.
In an interview with GovInfoSecurity.com's Eric Chabrow, Harkins addresses:
- Consumer-centric breaches such as Sony's PlayStation and why IT security professionals should care (see Breach Gets Sony to Create CISO Post),
- Why encryption that could prevent damage from some breaches isn't always an advisable security solution, and
- How CISOs must, at times, avoid conventional solutions to safeguard systems if they want their organizations to achieve their goals.
Harkins is vice president of Intel's Information Technology Group and CISO and general manager of information risk and security. The group is responsible for managing the risk, controls, privacy, security and other related compliance activities for all of Intel's information assets.
Before becoming Intel's first CISO, Harkins held roles in finance, procurement and operations. He has managed efforts encompassing IT benchmarking and Sarbanes-Oxley systems compliance. Joining Intel in 1992, Harkins previously held positions as the profit and loss manager for the Flash Products Group; general manager of Enterprise Capabilities, responsible for the delivery and support of Intel's finance and HR systems; and in an Intel business venture focusing on e-commerce hosting.
Harkins has taught at the CIO Institute at the UCLA Anderson School of Business and was an adjunct faculty member at Susquehanna University in Pennsylvania. He received the Excellence in the Field of Security award at the RSA Conference as well as an Intel Achievement Award.
Harkins received a bachelor's degree in economics from the University of California at Irvine and an MBA in finance and accounting from the University of California at Davis.
Weighing in on Recent Breaches

ERIC CHABROW: With a rash of highly publicized breaches this spring - Sony PlayStation, Epsilon, the State of Texas Comptroller's Office, RSA - the vulnerability of IT systems is on the minds of many business and government leaders, consumers and, of course, the IT security practitioners charged with securing their organizations' digital assets. When you first hear about one of these breaches, what goes through your mind?
MALCOLM HARKINS: When I hear about these breaches, I step back and think of some fundamental things we have tried to consider for ourselves. To some extent, I firmly believe that in any computing model, compromise is inevitable. You have to be prepared from a response and incident management perspective, whether it is an intrusion in your own network or a breach that requires you to notify people more broadly. Have those incident processes well figured out, planned, practiced and essentially ready to go.
CHABROW: The Sony breach is very consumer-centric. Why should CISOs at a company like Intel - or, for that matter, a bank, government agency or any other type of organization - care?
HARKINS: I think all we have to do is look at the news. We've certainly seen over the past few years a wide variety of breach notifications and intrusions reported in the press, and not only at consumer companies, but at retailers, government organizations and high-tech corporations. We've had everything from the lost back-up tapes that hit some retailers a few years ago to unsecured wireless access that allowed intruders to get into a company's environment. We had an incident over a year ago called Operation Aurora; Google came out publicly with it, and Intel disclosed in our financials that we had a similar intrusion. Earlier this year we had something called Night Dragon that affected the oil and gas industry. We've had public statements from federal governments, state governments and universities. You can't just look at it and say this is a consumer thing, or a federal government thing, or something confined to the defense industry or the financial industry. It is very widespread.
Shifts in Data Breaches over the Years

CHABROW: What's different about the breaches of recent months compared with, say, breaches that occurred six or ten years ago?
HARKINS: There is a big difference. If you go back to the early 2000s, even the late 90s, there were breaches aimed at gaining access to intellectual property and other information like that, but most of the things in the press, the things affecting most organizations, were intrusions or attacks affecting availability. You were affecting my ability to use my computer resources. We saw that with Code Red and Nimda, and we saw it with SQL Slammer. We saw all those things from the mid-to-late 90s into the 2003, 2004, even 2005 period.
Now what we've seen is a pretty dramatic shift. Those denial-of-service attacks are still something people need to worry about because they're not going to go away, particularly as businesses move online, where they could suffer a denial-of-service attack. But in the core environment, there has been a big shift to very subtle, slow and specific intrusions into systems - slow enough that the users don't necessarily see an issue. Somebody is surfing the web and they click on a link, or they get an e-mail, instant message or chat, and they click on it because it is enticing, or because somebody is targeting them. It looks like something they would be associated with. It could be an industry organization sending them an item. It could be a news organization. It could come from something that looks like a friend. They click on the link and they don't know malicious code is getting installed on their system, data is getting exfiltrated, keystrokes are getting logged and their credentials are getting taken, and in some cases the intruders then leverage that system. You don't have to be a highly targeted individual; you could just be a regular employee in an organization and have that happen to you. The intruder would leverage your system and your credentials to escalate privileges further within your environment and then get to more targeted areas of sensitivity.
CHABROW: Let's look at some of the recent breaches and let's think about some of the lessons that you've learned from them. Can you pinpoint several items in which you noticed certain commonalities among these breaches and what you do, or what other CISOs should do, to prevent them from happening to your organization?
HARKINS: I don't know that you can fully prevent them. The fact of the matter is that it is a risk management issue. You can manage risk and mitigate risk, but you cannot eliminate risk. That is one mind-set that has to change. How do you manage and mitigate the risks such that, to some extent, you can live with some level of potential compromise? As I've said, it will occur. There are a number of things people can step back and consider regarding how to approach this when they think about managing those risks.
We shifted our strategy a few years ago toward the concept that people are the new perimeter, because of mobility, interaction with third parties and social computing. Wherever people are, whatever computing resources they are using and whenever they are using them, people are the point of compromise. I'm clicking on a link, or a systems administrator in the IT organization doesn't patch a system, or an IT engineer doesn't configure it properly from a security standpoint, or the developer of an application writes code with errors in it that then cause a vulnerability. You have to think of those people aspects. And even if you had completely secure systems, you could still have an incident because an individual shared too much information and maybe by mistake disclosed something sensitive that then causes an issue for a company. That, I think, is one key item.
People as the New Perimeter

CHABROW: Focusing on people as the new perimeter, is it more awareness, training or something else?
HARKINS: It is a combination of three things. One is the awareness training and the behavior you expect across all your users, including the developers of your systems and the people who administer your systems. The second aspect is to understand the critical business processes and the most sensitive information assets that you want to protect, and the business process controls that you want to put in those areas to protect them. Lastly are the technology solutions that the IT organization provides. Those technology controls prevent, detect and respond to different incidents. It's a combination of information technology and what the CISO team does, coupled with integration with the business processes that are critical to the company, focused on the critical information assets. That is then coupled with the behavior you want, including the training and awareness aspects that span the users and the people in the IT organization.
CHABROW: As you were describing this, it made me think back to what you originally talked about - risk management. You talk about identifying critical information to safeguard. All organizations have limited resources to invest in this. Can you discuss the idea of safeguarding certain elements heavily and others maybe not as much?
HARKINS: At a general level, you need to have some minimum security stack or minimum security standard across all of your systems. Differentiated from that, you have to look at which systems and which information assets pose the highest risks, and then determine how you want to manage that risk. In some cases, that may mean doing quite a lot. You could segment your network further, encrypt the data, add additional access controls and put in detective controls for anomalous issues affecting that system. It's like how you look at business continuity and disaster recovery. In some situations the availability risk for a certain thing is exceptionally high because if a certain system or capability goes down, maybe the company can't ship a product - in which case you might build in a lot of redundancy to mitigate availability risk. It is a similar thing with protection from a confidentiality and integrity standpoint. You have to look at how many layers of controls you want, but also recognize that sometimes the layers of controls around information can constrain the use of the information. And if you constrain the use of the information too much, you may actually start destroying its value, because now you are impeding the business use of it.
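The tiered approach Harkins describes - a baseline everywhere, extra layers only where risk is high - can be pictured with a minimal sketch. The asset names, risk tiers and control names below are hypothetical examples, not Intel's actual control catalog:

```python
# Illustrative sketch of a tiered control model: every asset gets the
# baseline security stack; higher-risk assets get additional layers.
# All tiers, controls and asset names here are made-up examples.

BASELINE = ["patching", "anti-malware", "access logging"]

EXTRA_CONTROLS = {
    "medium": ["stronger access controls"],
    "high": ["stronger access controls", "network segmentation",
             "encryption at rest", "anomaly detection"],
}

def controls_for(risk_tier: str) -> list[str]:
    """Return the full control stack for an asset at the given risk tier."""
    return BASELINE + EXTRA_CONTROLS.get(risk_tier, [])

if __name__ == "__main__":
    for asset, tier in [("cafeteria menu", "low"),
                        ("customer database", "high")]:
        print(f"{asset} ({tier}): {controls_for(tier)}")
```

The point of the sketch is simply that the baseline is never removed; risk tiering only ever adds layers, which is where the cost/usability trade-off Harkins mentions comes from.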
CHABROW: Are you surprised sometimes to hear that a lot of the files being breached aren't encrypted?
HARKINS: No, I'm not. I'm a big fan of encryption. I believe encryption is a good thing for certain data at certain points in time. But if you encrypted all of your data, you wouldn't actually be able to scan it to check for viruses or malicious code. If you encrypted all of your data and you messed up or had another issue with, say, the encryption keys, now you can't unlock your data. As much as encryption is a strong control and should be placed around certain types of data at certain points in time, encrypting data is not necessarily the only answer. Encryption can slow the usage of data. It can impede business processes. It has to be appropriately applied at the point in time that you need it, whether the data is in storage or in transit. If you are using the data, you really can't encrypt it in use.
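The key-management point can be made concrete with a toy example - a simple XOR stream cipher built on hashlib, used only for illustration and emphatically not real cryptography (in practice you would use a vetted library). It shows the two trade-offs above: without the key, tooling sees only noise; and a lost or wrong key means the data is effectively gone:

```python
# Toy symmetric cipher (XOR against a SHA-256-derived keystream).
# For illustration ONLY - not suitable for protecting real data.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(keystream(key, len(data)), data))

plaintext = b"customer record"
ciphertext = xor_crypt(b"correct-key", plaintext)

# With the right key, the data comes back intact.
assert xor_crypt(b"correct-key", ciphertext) == plaintext

# A malware scanner (or the business) sees only noise without the key,
# and a lost key means the data is effectively unrecoverable.
assert xor_crypt(b"lost-or-wrong-key", ciphertext) != plaintext
```

The second assertion is the operational risk Harkins flags: the same property that locks out an intruder locks out the owner once key management fails.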
CHABROW: Are there other points of risk management you want to pick up on?
HARKINS: The thing I worry about with all of these breaches is that companies, individuals and users start shying away from technology and the productive use of it. I believe the best way to shape risk is to sometimes run towards the risk of your assets. As a security organization, I believe my mission at Intel, and more broadly information security's mission in any organization, should be protecting to enable. If we are not enabling the use of the information, then the organization can't get the value. That's why I think it's a risk management thing. That's why I think there's a lot of balancing of items. As much as organizations look to prevent, detection is a big area that they need to focus on. And certainly response needs to be a prepared critical control for what I think is inevitable in terms of potential breaches or intrusions into people's computer environments.
Preventing Breaches: A Constant Dialog

CHABROW: When you discuss this with corporate executives at Intel, what kind of response do they give you?
HARKINS: It's a very positive response. There's always tension: can we do more, and are we doing too much? It's a constant dialog of balancing the business need for collaboration and sharing of information with the desire and need to protect it. It's constantly trying to find the right controls that put us in a spot where we are allowing enough openness and usage of information - sharing with third parties and collaborating - while providing a level of comfort that we have done a decent job of prevention. It's about detecting things when we haven't been able to prevent them, and then being really good at our incident response and investigations so that we can mitigate an issue before it becomes too big or a significant event.
One of the things we've shared and talked about with the organization and our staff was a set of the laws of physics for information security systems. I think of them as irrefutable laws for information security.
One is that information wants to be free. People are going to post, share and talk. It's human nature. That's why there are behavioral items you need to address, and why you need detective controls to see where people may have made mistakes and educate them further. Even if you locked everything down, you might still have a mistake occur.
The second law is that code wants to be wrong. I don't think we're ever going to have 100 percent error-free code. It's not feasible, in which case there's always going to be some potential for vulnerability. The third is that services want to be on. These are the background processes that IT organizations put in place to manage systems, or that users put in place because they want updates on the weather, news feeds, things like that. If you are an intruder, those background services might be a good thing to go after, because they create an essentially trusted path into a system - the IT organization or the user deployed them, and they are not seeing all the things going on with them. The fourth is that users want to click. I think we see that all the time. You are going to get an e-mail or an IM, you are going to visit a website, and you might click on an ad. That behavior pattern of being enticed by something and clicking on it is exactly where most people are getting compromised.
Finally - and this is my own paranoia to a large extent - I still worry about security features themselves being used for harm. Security features are, to some extent, constructed as code; they certainly have background services, and users are going to click. So even with encryption, I worry. If our encryption capability were ever compromised, either by a malicious insider or an intruder, that would pose a very significant issue, because you would be locked out of your systems or locked out of your data. How do you mitigate that risk? Even security controls need to be thought about in terms of what harm they can do, and how you detect, prevent and respond to those types of incidents as well.
CHABROW: Are you losing a little sleep over this?
HARKINS: Sometimes I am. It's just the nature of the world we live in, given that cybercrime, nation-state and politically motivated intrusions and attacks are to some extent asymmetrical. You are defending against multiple points, which is why you have to be very methodical in your thinking but also very eclectic about different intrusion scenarios, attack scenarios and issues. Don't just go with the obvious control because it's obvious. Let me go block users from using social media because I'm afraid they're going to post something. Guess what: if you do that on your network, you haven't really solved the problem, because they're going to pull out their handheld and get onto the social computing sites anyway. Or they'll do it at home. What I worry about as a security organization is the controls we put in place. Are we actually driving risky behavior because we are driving people around us, instead of trying to shape the usage and behavior, thinking about the user experience and guiding them to do the right thing? I talk with a lot of peers and I see them doing what I said - blocking things. Blocking certain things will just drive people underground or around you, and you get a false sense of security instead of putting in place a reasonably controlled environment that drives the right balance of protection and enablement.