ISMG Editors: AT&T's Ransom Payment in Snowflake Breach
Also: AI Bots in the Workplace; AI Regulations in the US and EU

Anna Delaney • July 19, 2024

In the latest weekly update, ISMG editors discussed AT&T's alleged ransom payment to hackers following a breach of its Snowflake account, the challenges of using AI bots in the workplace, and the impact of differences in AI regulations in the European Union and the United States.
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Rashmi Ramesh, assistant editor, global news desk; and Mathew Schwartz, executive editor of DataBreachToday and Europe - discussed:
- Telecom giant AT&T's alleged ransom payment to hackers following a breach of its Snowflake account and why so many victims pay a ransom to their attacker in return for a promise to delete stolen data;
- The backlash against HR company Lattice's plan to treat AI bots as human employees and its subsequent cancellation of the feature;
- The contrasting approaches to AI regulations taken by the EU and the U.S. and how they might affect global businesses operating in both regions.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the July 5 edition on remembering ISMG colleague and industry veteran Steve King and the July 12 edition on how we should handle ransomware code flaws.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney, and today we'll discuss the news about AT&T allegedly paying hackers a ransom following a breach of its Snowflake account, the role of AI bots in the workplace, and the contrasting AI regulations between the EU and the U.S. Today, I'm joined by Mathew Schwartz, executive editor of DataBreachToday and Europe; Rashmi Ramesh, assistant editor, global news desk; and Tony Morbin, executive news editor for the EU. Great to see you all.
Tony Morbin: Good to see you, Anna.
Rashmi Ramesh: Thank you.
Mathew Schwartz: Thanks for having us.
Delaney: Mat, this week's news revealed that telecom giant AT&T allegedly paid a ransom following a breach of its Snowflake account, underscoring ongoing concerns about ransomware victims opting to pay for assurances of data deletion. You posed an important question in your reporting of the story: What will it take for victims of cybercrime to stop directly funding their attackers? Bring us up to speed on the story, and perhaps share some thoughts on how we might answer that question.
Schwartz: Great question, and it's a topic that keeps coming up, because reports suggest that the share of ransomware victims who choose to pay a ransom to their attackers is going down. It seems to be below a third now, maybe in the 28% range, according to some firms that help victims. But 28% of victims is still a huge amount. We often see ransomware groups publicizing victims who don't want to pay, partially to try to normalize some of the sky-high ransoms they demand initially. Now, the victims who pay don't necessarily pay ransoms of that amount, but the narrative is still so often being controlled by these ransomware groups, who are horrible criminals. We've seen that repeatedly. They disrupt healthcare, cancer treatment and children's hospitals. These are the scum of the earth. We don't want to give them any wiggle room when it comes to controlling the narrative, controlling the discourse. But paying them does exactly that: It validates this criminal business model and gives them funding to make future attacks on future victims, as they seek out new kinds of data from other organizations to exfiltrate and threaten to leak - and then, if victims don't pay, to leak it, on and on again. We keep seeing this. This week, for example, I've been reporting on the Change Healthcare breach. Change is owned by UnitedHealth Group, one of the major U.S. health insurance service providers, and its CEO told Congress in May that the breach might affect a third of all Americans. This is despite the company having paid a ransom of about $22 million to attackers in return for a guarantee that they would delete stolen data. Then the group involved kept the money and didn't pay the actual hacker. So the hacker, who may be based in the West, took the data to a different ransomware group and shook Change Healthcare down a second time. Did Change pay the second time? We don't know. But an organization that does appear to have paid - reportedly, according to Wired, which talked to a security researcher who handled the negotiations - was AT&T. Attackers demanded a ransom in the neighborhood of $1 million, and AT&T reportedly paid about a third of that, which is a lot less than what we've seen with some of the other big organizations that got hit. Did AT&T allegedly pay to get a decryptor, which sometimes is a choice a business needs to make if it's otherwise going to go out of business? Allegedly, no, it did not. It paid solely for a promise from attackers that they would delete the stolen data - and they even sent a video of themselves doing it. Now, you might say, "Oh, but Mat, couldn't such a video be faked? Couldn't these assurances be entirely false, given that these are criminals who regularly attack children's hospitals and other critical services?" And the answer is a resounding yes! What they're selling is the ability of AT&T and other organizations to say, "Okay, the horse has already fled the barn. The barn is burning, but we've managed to close the door on the burning barn." Pick your metaphor. It's bad. This is them trying to spin the message after the fact. It's ineffectual, it funds cybercrime, and unfortunately, as with Change and AT&T, we keep seeing it repeatedly, and I'm not sure how it's ever going to stop.
Delaney: Huge topic. How effective are national security exceptions that justify delayed breach notifications, such as the one the Department of Justice granted AT&T, given the potential risks to public safety?
Schwartz: That's an interesting question. For the very first time that we know of, the Department of Justice invoked the exception to the Securities and Exchange Commission's breach disclosure rules that it can use in cases of public safety or national security. It did this with AT&T because it appears the FBI has gotten one of the suspects arrested. Allegedly, this person hacked T-Mobile, and allegedly the same person - an American based in Turkey - was also involved in the AT&T breach. The DOJ paused AT&T's breach notification, but that doesn't seem to have had any impact on whether AT&T paid the ransom. It's an interesting footnote for this breach and for notification by a publicly traded organization. But again, it didn't stop them from paying.
Delaney: What other approaches can organizations facing ransom demands take? Is it simply to pay or not to pay - are those the only options here?
Schwartz: There are other options. Definitely reach out to ransomware incident response groups, because they may have discovered workarounds. They often have - they've often found a way to quietly decrypt files. Now, of course, a workaround doesn't help with stolen data. It doesn't let someone like AT&T say, "We took all possible steps after we lost control of your data to try to get the attackers to delete it." Maybe the attackers never deleted it, but the company gave them a bunch of money anyway, just in case - paying doesn't fix that problem. If an organization is serious, though, instead of setting money aside for a payment or making the payment, it should be putting that money into prevention, so that it never has to consider whether or not to pay. Of course, no matter how much you prepare, you could still get hit. So the best message from a breached organization hit by ransomware is to say, "We got shaken down, and we do not pay criminals. We will not perpetuate this cycle any further. Instead, we spent a lot of time and effort preparing and practicing for what would happen when we inevitably got hit. We've wiped and restored all systems, and that's it. We're done. We're not giving attackers any money. They can go try to find some other victim." What I'd like to see is organizations saying, "We prepared, so we didn't have to pay." AT&T didn't prepare, so it felt it had to pay - even though it didn't really have to - and it ended up paying.
Delaney: Excellent insights and takeaways. Thank you, Mat. Rashmi, should AI bots be treated as human employees? Not everybody seems to think so. HR company Lattice recently faced backlash and canceled its feature that treated AI bots like human employees. It's quashed for now, but digital workers remain a point of interest for many in the industry. Don't they, Rashmi?
Ramesh: To give you a brief rundown of what happened: Lattice was co-founded by Sam Altman's brother Jack Altman, who is no longer part of the company. Last week, the company said it was making history by attempting to integrate AI bots - which it called "digital workers" - into its workforce. It gave these bots employee records, onboarded them like human employees and provided them training. It even set performance metrics and assigned them a boss to give them feedback. In an interview, the current CEO, Sarah Franklin, said she would fire these digital workers if they did not perform well or if they compromised the company's reputation, just as she would a human. Whether good or bad, it was a significant step toward integrating AI into the workforce. But in just three days, there was massive backlash against the move - so intense that the company had to roll back the program. People, including those in the enterprise space and the AI industry, said that giving digital workers the same status as human employees was not okay. For one, you have the human element: The approach seemed to treat humans as mere resources to be optimized alongside machines, which did not sit well with many people. Then there's the debate about the point of this specific exercise. Companies are integrating AI into the workforce quickly and at scale, but as one commentator pointed out, there are more productive uses for AI in the HR industry than this, which looks like a PR exercise at this point. Clearly, the company did not expect this sort of backlash, and in the statement recalling the measure, the CEO said the initiative sparked many questions for which they don't yet have clear answers. Either way, this whole episode raises a much broader concern about AI in the workplace. There's a growing fear that AI could replace millions of jobs, making human employees obsolete, and there are studies and surveys that validate a small portion of that fear.
Delaney: While we're not at the point of asking our AI bot colleagues about their weekends or making them virtual coffees, how do you see the role of AI in the workplace evolving over the next few years?
Ramesh: There are more use cases than hours in a day. But to summarize broadly: AI will not replace skilled human workers, but it will completely change how work is done. We're already seeing this change underway, with AI automating routine tasks and supplementing strategy, decision-making and recruitment - every aspect of our jobs will have AI integrated into it. But at least in the near future, it will still need humans to oversee it. I don't know if this statement will age well, but for now, AI will support humans rather than replace them. For example, Intuit recently laid off about 1,800 workers, and it said, very unapologetically, that the move is not cost-cutting - it will hire back for the same roles, but with people who can align with the company's generative AI vision. That does seem like the future, or at least the near future.
Schwartz: Rashmi, was there any discrimination in hiring that you saw? For example, did this experiment limit itself to ChatGPT or did they also accept applications from Google's Gemini or Microsoft Copilot?
Ramesh: They made the bots themselves.
Schwartz: Oh! That poses some interesting ethical dilemmas. But anyway.
Morbin: It was good to see that the employer found out what the difference between machines and humans was. Humans answer back.
Delaney: We're all human at the end of the day. Thank you, Rashmi - absolutely fascinating story. Tony, this week you're looking at AI risk versus regulation, with EU regulations coming into force while the Republicans, if elected, plan to rescind the U.S. AI executive order. Potentially lots of change in the air. Do share your thoughts.
Morbin: Over a decade ago, when the Large Hadron Collider was switched on, there were concerns in some quarters that it might create a microscopic black hole that could suck up the Earth, or a strangelet that could convert our planet into a lump of dead, strange matter. Certain scientists said these outcomes were "extremely unlikely." So they went ahead, and we survived. But "extremely unlikely" is not a particularly reassuring phrase when balanced against the end of the world, and there were those who felt that if there was any risk the world might end, we shouldn't do it - but they had no power to stop it. In some ways, it feels like that with AI today. I'd like to think we all want to get the most out of AI's capabilities while making sure it's implemented safely. But the truth is, people have different priorities and risk appetites, which they then apply to themselves and others. There are those at both extremes, from wanting to ban AI research and use altogether to wanting unfettered acceleration regardless of the risk, while the rest of us are probably somewhere in between. Attitudes also vary by region, country and political party, with Europe generally keener on regulation and the U.S. less so, particularly the Republican Party. And I'm not being party political, just making an observation: The momentum in the U.S. is now with the AI accelerationists, particularly following former president and Republican nominee Donald Trump's selection of Ohio Senator J.D. Vance as his running mate. Vance and the Republican Party oppose regulation of AI, and their stated goal is to repeal President Joe Biden's executive order on AI. The executive order itself was already a far less stringent approach than the one adopted by the EU, which is more regulation-based. At the same time, we've got whistleblowers in the U.S. alleging that OpenAI illegally barred its staff from revealing the risks posed by irresponsible deployment of AI - from the entrenchment of existing inequalities to the exacerbation of misinformation to the possibility of human extinction. Across the water here in Europe, regulation is underway, with the EU AI Act coming into force next month, on August 1. It seeks to protect democracy, fundamental rights, environmental sustainability and the rule of law. By this coming February, it will ban AI uses posing unacceptable risk and place constraints on high-risk uses. EU digital chief Margrethe Vestager says that with these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted. This might sound good in theory, but critics say this well-intentioned legislation has been rushed and might end up smothering the emerging industry in red tape. They say regulators have left out essential details urgently needed to give clarity to businesses seeking to comply, and hence it could curb use of the technology itself. Within nine months of the AI Act entering into force, new codes of practice explaining how to implement the rules will need to be in place, and that will also require a rush to pass additional legislation. It's also not clear whether, locally, it will be national telecom, competition or data protection watchdogs that police the rules, as the AI Act doesn't specify.
Without more clarity, there's a danger of patchy implementation of the regulation, which could trigger confusion among businesses as they roll out products in different countries, according to a recent report. Penalties for noncompliance reach up to $38 million or 7% of worldwide annual turnover, and the cost of compliance could run into six-figure sums for a company with, say, 50 employees - which has been described as an extra tax on small businesses. EU officials cited by the Financial Times deny that the act will stifle innovation, noting that it excludes research and development, internal company development of new technologies, and any system that's not high risk. In the U.K., the new Labour government is expected to set out an AI bill in today's King's Speech, which is likely to be a watered-down version of the EU regulations, with perhaps a few get-out clauses to encourage investment and innovation. In China, AI regulations are reported to be intentionally relaxed to keep the domestic industry growing. An MIT Technology Review report by Angela Huyue Zhang, a law professor at Hong Kong University, explains that this is to be expected: Although foreign observers tend to focus on China's regulatory crackdowns, the process almost always follows a three-phase progression - a relaxed approach, in which companies are given flexibility to expand and compete; sudden, harsh crackdowns that slash profits; and eventually a new loosening of restrictions. Now, I'm not suggesting we all follow China's lead, but from my perspective, we do need to regulate to prevent accidental harms and exploitative manipulation of AI. We also need to encourage experimentation. Unfortunately, with no single authority in charge and public opinion unable to have an impact even if it were reliably informed, the future of AI implementation looks like it's going to be pretty messy and confused, combining good intentions, power plays and greed.
Delaney: Very interesting balance to get right. How might these contrasting approaches between the U.S. and the EU affect global businesses operating in both regions? Just future-gazing here, but what are your thoughts?
Morbin: Money will go where the money is. With the U.S. being more open to innovation, with fewer restrictions, investment is more likely to flow to the U.S. - and that includes European citizens' money; they find it easier to invest in AI development in the U.S. On the other hand, Europe becomes something of a walled market: It's going to be harder for the Chinese or others to break into Europe with their AI, because it's unlikely to meet the regulations there.
Delaney: From the chaos of the galaxy. Thank you so much, Tony. I'm interested to see what happens in the King's Speech this week. Thank you all for the information and education you've provided.
Schwartz: Thanks for having us on.
Morbin: Thank you.
Delaney: Thanks so much for watching. Until next time.