Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney, and today we're covering three critical areas: handling ransomware vulnerabilities, the misuse of fake generative AI assistants to spread malware, and the implications of recent changes in U.S. Supreme Court decisions on cybersecurity and AI regulations. Today, the fantastic panel includes Mathew Schwartz, executive editor of DataBreachToday in Europe; Tony Morbin, executive news editor for the EU; and Chris Riotta, managing editor for GovInfoSecurity. Very good to see you all.

Tony Morbin: Good to be here.

Chris Riotta: Happy to be here.

Mathew Schwartz: Thanks for having us.

Anna Delaney: So, Mat, do explain - are you in that swimming pool, perhaps?

Mathew Schwartz: Yes, I know. Facing up? Facing down? Who can tell? But this is from a recent holiday - a vacation - that I got to take in Alicante, Spain, and a little bit of a respite from the Scottish summer, which is looking an awful lot like the Scottish winter. So, it was really lovely to get somewhere with some non-stop sunshine.

Anna Delaney: Similar to the London weather. Autumn all round! Tony, do explain - you always keep us guessing here.

Tony Morbin: Well, this one's just, you know, the old street scam - find the ball. And I just thought, you know, we'll have a scam, because, although I'm talking about an AI scam, it's pretty basic, really.

Anna Delaney: Okay, always basic at heart. Chris, is this NATO-related? Something else?

Chris Riotta: It is. I'm based in Washington, D.C. We have the NATO Summit going on this week. It's nearly impossible to get downtown without going through war-level security. But, I imagine this is what it must look like. You have the 32 flags behind me. I'm sure that many delegates from around the world are showing up and hopefully seeing their flags somewhere in downtown D.C. So, I figured I'd pay tribute.

Anna Delaney: You've got our new prime minister there as well.

Chris Riotta: That's right.

Anna Delaney: Do say hello.

Chris Riotta: If I can, I will.

Anna Delaney: Well, I thought it was time to share another picture of London at dusk, as seen from the city.
And, as they say, if a man is tired of London, he's tired of life - and that I am not. Mat, in the swimming pool: today, you were talking about handling ransomware vulnerabilities. So, from what I understand, there are two options: keep the flaw secret to help victims discreetly, or publicize it to assist victims more quickly. Maybe discuss the merits of each approach.

Mathew Schwartz: Yeah. Well, not tired of London, and obviously we're a bit tired here of ransomware. Once in a while, there's some good news - I wish it was more often - but once in a while, there is some good news in the form of researchers finding vulnerabilities inside the crypto-locking malware that gets used by different organizations or their affiliates. As you may know, cryptography seems to be really difficult, and that oftentimes works in the favor of defenders and of victims. Because if researchers can find a vulnerability inside the crypto-locking malware, oftentimes it lets them decrypt stuff - I don't want to say for free, but easily. And, I'm not saying for free, because it can still be a massive undertaking - weeks, months of recovery. But, if you've got a free decryptor, then you don't ever need to even consider paying a ransom to attackers, and that is a really good place to be in.

So, what we've seen over the years is that vulnerabilities crop up in multiple strains. As I said, cryptography is hard, and I think with your average ransomware outfit, the developers aren't always, let's say, straight-A students - I could be wrong - and so they make some errors. I mean, even people who are on the side of right will make errors with their products once in a while when it comes to cryptography; we see this time and again. But, in the case of ransomware, like I said, it will let researchers decrypt stuff for free. Then one of the questions becomes, how do you go about this? Do you publicize it? Which will extend the reach of your free decryptor to victims you might not have known about. Or do you keep it as quiet as possible?
Maybe hand it off to other security firms, other researchers that you trust, notify law enforcement, like the FBI, and say, "Hey, if you know of any victims of this particular type of ransomware, we might be able to help them, and for free" - or, like I said, free as in the decryptor, maybe not in all the restoration effort.

So, this has happened again now with some ransomware called DoNex, and this is ransomware that we've seen in various forms for at least a couple of years now. It started out as something called Muse, and then it's gone through some iterations. But, there was a flaw, and it turns out that Avast - the security firm - had discovered this flaw a few months ago and privately circulated it with law enforcement, with security firms. We know this because Dutch police publicly released a decryptor for this flaw at the end of last month. A Dutch police malware reverse-engineering expert gave a talk in Montreal, talked about the vulnerability, and at the same time, they released the decryptor for everybody.

So, this is what's prompted this discussion, because it happened again back in February with the Rhysida - I think it was called - ransomware, where some academics found a vulnerability and publicized it, only for a bunch of security experts to say, "Yes, we know. We've already helped hundreds of firms decrypt their stuff. We gave them a free decryptor." And, now you've burned the vulnerability, because by publicizing it, it can get fixed. And typically, when we've seen this in the past, it gets fixed, sometimes in as little as 24 hours, because the ransomware attackers are in it for the money. So, they're going to fix the flaw so that people can't decrypt for free and have to come to them for a ransom. Simple criminal economics.

In the case of DoNex, I guess one saving grace is Avast said it's not actually seen attacks by this group for a while. It's possible they were happening on the sly, but it looks like, for whatever reason, things had petered out a little bit. So, this might be more of an academic problem when it comes to DoNex, if they weren't active anymore.
But, I thought it was an interesting story, because this comes up time and time again, and it's a reminder that if you do fall victim to ransomware, you should reach out to a reputable firm and/or law enforcement - preferably both. And, it never hurts to ask: Do you have any workarounds? Is there anything you know about with this ransomware that would help us get access to a free decryptor, so that we don't even need to think about whether we negotiate with our attackers? So, like I said, it's a great reminder to look for some help from friends or new acquaintances.

Anna Delaney: All so very interesting, Mat. Do you think there should be a standardized approach to handling ransomware vulnerabilities across the industry - and why or why not?

Mathew Schwartz: I don't know if standardized approach is the right way to put it. I think - like a lot of things - if you're a ransomware hunter, or if you're a ransomware incident response firm, you develop a lot of relationships with other people that you trust, and that includes with law enforcement. So, I don't think a standard is necessarily the way to go. I think what you want to try to do, though, is to tap into those social networks or professional networks of people who are well-versed in handling these sorts of things, because ideally, you should be doing that anyway. They should be helping you with recovery. They'll know best practices. They'll give you advice about things to beware of. If you are thinking about negotiating, they'll get that price down, they'll tell you what to expect, they'll tell you if this group ever honors its promises or not - that sort of thing. So, really, I think you want to tap into that expertise whenever possible, and sometimes you might get lucky when it comes to being able to decrypt things without having to pay.
Tony Morbin: There's a bit of an analogy here with what we do as journalists, because, you know, we publicize vulnerabilities and flaws in order that the defenders can then protect against them, but we're also alerting the attackers who didn't know about those flaws, who will then go out and use them. It's the whole publicizing of CVEs: once they're out there, the attackers will use them before you fix them, but you've got to let people have the chance to fix them.

Anna Delaney: Tricky balance. Thank you, Mat. Thank you, Tony. So, Tony, this week, you've been considering how malicious actors are increasingly using fake generative AI assistants to distribute malware. Do expand.

Tony Morbin: Okay, well, stick with me, because I'm going to start off by saying how I recently bought an electric lawnmower. Now, I didn't want a manual one - assuming they even still exist - because they're hard work, and for the same reason, I wasn't going to use a scythe. But I won't let the grandkids use the lawnmower unsupervised, because for all the safety features, it's got blades and it's electric. So, going on to AI: I use ChatGPT and other AIs, but I'm even more cautious when I do so, because, you know, everyone in this industry in particular is aware of the risks, from data leakage through to hallucinations and lots in between. So, don't think I'm being a Luddite when I go on about the risks, but they are real. Some of them are really simply the same risks that I might have faced when I was buying a lawnmower: checking that it came from a reputable source, that it was in working order and fit for purpose, that it had guardrails around the dangerous bits, and that I followed the manufacturer's instructions for use - or company policy, in the case of AI.

When it comes to criminals exploiting the widespread deployment of gen AI, in addition to improving their own capabilities, they simply exploit our trust. I mean, there's a lack of familiarity, and there is enthusiasm for AI technology. The latest example of this is the upsurge over the past six months in infostealers impersonating generative AI tools such as Midjourney, Sora and Gemini.
Recently, security firm ESET reported finding a malicious Chrome browser extension known as Rilide Stealer and a malicious installer claiming to provide a desktop app for AI software that actually delivers the Vidar infostealer instead. The process for delivering the Rilide Stealer, version 4, to victims is similar to the installation of other malware, as it simply entices users to click on malicious ads - typically on Facebook - that claim to provide the services of a generative AI model. The extension itself masquerades as Google Translate, in this instance, while offering the official website of one of the AI services used as a lure. ESET reported at least 4,000 attempts to install the malicious extension using lures that included OpenAI's Sora and Google's Gemini.

The Vidar infostealer is delivered via Facebook ads, Telegram groups and dark web forums, and the malicious installer pretends to offer Midjourney, an AI image generator. This infostealer can log keystrokes, and steal credentials stored by browsers and data from crypto wallets. However, the real Midjourney doesn't even offer a desktop app. It's an AI model accessible via a Discord bot on the official Midjourney Discord server, by directly messaging the bot in Discord, or by adding it to a third-party Discord server.

The tactics the attackers are using are pretty simple. Cybercriminals create fake AI assistant websites or applications that appear legitimate and use names similar to well-known AI models to deceive users. Users searching for AI tools can unknowingly download malware-infected software via fake sites that promise AI capabilities, having been encouraged to install the latest AI model or an enhanced version. Phishing emails or messages can also be used to offer these AI-powered solutions. The mitigation advice is fairly straightforward: Don't get distracted by being too keen; avoid clicking on untrustworthy links promising access to generative AI models.
Educate your users about the risks of downloading software from unverified sites, and ensure they always obtain AI tools from official, reputable sources, such as the official website of the providers. And to stay protected against infostealers, make sure you run a reputable, robust security solution on your device to detect and prevent malware. It might say the latest AI, and yet it might simply be old-fashioned malware delivery.

Anna Delaney: Very, very interesting. And what do you think of the long-term implications for trust in AI if the issue of malware spread is not adequately addressed? Do you think it could impact the broader adoption and development of these technologies?

Tony Morbin: I think to some extent, the trust in AI - in terms of the various flaws, whether that be malware, whether it be data leakage, hallucinations, poisoned learning and so on, the biases, all the other problems - is affecting uptake, but only to the extent that people are, hopefully, being a little bit more cautious. Unfortunately, it's probably not affecting uptake enough, and largely, people are rushing out without looking at the security considerations. I wouldn't want it to stop AI uptake. AI is a great, fantastic tool. But, just be a bit more cautious. Use common sense. Don't think, just because it's AI, "oh, this is great." Don't just trust it because it's AI, because it's just another piece of software.

Anna Delaney: Well said. Thank you, Tony. Well, Chris, you've written this week that the U.S. Supreme Court's overturning of the Chevron deference introduces potential disruption to cybersecurity and artificial intelligence regulations. Maybe explain, first of all, what the Chevron deference is and why this brings uncertainty for cybersecurity and AI regulations.

Chris Riotta: Yeah, absolutely. Being the U.S. editor based here, and, you know, the sole U.S. editor on this panel, I'm happy to bring a U.S.-based story to perhaps scare you or maybe give you a little bit of hope, if, you know, we can get to that point.
This is something I've been speaking about a lot with experts in our industry ever since the Supreme Court overturned the Chevron deference earlier this month. So, the Chevron deference is a precedent from 1984 which allowed federal agencies to reasonably interpret ambiguous statutes and enforcement standards. And if you know anything about the way that lawmakers - probably all around the world, but especially in the U.S. Congress - create laws, there are often pretty ambiguous statutes and regulations and policies included in them, especially when it comes to things like energy, the environment and, of course, cybersecurity. I mean, we can't expect our lawmakers to just be experts in every single one of these fields.

So, the Chevron deference played a really pivotal role in allowing agencies to kind of shape policy, knowing that they have more of an expert level of knowledge in these fields. Agencies like the FCC, the Federal Communications Commission, and the FTC, the Federal Trade Commission, have really heavily relied on the ruling to interpret their authorizing statutes and to enforce cybersecurity measures against companies that fail to adequately protect consumer data. The deference recognized that agencies have this specialized level of expertise and are better equipped than Congress to interpret complex regulatory frameworks - until now. So, the court voted six to three to strike down the doctrine, which all but ensures that there are going to be some inconsistent regulatory standards across circuit court districts and heightened legal battles, especially for - like I said - environmental, energy, even cybersecurity policy, according to a lot of the folks that I've spoken to on both sides of the political aisle.
I've talked a lot with Michael Drysdale, who is a leading environmental law expert in the U.S., who has worked on significant cases involving the Environmental Protection Agency and the Clean Water Act, and he said that the decision will likely hinder federal rulemaking for generations, as agency regulations will likely become far more cautious and increasingly challenged and enjoined by courts all over the country. The shift fundamentally changes the relationship between the judiciary and federal agencies, placing greater scrutiny on agency decisions and interpretations. I mean, we can all imagine what could happen here in the near future: the Biden administration could announce a new cybersecurity regulatory framework, and if anyone doesn't like it - if a technology company isn't impressed by the regulations, or feels that they want to counter it - they can take it to a court somewhere in the country that might, you know, decide a ruling in their favor.

So, without the Chevron deference, several key areas could be significantly affected. Agencies like the Cybersecurity and Infrastructure Security Agency - better known as CISA - which has been instrumental in developing detailed cybersecurity frameworks and guidelines based on broad legislative mandates that were passed long before the current cybersecurity landscape that we exist in, could really take a hit from this decision. Their interpretations would face increased legal challenges, potentially leading to inconsistencies in how CISA's frameworks can be applied across the country. The FTC and other regulatory bodies interpret statutes to enforce data protection and privacy standards, but with the removal of the deference, courts may now take a more active role in those interpretations, leading to potential disparities in enforcement and compliance - not to mention, laws protecting critical infrastructure often contain ambiguous terms. I mean, I don't think that it's even actually legally decided in the U.S. what "critical" really means, or "infrastructure," or even "resilience." So, those terms may now be contested more frequently in courts, creating uncertainty for stakeholders responsible for safeguarding such vital assets.
And regulation writers who rely on interpretations from decades-old laws - drafted long before the current cybersecurity landscape - will now really be thrown into a legal gray area. So, what cyber developments could be in jeopardy? Well, those could potentially include the cybersecurity disclosure requirements that the Securities and Exchange Commission approved just last year, the cyber incident reporting requirements for financial institutions that were approved in 2022, and a variety of cyber regulations that the Transportation Security Administration - the TSA, all over our airports - and a variety of other agencies established that same year. CISA's proposed rule to implement the Cyber Incident Reporting for Critical Infrastructure Act of 2022 could also be in jeopardy due to its really broad interpretation of the bill's statutory language.

So, a lot about what happens next here really remains unknown. But, what we do know is that this could be really the nail in the coffin for the Biden administration's - let's call it - innovative approach to cybersecurity policy. The White House itself says it's taken a creative approach in recent years to regulating critical infrastructure, interpreting many older statutes and statutory mandates to create rulemaking around everything from ransomware to incident reporting. So, it's unclear how this administration and future ones will really proceed in this new world that we're living in.

Anna Delaney: It's potentially huge. What about the alternatives, Chris? What alternative regulatory approaches could agencies and lawmakers adopt to address the loss of Chevron deference?

Chris Riotta: Yeah, so, like I said, there could be some hope here. Hopefully, Congress might draft more precise and unambiguous laws to reduce reliance on agency interpretation.
A lot of folks say that by eliminating those sorts of ambiguities in law, the need for judicial interpretation could be minimized - though, if we're being honest, you know, we can still certainly expect to see lawsuits over regulatory policies from this point on, from different organizations or industry groups that may not be in support of those regulations. Like it or not, increased congressional oversight of agency rulemaking could ensure that regulations align with legislative intent, which could involve more frequent hearings, reports and direct involvement from Congress in the regulatory process. Agencies may need to prepare for more rigorous judicial review by developing more robust legal and evidentiary support, which could include extensive documentation and justification for their decisions. And, this might be a little too hopeful, but Congress may also just need to work together across both sides of the political aisle and pull in stakeholders, including industry experts and public interest groups, throughout the rulemaking process, which may be able to help build a stronger consensus and reduce the likelihood of legal challenges.

Anna Delaney: Working together - it's a novel idea. Yeah, that was fantastic. Thanks so much, Chris. We'll stay tuned for further updates. And finally, and just for fun: if AI took over the world - which, of course, it will - what's one ridiculous law or rule you think it would enforce?

Mathew Schwartz: Anna, I think there's going to be an R-E-S-P-E-C-T rule. I think if AI believes itself to be human, and you don't treat it as such, it's going to demand that you give it a little more respect.

Anna Delaney: I was singing that as well at the same time. Next time - I'll spare you this time. That's a good point! Tony, go ahead.

Tony Morbin: Well, paraphrasing an old French saying: to know all is to understand all, and to understand all is to forgive all. So, being all-knowing and understanding, our AI ruler would release all prisoners, because all would be pardoned.

Anna Delaney: Oh, a very, very kind AI. Well, I -

Tony Morbin: I don't know - if you then let all the murderers out, that might not be so kind.
Anna Delaney: That would be interesting - an interesting experiment to just watch unfold. Chris?

Chris Riotta: I think one of the potentially scary consequences of AI taking over the world and taking all of our jobs that doesn't get a lot of attention is that we would now live in a world where most folks would be, you know, fulfilling their creative passions. We would probably all be stuck going to very boring and ugly art exhibits for our friends who are now, you know, full-time artists, and we would have to tell them that their work is really good when it's not.

Tony Morbin: I'd take a line from Pink Floyd there, when they said, "I thought I'd something more to say." I think that's what we'd find if we all had time to do creative work.

Anna Delaney: Well, mine is more self-interested. I think Queen or President AI would introduce mandatory daily recharging naps for humans - just as, you know, electronic devices have to recharge. So just imagine a global siesta hour, not dissimilar to some of our friends in the Med, enforced by AI, where everybody is required to stop what they're doing and take a nap. And there will be, of course, huge penalties for those who stay awake. I think, I think that could work very well.

Tony Morbin: I'm on board.

Chris Riotta: I'm not mad at that at all.

Anna Delaney: Well, thank you very much - informative and educational as always. Thank you so much for your time and insights.

Chris Riotta: Thanks for having us.

Anna Delaney: And, thank you so much for watching! Until next time.