Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. And this week, we'll discuss Kevin Mandia stepping down as Mandiant's CEO, UnitedHealth Group's responsibility in a HIPAA breach fallout, and privacy concerns with large language models. The quartet today comprises Marianne Kolbasuk McGee, executive editor for HealthcareInfoSecurity; Tony Morbin, executive news editor for the EU; and Michael Novinson, managing editor for ISMG business. Really good to see you all.

Marianne McGee: Hi Anna!

Tony Morbin: Heya!

Michael Novinson: Nice to see you as well.

Anna Delaney: So spring is in the air. Marianne, you've got a lovely backdrop behind you. Where are you?

Marianne McGee: Oh, this is in our neighborhood, at the end of the street, as I'm walking the dog. I try to capture pretty things when I think of them, just to use for these recordings primarily.

Anna Delaney: I love the importance of these recordings on your walks. Tony, another spring scene.

Tony Morbin: Yeah, bluebells in the woods, although it's not the woods locally. It's actually a picture of the Kelvedon Hatch nuclear bunker; this is just above the bunker. They decommissioned it in the 90s. But I went on a trip there, and this is what it was like above.

Anna Delaney: You remain intact and safe. That's good. Michael, a curious scene behind you.

Michael Novinson: Indeed it is. I'm coming to you from the Old Grist Mill Tavern in my hometown of Seekonk, Massachusetts. It dates back to 1745 ... a classic New England eatery. I think one claim to fame would be that it's like a phoenix rising from the ashes: it actually burned down in 2012. A truck carrying Chiquita bananas rolled over, and the bananas hit the natural gas line, which sparked a blaze inside the restaurant. So it had to be entirely rebuilt. It reopened in 2014, and it keeps on ticking.

Anna Delaney: What a phenomenal story. Love that. Well, I'm sharing a glimpse of Napa Valley, California. This is the Hess winery, and it's a lovely combination of vineyard and art gallery, and it was just a wonderful moment to catch my breath there before RSA.
So Michael, starting with you this week: you've written that Kevin Mandia is stepping down as the CEO of Mandiant to take on an advisory role at Google, with Sandra Joyce and Jurgen Kutscher taking over leadership of Mandiant's threat intelligence and incident response units, respectively. So what were the key factors that led to Kevin Mandia's decision to transition from CEO to an advisory role?

Michael Novinson: Thank you for the question, Anna. In the grand scheme of things, it's not incredibly surprising: Mandiant itself was acquired by Google in September of 2022, and typically you will see the chief executive of an acquired company stick around for two or three years; it's often written into the acquisition agreement. So not shocking to see him transitioning out in that sense. But in another sense, given what a well-known figure he is in this industry, it is surprising to see him leaving the company that bears his own name. It's been quite a journey for him: an Air Force veteran, he started the consulting practice in 2004, took the name Mandiant in 2006, and has tried to find a way to turn these essential services the world needs, threat intelligence, insights into adversaries, and responding to some of the most well-known cyber incidents in the world, into a viable business. I mean, pretty much anytime you see an SEC filing from a major company that was compromised, Mandiant is the one they're using. And making those essential services a viable business is challenging, because Mandiant's secret sauce is having the smartest people in the room together, and it shows in the quality of the work they do. But smart people are very expensive, and it's hard to scale labor the same way that you scale technology; certainly that's been one of the things he's grappled with over the years. So first, Mandiant became part of FireEye, the idea being to bring that FireEye technology, those network firewall, sandboxing and APT products, together with Mandiant's services. Then that essentially broke up, and the products were spun off into a separate group.
And on the services side, Mandiant was independent for a couple of months, but then found a home in Google. Obviously, Google is a very well-capitalized company that can support this. They were already doing a lot of work around security operations and had invested in Chronicle. So yeah, it's a place where they don't have to answer to Wall Street every quarter and have all their numbers scrutinized. It may be a better home for them in that way. But yeah, Mandiant certainly is getting embedded more into Google. They are keeping the brand around, but it certainly is part of the Google security practice, and in that way, perhaps it's not surprising to see him transition out. But I think really, in terms of his impact: he's somebody who is able to talk articulately about what adversaries are doing and was able to bring cybersecurity to the masses. Certainly, you see executives on the financial shows, on CNBC in the United States, talking about how much money they can make investors. But in terms of explaining what adversaries in China and Russia and North Korea are doing, on CBS, in The New York Times, that's really something that, perhaps more so than almost anyone else, Kevin Mandia did a good job of bringing to the masses, talking to people in plain English about what adversaries are doing and why you, as John Q. Citizen, should care. At the same time, the reports Mandiant put together just had high levels of detail and were able to talk about what specific individuals were doing in a way that nobody had really done before. In terms of educating both the general public and the security community about what different adversaries are doing, we've seen a lot of people imitate Mandiant since, but Mandiant was almost certainly the first to do it at this level of granular detail, and probably still the best, and I think we have Kevin Mandia to thank for that.

Anna Delaney: How has the industry reacted to this leadership change at Mandiant? Or has there been a market reaction in any way?

Michael Novinson: I mean, it's been pretty quiet. I think there's just been less visibility because Mandiant is a part of Google.
And he's been a little less visible since Mandiant became a division within Google rather than his own company. But yeah, certainly there's a question around what's next for him. I guess the obvious path would be going the investor and consulting route he's already on. He became a strategic partner at Ballistic Ventures, which is a venture capital firm; some of the former AlienVault leadership is there as well. He's made investments in 15 companies since late 2021, and he sits on a handful of boards. So certainly, you see a lot of former high-profile CEOs go that route. But I think about public service as well: he spent six years in the Air Force. He sits on a number of boards and commissions for President Biden, for CISA, and is very well tied into the national security and intel community. So I have to wonder, when I think about what's next for him: is he going to be primarily following in the footsteps of Dave DeWalt, the former FireEye CEO, doing a lot on the private sector side? Or are you going to see him more on the government side? Certainly, he has the chops, the expertise and the relationships to really make an impact on the public side as well.

Anna Delaney: Very good. Well, thanks so much, Michael, for updating us.

Michael Novinson: Of course.

Anna Delaney: Marianne, is UnitedHealth Group on the hook? I mean, you've reported that over 100 medical associations and industry groups are urging the U.S. Department of Health and Human Services to hold UnitedHealth Group solely responsible for HIPAA breach notifications following the Change Healthcare ransomware attack. Can you share the latest?

Marianne McGee: Sure. Well, the dust is starting to settle in terms of the massive IT disruption that was caused by the Change Healthcare attack in February, with most of the company's major IT services and products back online. But now the reality is starting to set in for U.S.
healthcare providers, including thousands of doctor practices and hospitals, about the ransomware attackers compromising the protected health information belonging to those entities' patients, and the resulting HIPAA breach notification duties that could be triggered. Now, UnitedHealth Group, which is the parent company of Change, has offered to handle breach notification for its customers and for entities affected by the breach. But entities are not so sure what the regulators think. So earlier this week, more than 100 industry groups that represent doctor practices, as well as healthcare CISOs and CIOs, sent a letter to the U.S. Department of Health and Human Services asking for clarity regarding the breach notification responsibilities of HIPAA-covered entities and their business associates that are affected by the incident. The groups are essentially asking HHS' Office for Civil Rights to publicly state that, yes, UnitedHealth Group is indeed solely responsible for breach notification, and not the entities whose patients' PHI was compromised. But because UnitedHealth Group has estimated that the PHI of up to one-third of the U.S. population could be affected by the attack, breach notification could involve more than 100 million people. That will undoubtedly result in a massive, record-breaking breach notification event for the healthcare sector. Back in April, HHS OCR issued guidance in the form of frequently asked questions regarding the Change Healthcare attack and the potential breach notification duties of HIPAA-covered entities and business associates that are affected. But HHS' guidance pretty much didn't solve the problem in terms of clarity for many of these entities. HHS said that covered entities that were affected are still required to file breach reports to HHS and provide notification to affected individuals without unreasonable delay. And there's fine print, of course, in all of this. Business associates, meanwhile, that are affected by the incident must notify their affected covered entities after the discovery of the breach.
So covered entities have up to 60 calendar days from the date of discovery of a breach of unsecured PHI to file breach reports to OCR through its portal for breaches affecting 500 or more individuals, according to the guidance that HHS offered, and that guidance is not new in terms of what HIPAA calls for. So even though UnitedHealth Group has offered to handle breach notification for customers affected by the Change Healthcare attack, the complex relationships in the situation muddy the waters, especially in terms of HHS OCR's guidance. For instance, some healthcare providers might be business associates of Change, and for others in the situation, UnitedHealthcare might be a clearinghouse, which would be a covered entity responsible for notification. So this all gets kind of muddy, depending on different aspects of what UnitedHealth Group does for an entity and the kind of services that Change provided. But in any case, HHS OCR wants to ensure that no affected individuals fall through the cracks in terms of breach notification, with healthcare entities thinking that they don't need to notify their patients because UnitedHealth Group will do that. This group of health industry groups is arguing that, if anything, many patients could end up getting multiple notifications for the same breach. If every affected doctor office or hospital or specialist needs to notify, the groups contend, this is only going to confuse patients: they're going to get multiple breach notifications, they're going to be alarmed, they'll be confused, and so on. As far as we know, at this point, UnitedHealth Group still hasn't reported a breach to regulators. And the company has said that it could take several weeks or months for that analysis to be done. But once that happens, some are betting there will be a new round of panic in the healthcare sector, this time regarding breach notification issues, unless this gets clarified pretty soon.
And to me, it seems like the regulators are sticking to what HIPAA says unless, for instance, a covered entity has a business associate agreement, and under the terms of that contract, it says the business associate is responsible for breach notification. So this gets pretty muddy for the industry.

Anna Delaney: Muddy indeed. And given the scale of this breach, potentially, as you say, affecting one-third of the U.S. population, what do you think are the expected short-term and long-term consequences for the healthcare sector, but also for patients?

Marianne McGee: Well, in the short term, again, once the breach notifications kick in, there's going to be a lot of scrambling and there'll be confusion. But long term, and this has already begun, there are going to be dozens and dozens of lawsuits. Dozens have already been filed against UnitedHealthcare and Change by patients who are assuming that their data was affected. And then, once the individual doctor practices whose patients were affected become identified, you'll probably see either these lawsuits get amended or new lawsuits that name both UnitedHealth Group and these various healthcare providers as codefendants. And in many cases, these doctor practices ... there are thousands of small doctor practices that were hurt on the financial end in this attack because they couldn't process claims. The last thing that they need to do now is hire lawyers to defend them in lawsuits, and we'll see that happen, I'm sure.

Anna Delaney: Well, Marianne, the saga continues. Thank you so much for sharing the latest. Tony, you're looking at a story covered by our colleague Mat Schwartz, which delves into the privacy and ethics debate surrounding large language models. And it turns out ... surprise, surprise ...
that many companies, like Slack, like Salesforce, are automatically opting users in to using their data for training, which of course raises concerns about transparency and compliance with GDPR rules, does it not?

Tony Morbin: GDPR and a lot of other concerns as well. I mean, it used to be said that if you don't pay for a product or service online, you are the product, because the data has a value. But today, data-hungry AI large language models are hoovering up all your data, whether you paid for the service or not. So as you said, just this week we've been learning about enterprise customers at Slack, owned by Salesforce, discovering that they're automatically opted in to have their data used to train Slack's global LLMs. Now, Slack responded to Mat's article, and the response is included in it, saying there is an opt-out option and that the data they're collecting is metadata and not personally identifiable data. But I think any of us in this industry will know exactly how useful metadata can be; you don't actually need to have the personal data. And it can be argued that long before ChatGPT accelerated LLM use, we were using machine learning and AI to make sense of customer data. I mean, we had the original Facebook effectively rating people at Harvard, and Amazon telling us what we might like to buy based on what we previously bought. And as Mat's excellent report notes, Slack isn't alone. Adobe, Amazon Web Services, Google Gemini, LinkedIn, OpenAI and many others have terms and conditions that say, by default, they can use their customers' data and interactions to train their LLMs. Legal and privacy experts are concerned. These organizations need to ensure that they comply with relevant privacy regulations, and as you mentioned, Anna, that does include the General Data Protection Regulation in Europe, the GDPR. And just this week, with the AI Act, they've passed the vote, and this world-first AI law is set to come into force in the EU next month. A key requirement there is that companies need to be transparent with users about what data the companies are collecting and for what purpose.
And the general consensus is that a small note in the terms and conditions isn't likely to be enough to really count as informed consent. It's not just in Europe that privacy is a growing concern. But, as with GDPR, European regulations can have a global impact: if you're dealing with people in that territory, the high standard it sets is the one you have to meet. We've also seen things like Snapchat, this week, revising its AI privacy policy following a U.K. ICO probe. And, perhaps more copyright than privacy but in the same area, we also saw Hollywood megastar Scarlett Johansson say that she was shocked, angered and in disbelief that Mr. Altman of OpenAI would pursue a voice that sounded so eerily similar to hers, after she had specifically declined to allow OpenAI to use her voice. On the other side, for many, the benefits of AI justify this loss of privacy. A recent PwC study found that productivity growth was almost five times as rapid in parts of the economy where AI penetration was highest, compared to less exposed sectors. And it went on to say that we're only seeing the tip of the iceberg, because so many areas aren't actually using AI yet. If you go on to any AI entrepreneurial forums (they're quite interesting, but obviously pretty gung-ho), there's a consensus that the ability to innovate in AI requires a near absence of regulation. And in the industry, as Mat pointed out, cryptographer Adi Shamir, the "S" in the RSA cryptosystem, said during RSA Conference that he is quite pessimistic about the possibility of legally developing large language models in Europe, unless you break the law. So that's really going to the extreme. And certainly, China and the U.S. have taken a more liberal approach when it comes to regulation ... going with recommendations rather than regulations ... again, specifically to try to avoid stifling innovation. Having said all that, even while we're recording this, there's an international AI safety conference underway in South Korea, with both China and the U.S. attending. And the remit, which does go beyond privacy, includes reliability, potential misuse and existential threats.
But the premise is that there's a need to address these risks uniformly, saying that the risks and challenges of AI can't be reduced to one jurisdiction. So it's calling for the establishment of a consistent global approach to AI regulation, putting checks in place that can be adapted across the board, rather than specific regional or industry standards. The idea is that tech companies will be held accountable for deploying tech responsibly, while governments put regulations in place to ensure safe deployment. But in addition to the differing national approaches, a big problem is that the rate of change and advancement in AI is so rapid that legislation is unlikely to keep up. Then, the problem for the rest of us in this fend-for-yourself environment is: what do we do? Many CISOs are looking at how they can prevent their intellectual property, classified information, regulated data, including PII, and other sensitive data from ending up in somebody else's LLM. Approaches can include the use of small language models trained only on your own data, or private chatbots that outsiders can't access, but that will limit their learning capacity. Others, such as Britain's Department for Work and Pensions, have banned employees and contractors from using ChatGPT and similar LLMs. But then, of course, we potentially have the problem of shadow IT, or shadow AI I should say in this case. We're also seeing increasing interest in technical controls, such as applying data loss prevention software and blocking or filtering sites and services; inventory controls that vet AI models before they're deployed; and monitoring providers for compliance and application of privacy and security rules. This basically also includes strengthening the terms of engagement in relation to the services and AI products that we not just buy but commission, when we get others to build them on our behalf. So how are we likely to get a global consensus on how we should develop and deploy AI? If you look at any of the polarized discussions underway online today, and you can choose any geography and almost any topic, it is clear that neither side trusts anything the other one is saying.
And given today's geopolitics, you could say that an international AI agreement is impossible. Yet, for all their flaws, we do have the International Atomic Energy Agency, international aviation law and the Law of the Sea, because we understand and agree about the dangers and benefits. An international AI agreement needs to be added to that list, even if it's just to establish principles, if not laws.

Anna Delaney: That was a great overview of a massive debate. Tony, what questions remain for you? Where will you be watching as this continues to unfold?

Tony Morbin: Well, I mean, there is a big competition going on between the U.S. and China as to who's going to control AI. China has huge amounts of data internally from its own population, who don't really have any say over whether that data is being used or not. China's looking for geopolitical advantage, and others, such as Russia and North Korea and so on, will probably do the same. In the U.S., yes, there is this political advantage, but also commercial advantage, and how do you get commercial advantage? Europe is trying to regulate for privacy for individuals, but is it going to lose out on innovation, as the entrepreneurs seem to be saying it will? So that's a big concern. And privacy is just one of the aspects of safety, of course. Today, in the U.K., we've also got the former head of the Post Office giving evidence over a load of post office workers being imprisoned for allegedly stealing money, when it appears that it was all down to a computer flaw. I'm just saying that kind of emphasizes: how much do we trust systems when we don't know how they work? Because the consequences can be very dire.

Anna Delaney: Absolutely. We need complete transparency here, and I don't know if we're going to get that ever, or anytime soon. But no doubt ... to be continued. Thank you so much, Tony. And finally, just for fun, as Tony said, the latest controversy around OpenAI allegedly using a vocal likeness of Scarlett Johansson for their new AI assistant's voice inspired this question: as a journalist, whose voice would you personally choose for AI-generated content, and why?
Marianne McGee: For me, I will say Diane Sawyer. I don't know if Anna and Tony, you in the U.K., are familiar with her, but she was a longtime correspondent at 60 Minutes. She later became anchor of ABC World News, and she was a co-anchor of 20/20. And I say her because I always found her authoritative yet empathetic, very believable and just soothing. So I think that her voice and her style would make people convinced that whatever this AI is saying must be true. And Diane, I advise you not to lend your voice for any of this.

Anna Delaney: For that trust, yes, I'm going to Google her voice now. Michael?

Michael Novinson: Well, I guess I'll give someone Anna and Tony are definitely familiar with ... it would be Dame Helen Mirren. Not a journalist, but a lovely voice nonetheless. And if I had someone narrating my life and my work, who better than her?

Anna Delaney: We love Dame Helen. Great choice. Tony?

Tony Morbin: I'm hoping you will have heard of the one I'm saying. I once heard a comment that there comes a time in your life when you have to choose whether your role model for old age is Keith Richards or David Attenborough. Well, I've switched over to the David Attenborough side now, and I'd be looking to his voice for his authenticity. And along the lines of what Marianne said: if you can fake authenticity, you've got it made.

Anna Delaney: So Tony, one of my suggestions was also David. So David, who is obviously a lead authority in nature docs, but also Morgan Freeman, because he is the voice of God. But I think I'm gonna go for Duchess. A bit like Marianne said, we need someone trustworthy. Duchess, the female cat in The Aristocats, of course, voiced by Eva Gabor. And I just think her voice exudes sophistication and warmth and elegance. And as I said, ultimately, you trust her.

Tony Morbin: Another voice that I do love, but unfortunately he's evil, is the tiger ... Shere Khan in The Jungle Book. Such a great voice.

Anna Delaney: Awesome stuff. And The Lion King ... we could go on. Thank you so much.
This has been great: informative, educational as always. Thanks.

Michael Novinson: Thank you, Anna.

Marianne McGee: Thanks, Anna.

Anna Delaney: Thanks so much for watching. Until next time.