How State Governments Can Regulate AI and Protect Privacy
Risk Counsel Jenny Hedderman Discusses Privacy, Tech Laws and 'Regulating Jell-O'
As risk counsel for the statewide Office Management Team in the Massachusetts Office of the Comptroller, Jenny Hedderman had a lot to say about the risks of generative AI, starting with the fact that "the pressure to get on the bandwagon" with artificial intelligence is causing organizations and agencies to bypass security.
Regulating AI is "like regulating Jell-O," Hedderman said, because it is "amorphous and always changing." But states are looking at regulating "areas of harm" rather than regulating AI as a whole, she said. And because technology laws typically lag years behind the technology itself, companies that self-regulate may see gains long before legislation catches up.
Hedderman said bias and exclusion in algorithms are good places to start when it comes to regulating AI. Companies that use AI should make sure it is making their automated processes better, not just faster, she said.
In this episode of CyberEd.io's podcast series "Cybersecurity Insights," Hedderman also discussed:
- Why people don't seem to care about privacy until it has been compromised and why the government needs to protect privacy for citizens;
- Why reliance on email and third-party vendors creates risks for businesses;
- How the legal profession should use - and teach about - AI.
Hedderman specializes in compliance, internal controls and risk management in the areas of statewide accounting, payroll, financial reporting and statewide financial audits for 154 state agencies. Her current focus is developing the comptroller's statewide risk management program, including cybersecurity internal controls and cybersecurity awareness to reduce fraud and cyber incidents. She is an adjunct professor of business law at Endicott College.