
Draft EU Regulation Proposes Curbs on AI for Surveillance

Regulation Would Restrict Use of Facial Recognition - Exempting Military Use

European politicians continue to debate the extent to which artificial intelligence should be allowed in public and other spaces, and are attempting to advance the discussion by setting definitions and agreeing on standards.


A proposed EU regulation on AI aims to restrict the use of facial recognition for surveillance, according to a draft of the regulation obtained by the news site Politico Europe.

The 81-page draft aims to address privacy concerns about what it refers to as "high-risk AI systems," such as remote use of biometrics in public places, whether by government or private-sector organizations.

“The rules established by this regulation should apply to providers of AI systems irrespective of whether they are established within the [EU] or in a country outside the [EU],” the draft states.

In August 2019, a developer's use of facial recognition software around the King's Cross railway station in London sparked controversy over potential violations of the privacy provisions of the EU's General Data Protection Regulation, and the project was abandoned (see: Use of Facial Recognition Stirs Controversy).

What Would Be Prevented?

The aim of the regulation is to "establish common normative standards for all high-risk AI systems." These standards would apply to methods of surveillance that include monitoring and tracking of people "in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources." This includes the use of facial recognition and biometric identification, as well as algorithmic social scoring of individuals based on their online and offline activities. The proposal also calls for prohibition of the sale, deployment or use of AI-powered systems that can generate images, audio or video content intended to manipulate human behavior, opinions or decisions.

Organizations that violate the proposed regulation would face a fine of up to 4% of their global revenue.

But military use of AI systems would be exempt from compliance.

More Clarity Sought

Daniel Leufer, a European policy analyst, says the EU's proposed regulation lacks clarity and contains several loopholes, including the lack of limitations on AI use by the military.

"There is lots more to unpack, and plenty of other red flags, but … Article 52 on the establishment of an EU database on high-risk systems would create a publicly viewable database of high risk systems used in the EU ... excellent," Leufer said in a Wednesday tweet.

Leufer calls for expansion of the current proposals on limiting the use of AI to include all public sector AI systems, regardless of their assigned risk level. "This is essential because people typically do not have a choice about whether or not to interact with an AI system in the public sector," he added.

Privacy Concerns

Over the past several years, the use of facial recognition and other AI technologies has stoked privacy concerns in Europe and other parts of the world. Concerns include data harvesting, unauthorized tracking, misuse of data for credential stealing and potential identity theft.

In August 2019, Sweden's Data Protection Authority issued its first fine for violations of the EU's General Data Protection Regulation after a school launched a facial recognition pilot program to track students' attendance without proper consent (see: Facial Recognition Use in UK Continues to Stir Controversy).

In March 2020, the American Civil Liberties Union filed a Freedom of Information Act lawsuit against the U.S. Department of Homeland Security and three of its agencies in an effort to learn more about how the department uses facial recognition technology at airports and the country's borders (see: ACLU Files Lawsuit Over Facial Recognition at US Airports).

In the lawsuit, the ACLU alleged that the agencies' increasing use of facial recognition technology at airports and border crossings to scan travellers' faces could pose "profound civil-liberties concerns" and enables "persistent government surveillance on a massive scale."

Last year, Jay Inslee, governor of the state of Washington, signed a new law that restricts how the state's agencies can use facial recognition technology and requires law enforcement agencies to obtain a warrant before they can use the technology for a criminal investigation (see: Washington Governor Signs Facial Recognition Law).

About the Author

Akshaya Asokan

Consultant Editor, ISMG

Asokan is a consultant editor for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.
