
AI Recruitment Tools Prone to Bias, Privacy Issues

ML, NLP Tools Collect More Personal Information Than Required, U.K. Regulator Says

Artificial intelligence tools currently used by organizations in the United Kingdom to screen job applicants pose privacy risks and are susceptible to biases and accuracy issues, the U.K. Information Commissioner's Office found.


The data regulator uncovered the issues after auditing AI tools used in recruitment for purposes such as predicting a candidate's interest in a role, scoring job seekers' competencies and analyzing the language and tone of candidates in recorded video interviews.

The study focused mainly on AI solutions developed with machine learning and natural language processing and did not cover generative AI-powered tools. AI systems help businesses process large volumes of data more efficiently, but the ICO's analysis found that models used for recruitment come with "inherent risks to people and their privacy."

"Our audits found areas for improvement in data protection compliance and management of privacy risks," the ICO said. "We did witness instances where there was a lack of accuracy testing. Additionally, features in some tools could lead to discrimination."

In terms of privacy, the ICO said AI developers in some cases used more data than they needed to train their solutions. For instance, in maintaining candidate databases, some developers collected and processed not only candidates' names and contact details but also less essential data such as photos.

These databases also contained a wide swathe of personal information scraped from social media and job networking sites. In certain other cases, the companies anonymized personal data to develop other solutions, the ICO said.

This personal information was processed without adequate consent from the job candidates, which is a key privacy requirement under the U.K. General Data Protection Regulation.

The report also cited accuracy and bias issues stemming from AI developers failing to use appropriate evaluation metrics to verify their tools' outputs, or failing to separate training data from testing data.
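The report itself contains no code, but the separation the ICO describes is simple to illustrate. The sketch below is a minimal example assuming a scikit-learn-style workflow on synthetic data; the report names no specific libraries or features. The model is scored only on records held out from training, and per-class metrics are reported rather than a single accuracy figure.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for candidate-screening features and labels.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)

# Hold out data the model never sees during training: evaluating on the
# training set itself overstates accuracy, which is the failure mode the
# ICO describes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Report precision and recall per class rather than one accuracy number,
# so weak performance on one group is not hidden by the average.
print(classification_report(y_test, model.predict(X_test)))
```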

When companies failed to use sampling techniques that ensured models trained on diverse data, the AI solutions produced biased outcomes, including filtering out candidates and inferring individuals' gender and ethnicity. "This inferred information is not accurate enough to monitor bias effectively," the ICO said.
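Stratified sampling is one common way to achieve the kind of representativeness the ICO alludes to. The sketch below is illustrative only, with invented column names: it keeps a group attribute's proportions identical across the training and test splits, so the model is neither trained nor evaluated on a skewed sample.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy candidate records; "region" stands in for any group attribute
# whose representation should be preserved across splits.
candidates = pd.DataFrame({
    "score": [72, 65, 88, 91, 54, 77, 69, 83],
    "hired": [1, 0, 1, 1, 0, 1, 0, 1],
    "region": ["north", "north", "south", "south",
               "north", "south", "north", "south"],
})

# Stratifying keeps each region's share the same in both splits, so the
# model cannot be fit on one population and judged against another.
train, test = train_test_split(
    candidates, test_size=0.25,
    stratify=candidates["region"], random_state=0,
)
print(train["region"].value_counts(normalize=True))
print(test["region"].value_counts(normalize=True))
```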

The regulator recommended several steps to help AI model developers comply with U.K. GDPR and address accuracy and model bias, including developing AI tools with pseudonymized personal information, minimizing the datasets used for training and testing, deleting personal information once its retention period expires, and assessing the validity and accuracy of training data.
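Pseudonymization, the first of those recommendations, typically means replacing direct identifiers with tokens that still allow records to be linked. The following is a minimal sketch using a keyed hash; the approach and field names are assumptions for illustration, not a method the ICO prescribes.

```python
import hashlib
import hmac

# Illustrative only: a real key would be generated securely and stored
# separately from the dataset.
SECRET_KEY = b"rotate-and-store-this-outside-the-dataset"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable keyed hash.

    The same input always maps to the same token, so records can still
    be linked for training, but the original value cannot be recovered
    without the key.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "email": "jane@example.com", "cv_text": "..."}

# Keep only what training needs; drop or tokenize direct identifiers.
training_record = {
    "candidate_id": pseudonymize(record["email"]),
    "cv_text": record["cv_text"],
}
print(training_record)
```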

Addressing AI Challenges

While the ICO report is a positive step toward protecting job seekers' privacy rights, the lack of a sector-specific regulator to oversee AI-based recruitment in the U.K. is a concern, and a potential policy gap the government needs to address urgently, said Michael Birtwistle, associate director at the Ada Lovelace Institute, an independent AI research organization.

"This leaves a concerning gap in any approach to regulating AI that relies exclusively on existing regulators. It also highlights the importance of existing safeguards in data protection law, particularly around automated decision-making," Birtwistle said.

The ICO report highlights attempts by "data-hungry systems" to bypass data protection rules, said Susannah Copson, legal and policy officer at digital rights organization Big Brother Watch.

Copson said the U.K. government must urgently enforce AI transparency and accountability measures such as algorithmic impact assessments to address biases and privacy risks. "Such steps are essential to prevent job candidates and recruiters alike from falling victim to flawed algorithms," she added.

While the previous Conservative government had indicated that it would not introduce binding AI regulation in the U.K., Peter Kyle, the newly appointed Labour technology secretary, said Wednesday that his government will likely introduce AI safety rules next year.


About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.




