The AI Act should mandate tests for human oversight

New research demonstrates that people do not always learn to disregard poor algorithmic recommendations, a finding that should be reflected in the EU’s AI Act

By Johannes Walter

Johannes Walter is a researcher at the ZEW – Leibniz Centre for European Economic Research and a PhD candidate in Technical Economics at the Karlsruhe Institute of Technology

14 Mar 2023

Just 0.2 seconds before the collision, the self-driving car alerted its operator to retake control. Unfortunately, it was too late: one Sunday night in 2018, Elaine Herzberg became the world’s first pedestrian to be killed by a self-driving car.

At the time of the accident, which occurred in the United States, the car had been in autonomous mode for just under 20 minutes: long enough for its operator to become complacent and stop supervising the vehicle.

In the European Union’s proposed Artificial Intelligence Act (AI Act), human oversight is one of the main mechanisms intended to prevent harmful AI outcomes.

Of course, human oversight is not restricted to autonomous driving: medical professionals oversee algorithmic recommendations on how to treat patients, judges consider algorithmically derived predictions of recidivism risk, and corporate human resources departments use algorithms to recommend which applicants to interview.

The promise of human oversight is that humans will notice and correct wrong algorithmic advice. And indeed, human oversight can work. Human operators have successfully intervened when their autonomous cars were heading into danger. Child welfare experts in the UK were found to correct erroneous algorithmic advice. And in recent months millions of users have easily spotted factual errors and hallucinations in large language models such as OpenAI’s ChatGPT.

But in many applications, human oversight fails. In recent years the evidence that humans are often poor supervisors of AI systems has been mounting. We show in a new experiment that people do not learn to disregard poor algorithmic recommendations, even when they receive an explanation of how the algorithm works and interact with it over several rounds. Other studies have shown that human oversight has failed to prevent discriminatory decisions in policing, medical applications and felony sentencing.

These failures often stem from the way humans interact with AI systems and the psychological effects that result. For example, people have been found to rely too heavily or too little on algorithms, depending on whether the task is perceived as objective or subjective in nature.


The problem with the AI Act is that it presupposes that AI systems will always be designed in such a way that human oversight is effective. As self-driving cars that allow their operators to become distracted for extended periods show, this is clearly not the case. It is therefore critical to test, for each high-risk application, whether humans can successfully act on algorithmic advice when it is of high quality and ignore or correct it when it is poor.

Based on the findings in the scientific literature and our own research, I propose three recommendations to improve the AI Act and any future AI legislation in Europe. First, the AI Act should acknowledge that it may not always be possible to make human oversight work successfully. Second, within the existing conformity assessments for high-risk AI applications, the AI Act should mandate randomised controlled trials to determine whether and how human oversight can be effective. Finally, if a specific AI application is found to exhibit or exacerbate biases that human oversight cannot prevent, the system should not be deployed in its current form.

Ensuring that human oversight works as intended is key to preventing future discriminatory decisions and further casualties.
