How to amend the Artificial Intelligence Act to avoid the misuse of high-risk AI systems

When it comes to amending the Artificial Intelligence Act, practices endangering fundamental rights must be banned and high-risk applications should be strictly regulated, argues Marcel Kolaja

By Marcel Kolaja

Marcel Kolaja (CZ, Greens/EFA) is rapporteur for the opinion of the Committee on Culture and Education (CULT) on the Artificial Intelligence Act

21 Apr 2022

As the opinion rapporteur for the Artificial Intelligence Act in the Committee on Culture and Education (CULT), I will present a proposal for amending the Act in March. The draft focuses on several key areas of artificial intelligence (AI): high-risk AI in education, requirements and obligations for high-risk AI, AI and fundamental rights, as well as prohibited practices and transparency obligations.

The regulation aims to create a legal framework that prevents discrimination and prohibits practices that violate fundamental rights or endanger our safety or health. One of the most problematic areas is the use of remote biometric identification systems in public spaces.

Unfortunately, the use of such systems has increased rapidly, with governments and companies deploying them, for example, to monitor places where people gather. It is incredibly easy for law enforcement authorities to abuse these systems for mass surveillance of citizens. The use of remote biometric identification and emotion recognition systems therefore crosses a line and must be banned completely.

Moreover, the misuse of technology is concerning. I am worried that countries without a functioning rule of law will use it to persecute journalists and obstruct their investigations. This is already happening to some extent in Poland and Hungary, where governments have used the Pegasus software to track journalists and members of the opposition. How hard will it be for these governments to abuse remote biometric identification, such as facial recognition systems?

As far as we know, the Hungarian government has already persecuted journalists, in the supposed interest of national security, for questioning the government’s actions during the pandemic. Even the Chinese social credit system, which ranks the country’s citizens, is justified on the alleged grounds of ensuring security.

It is absolutely necessary to set rules that will prevent governments from abusing AI systems to violate fundamental rights. In October, a majority of the European Parliament voted in favour of a report on the use of AI in criminal law. The vote showed a clear direction for the European Parliament in this matter.

The proposal includes a definition of so-called high-risk AI systems. HR tools that filter job applications, banking systems that evaluate our creditworthiness and predictive policing systems all fall under the definition of high-risk because they can easily reproduce bias and deepen existing disparities.

As AI is also present in education, the proposal covers test evaluation and entrance examination systems. This list should be expanded to include online proctoring systems. However, diverging interpretations of the GDPR with regard to online proctoring have resulted in differences in personal data protection in Amsterdam, Copenhagen and Milan.

According to the Dutch and Danish decisions, there was no conflict between online proctoring systems and the GDPR, but the Italian data protection authority issued a fine and banned further use of these technologies. Currently, universities are investing in new technologies without knowing whether they are authorised to use them or whether they will be fined.

In my opinion, technologies used for personalised education should also be included in the high-risk category. Here too, incorrect use can negatively affect a student’s future.

In addition to education, the CULT committee focuses on the media sector, where AI systems can be easily misused to spread disinformation. As a result, the functioning of democracy and society may be in danger.

When incorrectly deployed, AI systems that recommend content and learn from our responses can systematically display content that forms so-called rabbit holes of disinformation. This fuels hatred and polarisation in society and has a negative impact on the functioning of democracy.

We need to set clear rules that will not be easy to circumvent. Currently, I am working on a draft legislative opinion which will be presented in the CULT committee in March. I will do my best to fill all the gaps that I have identified.

The Council is also working on its position. The compromise text presented by the Slovenian presidency, for example, extended the provisions on social scoring from public authorities to private companies as well.
