Is the AI Act missing safeguards on migration?

The European Commission’s proposed AI Act – the first-ever legal framework on artificial intelligence – includes an exemption that could allow for the use of certain high-risk technologies in migration-related procedures
Illustration by Joe Magee

By Laura Lamberti

Laura Lamberti is a junior reporter at The Parliament Magazine

25 Jan 2023

The photo is hard to stomach: it shows a man’s back lined with bright red and pink lacerations. The caption reads: “Injuries sustained to the abovementioned respondent’s back after expulsion by Croatian authorities.”

The image was taken in Vrata, Croatia, in 2019, by affiliates of the Border Violence Monitoring Network (BVMN), a coalition of organisations documenting illegal pushbacks and police violence by European Union Member State authorities in the Western Balkans and Greece.

It is one of many case studies in a report the coalition prepared for the United Nations Special Rapporteur on contemporary forms of racism, xenophobia and related intolerance. The findings, which focus on the role of technology in illegal pushbacks from Croatia to Bosnia and Herzegovina and Serbia, were submitted ahead of the Rapporteur’s own report on Race, Borders and Digital Technologies.

The man with the lacerations hypothesised that he might have been tracked with night vision binoculars. Drones, helicopters and thermal vision cameras also feature heavily in these testimonies. The EU’s expanding use of technology to monitor migration and improve border security includes this more traditional military equipment, but also the development and deployment of something far less perceptible: artificial intelligence (AI).

Both AI and migration are at the top of the European Union’s agenda, as evidenced by the agreement on a general approach on the AI Act (pending Parliament’s position) and the new European Union Action Plan on the Western Balkans.

However, when it comes to regulating the use of AI in migration-related procedures, the long-awaited AI Act – the first attempt by a major regulator to tame the potentially ferocious AI beast – strays from the “fundamental rights approach” the EU claims to hold as its north star, at least according to human rights advocates and some scholars who study tech and migration.

To understand their concerns, it is necessary to start from the end: more specifically, from Article 83 of the AI Act’s 85 articles.

The article states that the regulation shall not apply to AI systems that are components of large-scale IT systems, a category that includes EU migration databases such as Eurodac (the European Asylum Dactyloscopy Database) and the upcoming European Travel Information and Authorisation System (ETIAS).

Despite being labelled as “high risk” in the four-tier risk hierarchy around which the AI Act is structured, AI systems such as automated risk assessments and biometric identification systems are exempt from regulation under Article 83. “It arbitrarily exempts hundreds of millions of people who are not European citizens from the safeguards that the AI Act foresees,” says Caterina Rodelli, an EU policy analyst at Access Now, an international NGO working on digital civil rights.

She puts it more bluntly: “Article 83 is the proof that EU policies are racist.”

When Access Now asked the Commission about the reasoning for this exemption in its proposal, no clear answer was forthcoming. Rodelli draws her own conclusions: “Looking at the overall political spectrum when it comes to immigration, it is clear the reason is a lack of political will.”

A Commission source notes that respect for fundamental rights is a condition for the legality of all EU acts, including the large-scale IT systems in question. Moreover, Article 83 does not constitute a blanket exemption: the regulation would apply if the replacement or amendment of the relevant legal acts on large-scale IT systems leads to a significant change in the design or intended purpose of the AI system concerned.

If the AI Act remains as is, this exemption – which some view as a tacit approval for the use of high-risk AI systems in migration – will be codified into legislation. Hope Barker, a senior policy analyst at the BVMN, warns that when something like this is codified at EU level, the phenomenon it legitimises is strengthened. “It shows that there’s no political will from the top to try and fight against these violations,” she says. “At that point, how is anyone supposed to be held accountable?”

According to a spokesperson for the European Association for AI (EurAI), an international organisation promoting the science and technology of artificial intelligence in Europe, “AI systems in decision-making can obfuscate the understanding of who exactly is responsible for, and thus has power to reverse which decision”. Within the context of migration, the responsibility for ensuring rights are respected “can disappear between two countries”, the spokesperson explains by email.

Working on the ground with migrants in northern Greece, Barker saw the effects of policy- and technology-enforced prejudice. “Something we’ve seen at borders is that ethnic profiling of individuals is used to justify or facilitate human rights violations,” she says, voicing concern for the AI Act’s lax stance on AI and profiling.

In 2020, the United Kingdom’s Home Office abandoned the use of a visa streaming algorithm which automatically processed information provided by visa applicants and assigned each person a colour code based on a traffic light system. Campaigners claimed the algorithm was racially discriminatory; solicitors for the Home Office wrote in a letter that the department intended to consider and assess issues around unconscious bias.

Despite evidence of algorithmic bias and its potential harms, algorithms used to profile travellers and migrants will most likely also fall outside the scope of the AI Act. Why? Because this AI technology forms the basis of ETIAS, which is scheduled to be implemented in early 2023 and is exempted under Article 83.

Targeting visa-exempt visitors travelling to the Schengen Area, ETIAS will check the data in the travel authorisation application form against EU information systems for borders and security. Moreover, ETIAS will deploy an automated risk assessment system that profiles travellers based on risk indicators, such as education level and statistics on irregular migration, which could reinforce automated suspicion against people of certain backgrounds.
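To make the criticism concrete, here is a minimal, purely hypothetical sketch of how a rule-based risk-scoring system of this kind could work. The indicator names, weights and threshold below are invented for illustration only; the actual ETIAS screening rules have not been made public.

```python
# Hypothetical illustration of indicator-based risk scoring.
# All indicators, weights and thresholds are invented; they do not
# reflect the actual (unpublished) ETIAS screening rules.
from dataclasses import dataclass

@dataclass
class Application:
    education_level: str  # e.g. "primary", "secondary", "tertiary"
    nationality: str      # placeholder country code

# Invented weights: lower education scores higher "risk".
EDUCATION_WEIGHT = {"primary": 2.0, "secondary": 1.0, "tertiary": 0.0}

# Invented per-nationality rates standing in for "statistics on
# irregular migration" - the indicator critics say amounts to
# automated suspicion by background.
IRREGULAR_MIGRATION_RATE = {"XX": 0.8, "YY": 0.1}

def risk_score(app: Application) -> float:
    """Combine the indicators into a single score (higher = riskier)."""
    score = EDUCATION_WEIGHT.get(app.education_level, 1.0)
    score += 5.0 * IRREGULAR_MIGRATION_RATE.get(app.nationality, 0.0)
    return score

def triage(app: Application, threshold: float = 3.0) -> str:
    """Route an application based on its score."""
    return "manual review" if risk_score(app) >= threshold else "auto-approve"

# Two applicants identical in every respect except nationality:
print(triage(Application("tertiary", "XX")))  # manual review
print(triage(Application("tertiary", "YY")))  # auto-approve
```

In this toy model the nationality term alone pushes one of two otherwise identical applicants over the review threshold – exactly the kind of outcome critics describe as automated suspicion, and one that is difficult to contest when neither the indicators nor the weights are disclosed.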

In Rodelli’s view, one of the most problematic aspects of automated risk assessment is the lack of transparency regarding its design.

Transparency concerns over the design and testing of AI systems used for border management come not only from civil society but also from inside the hemicycle. Patrick Breyer, a German Pirate Party MEP who sits with the Greens/EFA group in the European Parliament, has spent years fighting for transparency regarding the ethical and legal assessments of iBorderCtrl (Intelligent Portable Border Control System), a project funded by Horizon 2020, the EU’s programme for research and innovation. iBorderCtrl is a video lie detector intended to provide border officers with information on which individuals present “biomarkers of deceit”.

In 2019, Breyer filed a lawsuit at the European Court of Justice for the release of documents on the ethical justifiability, legality and results of the project after the EU’s Research Executive Agency refused to release them. The first-instance ruling called for the publication of legal and ethical evaluations of “automated deception detection” in general, but not of any documents relating specifically to the project; Breyer has appealed the decision.

“I find it totally unacceptable that business interests are given priority over the future of the society we want to live in,” Breyer says, stressing the danger of a society in which people risk being labelled liars without any avenues for challenging such a designation.

China’s social scoring system is a widely referenced example of the types of AI applications the AI Act categorises as posing an “unacceptable risk”. Breyer argues the EU will eventually look like China if it doesn’t ban this type of AI. Employing the methods of authoritarian regimes will lead the EU to abandon its values, he argues, and ultimately lose what he calls the “global struggle of authoritarianism versus the free world”.

To Breyer, the EU must continually fight for democracy, fundamental rights and freedoms within its borders and remain loyal to these principles in the global arena. “Even if this technology is not allowed to be used in the EU, once the EU funds its development, it can be sold to and used by oppressive regimes, and we have a responsibility when it comes to research and development to make sure that it aligns with our values and fundamental rights,” he explains.

The European Union Agency for Fundamental Rights (FRA) believes the AI Act’s requirements for high-risk AI systems will help mitigate the risks they pose to fundamental rights. Nevertheless, the agency argues, systems exempted under Article 83 should still be checked for compliance with the AI Act to increase transparency and accountability.

The final regulation cannot be adopted until the European Parliament agrees its own position – a process that requires evaluating all the amendments on the table, including those on migration.

However, according to Rodelli from Access Now, the fact that there is still time to include additional safeguards for the use of AI in migration does not necessarily inspire optimism: “We know very well from previous instances that it is the kind of topic policymakers are willing to sacrifice.”
