At Europe’s borders, AI is testing the limits of EU rights

From lie detectors to biometric databases, the EU’s growing use of artificial intelligence in migration control is exposing gaps in its legal framework, and risks blurring the line between border enforcement and internal security.
Thermal video capability displayed by the Bavarian Police, Aug. 2021. (Sachelle Bab, ZUMA Press, Inc., Alamy)

By Peder Schaefer, Margherita Dalla Vecchia

Peder Schaefer is a reporter and Margherita Dalla Vecchia is an editorial assistant at The Parliament Magazine.

05 Feb 2026

As Europe moves to fortify its borders and curb irregular migration, the European Union is pouring hundreds of millions of euros into artificial intelligence technologies — including lie detectors, speech recognition tools and drones. 

Critics say this deepens a double standard in which Europe’s strict tech and data protection rules apply to citizens, but not to migrants seeking a new life. 

Experts who spoke with The Parliament said the gap between Europe’s self-professed human rights values and its border control practices is set to widen. Spending on AI-driven border technologies continues to rise, while the European Commission’s Migration and Asylum Pact calls for increased digitization of border management. 

Critics also warn that technologies developed for migration control rarely stay at the border. Tools first deployed on migrants can be repurposed for domestic law enforcement, blurring the line between migration management and internal security. 

“This doesn’t just stop at the border,” said Petra Molnar, a researcher at Harvard University who studies the global use of AI in migration systems. “It’s about normalization of surveillance in other facets of public life.” 

More EU funding, fewer safeguards

Between 2007 and 2020, the European Union spent €341 million on border control projects involving some form of AI, according to the London-based research group Statewatch. 

Since 2020, EU funding available to member states for border control programs has risen by 45%. More than 70% of that funding has been directed toward new infrastructure, including AI tools and the development of data systems to monitor migration. 

Alongside increased spending, loopholes in Europe’s otherwise stringent AI regulations allow broad use of the technology in migration and security settings. 

The EU’s landmark AI Act banned certain uses of facial recognition and individual criminal risk assessments, but permits tools like lie detectors and mobile phone data extraction systems with oversight. Other AI migration technologies — such as forecasting tools used to predict irregular migration flows — face few regulatory safeguards. 

Migrants and EU citizens exist in a legal “parallel reality” under the AI Act, said Wael Qarssifi, a former fellow at the Migration and Technology Monitor. 

Ethical concerns extend beyond regulation. Researchers told The Parliament that a permissive approach to digitized migration control puts EU policy at odds with the Charter of Fundamental Rights of the European Union and could ultimately undermine protections for citizens as well. 

Molnar pointed to the United States, where technologies initially deployed at the southern border are now being used in domestic law enforcement. She said that carve-outs in the AI Act allow tools tested on migrants to later be redeployed on EU citizens. 

AI expands along Europe’s borders

AI migration technologies are already being tested or used by national authorities in 11 countries, according to a 2023 report published by the University of Oxford and authored by Derya Ozkul, a professor at the University of Warwick. 

Lie detection technologies are being trialed in Greece, Latvia and Hungary, while mobile phone data extraction is used in Norway, Denmark, Germany and the Netherlands. Elsewhere, authorities rely on AI to verify documents, assess security risks, recognize regional dialects of asylum seekers, collect biometric data and pilot military-grade drones in the Mediterranean. 

Some of the most advanced systems are used in Greece. There, the Automated Border Surveillance System combines drones, cameras, detectors, and AI to monitor the movement of migrants along the Greco-Turkish border and across the Aegean Sea. Greece has been allocated more than €1 billion in EU funding for border management for the 2021-2027 period. 

Not all technologies pose equal risks. Ozkul said AI mobile phone extraction technologies can generate false migration routes due to faulty GPS data or the use of second-hand phones. While the data alone isn’t sufficient to reject an asylum application, it can still distort decision-making. 

But despite growing documentation of AI use in migration management, researchers say they still lack a clear picture of how these systems operate in practice. 

“Those companies do not have to expose the details of their AI systems, how it was developed, how it is being used, and what flaws those systems have,” said Qarssifi. “There is this bubble of secrecy that is surrounding any AI system that is being used in migration law.” 

According to Niovi Vavoula, a professor in cyber policy at the University of Luxembourg, this opacity is often justified by governments as necessary for security. Vavoula and Ozkul added that researchers frequently face barriers when seeking access to migration-related data. 

ETIAS under scrutiny

Beyond AI-related loopholes, lawyers and researchers who spoke with The Parliament said migrants and asylum seekers don’t benefit from the same data protections as European citizens. 

A prominent example is ETIAS, the European Travel Information and Authorization System, set to come into effect at the end of 2026. The system is designed to screen travelers for security and irregular migration risks as part of the Commission’s broader migration control strategy. 

But the scale of data collection — and its integration into interconnected EU-wide databases — has raised alarms among privacy advocates. 

A Belgian court case brought by the Ligue des Droits Humains and referred in December to the Court of Justice of the European Union argues that ETIAS constitutes a “disproportionate interference” with the “fundamental rights” of migrants. The challenge centers on the legislation’s broad definition of “risk,” which allows for indiscriminate data collection. 

The CJEU could rule on the legality of ETIAS within the coming year. 

“This notion of ‘risk’ for public security is very badly defined,” Catherine Forget, the lawyer litigating the case, told The Parliament. “This is about crimmigration, not only about the fight against serious crime, but also immigration more broadly,” she said, referring to the criminalization of migration. 

However, ETIAS is only part of a much larger data ecosystem. eu-LISA, the EU agency for large-scale IT systems, administers the Entry/Exit System, the Visa Information System and Eurodac. According to Vavoula, these systems don’t give migrants the same level of data protection as citizens. 

And the scope of these systems is expanding. Under the new Migration and Asylum Pact, Eurodac will begin collecting facial images and identity document data, and will include information on children as young as six years old. 

For decades, Europe has been a global leader in digital regulation and data protection. But as the bloc pursues a tougher and increasingly digitized border policy, the contradiction between its human rights framework and border security practices will come into even sharper relief. 

 
