Op-ed: Abandoning the AI Liability Directive brings unacceptable risks

The European Commission’s move towards so-called simplification will harm both competitiveness and consumer security in the EU.
MEPs discuss the AI Liability Directive at a hearing in the JURI committee. (European Union 2025 - Source: European Parliament)

By Sergey Lagodinsky

MEP Sergey Lagodinsky (Greens/EFA, DE) represents the JURI Committee in the working group on the implementation and enforcement of the AI Act.

25 Apr 2025


Europe’s need to cut red tape is no secret. It comes up in all my frequent engagements with business — whether startups, scale-ups or established companies. The European Commission has pledged to deliver. I fully share the goal, but I increasingly doubt the means. 

The AI Liability Directive (AILD), which the European Commission has decided to abandon, is a case in point. Proponents of this step, including Henna Virkkunen, the Commissioner responsible for tech sovereignty, security and democracy, have argued that additional liability rules could stifle innovation and investment in Europe. But by scrapping the directive, the Commission will achieve what it wants to avoid: leaving companies without clear legal guidelines will reduce their incentives to invest.  

Legal uncertainty: A barrier to AI innovation in the EU

Investors in Europe are already known for their risk aversion. With AI technologies increasingly interacting with both the real and virtual worlds, the risks are multiplying, and the Commission’s decision adds legal opacity and fragmentation to the mix.  

The chains of accountability remain unclear. Who is responsible when risks inevitably materialise — those who develop, deploy, sell, or design? And what if they share responsibilities among each other? You don’t need to watch Netflix to know that the mirror in which we’re looking for answers is not only black, but broken into 27 pieces.  

Currently, companies dealing with AI-driven technologies have little idea how innovative the judge facing them might be, nor which of the 27 legal frameworks will confront them.

AILD’s role in Europe’s digital rulebook

Some opponents of the directive say there’s no need for further regulation since the AI Act and the new Product Liability Directive (PLD) cover the same ground. This is wrong, misinformed or manipulative, depending on how much benefit of the doubt we want to grant the critics.  

Neither the AI Act nor the revised PLD is a substitute for the AILD. The difference is very clear: the AI Act deals with pre-emptive risk management, telling AI players what they should do to avoid harm. It does not address who is responsible after harm has occurred.

The Product Liability Directive, meanwhile, covers damages following an incident, but those are different damages than those addressed by the AILD. The differences between product liability (PLD) and producer's liability (AILD) are familiar to any law student, and ought to be known to the Commission.

Without AILD, AI risks undermining trust & safety

AI harms often go beyond product defects. What if AI causes damage in a professional context, with professional tools? What if the harm stems not from a manufacturing defect, but from a failure to instruct users adequately? What if the injury arises from "rogue" AI behaviour rooted not in technical fault but in deployment mismanagement?

There’s also a growing class of use cases in which programmers use generative AI devoid of any apparent defects to code applications that include some AI elements. What if such privately used and privately manufactured applications cause harm to third parties? Ignoring these scenarios isn’t just a legal blind spot — it’s a political liability. Do we have a "Political Liability Directive", and would it cover misguided omissions from the Commission's working plans? The answer is no.

The Commission must know better. By refusing to adopt harmonised AI liability rules, it leaves businesses exposed to a patchwork of national standards and conflicting interpretations, precisely when we are trying to accelerate AI uptake across the continent.  

Instead of clarity, we get legal roulette. In this case, harmonisation does not mean overregulation; it means smart, targeted, fact-based rules that give both innovators and consumers legal certainty.

The opacity, seeming autonomy and unpredictability of AI systems make responsibility hard for users to pinpoint. The AILD aimed to close these gaps through reasonable, modern tools like disclosure duties and rebuttable presumptions of fault — measures designed for AI’s unique risks.

The Commission’s vague hints about “future legal approaches” offer little comfort. Businesses need legal certainty now, not open-ended promises for the future.  

At the heart of the debate is a bigger question: Do we truly want a digital single market in Europe that goes beyond idle talk? If the answer is yes, harmonisation is essential and must be rooted in fact. Without it, we get more fragmentation, not predictability; more confusion, not clarity. With its latest retreat, the Commission isn’t simplifying — it’s surrendering.  
