Who decides how AI is used in war?

From Ukraine to Iran, artificial intelligence is accelerating warfare. But efforts to regulate its use remain slow and fragmented.
LUCAS drones at undisclosed air base in the United States, Nov. 23, 2025. (US Air Force)

By Paula Soler

Paula Soler is a reporter at The Parliament Magazine

21 Apr 2026

@pausoler98

Artificial intelligence is rapidly changing how wars are fought, outpacing the legal frameworks meant to govern it. 

In Ukraine, drones with autonomous navigation have increased target engagement success rates from roughly 10–20% to as high as 70–80%. Israel has reportedly incorporated AI into its target identification processes for airstrikes against Hamas in Gaza. More recently, in Iran, the United States has relied on AI-enhanced intelligence, surveillance and reconnaissance to support battle management and targeting decisions. 

These systems can compress decision timelines, expand situational awareness and increase the speed and precision of military operations. But as they proliferate, so do concerns. Policymakers are grappling not only with how these tools work, but whether existing frameworks under international law are sufficient to govern their use.  

“States have acknowledged that these don’t refer to AI specifically, but they are general rules that states should be complying with as a matter of legal obligation,” said Netta Goussac, a senior researcher at the Stockholm International Peace Research Institute. 

However, the challenge isn’t only the absence of AI-specific rules, Goussac said, but the lack of transparency. There is limited visibility into when and how militaries deploy AI, what safeguards are in place and what humanitarian consequences may follow. “This sort of opacity erodes trust.” 

Most militaries continue to emphasize a human-in-the-loop model, preserving formal human control over the ultimate decision to engage with force. In practice, though, the growing integration of autonomous functions into weapons systems is blurring the line between human judgment and machine execution. 

U.N. seeks ban on ‘killer robots’ 

Existing international laws don’t explicitly prohibit machines from making final decisions to kill, said Gerry Simpson, associate director at Human Rights Watch. But there are efforts to impose clearer limits, most notably at the United Nations. 

For nearly a decade, the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE) has focused on the risks these weapons pose. But progress toward common legal guardrails has been slow, while battlefield developments are already outpacing those early debates. 

“We've seen, especially since the war in Ukraine, that AI isn’t just part of weapons systems, it's also part of military decision-making much more broadly,” said Ingvild Bode, director of the Center for War Studies at the University of Southern Denmark.  

Since the U.N. group began meeting in 2017, Bode said, it has struggled to translate its discussions into written commitments to regulate lethal autonomous weapon systems or, as they have come to be known, "killer robots.” 

More than 60 countries have signed a declaration on responsible military use of AI, but major powers, like the U.S. and China, are hesitant to commit to legally binding agreements, wary of constraining their strategic advantage in developing AI-enabled autonomous weapons and defense systems.  

A key decision point is approaching. In November, the U.N. group must decide whether to continue talks under its current format, shift discussions to the U.N. General Assembly or pursue formal treaty negotiations led by a specific state. 

According to Simpson, the challenge is that the GGE operates by consensus, meaning any meaningful outcome requires unanimous agreement. That increases the likelihood that negotiations will shift to a different forum.  

The choice, he said, is between staying in a “bureaucratically heavy process” that could take years to bring major military powers on board, or pursuing a faster, more flexible path — even if it means initially excluding those powers. 

Lessons from Anthropic v. Pentagon  

In late February, California-based Anthropic refused to grant the U.S. Department of Defense unrestricted access to its AI model, Claude, for military use. U.S. President Donald Trump publicly criticized the decision, while Defense Secretary Pete Hegseth labeled the company a “supply chain risk.” Anthropic responded by filing a lawsuit to block the Pentagon from placing it on a national security blacklist.  

The company has since doubled down on a more cautious approach. It later restricted access to its latest model, Mythos, citing concerns about misuse — particularly in enabling cyberattacks. 

The episode exposes a broader structural gap: as private firms play a growing role in supplying AI to militaries, decisions about safeguards are largely made by companies themselves, leaving oversight fragmented and uneven.  

Bode called Anthropic’s case “a good business case in some ways,” since tech companies should care about ensuring the products they sell are reliable and legally compliant. “Once they sell them, they lose control. But they may still be drawn into significant controversy.” 

In Europe, the CEO of Mistral AI, Arthur Mensch — now seeking to expand defense contracts — recently said in Brussels that the continent should secure sovereign control over military AI. 

“If these artificial intelligence systems are actually procured from foreign companies, then ... our militaries can be turned off,” Mensch said.  

However, when it comes to setting safeguards for how those systems are used, he argued responsibility should ultimately rest with the customer. 

Analysts see a broader pattern of burden shifting. Companies call for clearer rules from states, which carry the legal obligations. But industry actors also hold critical knowledge about how these systems work and, as a result, can’t be treated as neutral vendors. 

“Responsible development and lawful use of these technologies relies on collaboration,” Goussac said. “It’s a joint exercise between state and industry — neither can do it alone.” 
