AI: The security disruptor
Despite MEPs calling for a ban on autonomous weapons systems, military experts warn the EU cannot afford to fall behind China, Russia and the US in AI innovation
In October, the European Parliament held a public hearing on AI and its future impact on security. Tim Sweijs, director of research at The Hague Centre for Strategic Studies, said, "This topic is not only of tremendous importance to global security, but also to European security and its citizens."
He explained, "AI is expected to have a profound impact on the character and nature of future conflicts. Leading military powers such as the US, China and Russia are currently investing heavily in AI-related defence technologies."
He added, "they are doing this to increase their military capabilities, with the goal of these investments being to improve the effectiveness and efficiency of current generation military capabilities. Yet at the same time, they intend to bring about disruptive changes so that these powers have a competitive military edge."
For Sweijs, "the security impact of AI is an issue of great concern that is fundamental to the future of international peace and security."
There were both positive and negative implications for the use of AI. One positive he highlighted was "more precise targeting through AI, which will make war paradoxically more humane as there will be fewer fatalities," since AI weapons will do a better job of identifying civilians.
Other benefits include better medical treatment, earlier warning of possible attacks and the 'de-escalation' of strategic decision-making, since AI can help weigh alternative options in a crisis, when different forms of information arrive rapidly and take time to comprehend.
On the negative side there will be plenty of challenges as conventional military doctrines become obsolete.
The number of wars may well increase as the political cost falls, with fewer soldiers being sent to the frontline.
This could then impact on constraints currently implemented in military operations.
Also, undertested AI applications, rushed into military use, could allow decision-making processes to spiral out of control and lead to conflict, in much the same way that computerised automatic financial trading programmes can lead to market crashes.
The possible dangers of AI and autonomous weapons led the European Parliament's Greens/EFA group, in September, to push through an own-initiative resolution, supported by 566 MEPs, calling for a ban on these weapons.
Security spokesperson Bodil Valero said, "Autonomous weapons systems must be banned internationally; the power to decide over life and death should never be taken out of human hands and given to machines."
She wanted to see the Commission and Council adopt a common position to present to the UN, to introduce a global ban.
However, Wendy R. Anderson, former deputy chief of staff to US secretary of defence Chuck Hagel, warned, "I don't think bans work. I'm just a realist; I wish they did."
She highlighted how the nuclear Non-Proliferation Treaty (NPT) failed to stop non-signatory countries gaining nuclear weapons.
"Despite many countries having signed the NPT, which is a ban on the use and further development of nuclear weapons, the treaty didn’t prevent North Korea from building a nuclear arsenal."
She added, "The AI train has left the station, whatever we feel about AI is irrelevant. The fact is we are already engaged in a global arms race on the defensive side of AI."
Former military officer and SEDE committee member Geoffrey Van Orden said, "Autonomous weapons are already with us, even if they have yet to be subject to an agreed definition."
He pointed out that AI has been used in successful weapons systems such as Israel's anti-missile defence system, Iron Dome.
He believes that most of the major military powers are investing enormously in R&D on autonomous systems and warned, "We need to find ways to combat the threat to major weapons platforms."
The key issue for the UK Conservative deputy is the "degree of residual human control over such systems – to activate them, to enable them to identify targets and to take the decision whether or not to engage a target. At this stage there is a desire to ensure that human control is built into future systems, rather than bolting it on as an afterthought."
Allan Dafoe, director of governance of AI at Oxford University, believes the EU could play a critical role in redirecting an AI arms race. "The EU is in a unique position to offer itself as a counterweight to the US and China. Its member states are already active in trying to build norms and institutions for AI that balance innovation with social values."
Dafoe suggests that "how we govern advanced AI will be the most important issue of this century. If we do it poorly it could cause substantial disruption and harm."