Robotics and Artificial Intelligence

On October 17, the JURI Committee organised a workshop with National Parliaments on Robotics and Artificial Intelligence – Ethical issues and regulatory approach. 

By Jenni Kortelainen

19 Oct 2016

Mady Delvaux-Stehres (S&D, LU) opened the workshop and presented the panellists.

I Ethical issues of robotics and artificial intelligence – Will artificial intelligence bring in utopia or destruction?

Ms Francesca Rossi, of IBM Research and Professor of Computer Science at the University of Padova, Italy, began by defining artificial intelligence (AI) as the capability of a computer programme to perform tasks usually associated with intelligence in human beings. Furthermore, she noted two terms used at IBM in relation to AI: augmented intelligence and cognitive computing. Augmented intelligence refers to enhancing and scaling human expertise, human/machine partnerships and exploiting unique human qualities. Cognitive computing refers to machine learning, human interface technologies and new computing architectures and devices.

Ms Rossi pointed out that AI is already present in our daily lives in areas such as GPS navigation, robots and voice recognition. In recent years, AI has acquired perception capabilities, enabling applications such as self-driving cars, thanks to the increased availability of labelled data (data that humans have annotated to serve as example data). She explained that although AI has been successful in games such as chess and Go, such successes are still far removed from AI being used in real-world situations.

An important issue for Ms Rossi was building trust between humans and AI systems over time and finding the right level of trust. In her opinion, AI systems should be accountable, interpretable and understood by their users. Without trust, effective teamwork is precluded.

She also emphasised that AI should be compliant with human values, norms and professional codes in the specific scenarios intended for that particular AI system. It is also important to be aware of biases in data or algorithms. Ms Rossi raised the possibility that AI machines, being more rational, could help humans to be even more ethical.

She briefly explained IBM's history with AI and its presence in Europe. IBM has a research laboratory in Zurich, Switzerland, its Watson IoT headquarters is in Munich, Germany, and it is building its Watson Health European Centre of Excellence in Milan, Italy. IBM is committed to AI ethics: it has an internal ethics working group, has written white papers on the topic and is designing educational modules for engineers and students. IBM also participates in cross-industry collaboration.

Ms Rossi noted that she was one of the first signatories of the Future of Life Institute (FLI) open letter on 'Research Priorities for Robust and Beneficial Artificial Intelligence' in January 2015. Since the signing of the letter, 37 FLI-funded research projects have been launched, and AI ethics has become more present in academic and corporate environments as well as in international organisations. In the US, the White House has also organised workshops and published reports on the future of AI and the strategic direction of AI development.

To conclude, she noted that although AI can help us solve many difficult problems, to get all the benefits of AI and avoid pitfalls, AI developers need to work to make AI more compatible with humans. Regulations should play a careful role and allow AI innovation to flourish and advance. The European Parliament and the European Commission are in the best position to foster innovation, she said.

Mr Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, UK, talked about the extent to which regulating AI is necessary. He divided his comments between short-term (10-year horizon) and long-term (50-year) perspectives. Mr Shanahan does not agree with regulating AI development and research in the short-term as AI research in itself does not pose an inherent threat (short- or long-term) and it promises great benefits. Moreover, trying to regulate AI research would be impossible to enforce.

However, Mr Shanahan noted that AI regulation is appropriate in areas such as self-driving cars, autonomous weapons, medical diagnosis and service industries. In each of these areas, issues arise that are specific to those applications and sectors. Many sectors already have existing regulations, and it is now important to see how they extend to AI technology; where they do not extend, it is important to see how AI should be brought within them. Regarding self-driving cars, he said that the trolley problem (a scenario in which a car must choose between two harmful outcomes) is not useful and takes time away from discussing more pertinent issues.

One of the most important issues in AI for Mr Shanahan is transparency. He agreed with Ms Rossi that it is important for systems to be accountable. He mentioned in particular neural networks, which are an extremely powerful and promising technology but are also black boxes: systems based on them reach a decision by working through a series of computations, yet it may be impossible for humans to understand why that decision was made.

As such, there is a trade-off between effectiveness and transparency. Opacity is fine for some applications, such as entertainment, but in higher-stakes domains such as financial, medical and military applications it is unacceptable for the reasoning not to be humanly understandable.

On transparency, Mr Shanahan proposed a principle to follow: 'It should always be possible to supply the rationale behind any decision taken with the aid of artificial intelligence that can have a substantive impact on one or more persons' lives'. This principle would be backed by a policy according to which 'it must always be possible to reduce the AI system's computations to humanly-comprehensible form'.
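As a rough illustration of what reducing a system's computations to a humanly-comprehensible form might look like in practice, the short Python sketch below trains a 'black box' neural network and then fits a shallow decision tree that mimics its predictions, producing explicit if/then rules a person can audit. It was not presented at the workshop; the dataset, library choices and parameters are illustrative assumptions, and the surrogate-model approach is only one possible reading of the principle, not a method endorsed by the panellists.

    # Illustrative sketch only: contrasting a "black box" neural network with a
    # human-readable surrogate, in the spirit of the transparency principle above.
    # Dataset and model choices are assumptions for demonstration purposes.
    from sklearn.datasets import load_breast_cancer
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    feature_names = list(load_breast_cancer().feature_names)

    # 1. The black box: a neural network whose decision is a chain of
    #    matrix computations offering no human-readable rationale.
    black_box = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000,
                              random_state=0).fit(X, y)

    # 2. A surrogate: a shallow decision tree trained to mimic the network's
    #    outputs, giving an approximate, humanly-comprehensible rationale.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # The tree can be printed as explicit if/then rules a person can audit.
    print(export_text(surrogate, feature_names=feature_names))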

With regard to the longer, 50-year horizon, Mr Shanahan said that human-level artificial general intelligence is increasingly likely, but it is a long way off and today we can have no idea what it would be like. He said that advance warning, of five years or so, would be very valuable. As it is impossible to know when and how human-level AI will arrive, doing long-term AI safety research now is worthwhile.

To conclude, Mr Shanahan noted that if human-level AI were on the way, it would be useful to think about limiting its autonomy, licensing its development, licensing the right to run or copy it, and forbidding the creation of AI with consciousness.

Mady Delvaux-Stehres (S&D, LU) opened the floor for discussion.

A robotics researcher from the University of Twente questioned the first speaker on trusting the system, arguing that in the future the question would rather be whether we trust the maker and developer of the system, who has access to the data and how it is used. She said that there will be humans behind the scenes who will be accountable. She asked Mr Shanahan whether it is even possible to decouple research from implementation; if not, one cannot say that the research poses no inherent threat. She also disagreed that the trolley dilemma is a distraction.

Ms Rossi responded that trusting the system and trusting the maker are not mutually exclusive, because in the end it is about trusting the AI system. The maker will decide on an initial dataset and will need to be aware of data and algorithm bias. Once the system is defined, it will evolve over time, learn and acquire new data. As such, it will not be enough to trust the initial state of the system; it must also be trusted as it functions in daily life. It will be a dynamic system of trust.

Mr Shanahan had no objections to the points made, noting that such nuances are difficult to convey in a short presentation. On autonomous cars and the trolley problem, he explained that that specific problem is a distraction because it will arise so rarely, although related questions are not. Autonomous cars will be able to stop much more quickly than any human driver.

Max Andersson (Greens/EFA, SE) asked Mr Shanahan how it would really be possible to know when human-level AI emerges, since past predictions have usually been wrong. He also wanted to know whether requiring all AI decisions to be understandable to humans would set the bar too high, since many decisions made by humans are irrational.

Mr Shanahan responded that we should not expect more from AI than from humans. But if an AI makes a medical decision, then we expect accountability and do not want the machine merely to say that it knows best. On the question of timing, there are many caveats: we cannot be certain that human-level AI will not arise within 10 years, nor can we be certain that it will ever arise at all. For those reasons, we should invest in AI safety research now. But this is difficult to do because we do not know what human-level AI will actually be like.

Michal Boni (EPP, PL) stressed the importance of having a secure and safe framework for the development of robots. Any regulatory or non-regulatory framework will need both to keep humans secure and to avoid killing innovation. Mr Boni said that legislators first need to see how existing regulations can be applied to these new developments. He wanted to know the presenters' opinion on how legislators should work on regulatory frameworks to create better cooperation between humans and robots.

Ms Rossi agreed with Mr Boni, noting that many AI developers already find it challenging to understand how to embed existing laws into AI systems. Many associations, such as the AAAI, are already looking into defining a code of conduct. She said that it is important for associations and regulatory entities to work together. Many things are still at a very early stage, and neither the issues nor the solutions are clear; as such, both researchers and policymakers need to be involved.

Mr Shanahan pointed out that some of these issues require a global discussion. One of the most important ones is the issue of lethal autonomous weapons, which is already being discussed in the UN.

Ms Renate Künast, Chairwoman of the Committee on Legal Affairs and Consumer Protection of the German Bundestag, asked Ms Rossi to clarify what she meant by trust as a concept. She also raised the question of software being programmed in certain ways as a system. Furthermore, she asked Mr Shanahan whether all potential activities would be covered by a possible licensing system. Ms Künast called for autonomous weapons to be banned from the outset and asked the EU to be clear about this. She also noted that humans need to be able to shut down all robots if needed.

Ms Rossi said that trusting a machine does not differ that much from trusting a human. In healthcare, for example, a doctor cannot know all the research relevant to a patient; a machine can read, digest and assimilate that research and suggest possible solutions. In this situation, the human is the final decision-maker but is given options by the machine. The human can interact with the machine, and this is how trust should be built: the human asks questions and gets explanations of why certain things are done.

She understood that with autonomous systems some decisions are delegated to the machine, but called for caution with delegating ethical decisions. Everybody should be involved in deciding which ethical decisions are delegated.

Mr Shanahan agreed that there should be a ban on lethal autonomous weapons, but acknowledged that this is still a matter of debate. In the long term, the issues are very different because one cannot know what will happen.

Andrea Bertolini, Assistant Professor of Law at the Scuola Superiore Sant'Anna, Pisa, Italy, had a question for Ms Rossi on AI accountability. He wanted to know whether AI should be responsible in and for itself and, if so, under what conditions. His personal understanding is that there are no ontological or technological reasons to make AI accountable as such. On the last comment that humans should be the final decision-makers, Mr Bertolini said that the producer should also be made responsible for what the device does after it is released onto the market. In the healthcare example, he noted that the doctor could be held responsible both for failing to follow the robot's advice and for complying with it. As such, the producer should not be relieved of liability.

Ms Rossi explained that she did not necessarily mean accountability in legal terms, but that the system should be able to explain its reasoning so that humans will be aware of its limitations. This way, the right level of trust can be built. The system is going to adapt and improve its capabilities, and the maker is then responsible for deploying a system that has the right capabilities for building that trust. Robots should help humans, not create problems.

Ms Künast wondered about liability and whether it would even be possible to make robots abide by regulations, since humans have problems with this as well. She wanted to know where liability would lie if the robot had human-level intelligence. Could robots be programmed to have emotional intelligence?

A researcher from Maastricht University asked whether, if a machine is intelligent enough to learn by itself, the machine would have rights and duties. Would the machine have some fundamental rights?

Mr Shanahan said that human-level intelligence will probably not arrive in the next ten years, and as such there is a need for a precise discussion of short-term issues that does not muddle science fiction with reality.

The researcher from Maastricht University wanted to know about user responsibility.

Ms Rossi said that there is a need for a multidisciplinary discussion. 
