Creating “human-centric” Artificial Intelligence

Digital developments will affect us all but, as Maltese Socialist deputy Alex Agius Saliba warns, the risks associated with new technologies must be addressed.

By Alex Agius Saliba

Alex Agius Saliba (S&D, Malta) is a member of the Committee on the Internal Market and Consumer Protection 

03 Mar 2020


Everything around us, every aspect of our lives, is affected in one way or another by developments in the digital world. That influence will only grow as advances in Artificial Intelligence (AI) accelerate.

AI is found in day-to-day items we take for granted, such as smartphones and their applications, as well as in more sophisticated settings such as the healthcare or security sectors. The EU needs to be fast, ambitious, and well prepared if it is to fully embrace this digital transformation.

At the same time, it also needs to avoid the numerous potential threats that may appear and address problematic issues and gaps in the existing rules. Products need to be safe and legally compliant by default, and risks such as discrimination, loss of privacy and autonomy, lack of transparency and enforcement failure must be avoided at all costs.


The shift towards automated decision making (ADM) and AI will change the way our societies function beyond recognition. Our citizens deserve the best, and that is why we need a new legal and ethical framework for AI, one focused on upholding fundamental rights, ethical aspects, regulatory safeguards and liability, thus protecting our democratic societies and citizens as users and consumers.

In its new European strategy for AI, the European Commission has recognised the need for such actions. It outlines the weaknesses in the current legislation and introduces new regulatory options for future measures on AI. These options include potential new rules on safety, liability, fundamental rights and data by the end of the year.

The Commission also envisages mandatory requirements for high-risk AI applications and rigorous testing for high-risk AI technology before deployment or sale within the EU’s vast internal market, as well as a voluntary labelling scheme for other AI applications.

However, we also need to act quickly, though not impulsively, to ensure that our industry and citizens have the best chance of reaping all the benefits of this digital transformation, while avoiding potential harm to society and the individual.

"The EU needs to be fast, ambitious, and well prepared if it is to fully embrace this digital transformation"

We also need future-proof laws, a level playing field for companies of all sizes and, most importantly, a citizen-oriented digital transition that leaves nobody behind. The EU should also strengthen its support for, and investment in, research and development of AI innovations, including in digital start-ups and scale-ups.

Investments in this sector are vital if we are to advance Europe’s position in developing AI applications. Industry is capable of achieving major breakthroughs in this sector; however, public and private funding remains the driving force for innovation and, ultimately, the key to the enhancement of our capabilities and autonomy.

To complement this, the EU should not only be concerned with the amount of funding made available through programmes such as the Digital Europe Programme and Horizon Europe, but it should also seek to address the participation gaps that exist in current funding programmes.

Widespread uptake of funding by companies across the Union is of utmost importance. We also need to urgently address the various risks associated with these new technologies, which require careful attention and analysis on a broad political level.

This is why the Commission needs to undertake a comprehensive exercise to identify any disruptive developments brought about by the application of AI in our societies across the board. It should deliver an ethical digital revolution, one that leaves nobody behind, with clear liabilities and non-discriminatory requirements that safeguard fundamental European rights.

For example, the EU’s rules on consumer protection, safety and liability, as well as those dealing with privacy and transparency, should be updated to make sure they are fit for purpose and that consumers are protected, particularly in the event that AI products cause them harm. We should also examine algorithmic transparency and algorithmic bias in the consumer and internal market context.

"Industry is capable of achieving major breakthroughs in this sector; however, public and private funding remains the driving force for innovation"

Most importantly, we should always frame our discussion around the creation of a “human-centric” AI in Europe: one in which people remain in control of, and responsible for, decision making. We should reap all the benefits, but the human element must be maintained at all costs.

The Commission should also be more ambitious in its strategy for identifying technology manufactured outside the EU in certain authoritarian regimes as necessarily ‘high risk’. The AI evolution is unprecedented, and therefore we must focus our efforts on legislation with medium- and long-term effects to create a lasting, legal and operationally safe environment for all stakeholders.

One of the EU’s top priorities in the foreseeable future should be to continuously renew itself. It should develop an extensive regulatory framework that puts us in the driving seat of this digital revolution and gives us, and future generations, sufficient protection from prejudices, inequalities and violations of our fundamental values.
