Nobody would deny that digital technologies, in particular Artificial Intelligence (AI) and robotics, have unprecedented potential to enable more rapid diagnosis of diseases.
They can also allow us to personalise patients’ treatment, increase health system efficiency and ultimately improve people’s access to healthcare.
Yet, the extent to which the health sector actually benefits from the opportunities offered by these technologies remains limited. To address this, policymakers need to look at the existing uncertainties around liability, data security and privacy.
It is crucial that the highest level of transparency, data security and data privacy is guaranteed, and this is where policymakers can play a key role. It is our responsibility to establish a framework that fosters trust, safeguards data privacy and sees to it that data ownership remains with the patient.
Under no circumstances can personal health data be used to discriminate against patients or to disadvantage them, for example, by making health insurance more expensive for those who do not use a health-tracking application.
Moreover, we have to avoid a situation in which personal health data becomes a core source of income for big companies. First and foremost, the focus should be on patients and their needs.
Accountability is key in the discussions about trust, and it needs to be clear who is accountable and who is liable for any damage or unexpected outcome.
For example, who is liable if an AI-based diagnosis is incorrect? Who is held accountable if a robot-assisted surgery goes wrong? Is it the doctor, the manufacturer, or the patient who agreed to the treatment?
As a lawyer, I believe it is important to point out that the legal liability for damage is a central issue in the health sector where the use of AI is concerned.
Furthermore, AI-based systems need to be neutral and fair in order to ensure a non-biased outcome.
Developers of AI-based systems must ensure that consumers are informed in an understandable and transparent way whenever the system acts, decides or recommends something.
The increased use of digital technologies in healthcare also has direct consequences for healthcare providers, systems and professionals.
Appropriate education, training and ‘digital readiness’ of health professionals are of utmost importance in order to secure the highest degree of professional competence possible and to keep the focus on patients’ health.
It is now possible to acquire new skills through surgery simulations and virtual reality. However, robotics and AI in particular require different and adapted training methods.
The European Parliament already underlined this in its report “Resolution on Civil Law Rules on Robotics”, and it will become increasingly pressing in the future.
Europe already faces a shortage of healthcare workers; imagine the challenge of securing a workforce qualified to handle new AI and robotic systems.
This means that, as a society, we need both to support research and development in AI and robotics and to focus on education in this field.
Boosting digital literacy across Europe is paramount, and so is investing in closing the digital divides within our societies.
The first step we can take is to integrate AI-based systems into university curricula for various disciplines, including medicine and pharmacy, and to provide elementary AI training as early as possible.
We also have to make sure that everyone understands how their personal data is being used and why it is important that data ownership remains with the patients.
In its own-initiative report on a comprehensive European industrial policy on artificial intelligence and robotics, Parliament underscored the benefits of AI and robotics for health technology, services and programmes, drug discovery, and robotic and robot-assisted surgery, to name just a few.
Yet, AI and robotics still present many complex new challenges related to human dignity, security, privacy, safety, as well as employment and liability, and therefore need new laws and principles.
Ursula von der Leyen announced before her election as European Commission President that she wants to make Europe fit for the digital age. I believe this is critical if Europe does not want to fall behind other digitally pioneering countries.
New ethical and legal challenges undoubtedly lie ahead of us, and some rules and regulations will need to be adjusted to guarantee the safety of AI-based systems, especially health robotics, for patients.
This will be one of the main challenges we need to tackle in this new legislative term.
We will also need to secure additional funds for EU programmes such as Horizon Europe, the Digital Europe Programme and Invest EU.
And we must not shy away from the difficult conversations. I will continue having them at this year’s European Health Forum Gastein (2-4 October 2019 in Bad Hofgastein, Austria).
The organisers address the core question of why disruptive technologies, which seem to reach every part of our lives, have been so slow to fully materialise in the health sector.
Under the theme of “A Healthy Dose of Disruption? Transformative Change for Health and Societal Well-Being”, stakeholders will come together to discuss how we can meet the challenges I have outlined in this piece.
Realising the full benefits of digital technologies, particularly AI-based systems, requires us to carefully consider their ethical, operational and educational challenges, and to build trust among the public.
Once that is accomplished, the future of healthcare in Europe is, without question, digital.