Foreign electoral interference – until recently a niche topic for Cold War historians – is making an unwelcome comeback. The 2016 US elections and the Brexit referendum showed that western-style democracies are far more vulnerable to this kind of threat than previously thought. Although our understanding of this problem has increased thanks to the tireless work of investigative journalists and intelligence agencies, the EU and its Member States are not safe from future attacks.
In Europe, the primary perpetrator of these acts has been Russia. The country’s predecessor, the Soviet Union, mastered the art of active measures, as the so-called Mitrokhin Archive brilliantly illustrates. Russian intelligence services have merely updated their playbook by adapting it to new technologies. Unfortunately, other authoritarian-minded countries seem to be following in their footsteps.
In response, the European Commission and Council have launched a number of initiatives to counter this threat, including the Action Plan Against Disinformation, the Code of Practice on Disinformation, the East StratCom Task Force and the Rapid Alert System. These constitute an important first step and should be regularly reviewed to improve their effectiveness.
“The 2016 US elections and the Brexit referendum showed that western-style democracies are far more vulnerable to this kind of threat than previously thought”
In June 2020, the European Parliament joined this effort by voting to set up a Special Committee on Foreign Interference in all Democratic Processes in the European Union, including Disinformation (also known as INGE). The committee’s 12-month mandate began in September, and over the next few months it will investigate issues ranging from the role of social media platforms to the protection of critical infrastructure.
The final report will include recommendations for various legislative and non-legislative actions. A consistent finding of many reports analysing online disinformation is the inability of social media platforms to prevent the spread of factually wrong information and hate speech. The initial assessment of the Code of Practice on Disinformation shows that voluntary, self-prescribed actions are insufficient.
More importantly, the radicalising impact of algorithms that recommend increasingly extreme content, often to unsuspecting users, remains unaddressed. Tackling disinformation and hate speech – which statistics show are on the rise – while not encroaching upon freedom of speech is one of the thorniest issues to address.
In the context of the upcoming Digital Services Act, legislators and legal experts will not only have to define and differentiate between misinformation and disinformation but also provide input on who gets to decide what content should be taken down and who should bear the responsibility for it. In another wrinkle, one has to keep in mind that content that is legal offline cannot be deemed illegal online.
Another priority of the INGE committee will be the financing of campaigns and political parties. The Alliance for Securing Democracy (an initiative of the German Marshall Fund of the United States) recently published a study entitled ‘Covert Foreign Money’, which alleges that 83 percent of activities aimed at interfering in democratic processes in the last decade and funded by authoritarian regimes were enabled by legal loopholes.
Tangible and intangible contributions, as well as non-profits with foreign donors, remain shockingly underregulated in a large number of EU Member States. The manipulation of the media ecosystem combines the Soviet-era element of psychological operations with the possibilities offered by an open internet. Spreading lies is easier than ever, as is promoting dangerous conspiracy theories.
The very nature of conspiracy theories makes them extremely difficult to root out once they become entrenched in people’s minds, which is why media literacy education will have a crucial role – the earlier in life the better. There are also further threats on the horizon. The advancement of AI will enable bad actors to increase both the volume and sophistication of their manipulations.
“A consistent finding of many reports analysing online disinformation is the inability of social media platforms to prevent the spread of factually wrong information and hate speech”
So-called ‘deepfakes’ – videos in which an algorithm replaces the person originally filmed with someone else – could potentially wreak havoc on the very idea of a functioning public sphere. Humans are visual creatures; we tend to believe what we see more than what we read or hear. An inability to trust our own eyes could lead to deepening cynicism and ultimately a withdrawal from the political process.
The INGE committee has a limited lifetime and a seemingly unlimited number of pressing issues to address. In the end, we should strive for an EU toolbox that acts as a deterrent and, should preventive measures fall short, imposes costs on negligent and malicious actors alike.
Our tools will need to be sufficiently robust to keep pace with new technologies and techniques, yet precise enough to eliminate threats without infringing our values. This will be no easy task, but we owe it to our fellow Europeans and defenders of democracy worldwide.