‘Disinformation has a powerful impact on voting intentions’

AI is supercharging the threat to democracy. In a year of multiple elections, we asked Dr Julie Posetti to unpick the problem of fake news.

By Ana Fota

Ana Fota is a freelance reporter based in Brussels, with previous work published by The New York Times, POLITICO, Euronews and others.

20 Mar 2024

Europe is facing an historic election year that could alter its political future. Amid multiple elections – including for the European Parliament in June – technological advances are increasing the risk posed by disinformation.  

The Parliament spoke to Dr Julie Posetti, an award-winning journalist and deputy vice-president of global research at the International Center for Journalists, and a professor of journalism at City, University of London. She has led several UN-commissioned studies in the fields of disinformation, freedom of expression and the safety of journalists.   

What disinformation challenges do we face in this election-heavy year?  

Regardless of recent technological advances, this was always going to be a vital year for the defence of democracy – because of the number of countries going to the polls, but also because it’s all happening in the context of a tilt towards authoritarianism, with some leaders – in the case of [Viktor] Orbán, for example, and to an extent [Donald] Trump – finding authoritarianism appealing. With this kind of ‘strongman dictator’ vibe serving as a model, the threat to democracy is significant.

The function of disinformation and propaganda in such a context was always going to be challenging. But now we face these challenges supercharged – by the mainstreaming of generative artificial intelligence (AI) technologies.   

How does AI change the game?  

Generative AI technologies allow for faster and cheaper production of much more believable disinformation. We have started to see what elements of that could look like, but we have yet to see it produced at speed, cheaply and believably, and deployed with the full algorithmic power of platforms attuned to distributing content that elicits strong emotional reactions – particularly anger and hate.

We are in an extremely vulnerable position. At the same time, we do not see similar speed and attention to the development of disinformation detection capabilities. We have all these threats looming large, but we haven't invested in parallel production of tools for detecting disinformation or fraudulent or manipulated content.  

What does this mean for the upcoming elections?  

The challenges are huge for news organisations, for fact-checkers and those responsible for safeguarding elections, and for civil society organisations trying to detect and dispel disinformation narratives and materials while also improving citizens’ media literacy.

How does disinformation influence the outcome of elections in general?  

Disinformation narratives have a powerful impact on voting intentions. That problem has been supercharged by algorithmic distribution of political conspiracy theories and disinformation or harmful content, whether it’s from foreign state actors or particular stakeholders in the political process within a country.   

But now we have this really significant fuel on the fire from generative AI technology, which means you can confound the voting public more quickly, and potentially with worse impact. The big threat here is that when nobody knows what or who to believe, or who to trust anymore, many choose not to believe anything at all.   

How do you cut through that cesspool of disinformation and falsehoods and conspiracy theories? With facts and solid policy-based agendas? I mean, it’s extremely difficult.  

Is there a region of Europe you’re particularly worried about in terms of disinformation?  

I’m sorry to be pessimistic but I’m concerned broadly. Of course, we know there’s a clear link between disinformation purveyors and authoritarian movements. Authoritarianism is not the only precondition for dangerous levels of disinformation by any stretch, but these factors, when working together with misogyny and racism as co-producers of the toxicity, lead me to be extremely concerned about the states where democracy is really appearing to be quite fragile.  

How do news organisations fare in countering disinformation?  

I sit on the board of the International Fund for Public Interest Media, whose job it is to identify new sources of funding to invest in, particularly in middle and low-income countries, and to use those funds in a way that prevents manipulation or interference by the donors in those countries. I’m not seeing nearly enough commitment, globally, to supporting critical, independent reporting and innovative approaches to countering disinformation.  

How can governments or organisations equip themselves to fight disinformation?   

There are things they could do, including better and accountable investments in critical, independent journalism that can help prevent extreme vulnerability. Especially news organisations focused on fair and accurate reporting in the public interest – that’s where the investment needs to be.   

At the same time, it’s also a problem that so much emphasis is put on media literacy as a solution instead of focusing on dealing with the source of the problem. The disinformation agents and the platforms, which are the vectors for disinformation, are the source of the problem. That includes gender disinformation, which is a feature of online violence towards women journalists.  


You have to strike at the source – including the profiteering companies that leverage disinformation and hate speech through algorithmic redistribution in ways that have serious implications for democracies. You have to be able to address those problems.

The European Union is attempting to do that through the Digital Services Act, but it has to be tackled in combination with education and investment in quality alternatives to this sort of content.   

The urgency being applied is increasing, which is great, but I’m sorry to say I feel we’re 10 years too late. Especially as we’re dealing with all of these other technological challenges. That’s not to say we should give up. We need to work harder and faster to ensure that democracy is valued and that human rights-centred responses to this challenge of democratic erosion and viral disinformation are addressed more effectively.  
