Few terms are as misused in Europe and the United States as “disinformation.” While everyone seems to agree it is dangerous, not everyone agrees on its meaning.
For a long time, disinformation has been portrayed as a purely factual concept, concerned with objective falsehoods rather than with interests and intentions. This notion is peculiar, because disinformation is deliberately designed for the purpose of harm.
Consider Tucker Carlson, the American TV personality known for making false or dubious claims. How should those claims be categorized: as disinformation, propaganda, or misinformation?
Similar issues arise with technology. With the launch of AI software like ChatGPT and Midjourney, producing problematic content is now within reach of anyone with digital access. Given the exponential growth of malicious online content, expert fact-checking and manual analysis will be too slow to counter it properly.
Ultimately, we will have to rely on AI itself, and since AI is an exclusively formal medium, the words we use are crucial.
A second issue is verifying whether information is false. The legacy approach relies on fact-checking, ranging from expert judgment to sophisticated open source intelligence (OSINT) techniques.
Yet fact-checking is rooted in a questionable notion of what constitutes a “fact.” The philosophy of science has long grappled with the nature of facts, concluding that they do not exist as pre-made entities but emerge through inquiry. This leads to a position inspired by the philosopher Thomas Kuhn: different conceptual schemes operate with different sets of facts.
Recently, American journalist Jacob Siegel labeled disinformation a “hoax,” portraying it as a tool of political suppression and suggesting that retreating to the good old liberal “human arts of conversation, disagreement, and irony” would suffice to address the problem.
An encounter with the reality of the digital sphere is enough to expose such naiveté. When German philosopher Jürgen Habermas speaks of “a new structural transformation of the public sphere” driven by the advance of new media, he grasps the irreversible changes the information age has brought.
As a resident of Ukraine who has lived through the war unleashed by Russia, I am acutely aware that there is no going back to a “pre-disinformation” time.
The Russian aggression was accompanied by an unprecedented pollution of the information space with fabricated malicious content. This led me to join a Ukrainian initiative that uses AI to combat malicious information warfare. Drawing on that experience, I have reached the following conclusions. The concept of disinformation reflects a crude reality we all see: some information is designed to be misleading and harmful. To navigate this realm in the age of AI, we must address problems ranging from the seemingly straightforward task of defining disinformation to the nuanced business of verifying facts. Ultimately, this leads to the most difficult dilemma of all: that of conflicting interests.
The concept of disinformation demands reimagining to incorporate this overlooked dimension of values and interests. Achieving that would offer politicians, activists, journalists, and tech companies a genuine breakthrough.