What states are doing against disinformation campaigns
A manipulated Tagesschau news video in which a deliberately edited interview gives the impression that Germany's public broadcasters are inciting hatred against the opposition and minorities. An audio recording, created with the help of artificial intelligence, that imitates the voice of a pro-European Slovakian politician and puts words in his mouth about how he intends to manipulate the vote.
AI and ever-improving algorithms are not only changing the way information is produced, shared, and consumed. They are also making it increasingly difficult to distinguish authentic content from fake content. In the super election year of 2024, when half of the global population is expected to vote, this is of crucial importance – especially as social media is now the most widely used source of information. Targeted disinformation campaigns can have far-reaching consequences not only for elections, but also for many other areas, from national security to healthcare and the private sector.
Bots set the pace
Institutions and governments, but also companies such as Google, are therefore increasingly addressing the issue. The Organisation for Economic Co-operation and Development (OECD) recently published a 141-page report on how societies can deal with disinformation and strengthen information integrity. It also analyses what measures individual countries have already taken – so that other countries can learn from them.
"No democracy can solve the problem of the rise of disinformation on its own," OECD Secretary-General Mathias Cormann is convinced. He therefore calls for greater international cooperation in this area. At the same time, he warns against regulating the media ever more tightly out of fear of disinformation campaigns. "The fight against disinformation must never take the form of controlling information," he says.
However, the effect of disinformation created using generative AI is now amplified by algorithms and automated accounts known as bots, which allow false news to be spread quickly and cheaply to large numbers of users. It is therefore important to go beyond the self-regulation traditionally practised by established media, advises Elsa Pilichowski, Director of Public Governance at the OECD. This applies above all to social media: she recommends that all platforms that disseminate information take greater responsibility for the content they carry. To prevent platforms' commercial interests from fuelling the massive spread of fake news, the OECD argues that greater transparency is needed alongside that responsibility.
One example of how this can be achieved is California's Bolstering Online Transparency Act, in force since 2019, which requires bots to identify themselves as such online. Experts also consider it important to understand how algorithms are built in order to combat fake news more effectively, because their effects are still not well understood, explains Oana Goga. She heads the computer science laboratory at the École Polytechnique for the French national research centre CNRS (Centre national de la recherche scientifique), where she studies the risks that online platforms and AI pose to individuals and societies. "We need to focus on the process of producing false information rather than its content," she says.
The state service Viginum, launched in France in 2021, is also intended to help protect against foreign digital interference. It analyses publicly available online content. Only recently, it discovered that a network of at least 193 information portals in European countries and the USA was disseminating pro-Russian, misleading, or inaccurate information. The portals do not create this content themselves; they source it from social media accounts, Russian press agencies, and official bodies.
Huge botnet unearthed
Partly in response to these findings, the French National Assembly is now debating a bill inspired by the US Foreign Agents Registration Act (FARA). It would oblige foreign agents who seek to influence public life in France to register and comply with ethical rules. However, the left-wing populist opposition party La France Insoumise wants to block the law.
David Colon from the Institut des Sciences Politiques recommends above all raising public awareness. Although it is the younger generation in particular that is losing trust in official institutions, media education remains underfunded, undervalued, and not widespread enough, he criticizes. Colon advises taking inspiration from Finland, where media education has been part of the curriculum since the 1950s; as part of the compulsory subject, Finnish pupils now study disinformation campaigns intensively.
To deepen understanding of how disinformation spreads, researchers from the Social Decision-Making Laboratory at the University of Cambridge have developed the online game "Go Viral!" together with the Médialab at Sciences Po, supported by the UK government and the World Health Organization. It simulates a social media environment and exposes players to common manipulation techniques. Sweden, for its part, founded the Swedish Psychological Defence Agency in 2022. It identifies, analyses, and counters foreign influence campaigns directed against the country's interests, and is also intended to raise public awareness.