Cold War on Social Media Platforms
Attempts to unsettle and misinform voters in order to discredit democratic processes have existed since time immemorial. Rumours have circulated for years suggesting, for example, that votes will be invalidated if the check mark extends beyond the designated space, if there is a punched hole in the corner of the ballot, or if the ballot is torn, or that a ballot must be signed to be counted. What is new is that such claims now infiltrate deeply and widely into the timelines of social media users and are no longer confined to smaller activist and conspiratorial communities.
Algorithms without a moral compass
On Facebook, X (Twitter), YouTube, Instagram, and TikTok, these false claims are now broadcast nationwide and, thanks to the distribution algorithms in use, have a greater reach than public broadcasting ever had. This is all the more true as more and more users rely on social media as their main source of information.
The investigative network Correctiv reports on current fake news circulating on social media and continuously uncovers new cases of obvious disinformation. The Federal Returning Officer has also responded to the challenge with an information page.
Even worse, however, is another digital threat: rumours, conspiracy theories, disinformation, fake news, images, and videos that are not immediately recognisable as such and usually lack any context. Anyone who wants to check their authenticity and the underlying facts is given nothing to go on; reputable media, by contrast, at least cite a source or point to alternative interpretations. The information could be found elsewhere on the web with a few clicks, but that costs the user time and effort, and it is precisely that extra click that the platforms want to direct elsewhere. The algorithms prefer to present the "next hot thing" to be clicked on, since it promises advertising revenue.
More space for "alternative" worldviews
Moreover, such rumours, falsehoods, and half-truths spread so quickly on social media platforms that they reach a broad cross-section of the population before any response can be mounted. With the European elections underway, the European Union Agency for Cybersecurity (ENISA) currently sees the EU as the main target of disinformation attacks. The actors' aim is to sow discord and discredit democratic institutions. Extreme parties thus gain more space to spread their "alternative" worldviews among the people.
Overall, according to ENISA, disinformation actors of both domestic and foreign origin seek to establish dominance over particularly critical and sensitive topics such as Ukraine, climate, migration, energy prices, and gender. These topics can easily be framed in black-and-white terms and lend themselves more readily to beliefs and assertions than, say, economic policy, where one-dimensional answers are scarcely possible and for which the attention span of social media audiences is in any case too short.
Networks paralysed
Artificial intelligence (AI) is now also being used in disinformation strategies, because fake news can be produced more precisely, more inconspicuously, more quickly, and on a larger scale. Political defences are being overrun. Recently, the company OpenAI (ChatGPT) identified and stopped five networks that had created fake accounts with AI and launched campaigns through them. Among the networks uncovered were the pro-Russian "Doppelganger", the pro-Chinese "Spamouflage", and an Iranian operation known as the "International Union of Virtual Media" (IUVM).
Prior to this, EU Commission Vice-President Vera Jourova had warned that the leadership in Moscow wants to influence the election in order to divide Europe. Although a task force was set up within the EU's diplomatic service years ago to enable faster reactions, it promises one-off successes at best: the response time is still too long, deleting the relevant messages is cumbersome, and by then the wave of a disinformation campaign has already swept through all timelines.
TikTok as a fake news accelerator
The Digital Services Act (DSA) is considered a sharp instrument with which the EU obliges major operators of internet platforms to react more quickly and rigorously to disinformation campaigns. However, implementation of the legislation has only just begun, so there is little experience to draw on yet. In addition, the operators seem to vary in their sensitivity to fake messages. The question remains whether, in a democracy, content endangering the state should be identified and blocked by private companies. Isn't that the state's task? After all, comments protected by the constitutional guarantee of freedom of speech may also end up being suppressed. Shouldn't the state also upgrade digitally and demand interfaces to social media networks so that it can intervene directly?
According to the DSA's Transparency Database, 58,082 cases of disinformation involving "negative effects on political discourse and elections" prompted action by platform operators across Europe on 1 June alone. The lion's share, 56,525 cases, was on TikTok. That says something about the TikTok environment, though it may also mean that other platform operators such as Facebook, which intervene only in single-digit numbers per day in this category, are still too negligent. On TikTok, the majority of political disinformation is also sorted out by automated techniques, with unknown error rates. Given around 53 million reports of violations across all areas and platforms in a single day, there seems to be no other way.
Twitter has ceased its involvement since the takeover
Meta (Facebook and Instagram) as well as Google, despite their low intervention numbers for politically manipulative posts, claim to have learned lessons from the US election fraud debacle. Just under four years ago, the denigration of the US presidential election result spread unchecked through the timelines of social media users. The platforms say they have since developed their own guidelines for "election integrity", and fake accounts are identified and removed more quickly. Since the takeover by Tesla CEO Elon Musk, however, X (formerly Twitter) appears to have abandoned such efforts altogether. Green Party politician Terry Reintke says that "hate speech continues to linger there for far too long."
TikTok is a special case, because political attitudes are also influenced by omission. Its 1.6 billion users (20 million in Germany) hardly ever see posts on the Uyghur issue, although the network insists it does not censor; videos addressing this oppressed minority in China have disappeared from the platform. Doesn't the algorithm (or is it a censorship tool?) already paint a particular picture of the world for all its users through the selection and frequency of videos on certain political topics? In this way, opinions and attitudes toward politics and the economy can be influenced.
FBI Director Christopher A. Wray is likewise concerned that the Chinese state has the ability to control the recommendation algorithm of the TikTok app, allowing it to manipulate content and exploit it for influence operations when needed. Yet this is not limited to TikTok; it applies to platform algorithms in general. They systematically favour short statements over lengthy, differentiated explanations, raucous and flashy posts over moderate ones, and trendy topics over nuanced ones, in keeping with the platforms' 30-second entertainment logic.
Social media shapes opinion
In this respect, social media shapes young people's opinions about politics, the economy, and foreign policy, even where no outright disinformation is involved, simply by presenting a perceived reality, including the topics that, from the platforms' perspective, matter most to them. According to surveys, around 65% of 14- to 29-year-olds prefer to get their news via social media. How the situation develops as AI is increasingly used both as a tool for producing disinformation and as a tool for detecting and countering it remains to be seen. Minor, barely noticeable changes to a politician's language in videos, minimal corrections of facial expressions, or a twisting of words can alter the impression made on viewers and listeners. At the same time, AI is also suited to detecting disinformation. A cold war in the digital realm.