Battling digital windmills
Europe has long had to fend off an invasion: an army of digital, semi-automated bots that use the internet and social networks to create uncertainty and to influence the free formation of opinion, for example during elections. Fake news sites and sleeper accounts spread disinformation and use AI-generated images to discredit and vilify individual politicians, or the political system as a whole.
There is great concern that such campaigns could sway entire elections and shape citizens' political attitudes if people fall for them. In December, the Romanian Constitutional Court declared the first round of the presidential election invalid due to an "aggressive Russian hybrid attack".
Indeed, according to a survey by the digital association Bitkom, 88% of voters are concerned that "foreign governments are trying to manipulate the general election via social media". 45% believe that Russia is active in this regard, while 42% also consider the USA a perpetrator; Trump's adviser, the tech billionaire Elon Musk, has recently and repeatedly campaigned for the right-wing populist Alternative for Germany (AfD). Only 26% see China at work in the background.
Social media dominate
Part of the problem is that 56% of those surveyed said they mainly inform themselves about politics via Facebook, 25% via LinkedIn and 18% via Instagram. This is all the more worrying given that, in a special evaluation of the PISA study, more than half of pupils in Germany admitted to having trouble recognising fake news online. On top of this, around a third of young Germans do not check whether information is correct before passing it on digitally.
The role of social media is therefore crucial, and this is where the EU's Digital Services Act (DSA) comes into play. Under the platform law, services with more than 45 million active European users must, among other things, analyse and mitigate systemic online risks, including threats to the integrity of elections.
Toothless "guidelines"?
This affects Google (YouTube), Microsoft (LinkedIn) and Meta (Facebook and Instagram), as well as TikTok and "X", formerly "Twitter". According to the guidelines, operators must respect fundamental rights, including the right to freedom of expression. Artificially generated and manipulated images, audio and video must be clearly labelled or otherwise conspicuously marked.
The Federal Network Agency invited the platform operators to a round table on the issue in January, but nothing has been heard about the results. Meanwhile, the fact that US Vice President JD Vance rebuked the EU at the Munich Security Conference a few days ago because, in his view, its digital regulation suppresses "freedom of expression" on the companies' platforms shows what concept of freedom the new US administration is working from. Digital companies are thus indirectly encouraged to resist, and that certainly suits them, as moderation, filtering and deletion on the platforms tie up considerable resources.
Fact-checkers overwhelmed
In the USA, fact-checkers have already largely been dismissed, but they still exist on European platforms. Even they, however, cannot immediately remove or flag offending posts, experts criticise. Such posts are easy to find, yet the operators often cannot keep up simply because of their sheer volume: many real and artificial accounts (bots) spread them in coordinated, AI-supported waves. In addition, the "information" distributed comes from websites that look like genuine news outlets, which makes it difficult to recognise, and often even pages of real media are cloned for this purpose, as "Spiegel" and "FAZ" have already experienced.
A few weeks ago, a research group led by the fact-checking organisation Correctiv claimed to have uncovered a disinformation network of 102 websites that allegedly attempted to influence the Bundestag elections. It was claimed, for example, that the Bundeswehr was mobilising 500,000 men for a mission in Eastern Europe, and the FDP politician Marcus Faber was accused of being a Russian agent. A supposedly genuine news site, "Neue Presse", announced that Germany was planning to "import" 1.9 million Kenyan labourers. "A new migration crisis on the horizon?", the authors asked in the headline.
Forensic tools
According to the Fraunhofer Institute for Secure Information Technology (SIT), disinformation actors on Telegram use creator channels and separate spreader channels to increase their mass impact. The Dynamo tool developed in Darmstadt visually maps the dynamics of disinformation campaigns, making them easier to understand, exposing the main actors and helping to develop instruments against the spread of fake news. Research is also under way into machine-filtering fake news and making manipulation visible. Martin Steinebach, head of the Media Security and IT Forensics department at SIT, has developed tools that make subsequent manipulation of photos recognisable.
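How Dynamo works internally is not described here. As a rough, hypothetical illustration of the general approach of separating creator channels from spreader channels, the following Python sketch builds a directed graph of forwarded Telegram messages and ranks channels by how often their content is forwarded versus how often they forward others; channel names and data are invented.

```python
# Illustrative sketch only, not the Dynamo tool itself: map forwarding
# relationships between channels and surface likely "creator" channels.
import networkx as nx

# Hypothetical records: (channel the message originated from, channel that
# forwarded it, timestamp). In practice this would come from channel exports.
forwards = [
    ("creator_one", "spreader_a", "2025-02-01T10:00"),
    ("creator_one", "spreader_b", "2025-02-01T10:02"),
    ("creator_one", "spreader_c", "2025-02-01T10:05"),
    ("creator_two", "spreader_a", "2025-02-02T08:30"),
    ("spreader_a", "spreader_d", "2025-02-02T09:00"),
]

# Directed graph: an edge source -> target means "target forwarded content
# originating from source"; the edge weight counts how often that happened.
g = nx.DiGraph()
for src, dst, _ts in forwards:
    if g.has_edge(src, dst):
        g[src][dst]["weight"] += 1
    else:
        g.add_edge(src, dst, weight=1)

# Channels whose content is forwarded a lot but which rarely forward others
# are candidate creators; the remaining channels act mainly as spreaders.
reach = dict(g.out_degree(weight="weight"))   # how often others forward me
uptake = dict(g.in_degree(weight="weight"))   # how often I forward others
ranking = sorted(g.nodes, key=lambda n: reach.get(n, 0) - uptake.get(n, 0), reverse=True)

for channel in ranking:
    print(f"{channel:12s} forwarded_by_others={reach.get(channel, 0)} forwards_others={uptake.get(channel, 0)}")
```

On real data, the same graph could be drawn over time to visualise the waves of a campaign, which is the kind of dynamic the article describes.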
But even more education reaches its limits when people are not even aware of the development, or when the sheer volume of disinformation alone changes the way they see the world and "fake" is increasingly mistaken for "facts". A look at the USA shows this: there, JD Vance and the Republican Marjorie Taylor Greene had no problem spreading recognisably false claims about their political opponents.
Pressure on platform operators
Regulation remains indispensable, emphasised the head of the Federal Office for Information Security (BSI), Claudia Plattner, at the Munich Security Conference, and she made three concrete suggestions: no accounts for bots, labelling of AI-generated content, and the option of digitally signing content so that its authenticity can be verified. In technical and cryptographic terms, this is trivial: the platform operators would have to cooperate, but implementation would cost little and would give users more transparency.
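Plattner's third suggestion, digital signatures on content, can indeed be sketched in a few lines. The following hypothetical Python example uses the "cryptography" library with Ed25519 keys; it illustrates the general principle only and is not a description of any platform's actual mechanism.

```python
# Minimal sketch of content signing: a publisher signs its text once, anyone
# with the public key can later check that the text was not altered.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher (e.g. a newsroom) generates a key pair once and publishes the
# public key, for example on its website or a verified platform profile.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = "Original article text as published.".encode("utf-8")
signature = private_key.sign(article)  # attached to the post as metadata


def is_authentic(content: bytes, sig: bytes) -> bool:
    """Return True if the content matches the publisher's signature."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False


print(is_authentic(article, signature))                               # True
print(is_authentic(b"Manipulated version of the text.", signature))   # False
```

The cryptography is standard; the real effort lies in key distribution and in platforms displaying the verification result to users, which is where the operators' cooperation comes in.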
Recently, new fake news made the rounds that, in line with US President Trump's wishes, is meant to support his latest political decisions: the story that the US agency USAID paid 20 million dollars to actress Angelina Jolie, five million to actor Sean Penn and four million to actor Ben Stiller to polish the image of Ukrainian President Volodymyr Zelensky. Trump advisor Elon Musk immediately retweeted this "news", probably because it seems to justify his halting of funding for USAID.