"Who will check the AI checking system?"
Prof Kersting, does the new AI Act actually fulfil its role as a model for the industry? Or is it creating a new bureaucracy monster?
I think the EU's fundamental approach of wanting to protect people and at-risk groups is sensible. After all, many rights can be violated through the use of AI: digital likenesses of people can be created, people can be tracked at every turn, and their behaviour can be analysed. Citizens and society must be protected from this. More and more groups are already realising how AI can intrude into their lives and change them. Defining such prohibited zones is precisely what the law does. And I also think that Europe is indeed leading the way this time when it comes to giving legal shape to a new market. The USA will probably soon move in a similar direction.
But AI is a field in constant motion. Can rigid laws even do justice to that?
First of all, the regulations still need to be formulated and fleshed out for the new testing and inspection bodies that are to be set up. These bodies must also be equipped and authorised to examine and test the "package leaflets" that the industry supplies with its applications – that is, to verify that the products actually contain the ingredients listed on them.
But how can this be proven without disclosing the code? In other words, can a company really be expected to reveal its trade secrets?
Perhaps they would not have to disclose all the data; larger blocks might suffice. I can also imagine – analogous to auditors in tax and corporate law – a public or private body that is given comprehensive insight but passes on only its certificate. At the moment, I am far more concerned about whether these new auditing bodies will be able to keep pace with the enormous advances in AI. One of the next big training data sets will probably contain 100 billion images. No human will be able to look through them all. To check such data, we will need AI systems of our own, which in turn check and evaluate other systems. And who will check the checking system?
Clear stand against social scoring
Taking a positive view of the legislation: where do you see the most significant advantages of the law?
It is the fundamental clarity. For example, the Act takes a clear stand against social scoring, which should reassure many people. There are transparency requirements for development, so that it becomes clear how these systems react and why they sometimes give such strange and distorted answers.
And why is that?
It comes down to the training data: we humans also behave strangely at times, and this is reflected in the data. And sometimes the data is simply not representative – it comes mainly from the Western world, for example. The sources must therefore be disclosed, especially when these systems make relevant decisions on their own in various areas.
But can't this transparency requirement be easily undermined?
Anything can be undermined – just think of the speed limits on our roads. The decisive factor is that companies run a real legal and financial risk when they misbehave. What's more, undermining the rules can be made more difficult by AI testing systems that themselves learn from this training data.
The necessary technical understanding and AI expertise must be available in the authorities; scrutiny must not be merely legal and formal.
How great is the danger that the AI Act will set too narrow limits on progress in AI?
I hope that, despite all the necessary testing, enough openness to new technology will remain. Exhaustive checks of such enormous data sets will not be feasible, so we will probably have to switch to a stochastic approach and rely on good random samples. However, this also requires the necessary technical understanding and AI expertise to be present in the committees and authorities yet to be formed that will have to implement the law – it must not come down to merely legal and formal checks.
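To make the idea of such a sampling-based audit concrete, a minimal sketch might look like the following. This is purely illustrative – nothing here is prescribed by the AI Act – and `check_item` is a hypothetical stand-in for whatever automated compliance test an auditing body would actually run on a single item.

```python
import math
import random

def audit_by_sampling(dataset_size, check_item, sample_size=10_000, z=1.96):
    """Estimate the share of non-compliant items in a data set far too
    large to inspect exhaustively, using a simple random sample."""
    # Draw a uniform random sample of item indices.
    indices = random.sample(range(dataset_size), sample_size)
    violations = sum(1 for i in indices if check_item(i))
    p = violations / sample_size
    # Normal-approximation 95% confidence interval for the violation rate.
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Illustrative run: a synthetic "data set" of 100 billion items in which
# roughly 0.1% of items would fail the hypothetical compliance check.
rate, margin = audit_by_sampling(
    dataset_size=100_000_000_000,
    check_item=lambda i: random.random() < 0.001,
    sample_size=10_000,
)
print(f"estimated violation rate: {rate:.4%} +/- {margin:.4%}")
```

The appeal of this approach for audits at such a scale is that the margin of error depends on the sample size, not on the size of the data set: 10,000 samples pin down the violation rate to well under a percentage point whether the collection holds a million items or 100 billion.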
Was the AI Act passed too early, given that many of these questions were still open and the new review bodies did not yet even exist?
No, because that would have led to significant delays. Now politicians and industry at least know what they have to prepare for and what still needs to be done. After all, the AI industry itself has called for regulation. Tesla boss Elon Musk even proposed a six-month pause in the development of huge AI models – and then suddenly came out with such a model himself. We must therefore stay closely involved in the implementation of the law, observe the industry and act pragmatically. And that is only possible with people from the industry itself.
Is the AI expertise already there in the authorities that are now being called upon to flesh out the law and structure the review bodies?
Not at the moment – hence my call for AI expertise. After all, it is not just a matter of scrutinising the social and legal aspects, as is the case elsewhere. The specifications these authorities issue must, of course, take into account what is technically feasible. You simply need to know about the ongoing research projects.
But isn't the bigger problem that the AI Act entails a considerable amount of bureaucracy, stifling precisely the smaller companies it is actually intended to support?
The danger is there, of course.
How should we proceed now to prevent a Brussels bureaucracy monster?
We have to be careful, for example, that the transparency obligation does not overwhelm small companies and start-ups, and that the bureaucratic requirements do not simply get out of hand. Since warnings to this effect are being voiced right at the start of the consultations, there is certainly still reason for hope. Ultimately, however, transparency is a clear advantage for Europe as a location, because it creates security and trust and thus promotes the market.
We have to be careful that smaller players are not outmanoeuvred by the big players.
Given the bad experiences in other digital markets, I was wondering why the competition authorities are not involved in the AI Act from the outset. Do you know the reason?
I have asked myself that question too. I think that would make sense. After all, digital business in particular is also about the platform economy. In any case, we have to be very careful that the big players do not outmanoeuvre smaller players on the market. Above all, however, the state must ensure that it continues to expand the digital infrastructure vigorously and keeps supporting research where proprietary AI systems are being developed. That is where I am most worried, because less is happening here at the moment than elsewhere.
Does this also apply to the auditing bodies that are yet to be set up, especially as there are particularly large players in the auditing market who could also take this on?
Yes, definitely. We must not allow the AI Act merely to open up new sources of revenue for the KPMGs, EYs and Deloittes of this world.
Would stronger promotion of open source be a way to satisfy the transparency requirements and support smaller companies at the same time?
That is indeed an important idea. Open source needs to play a bigger role in the AI sector than it has so far – for the reasons mentioned, but also for many others. Open-source models can be used to map the DNA of human behaviour, so to speak, and it is clearly in the public interest that this does not lie solely in the hands of commercial companies but in generally accessible libraries. What is still lacking is the funding to build these systems; later, other companies could build on them and specialise the models for their own purposes. Research on large open-source models would also help to clarify questions such as the copyright status of training data and how closely AI outputs resemble the original data, and to reach sound judgements on them.
What is your biggest concern with regard to AI legislation in Europe and its implementation?
That, by mistake, the opposite of what was intended is achieved – well-intentioned but badly done – for example if the detailed provisions of the AI Act turn out to be too narrow, rigid, bureaucratic or out of step with the market. At this stage, however, it is still too early to make such a judgement.
Of course, jobs are being lost, but new ones are also being created by AI.
And the impact of AI on the labour market? That doesn't keep you awake at night?
Of course, jobs are disappearing, but AI is also creating new ones; some jobs are becoming easier, others more demanding, and entirely new opportunities are emerging. It has always been like that: with electricity, with cars, with robots in factories. New forms of artistic expression are also emerging. But photography has not replaced painting, and the computer has not replaced the chess player, even if the former can beat the latter.
The interview was conducted by Stephan Lorz.