The text below, in which public law professor Susanna Lindroos-Hovinheimo and assistant professor Riikka Koulu are interviewed, was published on 7.9.2023 as a news item by the University of Helsinki. Read the original news text here: https://www.helsinki.fi/fi/uutiset/demokratia/tekoalyasetus-paisumassa-vaikeasti-sovellettavaksi-moykyksi

In the EU, arduous work is being done to complete the planned AI Act in the near future. However, as systems utilizing AI develop at a rapid pace, the legislative package threatens to become a difficult-to-apply whole.

– It's starting to look like the regulation will be very broad and contain a huge number of articles. I find it highly unlikely that the result will be a clear, coherent whole.

So says public law professor Susanna Lindroos-Hovinheimo, who has followed the regulation's progress closely. She has become familiar with the regulation through, among other things, the Generation AI project, which researches AI regulation particularly from the perspective of children's rights.

Lindroos-Hovinheimo assesses that the regulation leaves far too much room for interpretation.

– Only when we receive the first decisions from the EU Court of Justice will we know what the regulation actually says. This will take years. Yet the regulation should be complied with continuously. This will be a really difficult situation for all actors, both private and public.

Someone unfamiliar with the matter might think that AI is currently not regulated at all, since the AI Act is still in preparation. This is not the case: numerous existing laws, starting from the constitution, set boundary conditions for the use of AI. However, there is no single regulatory framework covering all technology. The need for such a framework is evident, and in the EU there seems to be strong political will to get the regulation done.

– Digitalization has advanced for a long time without the development being seriously questioned. We have decades of regulatory debt that is now being addressed both in the EU and at the national level, says Riikka Koulu, assistant professor of societal and legal impacts of AI and director of the Legal Tech Lab.

Riikka Koulu and Susanna Lindroos-Hovinheimo

GDPR Is the Closest Comparison

The most apt point of comparison for the AI Act now being finalized is the EU's General Data Protection Regulation, GDPR.

One obvious unifying factor is the scale. Just like GDPR, the AI Act has a vast scope of application and affects, simply put, all areas of society. Another shared feature is supervision.

– The regulation will by no means remain a dead letter; rather, quite effective supervision mechanisms will be created for it, Susanna Lindroos-Hovinheimo explains.

The EU has repeatedly imposed fines of hundreds of millions of euros on platform companies that have violated GDPR rules. The AI Act will also carry the threat of fines, and the amounts will be of the same magnitude, forming a real deterrent that affects companies' operations.

On the other hand, implementation, application and supervision are also expensive. A new authority may be established for supervision, or new tasks may be assigned to existing authorities. GDPR is supervised by the data protection authority, and presumably a similar authority will be established to supervise the AI Act. The legislative work itself is also extensive and requires a great deal of effort and money.

A clear difference between these two regulatory frameworks is that GDPR was created on existing foundations, whereas the AI Act is created from scratch. GDPR was built on top of an old directive, and a significant part of the regulation's content was taken directly from the old directive, which had existed for decades and been refined over the years through case law.

– Because the AI Act has no predecessor, there is also no legal tradition or precedents that courts can rely on. Therefore, it is to be expected that its application will be challenging, Lindroos-Hovinheimo says.

Defining AI Is Difficult

One of the major stumbling blocks with the AI Act has been how AI should be defined in a legal sense. This is one of the decisive questions for which no consensus has yet been found in negotiations. It is telling that even data scientists, engineers and other professionals dealing with AI have not found an answer that satisfies everyone on what is AI and what is not.

However, the EU's aim has been to keep the definition of AI broad, and the regulation contains numerous general guidelines that would apply to all AI. On the other hand, the regulation is in many respects very detailed, and it concerns, for example, the technical operating mechanisms of individual systems. In this respect, Lindroos-Hovinheimo believes the regulation is badly imbalanced, as it operates at too many different levels.

Difficulties have also arisen from the fact that technology changes so rapidly that draft laws need to be modified on the fly. One example is ChatGPT-like large language models, which made a major breakthrough at the turn of the year. AI applications based on this technology were not considered in the European Commission's original draft regulation, but they were added to the Parliament's version during preparation.

– Laws should be drafted in such a way that they stand the test of time and keep up with technological development. GDPR seems better in that respect, as it is technology-neutral – the same rules apply whether data is processed with pen and paper or by computer, Lindroos-Hovinheimo says.

To solve this problem, a model has been proposed in which the Commission could issue supplementary regulations and updates after the regulation comes into force, without amending the law itself. This would be a practical way to keep up with development, but it is problematic from the perspective of parliamentary democracy.

Large technology companies also readily point to this fundamental problem between technology and law – namely, that technical development progresses faster than regulation. Silicon Valley technology companies have argued that it's actually better not to regulate at all, but to let development proceed at its own pace and rely on industry self-regulation.

– When it comes to the market for products and services, we have seen so many times that this cannot be relied on, and regulation is necessary. This is also what the EU has done in recent years with large platforms like Google and Facebook, Riikka Koulu points out.

Regulation Brings Security to Ordinary People

Despite the difficulties, it is all but certain that the regulation will be completed. Legislative packages have been abandoned mid-preparation before, but in the case of the AI Act, the stakes are too high for it to be rejected at this point: the matter is too significant and the political pressure to act too strong.

If the regulation were to fail, it is inevitable that member states would begin legislating their own laws one by one.

– This would be a big problem for trade and free movement, i.e., an economic problem within the union. Creating common rules within the EU is a kind of capitalist goal, not just improving the world, Lindroos-Hovinheimo states.

Although the legislative package is problematic in many respects and its application will undoubtedly reveal ambiguities, it is, according to Lindroos-Hovinheimo, nonetheless a necessary and welcome package.

– From the perspective of an ordinary person, the regulation is probably a positive thing. It can create a certain level of security for people: there are rules for AI systems, so they cannot be just anything and, above all, cannot be used for just any purpose.