Kansanuutiset (KU) published an article by Toivo Haimi on 27.8.2023. The original text can be found without a paywall at this address: https://www.ku.fi/artikkeli/4906774-tekoaly-on-jo-taalla-eika-siihen-pida-suhtautua-kuin-luonnonvoimaan-sanoo-tietojenkasittelytieteen-professori-teemu-roos

Artificial intelligence is not science fiction; it is already part of our everyday lives, says a professor of computer science at the University of Helsinki. He leads AI education at the Finnish Center for Artificial Intelligence FCAI and is the instructor in charge of the popular Elements of AI online course.

Computer Science Professor Teemu Roos, before we begin, it's probably worth clarifying some concepts. What exactly are we talking about when we talk about artificial intelligence in popular discussion?

Good point! The term "artificial intelligence" can mean quite a few different things. For me as a researcher, it is a scientific discipline, just like physics, biology, or theology. If you instead look at it as an object of study from, say, a social sciences perspective, we are already talking about applied technology, especially when it comes to the commercial interests tied to artificial intelligence or to particular steps and aspects of digitalization.

Artificial intelligence has been discussed a lot as a service or a tool that people use. It has also been described as a kind of assistant that helps with various tasks.

Of course. Services can be built on top of artificial intelligence; ChatGPT is certainly a good example of a tool that, through the media, has brought artificial intelligence to people's attention over the past year. In that sense ChatGPT is quite literally a service that can be used. It is a tool, but I would like to problematize the idea, or metaphor, that it is some kind of assistant.

I understand that metaphors are quite useful in discussion; the philosopher Susan Sontag has written wisely about this. A metaphor can be a useful way to explain artificial intelligence without getting too detailed or too technical. But it is also important to remember that a metaphor is always just a metaphor: one way of framing the thing, not the thing itself.

The term "assistant" is problematic to me specifically because we are used to thinking of an assistant as another person: someone who has their own life, their own agenda, feelings, a capacity for reciprocity, and the ability to act responsibly. From such an assistant we can expect some kind of moral agency. That is something you will not find in artificial intelligence.

Over the past year we have seen the release of DALL-E, a tool for generating images with AI, a similar service called Midjourney, and ChatGPT, a chatbot based on an AI language model. In my view, the end of 2022 was a turning point when artificial intelligence finally entered public consciousness. Did it change the discussion about artificial intelligence, in your opinion?

In a way, yes. For example, for the first time last Christmas I discussed my work with my mother-in-law at the dinner table at her initiative! That's quite a change.

It is still worth being aware that it was not only the arrival of these tools that shaped the conversation. A kind of shift has indeed occurred: these tools are clever, and with them you can do things that catch everyone's attention, which is why they feel so significant. From a scientific or technical perspective, however, there has not been a shift significant enough to explain, on its own, the current amount of hype.

Public discussion also clearly has agendas: it is about which narratives are brought into it. It is good to think about this in relation to journalistic choices as well. What do we talk about, and who gets to steer what we talk about? And above all, do we talk about the things that are felt to be important, or is some other party setting the narratives?

As technology has developed, it has been assumed that the more it develops, the less human work is needed. As machines take over heavy and tedious work, humans are freed to do things they enjoy: sitting in a café, say, or painting pictures and writing poems. When I first saw DALL-E's artwork and ChatGPT's poems, I was seized by a kind of panic that it had turned out just the opposite: artificial intelligence makes the art, while humans are forced to keep struggling in shitty jobs with miserable pay.

I recognize this line of thinking. It has been fueled by a Washington Post article about two people whose jobs ChatGPT had taken. Both had worked as creative writers, and according to the article one of them now works as a dog walker while the other is training to become an air-conditioning installer.

"Artificial intelligence is not some force of nature that comes from somewhere external."

This story seems to confirm precisely the fear that soon we will no longer get to write literature or poetry; instead we will be installing air-conditioning ducts while artificial intelligence gives the orders. And that is certainly scary.

I think more optimistically, perhaps naively in some people's opinion. I see artificial intelligence and technology as still tools. People still decide what we want to apply them to. Artificial intelligence is not some force of nature that comes from somewhere external.

The market economy can be thought of in the same way. It too contains mechanisms that make market forces appear to be natural laws beyond human hands. But that is not how it is. They too come down to political decisions, to which the entire market economy is subordinate.

In that sense, I would say technology can be seen as a phenomenon comparable to market mechanisms: it can certainly be steered, and such steering is being done all the time.

Regulation of technology is often discussed pessimistically. It is thought that technology develops faster than regulation, and that by the time regulation is finally in place, the technology has already moved on.

I question this idea too. Regulation in general is a hot potato, or rather a can of worms, that is terribly difficult to discuss in a grounded, fact-based way. The claim that AI regulation would be obsolete at birth is presented as an inevitable fact, and hardly anyone questions whether this is really the case. This, of course, serves the interests of the tech giants, in that it erodes, for example, the ability and mandate of the EU and its member states to regulate. Google, Meta, and Microsoft have enormous machinery whose job is precisely to slow down regulation and put all kinds of obstacles in its way.

– When discussing artificial intelligence, only engineers or nerds like me get to be heard, even though that discussion should also include the voices of lawyers, sociologists, and other social scientists, Roos says. 

Take, for example, legislation like the EU's General Data Protection Regulation, the GDPR. It is incredibly good: as long as it is in force and everything works, you hardly even have to think about it.

I recently served as the examiner for an Introduction to Artificial Intelligence course. The students had to read material about artificial intelligence from EU websites, including the new AI Act, which the European Parliament voted on in June. One sentence stated that the EU's AI Act creates the first legislation regulating the field of artificial intelligence.

"It is simply not true that legislation is born obsolete."

In the exam, I asked the students: since the EU Commission's website states this, does it follow that artificial intelligence is currently not regulated by any legislation at all, and could students therefore, say, cheat on this exam with the help of artificial intelligence? I asked them to consider whether the statement really means that.

Well, that is not how it is. I wanted to emphasize that the regulation of artificial intelligence does not start from a clean slate as some futuristic novelty. That is the wrong starting point, because we already have legislation that covers artificial intelligence as well.

I'll give an example: if my self-driving car hit a cyclist during a lane change, it would of course be a crime, even though there is no law that specifically prohibits artificial intelligence from running people over. At that point you cannot say that "the artificial intelligence did it".

The same goes for violating people's privacy or running smear campaigns with deepfake videos: those are already illegal too. So it is simply not true that legislation is born obsolete.

I have noticed two ways of talking about artificial intelligence in public discussion. One is techno-optimistic: it speaks of a new technological revolution that will overturn everything.

The other is alarmist: it speaks of risks, of how we will soon all be unemployed and become slaves of the companies that own AI technology. These are two completely different conversations that barely interact with each other. What do you think about this?

That is indeed how it is. I have been asked this question several times in the past couple of days. The problem with this framing, however, is that it assumes artificial intelligence is some brand-new wonder that has never existed before and that arrives from another planet.

In this kind of discussion along a utopia-dystopia axis, the ordinariness of artificial intelligence and its existing, present-day impacts are forgotten: for instance, that the benefits of artificial intelligence are distributed very, very unevenly and can increase economic and regional inequality. When artificial intelligence is thought about at the sci-fi level, it becomes difficult to discuss things that have already happened, the duller topics such as who the wealth created by artificial intelligence accumulates to.

It is important to remember, for example, all the "click work" behind artificial intelligence, in which people produce data for AI systems and "teach" them to recognize images by clicking on, say, all the pictures containing buses. These people are paid laughably little for it, not enough to live on.

Such work does not develop these workers' skills in any direction; rather, it enslaves them. They are forced into a cycle in which they cannot study, live a normal life, or even keep a proper daily rhythm. They just click.

"I would like it not to be thought that artificial intelligence is some kind of strange vision of the future."

Another example is ChatGPT. Human feedback was used in "teaching" it: the AI produces text, and people evaluate whether the answer was good or bad, whether it contained hate speech or incitement to violence, and so on. This feedback was used to fine-tune ChatGPT.

That work was done by Kenyan workers for an absolutely miserable wage, and those people had to watch and read truly sickening hate speech and violent text, which was very traumatic for them. The pay and compensation for this work were completely ridiculous. All these examples feel somewhat "boring" compared to tossing out the idea of a shiny robot that comes and takes everyone's jobs.

That is why I am concerned that the media narrative has shifted so far away from these issues, where there are genuinely important, acute problems. There is also plenty of expertise on them in various parts of society.

Yet when artificial intelligence is discussed, only engineers or nerds like me get to be heard, even though that discussion should also include the voices of lawyers, sociologists, and other social scientists.

Last question. What should we all keep in mind about artificial intelligence?

I would emphasize that it is not some really weird sci-fi thing that looks like a brain on a blue circuit board. I would like people not to think of artificial intelligence as some strange vision of the future: it is already here, in the everyday smart devices that are full of AI-based applications. They affect the lives of people and their families right now.

Artificial intelligence is not some incomprehensible rocket science; anyone can understand it if they want to.