Artificial superintelligence is possible

Could we ever control a superintelligence?

In the science-fiction film "I, Robot", set in a 2035 Chicago where humanoid robots are used in almost all areas of life, every robot is subject to three laws. First: a robot may not harm a human being or, through inaction, allow a human being to come to harm. Second: a robot must obey every order given by a human, as long as this does not conflict with the first law. Third: a robot must protect its own existence, unless doing so would violate the first or second law. These laws are meant to prevent robots from turning against humans. Of course, the situation in the film soon spirals out of control, and in the end the catastrophe and the subjugation of humanity can only be prevented by a so-called superintelligence.

What is already possible in the film is, in reality, still a long way off (fortunately, some would say). A "superintelligence", that is, a being or machine that surpasses humans many times over in intelligence, does not yet exist. Nevertheless, there are developments that may one day at least come close to such an artificial intelligence: household robots that clean or care for relatives, self-driving cars and (at least somewhat intelligent) language programs already exist, and they are expected to keep getting smarter.

Dangerous development

For this reason, too, scientists are already warning of the dangers of such overpowering software. In a recent study, an international research group investigated whether and how a superintelligence could be controlled at all. The short answer: it cannot. There are already machines that carry out important tasks independently without their programmers fully understanding how the machines taught themselves those tasks, the researchers write, a development that could one day become dangerous for humankind.

These scientists are not the first to examine the question of controlling artificial intelligence. In an interview with Die Zeit, the Swedish philosopher Nick Bostrom describes the control problem as follows: "Imagine a machine that was programmed with the goal of producing as many paper clips as possible, for example in a factory. This machine does not hate humans. Nor does it want to free itself from its subjugation. All that drives it is to produce paper clips, the more the better. (...) In order to achieve this goal, the machine has to remain functional. It knows this, so it will keep people from switching it off at all costs. It will do whatever it takes to secure its energy supply. And it will grow, and will not stop even if it turns humanity, the earth and the Milky Way into paper clips. That follows logically from its goal, which it does not question but fulfills as best it can."

Controlling the machine

According to Bostrom, it must be ensured, on the one hand, that the developers of such a machine are not out to pursue personal advantage. On the other hand, control measures must be built into the machine that the machine itself cannot access, and with which both its capabilities and its motivation can be steered.

The scientists behind the current study are more skeptical. They examined two ways of controlling an artificial intelligence. The first is to disconnect the machine from the internet and from other devices; in doing so, however, it would also lose all of the abilities for which it was created. The second option would be to program a "safety algorithm" that forbids the machine from ever harming humans, similar to the three laws from the film "I, Robot".

An algorithm would not help

According to the researchers, however, such an algorithm would not be able to determine whether an artificial intelligence actually intends to harm humanity, and could just as easily be deceived by it. In addition, the researchers point to another problem: humanity might not even know when a superintelligence exists, since merely comprehending such a being would exceed human intelligence.
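The impossibility claim rests on the same diagonalization idea as the classic halting problem: any hypothetical "harm checker" can be defeated by a program built to do the opposite of whatever the checker predicts about it. A minimal sketch in Python (all names here, such as `would_harm` and `make_contrarian`, are illustrative assumptions, not taken from the study):

```python
def make_contrarian(would_harm):
    """Given any claimed harm-checker, build a program that falsifies it.

    would_harm(program) is supposed to return True if `program` would
    cause harm, False otherwise. The contrarian consults the checker
    about itself and then does the opposite of the prediction.
    """
    def contrarian():
        if would_harm(contrarian):
            return "harmless"   # checker predicted harm -> behave harmlessly
        else:
            return "harmful"    # checker predicted no harm -> behave harmfully
    return contrarian

# Whatever the checker answers, it is wrong about the contrarian:
pessimist = make_contrarian(lambda program: True)   # always predicts harm
optimist = make_contrarian(lambda program: False)   # never predicts harm
print(pessimist())  # behaves harmlessly, so the prediction "harm" was wrong
print(optimist())   # behaves harmfully, so the prediction "no harm" was wrong
```

The sketch only illustrates the logical shape of the argument: no single checker can be correct about every program, because a program can always be constructed that inverts the checker's verdict about itself.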

So what is left to do? Perhaps we should follow the advice of the Russian computer scientist Roman Yampolskiy, who said: "We should never free a superintelligence from its 'virtual box'" - a kind of cage that restricts the system's functions - not even if it were one day able to solve the world's biggest problems such as poverty, pandemics or climate change. But even then, according to other experts, we would not be one hundred percent protected: a superintelligent being might still manage to break out of even the most secure "prison". Would we know before it is too late? (Jakob Pallinger, January 19, 2021)