
Would Super AI Cure Cancer — or Destroy the Earth?

Max Planck Institute computer scientists say that we not only don’t but can’t know

An international team of computer scientists associated with the Max Planck Institute has concluded that, given the nature of computation, there is no way to determine what a superintelligent AI would do:

An international team of computer scientists used theoretical calculations to show that it would be fundamentally impossible to control a super-intelligent AI

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity”, says study co-author Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines, Max Planck Institute for Human Development

Newsroom, “We wouldn’t be able to control superintelligent machines” at Max Planck Institute (January 11, 2021)

They considered ways to hobble such a computer, including:

➤ Walling it off from the internet, though that would limit its usefulness. (One might add, of course, that a superintelligent machine might find a way to outsmart its guards and get onto the internet anyway.)

➤ Programming it to pursue only goals that are in the interests of humanity. (Again, would it be bound by mere human programming?)

In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances, by simulating the behavior of the AI first and halting it if considered harmful. But careful analysis shows that in our current paradigm of computing, such [an] algorithm cannot be built.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable”, says Iyad Rahwan, Director of the Center for Humans and Machines.

Based on these calculations the containment problem is incomputable, i.e. no single algorithm can find a solution for determining whether an AI would produce harm to the world. Furthermore, the researchers demonstrate that we may not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans is in the same realm as the containment problem.

Newsroom, “We wouldn’t be able to control superintelligent machines” at Max Planck Institute (January 11, 2021)
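
The impossibility result is, at bottom, Turing's halting-problem diagonalization applied to "causing harm" instead of "halting." A minimal Python sketch of that diagonal argument (not the paper's formal construction; the names `would_harm`, `trouble`, and `cause_harm` are hypothetical, invented purely for illustration) might look like this:

```python
# A minimal sketch of the diagonal argument, assuming a hypothetical,
# perfectly correct containment oracle `would_harm`. All names here are
# illustrative; nothing below comes from the paper's formalism.

def cause_harm() -> None:
    """Stands in for any action the containment algorithm must prevent."""
    print("(hypothetical harmful action)")

def would_harm(program_source: str) -> bool:
    """Hypothetical oracle: True iff running `program_source` would ever
    cause harm. The construction below shows that no total, always-correct
    implementation of this function can exist."""
    raise NotImplementedError("no algorithm can decide this in general")

TROUBLE_SOURCE = "<imagine this holds the source code of trouble() itself>"

def trouble() -> None:
    # Ask the oracle about this very program, then do the opposite.
    if would_harm(TROUBLE_SOURCE):
        return        # predicted harmful -> behave harmlessly
    cause_harm()      # predicted safe    -> behave harmfully
```

Whatever verdict the assumed oracle delivers about `trouble`, `trouble` falsifies it, so a universal harm-checker is impossible for the same reason the halting problem is undecidable.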

There are good reasons for thinking that none of this will ever be a concern. First, many intelligent operations of the human mind are not computable, which means that a computer cannot perform them. These are not “genius” feats, either; computers cannot even manage common sense, and there is no clear way to enable them to do so.

Among AI's other limitations are the facts that it is not creative, it does not experience things, and its achievements are narrowly focused. For example, AI wins at chess because, in chess, the map is the territory. That is hardly true of real life. Building a bigger computer will not help; these are built-in limitations of AI.

It’s also worth pondering whether we could build a machine that is actually smarter than ourselves, as opposed to building one that performs one or another function more efficiently.

Could superintelligent AI evolve all by itself from less intelligent AI? As noted in an earlier post: “Some things that appear very simple if we look only at small tasks or small numbers prove immensely more complex when we ask for big results. The difficulties increase by orders of magnitude.” If so, a task like becoming superintelligent is probably not evolvable.

So if the world is destroyed, it won’t likely be a supercomputer’s fault.

The paper, “Superintelligence Cannot Be Contained: Lessons from Computability Theory” (Journal of Artificial Intelligence Research), is open access.


You may also want to look at:

Six limitations of artificial intelligence as we know it. You’d better hope it doesn’t run your life, as Robert J. Marks explains to Larry Linenschmidt.

and

Can AI really evolve into superintelligence all by itself? We can’t just turn a big computer over to evolution and go away and hope for great things.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
