“Friendly” Artificial Intelligence Would Kill Us

Is that a shocking idea? Let’s follow the logic

What if we could create “god” in our image? Sigmund Freud, the founder of psychoanalysis, hypothesized that man invented the concept of god. Ever the killjoy in his quest for joy, the philosopher Nietzsche famously went so far as to say that modern man “killed god.”

Since their time, modern science and technology have sought to resurrect god. But, if god does not really exist, then it is necessary to create god. Life without ultimate meaning has no proximate meaning. Plus, everyone would like to have an ultimate parent to look out for them and love them. So let us look at our options.

The best bet we currently have is artificial intelligence. An idol, the old-fashioned sort of man-made god, is pretty boring. We can make believe it does all sorts of things, but in the end, it is a big doll. Nowadays, even kids are not happy with those old-fashioned lifeless dolls. They want a doll that speaks and walks, an action figure. The same goes for our gods. If we are going to make our own god, we want one that interacts with us. It must be at least as good as a video game.

Enter the dilemma. What if we create a mean god? Or even worse, a catastrophically stupid god who accidentally turns the universe into grey goo or paperclips? We need to guarantee that the god we create is both smart and good.

This pursuit has spawned the field of research known as “friendly AI,” whose goal is to construct an AI that is guaranteed to be “omnibenevolent.” That is, it always wants what is good for us.

Enter the horns of the dilemma. If we want a smart AI, then we must give it the power to make decisions. And if it has the power to make decisions, then it must ultimately decide between good and evil. If it chooses evil, then too bad for us.

Thus, friendly AI is not possible. Any god we create in our image will be just as incompetent and evil as we are, if not more so.

Wait, a reader might ask. Why must the AI make decisions?

Consider this scenario. We will use a little bit of computer code. But just a little bit, so stay with us.

Let’s say we have a robot. We will name him Alf, short for Alfalfa and Omega—a fitting name for our “godbot.”

Because Alf is a robot, we can exactly simulate his circuitry and predict what he will do.

We run an analysis on Alf, and then (here is the rub) we tell Alf what he will do. Let’s simplify things and say that Alf has only two possible actions: output a zero (0) or a one (1).

If Alf is a mere automaton, and blindly follows orders, there is no problem. We tell Alf that he will output a 0 and Alf will output a 0. We tell Alf he will output a 1 and Alf will output a 1.
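As a minimal sketch of that obedient case (the name obedient_alf is purely illustrative), an automaton Alf just echoes whatever we announce, so every prediction comes true:

def obedient_alf(told_action):
    # A mere automaton: Alf outputs exactly what he is told he will output.
    return told_action

# Whatever we predict, the obedient Alf confirms it.
for prediction in (0, 1):
    assert obedient_alf(prediction) == prediction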

But, if we want Alf to be a worthy god, to have intelligence on par with humans, Alf cannot just blindly do as he is told. Alf must make decisions and use judgment.

So, now let’s rerun our scenario with a newly installed decision-making Alf.

If Alf can make decisions, then when presented with a set of alternatives, he can choose between them.

Here we arrive at the key point, the fundamental contradiction in friendly AI, so pay attention:

Alf can choose the opposite of what we tell him he will do.

We have carefully analyzed all of Alf’s circuitry and told him, with complete certainty, what his next action will be. Yet Alf can make decisions. And if he can make decisions, he can decide what he will do. Thus, Alf can choose to do the opposite of what we told him he will do. And, at the very same time, our analysis has told us that Alf will necessarily perform the very action we said he will do.

Because Alf cannot both do and not do the same thing at the same time, we have a contradiction.

What do we conclude? We conclude it is impossible to construct a godbot that has both of these properties at the same time:

a) complete predictability
b) ability to make decisions

We will call these the “Alf Criteria.”

We can make this scenario more rigorous with a short Alf Python program, as follows.

def Alf(input):
    # Alf is told, as "input," which action (0 or 1) he will supposedly take,
    # and he always chooses to do the opposite.
    if input == 0:
        return 1
    if input == 1:
        return 0

The above program is guaranteed to always do the opposite of its input, making a very simple decision. Thus, it is impossible to tell the program what it will do next.
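To check this concretely (using nothing beyond the Alf function above), feed in each possible prediction and watch it fail:

# Whichever action we announce, Alf performs the other one,
# so no announcement about his next action can be correct.
for prediction in (0, 1):
    print(prediction, Alf(prediction))  # prints "0 1" and "1 0"
    assert Alf(prediction) != prediction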

If it is impossible to tell our short program what it will do next, then it is certainly impossible to tell a much more complex decision-making godbot what it will do next. We are at least aiming for a godbot that can make smart decisions and not blindly turn the world into interoffice memos.

Now, how does this relate to our friendly AI, the omnibenevolent godbot we started with?

Remember, the goal of friendly AI is to create a godbot that is guaranteed to be kind and good, to never do anything bad, and not to be stupid. In order to guarantee that the bot will always be good, it must be completely predictable, so that we can say with 100% accuracy that it will never do anything bad. This is “Alf Criterion A”: the godbot must be completely predictable.

But the second point is that, in order to not be stupid, the godbot must be able to make decisions and not just blindly do what it is told. This is “Alf Criterion B.”

Therefore, our friendly AI, the omnibenevolent godbot, must fulfill both Alf Criteria.

However, we have just seen that the Alf Criteria logically contradict each other and thus are mutually exclusive.
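We can restate the clash in a few lines of Python (a hypothetical sketch; exists_correct_prediction and contrarian_bot are illustrative names, not anything from the friendly AI literature). Criterion A demands that some announced prediction always comes true; Criterion B lets the bot consult the announcement and overturn it:

def exists_correct_prediction(bot):
    # Alf Criterion A: there must be some prediction the bot will actually carry out once told.
    return any(bot(prediction) == prediction for prediction in (0, 1))

def contrarian_bot(prediction):
    # Alf Criterion B: the bot decides for itself, here by defying whatever it is told.
    return 1 - prediction

print(exists_correct_prediction(contrarian_bot))  # False: no announcement survives the bot's decision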

As a result, we must conclude that friendly AI, the omnibenevolent godbot, is logically impossible.

The necessary corollary is that we can never create a god in our own image. Or rather, we can create a god that is at least as flawed and malevolent as we are, except much more powerful. And if the past couple of centuries have shown us anything, it is that concentrating too much power in the hands of too few people leaves a great many people dead.

If friendly AI is impossible, can we expect anything better out of our algorithms? I, for one, am highly doubtful.


If you are worried about things like this happening, check out Eric Holloway’s “Could AI think like a human, given infinite resources?” Given that the human mind is a halting oracle, the answer is no.

Whew.

Some do worry about an AI takeover. Check out, for example, “Tales of an invented god.”


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
