
Santa Fe Prof Dissects End-of-World Super-AI Claims

There seems to be little communication, she notes, between people concerned about sci-fi AI risks and people concerned about predictable everyday risks

Santa Fe Institute professor of complexity Melanie Mitchell takes issue — in a gentle way — with those in the AI alignment community who warn that superintelligent machines could destroy us all:

For example, Nick Bostrom of Oxford's Future of Humanity Institute developed a scenario in which a super AI, told to make paper clips, might use up the world's resources in doing so. Her comment:

To many outside these specific communities, AI alignment looks something like a religion — one with revered leaders, unquestioned doctrine and devoted disciples fighting a potentially all-powerful enemy (unaligned superintelligent AI). Indeed, the computer scientist and blogger Scott Aaronson recently noted that there are now “Orthodox” and “Reform” branches of the AI alignment faith. The former, he writes, worries almost entirely about “misaligned AI that deceives humans while it works to destroy them.”

Melanie Mitchell, “What Does It Mean to Align AI With Human Values?” at Quanta (December 13, 2022)

As the author of Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019), she offers reasons for thinking that we are not doomed by our silicon.

First, she notes, there seems to be little communication between people concerned about sci-fi AI risks and people concerned about more predictable, everyday risks: “What about the more immediate risks posed by non-superintelligent AI, such as job loss, bias, privacy violations and misinformation spread?” What’s interesting here is that the real-world problems Mitchell lists are all ones we can do something about using conventional methods, whereas the superintelligent AI that some worry about doesn’t even exist at present — and maybe never will. Many predicted apocalypses never occur, for a variety of reasons.

Then there’s the question of what super AI is actually supposed to do:

Many researchers are actively engaged in alignment-based projects, ranging from attempts at imparting principles of moral philosophy to machines, to training large language models on crowdsourced ethical judgments. None of these efforts has been particularly useful in getting machines to reason about real-world situations. Many writers have noted the many obstacles preventing machines from learning human preferences and values: People are often irrational and behave in ways that contradict their values, and values can change over individual lifetimes and generations. After all, it’s not clear whose values we should have machines try to learn.

Melanie Mitchell, “What Does It Mean to Align AI With Human Values?” at Quanta (December 13, 2022)

Some are trying to teach machines ethics anyway. But she notes,

Ethical notions such as kindness and good behavior are much more complex and context-dependent than anything IRL [inverse reinforcement learning] has mastered so far. Consider the notion of “truthfulness” — a value we surely want in our AI systems. Indeed, a major problem with today’s large language models is their inability to distinguish truth from falsehood. At the same time, we may sometimes want our AI assistants, just like humans, to temper their truthfulness: to protect privacy, to avoid insulting others, or to keep someone safe, among innumerable other hard-to-articulate situations.

Melanie Mitchell, “What Does It Mean to Align AI With Human Values?” at Quanta (December 13, 2022)

Maybe don’t fire the human staff who can do this stuff just yet? After all, they have a natural advantage in understanding human beings.

Mitchell has argued elsewhere that getting AI to understand human-like concepts is still “AI’s most important open problem.”

She leaves readers with a significant philosophical problem: How could AI become superintelligent without having any goals or values of its own? Can pure intelligence be separated from goals and values? And she warns, “… without a better understanding of what intelligence is and how separable it is from other aspects of our lives, we cannot even define the problem, much less find a solution.”

Mitchell hopes for a “scientifically based theory of intelligence.” But that may not be possible, or at least not in the form she seems to be seeking.

Generally, we can’t have a theory of human intelligence without a theory of human consciousness. The theory that researchers are seeking would account for human consciousness as a sort of ramped-up version of lab mouse consciousness. But what if it isn’t? Human consciousness is little understood at present. The origin of something as basic as human language is likewise unknown. A theory that shed light on either topic might not be a materialist theory and might not hold out much hope for artificial intelligence that far surpasses human intelligence. Many researchers will surely prefer to go with a buzzy new (materialist) theory every few years, one that doesn’t really work and eventually yields to another buzzy new theory.

Here’s a comic take on the super-AI takeover concept:

You may also wish to read: Two computer doomsday scenarios: How likely are they? One features a computer superintelligence beyond human comprehension and the other features a computer that destroys the world for an algorithmic reward. Scary, but the human race is not likely to unite to produce the supercomputers envisioned. Mutually hostile groups will destroy each other’s machines long before that.

and

Oxford’s John Lennox busts the “computer takeover” myth. AI is here to stay, he says, but in addition to doing a great deal of good, it raises vast problems we must address. Addressing questions like transhumanism and conscious AI, Lennox reminds us of the sort of limitations posed by, for example, Gödel’s Theorem.


Mind Matters News

