Mind Matters Natural and Artificial Intelligence News and Analysis

Silicon Valley Insider: Why Friendly Super-AI Won’t Happen

Venture capitalist Peter Thiel talks about the Great Filter hypothesis: Why should we assume a superior artificial intelligence would be friendly?

In a wide-ranging talk at the recent COSM 2021 conference (November 10–12), Peter Thiel (a PayPal co-founder and early Facebook investor) expressed concern that people worry a great deal about artificial intelligence that thinks like people (AGI), while the real push now is for massive “dumb” surveillance AI peering into every detail of our lives, for the benefit of either government or the corporate world.

He went on to say that he doubts that artificial general intelligence (AGI) — “superhuman software that can do everything that we can do” — would, in any event, be “friendly”; that is, that it “won’t kill us.”

If it is intelligent enough to be independent, why should we assume so? “Friendly” is a human value, hard to quantify, and thus hard to program:

If it’s really a superior mind, it might surprise us … maybe it’ll just want to turn people into dinosaurs instead of curing cancer.

Thinking of the question as a search problem, he notes that — assuming that there could be a large variety of minds, of which human minds are a tiny subset — we might be looking at a very large search space where it’s hardly clear that a friendly AGI would emerge from our programming efforts. And it might be too advanced for us to understand.

Some, he said, argue that the universe is so fine-tuned that we will get to friendly AGI safely. The trouble is there is a difference between fine-tuning arguments with respect to the origin of the universe and fine-tuning arguments with respect to its future:

It’s much crazier, by the way, than the fine-tuning argument in cosmology, because either God fine-tuned things, or we’re in a multiverse where everything possible happened. But fine-tuning is at least, in cosmology, a problem in the past. And the fact that we’re here — you know, there was some Great Filter, but we survived. With friendly AGI, the fine-tuning is in the future.

If so, he thinks “the odds are massively against us. Maybe somewhere in the multiverse, there’ll be a friendly AGI, but the prospects don’t look terribly good.” Even the people promoting the Singularity (the idea that we merge with supercomputers by 2045) are less buoyant. As a Valley maven, Thiel has spent twenty years talking to people about these things:

I was talking to these people and it’s like, wow, they don’t actually want any of this stuff to happen anymore. And they wanted to just slow down and they’re all talking about existential risks. They don’t want anything to happen.

That may explain the popularity of the Great Filter Hypothesis: We don’t see extraterrestrials because civilizations disappear somewhere between where we are now and the advanced state needed for intergalactic travel — possibly destroyed by their own AI.


But then, those Singularitarians who believe that we live in a Matrix designed by intelligent extraterrestrials may not heed any warnings, because they assume that the Matrix is bound to result in friendly AI: “That’s very different from what I would describe as the reductionist atheist evolutionary system that dominates Silicon Valley.”

In the end, does Thiel think super AI, friendly or otherwise, is ever going to happen? As noted earlier, he says knowing that is “above my pay grade” — which, given that he’s worth $3.7B, means it’s hardly a sure thing. He made his money betting on new ideas and he, notably, isn’t betting on this one.

Maybe the ultimate effect of the controversy over whether friendly AI can run the world for us is that we are thinking less often of how to run it better ourselves.

You may also wish to read: Does science fiction hint that we are actually doomed? That’s the implication of an influential theory, the Great Filter hypothesis, as to why we never see extraterrestrials. Depending how we read the Kardashev scale, civilizations disappear somewhere between where we are now and the advanced state needed for intergalactic travel.

and

Peter Thiel: Artificial General Intelligence isn’t happening. That whole transhumanist movement is slowing down, he told COSM 2021. But, he adds, what IS happening should sober us up a lot. There’s no road to computers that think like people that wouldn’t take us through 24/7 computer surveillance first, Thiel says. Is that what we want?
