Mind Matters Natural and Artificial Intelligence News and Analysis
AI concept, humanoid robot with artificial intelligence, fictional woman android, generative AI
Image licensed via Adobe Stock

The Singularity — When We Merge With AI — Won’t Happen

Futurist predictions depend on the assumption that the human brain is like a machine, says computer scientist Erik Larson. But it isn’t.

Erik J. Larson, who writes about AI here at Mind Matters News, spoke with El Podcast host Jesse Wright earlier this week about the famed (and claimed) Singularity, among other things. That’s when human and machine supposedly merge into a… Super Humachine (?).

Inventor and futurist Ray Kurzweil has been prophesying that for years. But philosopher and computer scientist Larson, author of The Myth of Artificial Intelligence (Harvard 2021), says… not so fast.

The podcast below is nearly an hour long but it is handily divided into segments, a virtual Table of Contents. We’ve set it at “The Fallacy of the Singularity,” with selections from the transcript below. But you can click and enjoy the other parts at your convenience.

00:00 Intro

01:10 Misconceptions about AI Progress

11:48 Bias and Misinformation in AI Models

21:52 The Plateau of Progress & End of Moore’s Law

31:30 The Fallacy of the Singularity

47:27 Preparing for the Future Job Market

Note: Larson blogs at Colligo, if you wish to follow his work.

And now…

AI Myths – Explained by AI Scientist || El Podcast EP67

Decades ago, Larson says, programmers were focused on getting computers to win at complex board games like chess. One outcome was that their model of the human mind was the computer. And that, he says, became a “narrative” in our culture.

Larson: [33:19] You know, people are kind of just bad versions of computers. If you look at all the literature coming out of psychology and cognitive science and these kinds of fields, they’re always pointing out how we’re full of bias… jumping to the wrong conclusions. We can’t be trusted. Our brains are very, very “yesterday’s tech,” so to speak.

Choking off innovation?

Larson sees this easy equation of the mind with the computer as choking off innovation, something at which humans excel. It encourages people to believe that computers will solve our problems when there are major gaps in their ability to do so. One outcome is that, contrary to cliché, this is one of the least innovative periods in a long time.

Larson: [34:25] The last decade is one of the least innovative times that we’ve had in a long time and it’s sort of dangerous that everybody thinks the opposite. If people said, wait a minute, we’re just doing tweaks to neural networks; we’re just doing extensions to existing technology… Yes, we’re making progress but we’re doing it at the expense of massive amounts of funding, massive amounts of energy consumption, right? …

Instead he sees conformity everywhere, accompanied by a tendency to assume that incremental improvements amount to progress in fundamental understanding.

So how does our self-contented mediocrity produce an imminent, unhinged Singularity?

Well, a pinch of magic helps!

Larson: [37:49] What’s underlying that is this idea that once you get smart enough, you also become alive. And that’s just not true. A calculator is extremely good at arithmetic. No one on the face of the planet can beat a calculator, but that doesn’t mean that your calculator has feelings about how it’s treated. In a sense, there’s just a huge, glaring philosophical error that’s being made by the Superintelligence folks, the “existential risk” folks. That’s wasted energy in my view. That’s not what’s going to happen.

If a more powerful computer is not like a human mind, what’s really going to happen?

Larson: [38:40] Very bad actors are going to use very powerful machines to screw everything up… Somebody gets control of these systems and directs them towards ruining Wall Street, ruining the markets, bringing down the power grid. That’s a big threat. The machines themselves… I would bet the farm that they’re not going to make the leap from being faster and calculating more complicated problems to being alive in any sort of sense, or having any kind of motivations, or something that could misalign like that. That’s the sci-fi vibe that’s getting pushed into a scientific discussion.

The Singularity depends on a machine model of the mind

Larson: [46:17] If we’re just a complicated machine, then it stands to reason that at some point we’ll have a more complicated machine. It’s just a continuum and we’re on that. But if you actually remove that premise and say, look we’re not machines, we’re not computers… then you have an ability to talk about human culture in a way that can actually be healthy. We think differently, we reason differently, we have superior aspects to our behavior and performance, and we actually do care and have motivations about how things turn out — unlike the tools we use.

So it looks as though the transhuman could go extinct without ever existing.

You may also wish to read: Tech pioneer Ray Kurzweil: We will merge with computers by 2045. For computers, “Even the very best human is just another notch to pass,” he told the COSM Technology Summit. Kurzweil explained, “To do that, we need to go inside your brain. When we get to the 2030s, we will be able to do that. So a lot of our thinking will be inside the cloud. In another ten years, our non-biological thinking will be much better than our biological thinking.” In 2017, he predicted 2045 for a total merger between man and machine.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
