Mind Matters Reporting on Natural and Artificial Intelligence
Machine learning technology diagram with artificial intelligence (AI), neural network, automation, and data mining on a VR screen (stock image)

Can Human Minds Be Reduced to Computer Programs?

In Silicon Valley, that has long been a serious belief. But are we really anywhere close?

In a recent podcast, “Can We Upload Ourselves to a Computer and Live Forever?”, Walter Bradley Center director Robert J. Marks and computer scientist Selmer Bringsjord discuss whether we could achieve immortality by uploading our minds to computers.

In Silicon Valley, that’s a serious belief. Famous futurist Ray Kurzweil has said,

The year 2029 is the consistent date I’ve predicted, when an artificial intelligence will pass a valid Turing test — achieving human levels of intelligence.

I have also set the date 2045 for singularity — which is when humans will multiply our effective intelligence a billion fold, by merging with the intelligence we have created.

Dom Galeon and Christianna Reedy, “Ray Kurzweil claims singularity will happen by 2045” at Futurism (March 20, 2017)

Indeed, when Kurzweil became a director of engineering at Google in 2012, he not only mainstreamed the basic idea but “heralded, for many, a symbolic merger between transhumanist philosophy and the clout of major technological enterprise.” (The Guardian, 2017).

Beyond the Valley, the project gets more ambitious. In a recent piece at Gizmodo, Toronto-based writer George Dvorsky advocates uploading our minds to supercomputers somewhere in the universe, a proposal he calls Distributed Humanity: “Entire civilizations could live on a single supercomputer, enabling the existence of potentially trillions upon trillions of individuals, each of them a single brain emulation. These supercomputers could in turn be duplicated and sprinkled across the galaxy and beyond, in what could be described as Distributed Posthumanity.” (Gizmodo, March 6, 2020).

Yes, it’s easy to dismiss what sounds bizarre. But the space shuttle would have sounded bizarre in, say, 1775. Perhaps we should be clear about why something can or can’t happen in principle, whether or not it sounds odd to us. So our first question should be “Can the human mind be reduced to computation?” If so, our minds must be “computable.” Are they? To assess that, we must start with a definition of “cognition,” the way our minds know things. A partial transcript of the podcast follows (start at 4:40).

04:40 | The definition of cognition

Robert J. Marks: You wrote a paper called “Cognition is not Computation.” Could you, first of all, define “cognition” and then we’ll talk about why it’s not computable.

Selmer Bringsjord: … Proving things. Discovering things. I don’t think that even a hair’s width of these things has been simulated in computational and cognitive science and in AI.

I remember asking James Moor, the Dartmouth professor who’s written quite a bit on AI: “You know, Jim, you really are a true believer in this stuff, but can you tell me how much time you’re willing to give these AI people?”

I mean, if we give them another thousand years, and we still don’t have cognition as I’ve characterized it, captured computationally with the relevant artifacts and outputs produced… Are you going to be skeptical now?

He was, I suppose, as an academic, predictably clever and evasive, but the bottom line is, we don’t have this cognition captured. So you can either say “It’ll eventually happen,” as a true believer, or you can start taking seriously the possibility that there is something deeper going on. There might be some mathematical gap that we just don’t get; there might be some kind of qualitative gap in the case of consciousness. A lot of what I enumerated requires consciousness. You can’t hope deeply that something is going to happen without being a conscious creature.

So that’s, of course, where I stand. I think the math is — people don’t want to confront it but it’s very stark — we do things beyond what Turing machines are capable of doing and we’re also conscious. We exploit our consciousness to do things every day. Every day that we’re healthy and neurobiologically normal, we do all kinds of things that exploit the fact that we’re conscious.

Robert J. Marks: Some of that posturing about things that will happen in the future is pretty old stuff. I actually saw a video of Claude Shannon — I believe it was around 1960 — who said “I believe in a few years that artificial intelligence will be able to duplicate the mind, in the sense of the robots that we see in science fiction films.” So even Shannon, back then, was posturing about something happening in the future (Shannon begins | 01:50 below).

In a way, it’s scrutiny avoidance. You say something is going to happen in the future — well, Ray Kurzweil says that the Singularity is going to be here in 2045. Well, goodness, by 2045, Kurzweil will probably have passed, I’ll probably have passed; he’s just going to be some blip, a very dim image in the rear-view mirror that nobody’s going to care about, but he gets a lot of attention now. So I think this “delayed scrutiny” is something that’s been happening for a heck of a long time.

Note: Tech pioneer Ray Kurzweil told the COSM Technology Summit in October that, by 2045, “Even the very best human is just another notch to pass” for computers, as we become mere apps of our smart machines. But wait. We invented them. They didn’t invent us. And “delayed scrutiny” just means an endless IOU with no backing.

Is there something more to know?


Next: Why our minds can’t really be uploaded to computers

Earlier, Robert J. Marks and Selmer Bringsjord were discussing issues around human vs. computer thinking abilities:

Thinking machines? The Lovelace test raises the stakes. The Turing test has had a free ride in science media for far too long, says an AI expert. (This is the partial transcript and notes to the earlier part of the podcast.)

and

Thinking machines? Has the Lovelace test been passed? Surprising results do not equate to creativity. Is there such a thing as machine creativity? The feats of machines like AlphaGo are due to superior computational power, not to creativity at originating new ideas. Dr. Bringsjord sees the ability to write, say, a novel of ideas as a more realistic test of human vs. computer achievement.

Show Notes

  • 00:39 | Introducing Selmer Bringsjord, Professor — Rensselaer Polytechnic Institute (RPI)
  • 01:18 | Can computers fake all human behaviors?
  • 02:02 | Can computers duplicate all human behaviors?
  • 03:10 | Subjective measurement
  • 04:40 | The definition of cognition
  • 08:50 | Is consciousness a special case of cognition?
  • 13:16 | Are consciousness and cognition non-algorithmic?
  • 14:48 | Examples of cognition that are not computable
