Researcher: AI Can’t Be Conscious Because It Is Not Alive
Consciousness is not computation. Without the ability to experience events from one phase to the next (sentience), we could not really be conscious.

Remember HAL 9000, David from Prometheus (2012), and all those other fun, evil intelligent robots and computers? It's great fiction, but Marc Wittmann, research fellow at the Institute for Frontier Areas in Psychology and Mental Health in Freiburg, Germany, says it's definitely fiction. He offered an account earlier this month at Psychology Today of why AI cannot actually become conscious:
Equating the brain with a computer because both have been referred to as machines is an erroneous assumption. You can easily label two different objects with the same word: “machine.” That does not change the fact that the brain and a metal-containing machine are two very different entities. Computers operate based on the flow of electricity through their components. But the components themselves always stay the same. In principle, you could shut down a computer and store it in a dust-free environment. A hundred years later you could switch it on again and it could continue processing data.
Marc Wittmann, “A Question of Time: Why AI Will Never Be Conscious,” Psychology Today, August 3, 2024
Of course, life forms are necessarily in a constant state of change. If the change is not growth or managed stasis, it is decay, leading eventually to disintegration. Consciousness is, among other things, awareness of this constant change.

Wittmann cites microprocessor pioneer Federico Faggin, quoting from his book Irreducible (Essentia 2024), “A living organism is never the same physical and psychological entity from one instant to the next. The computer hardware, on the other hand, remains the same physical structure from the moment it leaves the factory until it stops working or is discarded.”
Wittmann adds,
Physical time as change and becoming is mirrored by physiological time and is in turn reflected by the conscious experience of constant transition, as felt passage of time. Consciousness as we know it is embedded in the principles of life, which are dynamic states of becoming. We as humans are part of nature. That is what binds the time of physics with the time of biology and with conscious time.
Wittmann, “A Question of Time: Why AI Will Never Be Conscious”
Without the ability to experience what is happening from one such phase to the next (sentience), we could not really be conscious of our surroundings.
Consciousness is, admittedly, hard to explain and study. But it clearly isn’t merely computation. A GPS unit, however sophisticated, is calculating, not experiencing, a road trip. It’s the conscious passengers who experience the trip.
Some remain convinced, however, that conscious AI is imminent
Futurist and inventor Ray Kurzweil nonetheless told The Guardian last June that we’ll be there in five years:
So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
Zoë Corbyn, “AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’,” The Guardian, June 29, 2024
AI could not, of course, be doing these things without consciousness. And in Kurzweil’s scenario, as with HAL 9000 and David, consciousness appears simply to be taken for granted.
Given that we can’t even define consciousness very easily, it’s remarkable that some are sure that, at a certain level of sophistication, computation will somehow just transform into consciousness. The fact that such a view is not based on any demonstrable premise is precisely what makes it difficult to argue against.
Why are obvious barriers to AI consciousness not often discussed?
At ZME Science, Tibi Puiu reflects on Kurzweil’s predictions (and his transhumanist vision generally):

Kurzweil’s predictions are bold, and while they have not always been spot-on, they push the boundaries of how we think about the future. As we edge closer to the dates he has forecasted, the debate over the singularity will only intensify. Whether his vision becomes reality remains to be seen, but it’s clear that the questions he raises are more relevant now than ever.
Tibi Puiu, “AI Expert Ray Kurzweil Says We’re Just Years Away from Human-Level AI (And This Could Change Everything),” ZME Science, August 9, 2024
Well, maybe. But wouldn’t it be a good idea to start by taking a sober look at the reasons why conscious AI may not be possible? Are the actual barriers surmountable?
There is a difference between saying “We can send humans to Pluto!” and “We can build a time machine!” The barriers to Pluto tourism may all be practical and technical. The barriers to time travel are probably bound up with the nature of our universe.
There is something wrong with the discussion around conscious AI if we don’t want to tackle the question of which type of problem we are facing.
Wittmann is the author of Altered States of Consciousness: Experiences Out of Time and Self (MIT Press 2018).
You may also wish to read: When materialist assumptions about the mind begin to sound dated… A 2017 profile of Roger Penrose and his consciousness theory was penned before slow changes and major tumults rocked the discipline — and it shows. Paulson’s Penrose profile is written as if materialism is going to prevail but that seems much less likely now than it perhaps did in 2017.