Could Super Artificial Intelligence Be, in Some Sense, Alive? An AI theorist makes the case to a technical writer…
Tech writer Ben Dickson poses the question:
Should you feel bad about pulling the plug on a robot or switching off an artificial intelligence algorithm? Not for the moment. But how about when our computers become as smart—or smarter—than us?

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
Philosopher Borna Jalšenjak of the Luxembourg School of Business has been thinking about that. He has a chapter, “The Artificial Intelligence Singularity: What It Is and What It Is Not,” in Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives, in which he explores the case for “thinking machines” being alive, even if they are machines. The book as a whole “presents unique perspectives on ideas in deep learning and artificial intelligence, and their historical and philosophical roots.”
Singularity is a term that comes up often in discussions about general AI. And as is the case with everything that has to do with AGI, there’s a lot of confusion and disagreement about what the singularity is. But a key point on which most scientists and philosophers agree is that it is a turning point where our AI systems become smarter than we are. Another important aspect of the singularity is time and speed: AI systems will reach a point where they can self-improve in a recurring and accelerating fashion.
“Said in a more succinct way, once there is an AI which is at the level of human beings and that AI can create a slightly more intelligent AI, and then that one can create an even more intelligent AI, and then the next one creates even more intelligent one and it continues like that until there is an AI which is remarkably more advanced than what humans can achieve,” Jalsenjak writes.

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
No. Wait. Is there clear evidence that less intelligent entities can simply create more intelligent ones? Consider:
A recent paper on the evolution of learning “explores how computers could begin to evolve learning in the same way as natural organisms did.” The authors use Avida, a software program for simulating evolution, to support their claim.
Avida was originally intended to demonstrate how Darwinian evolution, which could occur without design in nature, is supposed to work. However, as many have shown, the program actually ended up demonstrating quite conclusively the need for design. This latest paper on using Avida to simulate the evolution of learning has shown the same thing.

Jonathan Bartlett, “Can computers simply evolve greater intelligence?” at Mind Matters News
Many people do sincerely believe that higher intelligence can just somehow evolve from lower intelligence. But sincere belief isn’t evidence. And Dickson stresses, “To be clear, the artificial intelligence technology we have today, known as narrow AI, is nowhere near achieving such [a] feat.” So we are talking about whether superintelligent AI, if it ever arrives, can be considered alive.
123 definitions of life
And that is a more complex question than we might at first suppose. First, there are 123 definitions of life out there, with different sciences tending to prefer their own:
It is surprisingly difficult to pin down the difference between living and non-living things …
To make matters worse, different kinds of scientist have different ideas about what is truly necessary to define something as alive. While a chemist might say life boils down to certain molecules, a physicist might want to discuss thermodynamics. …
The classic borderline case is viruses. “They are not cells, they have no metabolism, and they are inert as long as they do not encounter a cell, so many people (including many scientists) conclude that viruses are not living,” says Patrick Forterre, a microbiologist at the Pasteur Institute in Paris, France.
For his part, Forterre thinks viruses are alive, but he acknowledges that the decision really depends on where you decide to place the cut-off point.

Josh Gabbatiss, “There are over 100 definitions for ‘life’ and all are wrong” at BBC Earth (January 2, 2017)
Arguing for the panpsychist view that electrons may be conscious, Tam Hunt makes the point that
Many biologists and philosophers have recognized that there is no hard line between animate and inanimate. J.B.S. Haldane, the eminent British biologist, supported the view that there is no clear demarcation line between what is alive and what is not: “We do not find obvious evidence of life or mind in so-called inert matter…; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.”
Niels Bohr, the Danish physicist who was seminal in developing quantum theory, stated that the “very definitions of life and mechanics … are ultimately a matter of convenience…. [T]he question of a limitation of physics in biology would lose any meaning if, instead of distinguishing between living organisms and inanimate bodies, we extended the idea of life to all natural phenomena.”

Tam Hunt, “Electrons May Very Well Be Conscious” at Nautilus (May 14, 2020)
So there isn’t a simple rule we can apply.
That said, some of the arguments for AI as a form of life sound suspiciously like the arguments around extraterrestrial beings:
There’s a great tendency in the AI community to view machines as humans, especially as they develop capabilities that show signs of intelligence. While that is clearly an overestimation of today’s technology, Jalsenjak also reminds us that artificial general intelligence does not necessarily have to be a replication of the human mind.
“That there is no reason to think that advanced AI will have the same structure as human intelligence if it even ever happens, but since it is in human nature to present states of the world in a way that is closest to us, a certain degree of anthropomorphizing is hard to avoid,” he writes in his essay’s footnote.

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
Very well, but that’s what they tell us about the so-far undetected extraterrestrials: They might be a form of life we don’t recognize as such. One can never disprove such a proposition but, as before, it does not amount to evidence for anything.
Plants are not a good analogy
Then there is the question of “purpose”:
There are different levels to life, and as the trend shows, AI is slowly making its way toward becoming alive. According to philosophical anthropology, the first signs of life take shape when organisms develop toward a purpose, which is present in today’s goal-oriented AI. The fact that the AI is not “aware” of its goal and mindlessly crunches numbers toward reaching it seems to be irrelevant, Jalsenjak says, because we consider plants and trees as being alive even though they too do not have that sense of awareness.

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
Again, wait. Sophisticated computers have exclusively the purposes that humans program into them in our own interests, as do smart ovens and self-driving cars. These objects have no intrinsic purpose.
Plants have their own intrinsic purposes, which humans did not create, of growing and producing seeds. Humans can use plants and even trick them into doing something that is not part of their intrinsic purpose (seedless grapes, for example). But the original purpose is theirs. So we can give plants, but not computers, credit for purpose in life.
Jalšenjak goes on to argue that AI can be alive even though it does not need to reproduce itself because it can, after all, just replace worn-out parts. But that fact alone is evidence that an AI entity is not alive. Life forms must reproduce themselves in a vast variety of ways because they are, generally, unitary beings, not a collection of swappable parts.
And what about “self-improvement,” which is regarded by some as part of a definition for life?
Today’s machine learning algorithms are, to a degree, capable of adapting their behavior to their environment. They tune their many parameters to the data collected from the real world, and as the world changes, they can be retrained on new information. For instance, the coronavirus pandemic disrupted many AI systems that had been trained on our normal behavior. Among them are facial recognition algorithms that can no longer detect faces because people are wearing masks. These algorithms can now retune their parameters by training on images of mask-wearing faces. Clearly, this level of adaptation is very small when compared to the broad capabilities of humans and higher-level animals, but it would be comparable to, say, trees that adapt by growing deeper roots when they can’t find water at the surface of the ground.

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
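The “retuning” Dickson describes is just parameter fitting repeated on fresh data. A toy sketch (not any production facial recognition system, and not from the article — the one-parameter model and the numbers are invented for illustration) shows the idea: a model fitted to old data is fine-tuned when the data distribution shifts.

```python
# Toy illustration of "retuning": a one-parameter model y = w * x
# is fitted to old data, then fine-tuned after the world changes.

def fit(weight, data, lr=0.1, steps=200):
    """Gradient descent on mean squared error for y = weight * x."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

# "Pre-pandemic" data: y is roughly 2x.
old_data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
w = fit(0.0, old_data)   # learns w close to 2.0

# The world changes: y is now roughly 3x. Retraining from the old
# parameter adapts it to the new distribution.
new_data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
w = fit(w, new_data)     # retunes w toward 3.0
print(round(w, 2))
```

The point of the sketch is that nothing in the loop “knows” the world changed; the same mindless number-crunching simply follows whatever data it is given — which is exactly why the analogy to living adaptation is contested below.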
Tree roots? Digging deeper for water is hardly their greatest accomplishment. They are very complex systems, used by the trees for, among other things, exchanging information with other trees:
Researchers are unearthing evidence that, far from being unresponsive and uncommunicative organisms, plants engage in regular conversation. In addition to warning neighbors of herbivore attacks, they alert each other to threatening pathogens and impending droughts, and even recognize kin, continually adapting to the information they receive from plants growing around them. Moreover, plants can “talk” in several different ways: via airborne chemicals, soluble compounds exchanged by roots and networks of threadlike fungi, and perhaps even ultrasonic sounds. Plants, it seems, have a social life that scientists are just beginning to understand.

Dan Cossins, “Plant Talk” at The Scientist
Plants are not thought by botanists to be conscious but they do communicate extensively without a mind or brain. Nor, and this is the main point, do they need humans to program them or teach them anything. It all happens with or without our knowledge, let alone our involvement.
Do we really need to take this apocalypse seriously?
Jalšenjak seems undeterred. He challenges us, “Are characteristics described here regarding live beings enough for something to be considered alive or are they just necessary but not sufficient?”
And Dickson responds,
Having just read I Am a Strange Loop by philosopher and scientist Douglas Hofstadter, I can definitely say no. Identity, self-awareness, and consciousness are other concepts that discriminate living beings from one another. For instance, is a mindless paperclip-builder robot that is constantly improving its algorithms to turn the entire universe into paperclips alive and deserving of its own rights?

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
So Dickson doesn’t seem convinced. Still, he offers,
But like many other scientists, Jalsenjak reminds us that the time to discuss these topics is today, not when it’s too late. “These topics cannot be ignored because all that we know at the moment about the future seems to point out that human society faces unprecedented change,” he writes.

Ben Dickson, “What will happen when we reach the AI singularity?” at TheNextWeb, July 7, 2020
Maybe. But then again, maybe not.
“The time to discuss this is now!” implies that the scenario described must happen, so we have no choice but to prepare. Perhaps the discussion we should have first is: how plausible are the arguments that whatever AI apocalypse is proposed must happen? In this case, Jalšenjak didn’t succeed in convincing Dickson that super AI should be considered alive. Maybe we don’t need to have the discussion now, except as Sci-Fi Saturday food for thought.
The whole field could probably benefit from a dose of common sense and skepticism.
You may also enjoy:
Which is smarter? Babies or AI? Not a trick question. Humans learn to generalize from the known to the unknown without prior programming and do not get stuck very often in endless feedback loops.
AI expert: Artificial intelligences are NOT electronic people. AI makes mistakes no human makes, so some experts are trying to adapt human cognitive psychology to machines. David Watson of the Alan Turing Institute fills us in on some of the limitations of AI and proposes fixes based on human thinking.
AI will fail, like everything else, eventually. The more powerful the AI, the more serious the consequences of failure. Overall, we predict that AI failures and premeditated malevolent AI incidents will increase in frequency and severity proportionate to AIs’ capability.