
Artificial Consciousness Remains Impossible (Part 2)

A machine no more “does things on its own” than a catapult flings by itself.

Read part one of this article series here.

The following segments are responses to specific categories of counterarguments against my thesis, which is laid out in part one of this series. Please note that these responses do not stand on their own; they serve only to support my main arguments from the first part. Each response applies only to those who hold the corresponding objection.

“You can’t provide proof of intentionality and qualia because they’re subjective”

If your mind didn't possess intentionality, "the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs," then you wouldn't be able to understand any word on the screen in front of you, because none of these words would refer to anything at all. If qualia don't exist, then no subjective feelings exist: no coldness of the cold, no feeling of skepticism, no whiteness, goldness, blueness, or blackness of anything. There is a difference between the nature of a phenomenon and the nature of a phenomenon's existence, and the existence of intentionality and qualia is self-evident.

“Any argument against the possibility of a machine being able to do something that human beings can do is special pleading!”

Performance has to do with intelligence, not consciousness (see the definitions in part one). Why would anyone assume consciousness is something that is “done”? What supports that assumption? The assumption of consciousness-as-act isn't in any way axiomatic, and an AGI could theoretically perform any and all of its tasks without ever being conscious.

Circularity

The conclusion holds that operating beyond syntax requires meaning derived from conscious experience. This may make the argument appear circular (assuming what it's trying to prove), since conscious experience was mentioned at the very beginning of the argument as a defining component of meaning.

However, the initial proposition defining meaning (“Meaning is a mental connection with a conscious experience”) wasn’t given validity as a result of the conclusion or anything following the conclusion; it was an observation independent of the conclusion.

Functionalist Objections

Many objections come in one form of functionalism or another. That is, they all run along one or more of these lines:

  • If we know what a neuron does, then we know what the brain does.
  • If we can copy a brain or reproduce collections of neurons, then we can produce artificial consciousness.
  • If we can copy the functions of a brain, then we can produce artificial consciousness.

No functionalist argument works here, because duplicating any function requires that all functions and their dependencies be visible and measurable. There is no “copying” something that's underdetermined. The functionalist presumptions of “if we know / if we can copy” are therefore invalid.

Underdetermination entails that no such exhaustive modeling of the brain is possible, as explained in the following passage from the SEP (emphasis mine)[14]:

“…when Newton’s celestial mechanics failed to correctly predict the orbit of Uranus, scientists at the time did not simply abandon the theory but protected it from refutation…

“…This strategy bore fruit, notwithstanding the falsity of Newton’s theory…

“…But the very same strategy failed when used to try to explain the advance of the perihelion in Mercury’s orbit by postulating the existence of “Vulcan”, an additional planet…

“…Duhem was right to suggest not only that hypotheses must be tested as a group or a collection, but also that it is by no means a foregone conclusion which member of such a collection should be abandoned or revised in response to a failed empirical test or false implication.”

In short, we have no assurance that we could engineer anything “like X” when we can't have total knowledge of X in the first place; underdetermination rules out a complete model. Functionalist arguments fail because correlations in findings do not imply causation, and those correlations would have to be 100% discoverable to yield an exhaustive model. There are multiple theoretical strikes against the functionalist position even before looking at actual experiments, such as this one:

Repeated stimulation of identical neuron groups in the brain of a fly produces random results, physically demonstrating underdetermination[15]:

“…some neuron groups could elicit multiple behaviors across animals or sometimes even in a single animal.

“Stimulating a single group of neurons in different animals occasionally resulted in different behaviors. That difference may be due to a number of things, Zlatic says: ‘It could be previous experience; it could be developmental differences; it could be somehow the personality of animals; different states that the animals find themselves in at the time of neuron activation.’

“Stimulating the same neurons in one animal would occasionally result in different behaviors, the team found.”

In the above-quoted passage, note all instances of the phrases “may be” and “could be.” They are indications of underdetermined factors at work. No exhaustive modeling is possible when there are multiple possible explanations from random experimental results.

Functionalist Reply: “…but we don’t need exhaustive modeling or functional duplication.”

Yes, we do, because otherwise there is no assurance that consciousness has been produced at all. A plethora of functions and behaviors can be produced without introducing consciousness, and there are no real, measurable external indicators of success.

Behaviorist Objections

These counterarguments generally say that if we can reproduce conscious behaviors, then we have produced consciousness. For instance, I completely disagree with a Scientific American article claiming the existence of a test for detecting consciousness in machines[16].

Observable behaviors don't mean anything, as the original Chinese Room argument has already demonstrated: the Chinese Room only appears to understand Chinese. The fact that machine learning doesn't equate to actual learning also attests to this.
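To see how empty behavioral output can be, consider a minimal sketch in Python (my own illustration, not Searle's formalism; the rulebook entries are hypothetical placeholders). The program returns fluent-looking responses by rote lookup, and nothing in it understands anything:

```python
# A minimal Chinese-Room-style responder: pure symbol-to-symbol lookup.
# The "rulebook" entries below are hypothetical placeholders.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小房间。",     # "What's your name?" -> "I'm called Little Room."
}

def room_reply(symbols: str) -> str:
    """Return the rulebook's output symbols for the input symbols.

    Nothing here refers to anything; it is syntax in, syntax out.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    print(room_reply("你好吗？"))  # Convincing behavior, zero understanding.
```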

Emergentism via Machine Complexity

Counterexamples to complexity emergentism abound. Compare the number of transistors in a phone processor with the number of neurons in the brain of a fruit fly: why isn't a smartphone more conscious than a fruit fly? What about supercomputers with millions of times more transistors? How about space launch systems, which are more complex still … are they conscious? Consciousness doesn't arise out of complexity.
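For a rough sense of scale, here is a back-of-the-envelope comparison (both figures are public approximations, used only for illustration):

```python
# Back-of-the-envelope comparison; both figures are rough approximations.
smartphone_transistors = 15e9   # ~15 billion transistors in a recent phone SoC
fruit_fly_neurons = 1.5e5       # ~100,000-150,000 neurons in a fruit fly brain

ratio = smartphone_transistors / fruit_fly_neurons
print(f"The phone has roughly {ratio:,.0f}x more switching elements.")  # ~100,000x
```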

Cybernetics and Cloning

If living entities are involved, then the subject is no longer artificial consciousness. Those would be cases of manipulating innate consciousness, not creating artificial consciousness.

“Eventually, everything gets invented in the future” and “Why couldn’t a mind be formed with another substrate?”

The substrate has nothing to do with the issue. All artificially intelligent systems require algorithms and code. All are subject to programming in one way or another. It doesn't matter how far in the future one goes or what substrate one uses; the fundamentally syntactic nature of machine code remains. Name one single artificial intelligence project that doesn't involve any code whatsoever. Name one way an AI could violate the principle of noncontradiction and possess programming without programming (see the section “Volition Rooms” in part one).

“We have DNA and DNA is programming code”

DNA is not programming code. Genetic makeup only influences behavior; it does not determine it. Nor does DNA function like machine code: DNA sequences carry instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows the workings of DNA to be underdetermined, whereas programming code is functionally determinate (there's no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do; see the section discussing “Volition Rooms”) and heavily compartmentalized in comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn't stand up to scientific observation.
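To illustrate the compartmentalization point with a toy sketch (mine, not a claim about any particular codebase): each routine in a typical program owns its own slice of behavior, which is nothing like every gene affecting every complex trait:

```python
# Toy illustration of compartmentalized program code: each function owns
# exactly one behavior. Editing format_greeting() cannot change
# compute_tax(), unlike the "every gene affects every complex trait" picture.

def format_greeting(name: str) -> str:
    return f"Hello, {name}!"

def compute_tax(amount: float, rate: float = 0.08) -> float:
    return amount * rate

if __name__ == "__main__":
    print(format_greeting("Ada"))   # governed only by format_greeting
    print(compute_tax(100.0))       # governed only by compute_tax
```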

“But our minds also manipulate symbols.”

Just because our minds can deal with symbols doesn't mean they operate symbolically. We can experience and recollect things for which we have yet to formulate proper descriptions[18]. In other words, we can have indescribable experiences. We start with non-symbolic experiences, then subsequently concoct symbolic representations for them in our attempts to rationally organize and communicate those experiences.

A personal anecdotal example: my earliest childhood memory is of lying on a bed looking at an exhaust fan in a window. I remember what I saw back then, even though at the time I was too young to have learned words and terms such as “bed”, “window”, “fan”, “electric fan”, or “electric window exhaust fan”. Sensory and emotional recollections can be described with symbols, but the recollected experiences themselves aren't symbolic.

Furthermore, the medical phenomenon of aphantasia demonstrates visual experiences to be categorically separate from descriptions of them[19].

Randomness and Random Number Generators

Randomness is a red herring as an indicator of consciousness (not to mention the dubious nature of all external indicators, as shown by the Chinese Room argument). A random number generator inside a machine would simply provide another input, ultimately serving only to generate more symbols to manipulate.
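A minimal sketch makes the point (the response table is a hypothetical placeholder): bolting a random number generator onto a program only changes which symbols get shuffled, not the fact that symbol shuffling is all that happens:

```python
import random

# Hypothetical response table: the RNG merely selects among canned symbol strings.
RESPONSES = ["symbol-A", "symbol-B", "symbol-C"]

def respond() -> str:
    """The RNG output is just one more input entering the pipeline;
    the result is still rule-governed symbol manipulation."""
    roll = random.randrange(len(RESPONSES))  # randomness in, symbols out
    return RESPONSES[roll]

print(respond())
```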

“We have constructed sophisticated functional neural computing models”

The existence of sophisticated functional models in no way helps functionalists escape the functionalist trap. Those models are still heavily underdetermined, as shown by a recent example of an advanced neural learning algorithm[20].

The model is very sophisticated, but note just how much underdetermined couching it contains:

“possibly a different threshold”

“may share a common refractory period”

“will probably be answered experimentally”

Models are far from reflecting the functioning neural groups present in living brains; I highly doubt any researcher would make such a claim, for that isn't their goal in the first place. Models can and do produce useful functions and be practically “correct”, even when they are factually “wrong” in the sense that they don't necessarily correspond to how the real system actually functions. In other words, models don't have to correspond 100% to reality in order to work, so their factual correctness is never guaranteed. For example, orbital satellites could still function without considering relativistic effects, because most relativistic effects are too small to be significant in satellite navigation[21].
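As a concrete instance of a model that is practically “correct” while ignoring known physics: plain Newtonian mechanics, with no relativistic corrections at all, predicts a GPS-like orbital period to good accuracy (a back-of-the-envelope sketch using standard textbook constants):

```python
import math

# Newtonian (non-relativistic) orbital period: T = 2*pi*sqrt(a^3 / GM).
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
a = 26_560_000.0            # GPS-like semi-major axis: ~26,560 km, in meters

period_s = 2 * math.pi * math.sqrt(a**3 / GM_EARTH)
print(f"Predicted period: {period_s / 3600:.2f} hours")
# ~11.97 hours, close to the actual GPS orbital period of ~11 h 58 min,
# even though the model ignores relativity entirely.
```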

“Your argument only applies to Von Neumann machines”

It applies to any machine. It applies to catapults. Programming a catapult involves adjusting pivot points, tensions, and counterweights; the programming language of a catapult is contained in the positioning of the pivots, the amount of tension, the amount of counterweight, and so on. You can even build a computer out of water pipes if you want[22]; the same principle applies. A machine no more “does things on its own” than a catapult flings by itself.
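To make the catapult analogy concrete, here is a sketch (the parameter names are mine, chosen for illustration): the machine's entire “program” is its physical configuration, and its output is fully fixed by that configuration plus physics:

```python
from dataclasses import dataclass
import math

@dataclass
class CatapultProgram:
    # The catapult's entire "source code": its physical configuration.
    launch_speed: float   # m/s, set by tension and counterweight
    launch_angle: float   # radians, set by pivot and stop position

def fling(cfg: CatapultProgram) -> float:
    """Projectile range on flat ground: the machine 'does' nothing on its
    own; the outcome is dictated by the configuration it was given."""
    g = 9.81  # m/s^2
    return cfg.launch_speed**2 * math.sin(2 * cfg.launch_angle) / g

print(f"{fling(CatapultProgram(30.0, math.pi / 4)):.1f} m")  # ~91.7 m
```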

“Your thought experiment is an intuition pump”

To take this avenue of criticism, one would have to demonstrate the alleged abuse of reasoning I supposedly engage in. Einstein also used “folk” concepts in his thought experiments regarding reference frames[23], so are thought experiments being discredited en masse here, or just mine? A vague reply of “thought experiments can be abused” fails to field a clear criticism and is unproductive. Do people think my analogy is even worse than their stale stratagem of casting the mind as an analog of the prevailing technology of the day: first hydraulics, then telephones, then electrical fields, and now computers[24]? Would people feel better if they performed my experiment with patterned index cards they can hold in their hands instead? The criticism needs to be specific.

Lack of explanatory power (My response: Demonstrating the falsity of existing theories doesn’t demand yet another theory)

Arguing for or against the possibility of artificial consciousness doesn't make much of an inroad into the actual nature of consciousness, but that doesn't detract from the thesis, because the goal here isn't to explicitly define the nature of consciousness. “What consciousness is” (i.e., its nature) isn't being explored here so much as “what consciousness doesn't entail,” which can still be determined via its requirements. There have been theories surrounding the differing “conscious potential” of various physical materials, but those theories have largely shown themselves to be bunk[25]. Explanatory theories are neither needed for my thesis nor productive in proving or disproving it. The necessary fundamental principles were already provided (see the section “Requirements of consciousness” in part one).

In the next part, we’ll address “panpsychism” and offer some concluding thoughts.

Originally published here: Artificial Consciousness Is Impossible | by David Hsing | Towards Data Science (See list of cited sources there)


David Hsing

David Hsing is a microprocessor circuit layout mask design engineer who has worked in the semiconductor manufacturing industry for over 20 years.
