
Artificial Consciousness Remains Impossible (Part 1)

The cherished fiction of conscious machines is an impossibility

This article is an attempt to explain why the cherished fiction of conscious machines is an impossibility. The very act of hardware and software design is a transmission of impetus as an extension of the designers and not an infusion of conscious will. The latter half of the article is dedicated to addressing counterarguments. Lastly, some implications of the title thesis are listed.

Intelligence vs. Consciousness

Intelligence is the ability of an entity to perform tasks, while consciousness refers to the presence of a subjective phenomenon.

Intelligence[1]:

“…the ability to apply knowledge to manipulate one’s environment”

Consciousness[2]:

“When I am in a conscious mental state, there is something it is like for me to be in that state from the subjective or first-person point of view.”

Requirements of Consciousness

A conscious entity, i.e., a mind, must possess:

1. Intentionality[3]:

“Intentionality is the power of minds to be about, to represent, or to stand for, things, properties, and states of affairs.”

Note that this is not a mere symbolic representation.

2. Qualia[4]:

“…the introspectively accessible, phenomenal aspects of our mental lives. In this broad sense of the term, it is difficult to deny that there are qualia.”

Meaning and Symbols

Meaning is a mental connection between something (concrete or abstract) and a conscious experience. Philosophers of mind call the power of the mind that enables these connections intentionality. Symbols only hold meaning for entities that have made connections between their conscious experiences and the symbols.

The Chinese Room, Reframed

The Chinese Room is a philosophical argument and thought experiment published by John Searle in 1980[5]:

“Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he sends appropriate strings of Chinese characters back out under the door, and this leads those outside to mistakenly suppose there is a Chinese speaker in the room.”

As it stands, the Chinese Room argument needs reframing. The person in the room has never made any connections between his or her conscious experiences and the Chinese characters; therefore, neither the person nor the room understands Chinese. The central issue should be the absence of connecting conscious experiences, not whether there is a proper program that could turn anything into a mind (which amounts to saying that if a program X were good enough, it would understand statement S; a program is never going to be "good enough," because it is a program, as I will explain in a later section). The original, vaguer framing derailed the argument and made it more open to attacks. (One such attack resulting from the derailment was Sloman's[6].)

The Chinese Room argument points out the legitimate issue that symbolic processing is not sufficient for meaning (syntax doesn't suffice for semantics), but its framing leaves too much wiggle room for objections. Instead of asking whether a program could be turned into a mind, we should delve into the fundamental nature of programs themselves.

Symbol Manipulator

The basic nature of programs is that they are devoid of the conscious associations that compose meaning. Program code carries meaning for humans only because it is written in symbols that hook into the readers' conscious experiences. Searle's Chinese Room argument serves to put the reader in the place of someone who has had no experiential connection to the symbols in the programming code. Thus, the Chinese Room is a Language Room. The person inside the room doesn't understand the meaning behind the programming code, while to the outside world it appears that the room understands a particular human language.

The Chinese Room argument comes with another potentially undermining issue. The person in the Chinese Room was introduced as a visualization device to let the reader "see" from the point of view of a machine. However, since a machine isn't conscious and therefore can't have a point of view, placing a person in the room invites the objection that "there's a conscious person in the room doing conscious things."

I will work around the POV issue and clarify the syntax versus semantics distinction by using the following thought experiment:

  • You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?

All programs manipulate symbols this way. Program code itself contains no meaning. To machines, code is nothing but sequences to be executed with their payloads, just as the Chinese characters in the Chinese Room are payloads to be processed according to sequencing instructions given to the Chinese-illiterate person and nothing more.
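To make the exercise concrete, here is a minimal sketch of such a symbol manipulator, assuming nothing beyond a lookup table (the Python names RULES and respond are my own illustrative choices): memorized input sequences are mapped to prescribed output sequences, and at no point does anything in the program touch meaning.

    # A symbol manipulator: memorized input sequences are mapped to
    # prescribed output sequences. The tokens are deliberately meaningless shapes.
    RULES = {
        ("△", "□", "○"): ("◇", "▽"),
        ("○", "○"): ("□",),
    }

    def respond(shapes):
        # Return the prescribed sequence for a memorized input, if any.
        # Nothing here depends on what (if anything) the shapes "mean".
        return RULES.get(tuple(shapes), ("?",))

    print(respond(["△", "□", "○"]))   # ('◇', '▽')

Swapping the shapes for Chinese characters, English words, or random bytes changes nothing about what the program does.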

The Symbol Manipulator thought experiment, with its sequences and payloads, not only generalizes programming code; it is also a generalization of an algorithm: "A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.[7]"

The relationship between the shapes and sequences is arbitrarily defined, not causally determined. Operational rules are simply whatever is programmed in; they need not match any worldly causation, because any such link would be an accidental feature of the program rather than an essential one (i.e., present by happenstance, not by necessity). The program could be given any input to resolve, and the machine would comply not because it "understands" any worldly implication of the input or the output but simply because it is following the dictates of its programming.

A very rough example, written here as minimal runnable Python rather than pseudocode, to illustrate this arbitrary relationship:

    p = "night"                  # an arbitrary stored string
    R = input()                  # whatever the user types
    if R == "day":
        print(p + " is " + R)    # emits "night is day"

Now, if I type "day", then the output would be "night is day". Great. Absolutely "correct output" according to its programming. It doesn't necessarily "make sense," but it doesn't have to, because it's the programming! The same goes for any other strings the program might be built around; it could just as well have been written to output "nLc is auS" or "e8jey is 3uD4," and so on.

To the machine, code and inputs are nothing more than items and sequences to execute; the sequencing and execution activity holds no meaning for the machine. To the programmer there is meaning, because the programmer conceptualizes and understands variables as placeholders representing his or her conscious experiences. The machine doesn't comprehend concepts such as "variables," "placeholders," "items," "sequences," or "execution." It just doesn't comprehend, period. Thus, a machine never truly "knows" what it's doing and can only take on the operational appearance of comprehension.

Understanding Rooms — Machines Ape Understanding

The room metaphor extends to all artificially intelligent activities. Machines only appear to deal with meaning; they ultimately translate everything into machine-language instructions at a level that is devoid of meaning before and after execution and is concerned with execution alone. (This is the mechanism underlying all machine program execution, illustrated by the shape-memorization thought experiment above; a program contains meaning only for the programmer.) The Chinese Room and the Symbol Manipulator thought experiments show that while our minds understand and deal with concepts, machines don't and deal only with sequences and payloads. The mind is thus not a machine, and neither a machine nor a machine simulation could ever be a mind. Machines that appear to understand language and meaning are by their nature "Understanding Rooms" that only take on the outward appearance of understanding.
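As a small supporting illustration, Python's standard dis module can show what a "meaningful" line of code looks like at the level the machine actually works with; the function and variable names below are arbitrary placeholders of my own.

    import dis

    def greet(name):
        # "greet", "name", and "Hello" mean something to a human reader;
        # to the interpreter they are just an operand and a constant.
        return "Hello, " + name

    # Prints the instruction sequence the interpreter actually executes,
    # e.g. LOAD_CONST and LOAD_FAST followed by an add and a return.
    dis.dis(greet)

The human-readable names survive only as labels for the programmer's benefit; execution deals in instructions and operands.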

Learning Rooms: Machines Never Actually Learn

The direct result of a machine’s complete lack of any possible genuine comprehension and understanding is that machines can only be Learning Rooms that appear to learn but never actually learn. Considering this, “machine learning” is a widely misunderstood and arguably oft-abused term.

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:

“For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word “learning,” we will simply adopt our technical definition of the class of programs that improve through experience.”

Note how the term “experience” isn’t used in the usual sense of the word, either, because experience isn’t just data collection. The Knowledge Argument shows how the mind doesn’t merely process information about the physical world[9].
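To see just how thin that technical definition is, here is a minimal sketch of my own devising of a "learning system" in exactly the sense quoted above: a key-value store whose performance at answering queries improves with the "experience" of updates, with no comprehension anywhere in sight.

    class QueryStore:
        """'Improves with experience' in the textbook sense: every update
        raises its hit rate on future queries. Nothing is comprehended."""

        def __init__(self):
            self.records = {}

        def update(self, key, value):   # the "experience"
            self.records[key] = value

        def query(self, key):           # the "performance" being improved
            return self.records.get(key, "unknown")

    store = QueryStore()
    print(store.query("capital of France"))   # "unknown"
    store.update("capital of France", "Paris")
    print(store.query("capital of France"))   # "Paris" -- performance "improved"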

Possessing only physical information, and doing so without comprehension, machines hack the activity of learning by engaging with it in ways that defy its experiential context. A good example is how a computer artificially adapts to a video game through brute force instead of learning anything[10].

In the case of "learning to identify pictures," machines are shown anywhere from a couple hundred thousand to millions of pictures, and through lots of failures of seeing "gorilla" in bundles of "not gorilla" pixels, they eventually come to match bunches of pixels on the screen to the term "gorilla" correctly… except that they don't even do that well all of the time[11].

Needless to say, “increasing performance of identifying gorilla pixels” through intelligence is hardly the same thing as “learning what a gorilla is” through conscious experience. Mitigating this sledgehammer strategy involves artificially prodding the machines into trying only a smaller subset of everything instead of absolutely everything[12].
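As a hypothetical illustration of what such pattern matching amounts to, consider this toy nearest-neighbor matcher (the data and labels are invented): it compares raw pixel numbers to labelled examples and reports whichever label is numerically closest. There is matching here, but no gorilla, no animal, and no concept of anything.

    # Toy nearest-neighbor "classifier" over fake 4-pixel images.
    # Labels are just strings attached to numbers; nothing is understood.
    EXAMPLES = [
        ([0.1, 0.2, 0.1, 0.3], "gorilla"),
        ([0.9, 0.8, 0.9, 0.7], "not gorilla"),
    ]

    def classify(pixels):
        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        # Report the label of whichever stored example is numerically closest.
        return min(EXAMPLES, key=lambda ex: distance(ex[0], pixels))[1]

    print(classify([0.15, 0.25, 0.1, 0.3]))   # "gorilla" -- a numeric match, not recognition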

"Learning machines" are "Learning Rooms" that only take on the appearance of learning. Machines mimic certain theoretical mechanisms of learning and simulate the results of learning but never replicate the experiential activity of learning. Actual learning requires connecting referents with conscious experiences. This is why machines mistake groups of pixels that make up an image of a gorilla for those that compose an image of a dark-skinned human being. Machines don't learn; they pattern match, and only pattern match. There's no actual personal experience associating a person's face with a gorilla's. When was the last time a person honestly mistook an animal's face for a human's? Sure, we may see resemblances and deem those animal faces human-like, but we recognize them only as resemblances, not actual matches. Machines are fooled by "abstract camouflage," adversarially generated images, for the same reason[13]. These mistakes are mere symptoms of a lack of genuine learning; machines still wouldn't be learning even if they gave perfect results. Fundamentally, "machine learning" is every bit as distant from actual learning as the simple database updates mentioned in the AI textbook quote earlier.

Volition Rooms — Machines Can Only Appear to Possess Intrinsic Impetus

The fact that machines are programmed dooms them as appendages, extensions of the will of their programmers. A machine's design and its programming constrain and define it. There's no such thing as a "design without a design" or "programming without programming." A machine's operations are externally determined by its programmers and designers, even when obfuscating claims (intentional or otherwise) are offered:

  • "A program/machine evolved." (Who designed the evolutionary algorithm?)
  • "No one knows how the resulting program in the black box came about." (Who programmed the program that produced the resulting code?)
  • "The neural net doesn't have a program." (Who wrote the neural net's algorithm?)
  • "The machine learned and adapted." (It doesn't "learn"; who determined how it would adapt?)
  • "There's self-modifying code." (What determines the behavior of this so-called "self-modification"? It isn't the "self.")

There's no hiding or escaping from what ultimately produces the behaviors: the programmers' programming.
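Consider the first of those claims. A minimal sketch of an evolutionary loop (a hypothetical example of my own, with arbitrary names such as TARGET, fitness, and mutate) shows how thoroughly the programmer prescribes the outcome: the fitness function, the variation rule, and the stopping condition are all authored up front, so whatever "emerges" is still the execution of those choices.

    import random

    TARGET = 42                        # the programmer's goal
    def fitness(x):                    # the programmer's definition of "good"
        return -abs(x - TARGET)

    def mutate(x):                     # the programmer's definition of "variation"
        return x + random.choice([-1, 1])

    candidate = 0
    for _ in range(1000):              # the programmer's stopping condition
        challenger = mutate(candidate)
        if fitness(challenger) > fitness(candidate):
            candidate = challenger

    print(candidate)   # converges toward 42: "evolved", yet entirely prescribed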

Let's take another look at Searle's Chinese Room. Who or what wrote the program that the man in the Chinese Room follows? Certainly not the man, because he doesn't know Chinese, and certainly not the Chinese Room itself. As indicated earlier in the passage on learning, this Chinese Room didn't "learn Chinese" just by having instructions placed into the room, any more than a spreadsheet "learns" the items written onto it. Neither the man nor the Chinese Room was "speaking Chinese"; they were merely following the instructions of the Chinese-speaking programmer of the Chinese Room.

It's easy to see how terms such as "self-driving cars" aren't exactly apt when programmers programmed the driving. It also means that human designers are ultimately responsible for a machine's programming-related failures; anything else would be an attempt to shirk responsibility. "Autonomous vehicles" are hardly autonomous. They no more learn how to drive, or drive themselves, than a Chinese Room learns Chinese or speaks Chinese. Designers and programmers are the sources of a machine's apparent volition.

Machines Can Only Appear Conscious

Artificial intelligence that appears to be conscious is a Consciousness Room, an imitation with varying degrees of success. As I have shown, such machines are capable of neither understanding nor learning. Not only that, they are incapable of possessing volition. Artificial consciousness is impossible due to the extrinsic nature of programming, which is bound to syntax and devoid of meaning.

Originally published here: Artificial Consciousness Is Impossible | by David Hsing | Towards Data Science (See list of cited sources there)


David Hsing

David Hsing is a microprocessor circuit layout mask design engineer who has worked in the semiconductor manufacturing industry for over 20 years.
