Mind Matters Natural and Artificial Intelligence News and Analysis

Artificial Consciousness Remains Impossible (Part 3)

The claim that all things are conscious (including AI) misunderstands the meaning of the term

Read parts one and two of the series to get up to speed.

I don’t subscribe to panpsychism (a topic that has been popular in SA in recent years[26]), but even if panpsychism were true, the further claim that “all things are conscious” would still be false, because it commits a fallacy of division. There is a difference in kind between everything taken as a whole and every single thing taken individually. The purported universal consciousness of panpsychism, if it exists, would not be of the same kind as the ordinary consciousness found in living entities.

Some examples of such categorical differences: Johnny sings, but his kidneys don’t. Johnny sees, but his toenails don’t. Saying that a lamp is conscious in one sense of the word simply because it belongs in a universe that is “conscious” in another would be committing just as big of a category mistake as saying that a kidney sings or a toenail sees.

A claim that all things are conscious (including an AI) as a result of universal consciousness conflates two categories simply because no terms separate them. Just because the term “consciousness” connects all things for adherents of universal consciousness doesn’t mean the term itself should be used equivocally. Panpsychist philosopher David Chalmers writes[27]:

“Panpsychism, taken literally, is the doctrine that everything has a mind. In practice, people who call themselves panpsychists are not committed to as strong a doctrine. They are not committed to the thesis that the number two has a mind, or that the Eiffel tower has a mind, or that the city of Canberra has a mind, even if they believe in the existence of numbers, towers, and cities.”

“If it looks like a duck…” (A tongue-in-cheek rebuke to a tongue-in-cheek behaviorist challenge)

If it looks like a duck, swims like a duck, and quacks like a duck, but you know that the duck is an AI duck, then you have a fancy duck automaton. “But hold on, what if no one could tell?” Then it’s a fancy duck automaton that no one can tell apart from an actual duck, probably because all of its manufacturing documentation was destroyed and the programmer died without telling anyone that it’s an AI duck. It’s still not an actual duck, however. Cue responses such as “Then we can get rid of all evidence of manufacturing” and other quips that I consider grasping at straws and intellectually dishonest. If someone constructs a functionally perfect and visually indistinguishable artificial duck just to prove me wrong, that’s a waste of effort; its identity would have to be revealed for the point to be “proven,” and at that point the revelation would prove me correct instead.

The “duck reply” is another behaviorist objection rendered meaningless by the Chinese Room Argument (see the section “Behaviorist Objections” above).

“You can’t prove to me that you’re conscious”

This denial trades on the same empirically non-demonstrable fact as the non-duck duck objection above. We’re speaking of metaphysical facts, not the mere ability or inability to demonstrate them. That being said, the starting point of either acknowledging or skeptically denying consciousness should be the question “Do I deny the existence of my own consciousness?” and not “Prove yours to me.”

There is no denying the existence of one’s own consciousness, and it would be an exercise in absurdity to question it in other people once we acknowledge ourselves to be conscious. When each of us encounters another person, do we first assume the possibility that we’re merely encountering a facsimile of a person, then check to see whether that person is a person, and only start thinking of the entity as a person upon satisfaction? No, not unless someone is suffering from delusional paranoia. We wouldn’t want to create a world where this absurd paranoia becomes feasible, either (see the section below).

Some Implications

1. AI should never be given MORAL rights. Because they can never be conscious, they are less deserving of those rights than animals. At least animals are conscious and can feel pain[28].

2. AI that takes on an extremely close likeness to human beings in both physical appearance and behavior (i.e., crossing the Uncanny Valley) should be strictly banned in the future. Allowing them to exist only creates a world immersed in absurd paranoia (see the section above). Based on my observations, many people are already confused enough on the subject of machine consciousness by the all-too-common instances of what one of my colleagues called “bad science fiction.”

3. Consciousness could never be “uploaded” into machines. Any attempt to do so and then “retire” the original body before the end of its natural lifespan would be an act of suicide. Any complete Ship of Theseus-style bit-by-bit machine “replacement” would gradually amount to the same.

4. Any disastrous AI “calamity” would be caused by bad design/programming and only bad design/programming.

5. Human beings are wholly responsible for the actions of their creations, and corporations should be held responsible for the misbehavior of their products.

6. We’re not living in a simulation. Those speculations are nonsensical per my thesis:

Given that artificial consciousness is impossible:

  • Simulated environments are artificial (by definition).
  • Should we exist within such an environment, we must not be conscious; otherwise, our consciousness would be part of an artificial system, which is ruled out by the impossibility of artificial consciousness.
  • However, we are conscious.
  • Therefore, we’re not living in a simulation.
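The bullet points above form a modus tollens. For readers who prefer to see the logical skeleton, here is a minimal sketch in Lean 4 (the proposition names `Simulated` and `Conscious` are hypothetical labels standing in for the premises, which are taken as assumptions, not proven):

```lean
-- Two abstract propositions: "we are living in a simulation"
-- and "we are conscious."
variable (Simulated Conscious : Prop)

-- Premise h1: if we are in a simulation, we are not conscious
-- (since artificial consciousness is taken to be impossible).
-- Premise h2: we are conscious.
-- Conclusion: we are not living in a simulation (modus tollens).
example (h1 : Simulated → ¬Conscious) (h2 : Conscious) : ¬Simulated :=
  fun hs => h1 hs h2
```

The formalization shows that the argument is deductively valid; whether it is sound depends entirely on the two premises, which the essay defends elsewhere.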

Originally published here: Artificial Consciousness Is Impossible | by David Hsing | Towards Data Science (See list of cited sources there)


David Hsing

David Hsing is a microprocessor circuit layout mask design engineer who has worked in the semiconductor manufacturing industry for over 20 years.
