In a recent, well-organized paper, neuroscientist Christopher Tyler of the Smith-Kettlewell Eye Research Institute in San Francisco offers not only ten features that he says constitute consciousness but also empirical tests for those features. He hopes to finally crack the Hard Problem of Consciousness by dividing consciousness into component parts and studying the associated brain functions.
He calls his approach Emergent Aspect Dualism. He hopes to reconcile monism (physical nature is all there is) with dualism (consciousness is not physical). With that in mind, he hopes to identify the physical machinery that rolls out consciousness, the “neural substrate for conscious processing (NSCP).” But he also hopes to borrow as much from dualism as he can, perhaps in part to avoid Egnor’s Dilemma (if your proposition is that your mind is an illusion, then you don’t have a proposition).
From his language, in any event, he seems fully naturalist (“Specifically, the evidence from our general experience of human mortality, and from neurosurgery in particular, supports the concept that consciousness is an emergent property of the physical activity of the neurons of the brain.”)
While the careful breakdown of consciousness into ten different qualities (privacy, unity, interrogacy, extinguishability, iterativity, operationality, multifacetedness, complex interconnectivity, autosuppressivity, and self-referentiality) is informative, the paper reads like an ambitious but hopeless project that offers some genuinely interesting moments.
In particular, Tyler points out that “interrogacy,” the ability to formulate questions, “seems unique to a conscious mind.” Yet, he notes, it has not so far been investigated:
Though not widely recognized, a defining property of C* [consciousness] is the ability to generate questions and represent potential answers. Complex systems other than the brain, such as galaxies, biological organs and the Internet, incorporate extensive recursive interactions and consist of energy processes that undergo development and evolution comparable to those in the brain. Although these systems can be said to process information, however, they cannot meaningfully be said to ask questions. It seems to be a unique property of a conscious system to formulate questions, and a function that gets switched on in humans at about the age of a year. This capability also entails (though perhaps not until a later age) the ability to envisage possible answers in an indeterminate superposition of their probabilistic states of likelihood.

Tyler CW (2020), “Ten Testable Properties of Consciousness,” Front. Psychol. 11:1144. doi: 10.3389/fpsyg.2020.01144 (open access)
One suspects that interrogacy has not been investigated precisely because minds (Tyler insists on calling them “brains”) question things and galaxies don’t. That fact reveals, by its very nature, the hopelessness of the monist project. There is no circumstance under which a galaxy or a kidney can be made to question anything. A brain might not do so either, apart from the mind it instantiates.
What empirical test for interrogacy does he think useful?
The first requirement is to develop a protocol for putting an individual in a controlled state of question generation. Participants would be asked to think of a question about some topic that they have not previously formulated, and indicate when they have come up with a completed formulation. The panoply of brain imaging techniques can then be brought to bear on the issue of the particular substrate of the question-generation component of C*, based on the time period immediately preceding the question-generation completion time. The NSCP should be coextensive with the brain processes underlying the interrogacy activity, once it is studied.

Tyler CW (2020), “Ten Testable Properties of Consciousness,” Front. Psychol. 11:1144. doi: 10.3389/fpsyg.2020.01144 (open access)
The problem, of course, is that the real world of question generation is not like that. It’s more like hunger. We generate questions when we need answers, just as we generate a search for food when we need food. The need controls the process. How we look for answers and what we regard as answers is, as with food, all over the map. But such a study might still be an interesting project.
Tyler describes his tenth quality, self-referentiality, as follows: “Human C* has the capability of representing itself within itself, so its substrate has to be able to exhibit the corresponding capability.”
Thus he hopes to find “consciousness” spots and networks in the brain, on the assumption that consciousness must work the way other things do, even though it is — by his own admission — like nothing else. But monism requires such faith of its adherents. He elaborates,
A final property of C* is its ability to represent itself as a component of the conscious field. This property harks back to Russell’s Paradox as a seemingly impossible feat: what is the set of entities that includes itself as a member? But this is a common experience, that we can be (acutely!) aware of ourself as a participant in the field of C. This property goes beyond the primary quality of the external referentiality of C, that it has the inherent quality of referring to some form of object outside itself (or what philosophers misleadingly term “intentionality”). C* is experienced as the continuous journey of an identified self, or ego, through the succession of states of experience; that is, not simply an undifferentiated stream of consciousness, but a series of actions and experiences from the viewpoint of an internal entity identified as “me.”

Tyler CW (2020), “Ten Testable Properties of Consciousness,” Front. Psychol. 11:1144. doi: 10.3389/fpsyg.2020.01144 (open access)
All true and all unique to consciousness. So what empirical test does he suggest?
Computationally, it is not difficult to construct a computer program that includes itself as a component in its representation. Indeed, the representation of the external player as an element in the programmed domain is a common feature of computer games known as an “avatar.” Such an avatar escapes Russell’s Paradox by not being a full representation that actually contains itself, but only a reduced representation of the major features of itself in model form. It is not so clear how the neural implementation of an avatar could be achieved, but to do so is a further prerequisite of the NSCP. Note that this concept, of self-referentiality being a testable aspect of the NSCP while referentiality per se is not, is itself paradoxical. Self-referentiality can be tested by identifying a brain process that switches on and off concurrently with the switch between awareness of the self “avatar” and of other content, whereas referentiality cannot be tested because it is an unavoidable property of C*, and there is no nonreferential form of C* against which to test the “off” state of a candidate process.

Tyler CW (2020), “Ten Testable Properties of Consciousness,” Front. Psychol. 11:1144. doi: 10.3389/fpsyg.2020.01144 (open access)
Again, Tyler might learn some interesting things from such a test. The obvious problem is that an avatar is a construct of the imagination created and controlled by the self and — as he says — less than the self. Very much less, actually, and probably a distorted picture. The brain switches that are active when the player imagines herself an immortal and invincible warrior may tell us what parts of the brain are active when we are composing fiction. But it is unclear how much more we can learn.
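Tyler’s computational point, at least, is easy to illustrate. Here is a minimal sketch in Python (all names are hypothetical, not drawn from his paper) of how a program can hold a reduced model of itself, the way a game holds an avatar, without containing a full copy of itself and so without falling into a Russell-style regress:

```python
# A hypothetical "agent" that represents itself within its own field of
# representation -- but only as a reduced summary, not a complete copy.

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills      # full internal state
        self.history = []         # grows without bound in the full agent

    def avatar(self):
        # The avatar is a *reduced* self-model: a summary of major features.
        # It does not reference the Agent object itself, so no regress arises.
        return {"name": self.name, "skill_count": len(self.skills)}

    def reflect(self):
        # The agent's "field of representation" can include its own avatar
        # alongside representations of other content.
        return {"world": "external content", "self_model": self.avatar()}

player = Agent("player1", ["run", "jump"])
print(player.reflect()["self_model"])   # {'name': 'player1', 'skill_count': 2}
```

The sketch makes Tyler’s distinction concrete: the `avatar` summary contains the agent’s “major features,” but deliberately omits the full state (here, `history`), which is what lets the representation sit inside the thing it represents.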
As might be expected, Tyler makes the usual foray into imagining the evolution of consciousness. That is an interesting but problematic concept. Consciousness is unique and we are not sure what it is or how it is produced.
He neatly dispenses with the common but unsatisfactory claim that consciousness enables organisms to respond better to their surroundings (“allowing the organism to superpose its goal-directed needs into the situational response”). In popular science venues, we are also frequently informed that human consciousness developed so as to enable humans to hunt together more efficiently in groups. But as Tyler points out, cows can eat grass and microorganisms can hunt each other without making exceptional demands on consciousness. One might add that wolves hunt efficiently in packs without requiring anything like a human level of consciousness.
For his evolution theory, Tyler suggests,
In summary, the evolutionary function of consciousness may be not so much a mechanism to introduce goal-directed aspects into the control of behavior as one to function as the gatekeeper for memory storage, such that only aspects of the sensory input that pass the criterion for reaching consciousness can be stored in memory, while all other aspects are lost.

Tyler CW (2020), “Ten Testable Properties of Consciousness,” Front. Psychol. 11:1144. doi: 10.3389/fpsyg.2020.01144 (open access)
Actually, the very concept of an “evolutionary function” of consciousness may be meaningless. If we do not know what consciousness is, exactly, we do not know whether it “evolved” at all, let alone how. Monists assert that consciousness must have evolved from some lower form of thought (?) as an article of faith, not because of any particular characteristic of consciousness.
Overall, empirical tests for the brain wiring that is thought to produce — as opposed to instantiate or accompany — consciousness will likely prove yet another instance of great advances claimed with little progress made. But, in truth, so long as the research never challenges monism, the consciousness research community will mostly be happy with it.
Further reading on consciousness:
If your brain were cut in half, would you still be one person? Yes, with minor disabilities. Roger Sperry’s split-brain research convinced him that the mind and free will are real.
Neuroscientist Michael Graziano should meet the p-zombie To understand consciousness, we need to establish what it is not before we create any more new theories. (Michael Egnor)