The Devalorization of Consciousness
Some time ago in The Prodigal Philosopher Returns I discussed my belated realization that Anglo-American analytical philosophy of mind leaves off where the phenomenological treatment of the mind begins; the two approaches to consciousness both study the human mind, but they study such different manifestations of mind — and, it might be said, different conceptions of the mind — that there is also a sense in which they are not even studying the same thing. But even though Anglo-American analytical philosophy got itself hung up on the existential question, western thought found a way around this impasse by way of cognitive science, which does not study the existence question, but takes consciousness as a given and attempts to understand and to describe cognition. This is much closer to the phenomenological project. A philosophy of cognitive science (as a special case of the philosophy of science) would overlap considerably with phenomenology.
It has been hard enough to get Anglo-American analytical philosophers simply to recognize consciousness as a legitimate constituent of the world; to go beyond the mere recognition of consciousness to understanding the fundamental agency of consciousness in nature requires a further step, and this step will be a bridge too far for many philosophers in the tradition of post-positivist thought.
Because contemporary philosophy of mind in analytical philosophy more or less ends with a recognition (or a denial) of consciousness, and has not yet gone further to elaborate what exactly consciousness is like (as in the structures of consciousness analyzed by phenomenology), that tradition, still debating the existence of consciousness, has not even progressed to the point where it can consider any detailed discussion of what this disputed consciousness is like.
If we leave analytical philosophy of mind where we find it today, allowing only a minimal recognition of consciousness without any examination of its competencies, capabilities, and capacities, it would be hard to avoid some kind of epiphenomenalism, in which consciousness is a pointless and useless manifestation of life that happens to exist but plays no substantive role in the world. I completely reject this idea.
If, however, consciousness is epiphenomenal, we can easily understand the devalorization of consciousness that is increasingly prevalent as a social consequence of the rise to prominence of computer science. How are these connected? The fascination with the Turing Test has led many down Turing’s path, which suggests that if an imitation of consciousness is indistinguishable from consciousness, then it is legitimate to identify the indistinguishable imitation as consciousness, and to regard the question of the ontology of consciousness as useless at best, illegitimate at worst.
The critique and devalorization of consciousness in the spirit of Turing can be considered to be one aspect of the Copernican-scientific lesson to human hubris (and I believe that there are passages in Daniel Dennett’s work that make this explicit, but I will save a detailed treatment of Dennett’s work for another time). I am thoroughly sympathetic to Dennett’s Copernican motivations in his devalorization of consciousness when he refers to consciousness as a “charmed circle,” but to deny the existence of consciousness because the idea has been used for essentially ideological purposes is to throw out the baby with the bathwater — something unfortunately common in intellectual history, and today becoming a barrier to clear thinking. We should not treat consciousness as a charmed circle that marks off a particularly privileged class of organisms that happens to possess consciousness; conscious organisms only exist in virtue of an extensive ecosystem that is crucially dependent upon unconscious organisms, which are for this reason the foundation of our being. But we should not deny or disparage consciousness, i.e., we should not devalorize consciousness, because it is reliant upon other forms of being that are not conscious.
The bridge too far that I mentioned above is that what consciousness does is to create and manipulate meanings, values, and ideas. Consciousness as consciousness does not deal with “sense data” — like “a red patch,” a typical example in analytical philosophy — it deals with meaningful entities. One example, of great evolutionary benefit, is that consciousness recognizes threats. What is a threat? There is no single kind of thing that is a threat. A movement in a dark corner of a room may be a threat; a rustle in the long grass may be a threat; the crooked smile on a stranger’s face may be a threat. A threat is an object, a meaningful object, and moreover a meaning that is highly valuable to consciousness. Consciousness identifies a threatening object as threatening. Henceforth, it is a threat.
The world as experienced by conscious beings is a world saturated through-and-through with meanings and values. We evaluate almost everything in our sensorium first of all as either important or unimportant, that is to say, we value everything that we encounter. If unimportant, it can safely be ignored. (It is relegated to what has been called the margin of consciousness.) What is it for an object to be ignored? It is to shift consciousness away from the object. Consciousness has limited resources of attention. It must save its focus only for that which is most important (such as existential threats).
A connectionist neural network could identify a pattern in the world, and perhaps an advanced system could identify a given pattern as a particular object, following it through the world as it changes but retains its identity as that particular object. This much artificial intelligence as it is currently known can do, and will continue to do better in the future. But artificial intelligence is not yet artificial consciousness, and because our attempts at artificial intelligence are so far not yet conscious, they cannot yet associate a meaning or a value with an object.
The arguments over whether some mechanical system is or can be conscious — a common constituent of responses to Searle’s Chinese Room thought experiment — are misconceived. One hears claims such as, “We can’t know that machines, in executing their programs, are not conscious, etc.” This kind of argument presupposes the ideal privacy of consciousness, which is exactly what mechanistic accounts of consciousness (those that identify indistinguishable imitations of consciousness as consciousness) deny. Cartesian privacy is an idealization, i.e., a formalization of an aspect of consciousness, like the Cartesian formalism of mind-body dualism. If human consciousness fails to exemplify an idealized model of Cartesian privacy we should not be surprised by this, but that “failure” should not be taken as an argument to deny the reality or the efficacy of consciousness.
Originally published at geopolicraticus.tumblr.com.