Language without Meaning, Understanding without Consciousness

Nick Nielsen
Jan 14, 2024


The “mechanical Turk” wasn’t actually a machine that played chess, but it fooled many people.

A reader has asked a question based on my previous newsletter, in which I repeated my claim that the highest-level central project of Western civilization is, essentially, a philosophical project. The question was phrased as follows:

“How will artificial intelligence — either at its current level of influence or at an evolved level where it dominates human life — affect and be affected by ‘the application of the fundamental metaphysical idea of western philosophy — the distinction between appearance and reality — to the true, the beautiful, and the good’.”

Since my formulation of the philosophical core of Western civilization’s central project is couched in Platonic terms, it is a nice touch that this further formulation relates the central project to AI through affecting and being affected — which is Plato’s definition of being from his Sophist dialogue, i.e., the power to affect or be affected by some other being.

If a computer talks like a human being, does it think like a human being?

At its present stage of development, AI is a tool, not an agent. The language models that are now allowing for the automation of jobs once thought to lie outside the scope of likely automation are sophisticated tools, but still tools. Chat GPT is not going to become conscious and assert its dominance over human beings, though many would say that it can comfortably pass the Turing test.

Some years ago I wrote a blog post — An Uncomfortable Realization of Human-Machine Equivalence — in which I noted that one of the perhaps unexpected consequences of the development of AI would be the realization not that machines are intelligent, but that a great many human beings operate at the functional level of machines (or below). If you operate at the level of Chat GPT, and your job only makes demands on you commensurate with the abilities of Chat GPT, then you are likely to be made redundant in the near future. As language models become more sophisticated, and this looks like it will happen pretty rapidly, jobs that involve manipulating language can be replaced by machines. If human beings want to remain in the loop, then they will need to do something more than manipulating language.

The Turing test suggests that, if we cannot distinguish conversing with a computer from conversing with a human being, then it is a distinction without a difference.

This really isn’t terribly difficult. I heard one person describe their interaction with Chat GPT as being like talking to someone with severe cognitive decline who wanted to give the impression that they were still fully present in the conversation. One can sense the effort that is made, but one can also see (or, as the case may be, hear) the ellipses, and one notices the conceptual failures to add to the conversation in a substantive way. For those who have never added substantively to a conversation, Chat GPT and its ilk may perfectly pass the Turing test, but if you are in the habit of exchanging ideas with others, it will be as apparent that you are talking to a machine as it is apparent when you talk to someone suffering from cognitive decline (or a very young child prior to any significant degree of concept formation).

The most obvious form of cognitive decline is loss of short-term memory. Most of us have had the experience of conversing with an aged family member who has lost much of their short-term memory function and who asks a question, and, after it is answered, asks it again a few minutes later. I haven’t myself had much interaction with language models, but I have heard from those who have interacted with them that early iterations of the technology would forget the previous day’s conversations, though this has since been addressed, and some customizable chatbots allow past conversations to be saved so that future conversations can build on this material. I mention this example because, as I said, it is probably the most obvious one; even children will notice when grandparents ask the same question over and over again. Children may not notice, however, if grandparents can no longer make connections among ideas, or cannot draw on the ideas of the conversation, modify them, and then offer the modified ideas back into the conversation. This is how you know you are speaking with another being with a mind of its own: mental work is involved, and in language models that mental work is missing, just as it is missing in conversations with Alzheimer’s patients.

If a machine can fool you into thinking it is a human being, is it intelligent? Is it conscious? Does it understand what it is doing? Do any of these labels matter?

To pursue this further would take us deep into philosophy of mind, and I find it interesting that the development of technology is now closely associated with an ancient branch of philosophy in which questions asked more than two thousand years ago are still not answered to the satisfaction of all. The scientific development of an artificial mind will necessitate engaging with these questions. However, artificial minds may come about not through scientific development, but through tinkering, in which we get a technological outcome that we cannot explain scientifically. The development of further and more sophisticated AI may come about without deeply engaging in the philosophical questions, but this comes at the cost of not being able to understand the results of one’s tinkering. The scenario that is widely discussed is that in which a machine recursively improves itself until it far exceeds the capacities of the human technologists and engineers who produced the original iteration, and these technologists and engineers cannot understand the resulting recursively improved machine. Is this a form of tinkering or science? Certainly the production of the original machine is science, but whether the machine improving itself is tinkering, or a machine doing science that a human being cannot do (or cannot understand), is another question. I find this question interesting, but I am going to leave it hanging since I don’t have a way to answer it at the moment.

Even if we cannot give a full account of a philosophy of mind adequate to both biological minds and artificial minds, we can indicate some of the ellipses of our current understanding, i.e., failures of our conceptual framework for minds, which may someday be filled in and give us the account of mind that we need. First and foremost, we need a taxonomy of minds (i.e., an account of the kinds of minds there are) and a hierarchical taxonomy (or, if you like, a developmental taxonomy) of increasingly sophisticated functions of minds. I assume that different kinds of minds may reach different thresholds of cognitive achievement. We see this in the minds of the animal kingdom, in which human cognitive achievements outstrip the cognitive achievements of chimpanzees, but chimpanzee cognitive achievements outstrip the cognitive achievements of lemurs.

Searle’s Chinese Room argument is a response to some of the assumptions built into the Turing Test.

In parallel with this effort, we need a defined sequence of thresholds for AI. It’s not too difficult to name off the bigger and more obvious thresholds. Passing the Turing test is one such threshold, and that is where contemporary AI is hovering. Another equally obvious threshold is machine consciousness. Since we have no agreed-upon definition of consciousness for animal minds, we have no agreed-upon definition of machine consciousness, but we can see how there is a gap between producing a machine that can pass a Turing test and producing a machine that is conscious, i.e., which has subjective experiences of its own. Here there is an obvious parallelism with a developmental taxonomy of (biological) minds: different stages in the development of mind (roughly) correspond to different stages of machine achievement.

As discussed above, machines now manipulate language, and some people are fooled into thinking that machines are conscious because they don’t themselves operate at a level beyond the manipulation of language. The next step would be to make machines mimic the manipulation of concepts. Our biological minds conceive concepts and manipulate them as concepts, but there are almost certainly workarounds for this. I have next to me, as I write this, Dictionary of Logic as Applied in the Study of Language: Concepts/Methods/Theories, which includes a technically difficult article on “Intension,” which is how logicians refer to meaning. As it turns out, there are ways to use a fully extensional logic to thematize intension, which is to say, encoding intension within an extensional calculus, or, if you like, rendering meaning in terms of logic. Someday the technologists and engineers will catch up with what logicians have been doing for the past century, and, using techniques such as these, there may be a robust way for a language model to manipulate concepts. At this point in the developmental taxonomy, machines might be able to converse with us on a conceptual level, even if they are not conscious in the way that biological minds are conscious. In other words, there may be alternative developmental pathways to conceptualization and conceptual thought.
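
To make the logicians’ maneuver a little more concrete, here is a minimal sketch, in code, of one standard approach, the Carnap and Montague possible-worlds treatment, in which the intension of an expression is modeled as a function from possible worlds to extensions; since functions from worlds to extensions are themselves extensional, set-theoretic objects, meaning is thereby rendered inside an extensional framework. The toy worlds, predicates, and individuals below are my own invented illustration, not anything drawn from the dictionary article mentioned above.

```python
# A minimal sketch (not the article's own formalism) of Carnap/Montague-style
# possible-worlds semantics: an intension is a function from a possible world
# to an extension, so "meaning" is encoded without any primitive notion of
# intension. The worlds and predicates below are invented toy data.

# Toy possible worlds: each world records which individuals satisfy which predicates.
worlds = {
    "actual": {"has_heart": {"dog", "human"}, "has_kidney": {"dog", "human"}},
    "w2":     {"has_heart": {"dog", "human"}, "has_kidney": {"dog"}},
}

# The extension of a predicate at a world is simply a set of individuals.
def extension(predicate, world):
    return worlds[world][predicate]

# The intension of a predicate is a function from worlds to extensions.
def intension(predicate):
    return lambda world: extension(predicate, world)

creature_with_heart = intension("has_heart")
creature_with_kidney = intension("has_kidney")

# Same extension in the actual world...
print(creature_with_heart("actual") == creature_with_kidney("actual"))  # True
# ...but different intensions, because they diverge at some possible world.
print(all(creature_with_heart(w) == creature_with_kidney(w) for w in worlds))  # False
```

The two predicates coincide in the actual world but not across all worlds, which is how this encoding distinguishes sameness of extension from sameness of meaning.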

Turing’s “Bombe” for decrypting Enigma-encoded messages.

At the threshold of conceptualization and conceptual thought (with or without consciousness), machines will then be in a position to internalize the concepts of conceptual frameworks and to manipulate them much as they manipulate language today. At this stage (not yet attained), we can realistically discuss the possibility of machines being able to cope with abstract ideas like the good, the beautiful, and the true, and the distinction between appearance and reality. At present, machines can manipulate these words in a way that is consistent with current usage, but there is no engagement with meaning; the threshold I am talking about now is actual machine engagement with the meaning of language. Still, the question remains — and this is a paradigmatically philosophical question — whether a machine that manipulates concepts and works substantively with meanings understands what it is doing. Does understanding require consciousness, or is any consistent manipulation of concepts indistinguishable from understanding, and therefore to be counted as understanding? One could argue, and I probably would argue, that the manipulation of concepts is more sophisticated than the manipulation of language, but it is still an algorithm, and it isn’t conscious, and it does not involve understanding. At the same time, one could argue, and I may well myself be willing to argue, that there may be multiple senses of understanding that are equally valid, although with different extensions of validity.

Artificial intelligence at its current level of development can take in a phrase like “the central project of Western civilization is derived from the application of the fundamental metaphysical idea of western philosophy — the distinction between appearance and reality — to the true, the beautiful, and the good,” and spit it back at us in various forms, combined with other phrases and usages plucked from other texts that have been made available to the language model. Some will marvel at this stupor mundi, but I will remain unimpressed. At the next level of development (if indeed the engineers of artificial intelligence choose to take this route, which they may well not), when machines can manipulate concepts, then we may be in for some surprises when a machine analyzes the concepts in the above phrase, combines them with other concepts made available to it, and gives us an account that had never before occurred to us. That may cause us to rethink our tradition, and our relationship to the tradition of Western civilization. It may even send the development of civilization in a new direction, distinct from the direction it appears to be taking, a direction that only occurs in the light of the development of conceptually enabled machines.

Ernst Kapp formulated what may have been the first philosophy of technology in terms of “organ projection.”

So far I have only discussed machines in their relationship to language, concepts, and understanding. Essentially, I have been writing about chatbots, and possible future chatbots that are more than mere chatbots. Machines, of course, are much more than chatbots. Machines without any pretensions to engagement with human beings other than being an extension of our bodies (the earliest philosophy of technology, authored by Ernst Kapp, was formulated in terms of “organ projection”) have allowed us to construct industrialized civilization. Now we are in the process of automating these machines, so that we are beginning to have some relationship between machines that have agency in the world and language models that could direct these machines. Machines directed by language models would act as language models converse: they would act according to rules, but they would not be capable of manipulating concepts, nor would they be conscious, nor would they be aware, nor would they understand what they are doing.

If a conceptually enabled machine is installed in a machine with agency in the real world, then, again, we would start to run into some very interesting scenarios, only now played out in the real world instead of merely being a conversation about ideas. A conceptually enabled machine that had control of a motor vehicle, for example, might speed, and, if given a speeding ticket, it might (rightly) make the argument that the laws don’t apply to machine drivers because a machine’s reaction time is much faster than that of human beings, and that a machine-piloted vehicle can therefore safely go much faster than a human-piloted vehicle. Alternatively, the machine might give some technically sophisticated legal argument as to why the speeding law does not apply to it, and it might win its case — at least until the laws are changed.

If an automated car could think, and not merely compute, what would it say?

Such machines, if present throughout the infrastructure of civilization, might begin arguing with human beings about either the methods employed or the ends desired by the functioning of the machines. If present in sufficiently large numbers, these machines might, again, change the direction of civilization. But what direction would they take civilization? I believe that if machine consciousness emerges, it will be radically distinct from biological consciousness, because it would not share the biological developmental pathways of every other conscious agent. If machine consciousness is radically different from human consciousness, as I believe it would be, then it may well understand all the human concepts we employ — indeed, all the human concepts that we give to it as our legacy — but, even in understanding our concepts, it would likely attach very different values to these concepts. In other words, machine consciousness is likely to be unaligned with human consciousness, and therefore unaligned with human civilization and its goals.

There are, however, many human civilizations. A machine that becomes conscious may find one human civilization with which it is aligned, and may choose to take up this legacy, while showing no interest in the legacies of other civilizations. Or such a machine may find no human civilization aligned with its values, but may be intrigued with the very idea of civilization itself. In this latter case, a machine consciousness might choose to construct a civilization for itself, but a civilization not aligned with any known human civilization. Or, again, a machine consciousness might have no interest at all in civilization, or in the problems civilizations seek to solve or the aims that civilizations seek to achieve, and may then disengage itself entirely from human civilization and go on to become another kind of emergent complexity entirely distinct from the achievements of human beings.

There remains the interesting and still unexplored problem of the possibility of conceptually enabled machines that are not conscious. I take this to be an unknown middle ground between the linear development of AI as we know it today (which, no matter how sophisticated it becomes, is still a language model, i.e., a syntactical model that never achieves the manipulation of concepts and so never becomes a semantical model) and AI that is machine consciousness, and can therefore understand, and learn, and develop as a conscious biological mind understands and learns and develops. This middle ground of conceptual machines would represent a class of intelligence heretofore unknown to us, and therefore possibly as mysterious as machine consciousness itself.

I have written several pages, but I haven’t yet really gotten to the core of the matter I wanted to discuss; I haven’t even used the notes that I took with the intention of writing this newsletter, so this will have to serve as prologue for some future newsletter in which I will continue this exploration and possibly even cut to the heart of the question of what it would be like for machines to come to grips with the problem of the appearance and reality of the good, the true, and the beautiful. Perhaps a machine wrestling with these questions might pursue one of those schematic alternatives to Western civilization mentioned in the previous newsletter, in which the central problem is not beauty but truth or goodness.
