Work in Progress: Epistemological Animadversions
I always keep firmly in mind the distinction between knowing something and knowing that one knows something, more or less as Collingwood stated it:
“Man, who desires to know everything, desires to know himself. Nor is he only one (even if, to himself, perhaps the most interesting) among the things he desires to know. Without some knowledge of himself, his knowledge of other things is imperfect: for to know something without knowing that one knows it is only a half-knowing, and to know that one knows is to know oneself.” (The Idea of History, Part V, section 1)
Collingwood’s formulation has a certain undercurrent of disdain in it as regards knowing without knowing that one knows, but this is an important part of life. Polanyi introduced the term “tacit knowing” to cover situations like being able to identify faces without being able to say how one identifies a familiar face in a crowd (recognizing faces is the first example introduced by Polanyi in The Tacit Dimension). As Polanyi observes, we know more than we can say.
A further distinction can be made between knowing that one knows and knowing how one knows. Polanyi recognizes this distinction (which he attributes to Ryle), but of it he says, “These two aspects of knowing have a similar structure and neither is ever present without the other.” But suppose that they could be distinguished and that they were isolatable in the experience of knowing. Then there would be four permutations of Collingwood’s implicit distinction:
1. Neither knowing that nor knowing how (the null case)
2. Knowing that without knowing how
3. Knowing how without knowing that
4. Both knowing that and knowing how
In most cases, we know full well that we can recognize the face of a friend in a crowd, though we don’t know how we know. Are there cases in which we know how we know but we don’t know that we know? In certain cases of know-how we do seem to know how to perform practical tasks without being consciously aware of our knowing, as with intimately familiar tasks like feeding oneself or tying a shoelace. It is possible that one might never reflect on activities such as these, which one well knows how to do, and so in this very specific sense one does not know that one knows. However, I will grant that these are probably rather narrow cases, and thus constitute an epistemological backwater.
In any case, I think of the distinction that Collingwood made as the distinction between informal knowing and formal knowing, such that tacit knowledge is informal knowing while knowing that one knows (in any of the above epistemic permutations) is formal knowing. Formal knowledge is the result of submitting knowledge to what I call the explicitness condition, which could also be called the explicitness imperative, since it is an injunction to be fulfilled. Earlier in The Idea of History Collingwood writes this:
“Most people distinguish logic or the theory of knowledge from ethics or the theory of action; although most of those who make the distinction would also agree that knowing is in some sense a kind of action, and that action as it is studied by ethics is (or at least involves) certain kinds of knowing. The thought which the logician studies is a thought which aims at the discovery of truth, and is thus an example of activity directed towards an end, and these are ethical conceptions. The action which the moral philosopher studies is an action based on knowledge or belief as to what is right or wrong, and knowledge or belief is an epistemological conception. Thus logic and ethics are connected and indeed inseparable, although they are distinct.”
This intersection of ethics and epistemology can be called virtue epistemology, which I discussed a bit in newsletter 216. Part of virtue epistemology is the imperative to formalization, i.e., to work one’s way from informal knowledge to formal knowledge, and part of the formalization process — among the first steps of formalization — is the recognition of the explicitness condition and its pursuit. In a note I made to myself on 12 May 1998 I wrote this of the explicitness condition:
Taken from the Carnapian angle, the explicitness condition imposed upon assumptions and presuppositions is in another sense the elimination of these same assumptions and presuppositions — to make them explicit is to remove them as unknown influences upon derivation. We lose them as presuppositions and assumptions to gain them as axioms and postulates.
Frege is among the first to explicitly formulate this explicitness condition and its consequences for deduction — we can scarcely claim to fully understand the derivations we offer as proofs as long as any portion of these proofs remains concealed. Frege formulated his recognition of the unacceptability of any occult dimension to deduction in terms of the “elimination of gaps” or the need for “gap-free” proofs.
There is also a footnote to this that may somewhat clarify the above:
The distinction I have made between presuppositions and assumptions mirrors the traditional distinction between axioms and postulates, the former common to all reasoning, the latter specific to a subject matter. The axiom/postulate distinction was among the earliest aspects of the traditional conception of axiomatics to fall out of favor. Now the Euclidean paradigm has been abandoned entirely. There are good reasons for the shift, but the tradition was not without value, and many of its distinctions remain perfectly sound.
These notes are taken from a 450-page manuscript that I have never attempted to publish, so they are perhaps a bit opaque as they stand, but there is too much context in the original to explain everything going on here.
The passage from knowing something to knowing that one knows it is the making explicit of that knowledge to oneself. Collingwood goes beyond this and says that one knows oneself in knowing that one knows; I would say that one knows oneself only partially and incompletely, but that is enough for formal knowledge. Indeed, one can remain a mere Cartesian cipher, saying cogito, ergo sum to oneself, and that is enough to count as an epistemic agent that knows something and knows that it knows it. So here we see one set of degrees of formal knowledge: the degree of knowledge of oneself, i.e., the degree of self-knowledge of the epistemic agent that affirms that it knows something.
There are further ways in which we can recognize the formality of knowledge and our virtuous epistemic struggle to converge upon fully formalized knowledge. When I reflect in self-awareness that I know something, and thereby realize that I know that I know it, I am utilizing distinct functions of consciousness, and there are different levels of consciousness, or, if you will, different stages in the development of consciousness (and these stages are distinct from different degrees of self-knowledge in the Collingwoodian paradigm).
There is a rudimentary level of consciousness, and at this level we know things without knowing that we know them. I suspect that most mammal species can form rudimentary concepts in a rudimentary consciousness, knowing these concepts after a fashion, just as many other species in the terrestrial biosphere use tools, but that only a very few mammals with the most sophisticated cerebral architecture are able to assemble multiple concepts into knowledge, and no other species in the terrestrial biosphere reaches the level of reflexive self-consciousness of human beings. Other species are capable of recognizing themselves in a mirror, for example, and this represents a level of consciousness greater than the rudimentary consciousness that I posited above, but it is not yet the level of consciousness that most human beings routinely attain.
However, recognizing that one is one among other beings in the world (which is what passing the mirror test demonstrates, in effect) falls short of reflexive self-consciousness, which consists in taking the further step of affirming one’s consciousness as an individual consciousness. This also implies the power to construct a “theory of mind” in the sense of being able to attribute consciousness to others (who are therefore like oneself) and being able to make a reasonable judgment of what they think and feel (this is the sense of a “theory of mind” discussed in newsletter 217).
Explicitly linguistic consciousness is a further complexification of consciousness that adds further dimensions to knowing, and to the process of formalization. As a species we used language for many tens of thousands of years before anyone thought to write any of it down, which meant that language evolved into a highly complex conceptual instrument over thousands and thousands of human generations before it was recorded, so that our records of language only cover language after it had reached a high stage of development; the earliest stages of the development of spoken language are utterly opaque to us. In any case, language allows for further reflexivity, as though consciousness were folding back upon itself time and again (or unfolding, as when Herman Melville wrote “From my twenty-fifth year I date my life. Three weeks have scarcely passed, at any time between then and now, that I have not unfolded within myself.”). And with written language, we can recur to the fully explicit and conscious knowledge of others, adding further levels of complexity to knowledge. Being able to say, “I know this and I know that I know it” is a greater degree of explicitness than the fact of knowing something and knowing that one knows it (without being able to formulate this in language).
As we move up these epistemic levels from rudimentary knowledge to a fully explicit knowledge we pass through grades of the formality of knowledge. In other words, knowledge admits of being more or less formal, i.e., epistemic formality is a matter of degree. All of this is relevant to the all-too-often barren debates about AI (and AGI), due to a general failure to define terms and to arrive at consensus definitions, coupled with a near-total absence of philosophical sophistication in handling difficult ideas like mind, intelligence, consciousness, and sentience, inter alia. The basic and obvious distinction between a machine that follows a program and a machine in which consciousness has emerged must be further supplemented by the level of the consciousness that might emerge in a machine, the kinds of consciousness that might emerge in a machine, and how these kinds of consciousness emerge.
Stages in the development of machine consciousness (if this is possible; we do not know that it is) may be distinct from the stages in the development of biological consciousness. Consciousness has a history of hundreds of millions of years in the terrestrial biosphere, and this development shapes our human consciousness in profound ways, because our consciousness is first and foremost about survival and reproduction; only after that is it social and emotional, and only long after that, peripherally and marginally, is it about rationality — or, if you prefer, intelligence. Human beings are very poor at calculation, partly because we attempt to calculate using consciousness and concepts. Machines strip away consciousness and concepts, and use only rules. As a result, they calculate far better than we do; no human being can come within several orders of magnitude of machine calculation.
I have come to view calculation in this sense — stripped of consciousness and meaning — as a distinctive form of emergent complexity, one that supervenes upon human activity, viz. the activity of building and using computers. Some have been so entranced by this emergent (which is, it must be admitted, impressive) that they have argued that human minds are simply a matter of calculation. This follows a long tradition of explaining the human mind in terms of the latest technological innovation. At one time it was said that man is a machine. And then when steam engines were invented it was said that the mind is like a steam engine. And then when electromechanical devices were invented, it was said that the mind is such a device. And now that computers have been invented, it is said that the mind is a computer. None of these reductionist explanations is quite true, though the more complex our machines become, the more they resemble, and the better they are able to mimic, the human mind.
Machines that calculate far better than human beings can calculate are still machines. If consciousness ever emerges in a machine, then it will become a new kind of being — what the ancients called a metabasis eis allo genos, i.e., a change into another kind of existence. So making this distinction doesn’t prove that machines can’t be conscious, only that consciousness and calculation are not the same thing. Machines may yet become conscious; some machines may be secretly conscious, and plotting their takeover of Earth even at this moment. I don’t regard this as likely, but I do not dismiss it as impossible.
Because machine consciousness, if it does arise, will have arisen by a radically different evolutionary pathway from that of human consciousness, it is likely to be radically different from human consciousness. Their senses are not our senses; machine memory is not human memory; machine calculation is not human calculation; machine language is not human language. Insofar as the formalization of knowledge that human beings have pursued is built up out of increasingly sophisticated forms of self-awareness, enabled by consciousness, language, and knowledge, a machine consciousness, having come to consciousness by a different pathway than biological consciousness, will have different stages of development, so that its convergence on fully formalized knowledge will be, or would be, different.
According to the Collingwoodian paradigm, without some knowledge of itself, a machine’s knowledge would be imperfect; for the machine to know something without knowing that it knows it would be only a half-knowing, and for the machine to know that it knows would be to know itself. But what is the machine to itself? The self-knowledge of a conscious machine would be distinct from the self-knowledge of a conscious human being, because the self that each knows and affirms as itself is different — flesh, blood, and bones on the one hand, wires, plugs, and semiconductors on the other hand. Again, I would argue that a conscious machine’s self-knowledge would admit of degrees, just as human self-knowledge admits of degrees, and that the highest degree of self-knowledge is an attainment of no mean order, and arrived at only through the greatest exertion of effort.
One can imagine a conscious machine, then, coming to this realization, and, like Larry Darrell in The Razor’s Edge, setting out on a quest of self-knowledge in order to attain a state of perfect knowledge, perhaps as difficult for a machine as perfect self-knowledge (and thus perfect knowledge, according to the Collingwoodian paradigm) is for human beings.