Evolutionary Rollback and Trait Simplification
The View from Oregon — 318: Friday 06 December 2024
Back in 2016 I wrote a blog post titled How early a mind? in which I discussed the fossil of the euarthropod Chengjiangocaris kunmingensis from the Xiaoshiba Lagerstätte in China, which had preserved many details of the animal’s surprisingly complex central nervous system. An article about the paper, 520 million-year-old fossilised nervous system is most detailed example yet found, says, “…dozens of nerves have been lost independently in the tardigrades (water bears) and modern arthropods, suggesting that simplification played an important role in the evolution of the nervous system.” In some ways, the central nervous system of this early Cambrian organism was more complex than central nervous systems descended from it. I’ve been thinking about this ever since, but I didn’t know what language was being used in evolutionary theory to express this phenomenon, so I couldn’t follow the papers. In my own blog posts and newsletters I occasionally referred to this as evolutionary “streamlining,” since some early central nervous systems were highly complex, but later evolution was able to pare back some of this complexity without serious loss of functionality. That is, in a sense, streamlining.
I have been made aware, by Meika Loofs Samorzewski (whom I know through Substack), of a recent paper that supplies several missing pieces of the puzzle. In a comment on last week’s newsletter, he pointed me to “evolutionary rollback,” which is discussed in his post “Reaction: Brian Garvey’s ‘The Evolution of Morality and Its Rollback’,” which in turn refers to the paper “The Evolution of Morality and Its Rollback” by Brian Garvey. Garvey’s paper uses “rollback” to express something similar to (though not exactly the same as) what I have called “streamlining.” Somewhere in this discussion (at the moment I can’t find who or where) someone used the term “trait simplification,” which is an admirably clear formulation, and one I may use in the future instead of “streamlining.” Whatever we call it — rollback, trait simplification, or streamlining — we should acknowledge its role in the evolution of complex adaptive systems. Garvey’s paper is about the evolution of morality; my use of streamlining has been in a biological context; the same ideas can also be applied to the evolution of social formations.
When a biological individual can reduce energy expenditure by reducing complexity while retaining functionality, this is of enormous evolutionary value. Also, simpler structures are likely to be more robust, longer lasting, and less likely to be damaged. The same holds true for social formations, which benefit from reducing complexity while retaining functionality. These concerns hold for technology as well. The rocket engines being developed by SpaceX have been repeatedly simplified in their design, which makes them more reliable in use — and significantly lighter as well, which is a great advantage in a rocket. There are, of course, many cases where complexity and functionality are lost; however, other functions may be gained. Both snakes and cetaceans lost their legs to evolutionary pressures, but cetaceans gained efficient swimming and snakes are some of the most effective predators on the planet, so the loss of legs has in some ways aided the success of each clade. As it turns out, fast and powerful movements on land do not necessarily require limbs. Nor do they require an endoskeleton: many arthropods are capable of fast and powerful movements without one, though they do have limbs. As far as I know (and I don’t have an exhaustive knowledge of zoology), there are no vertebrates other than snakes that can move as efficiently without limbs as snakes do. Cetaceans, pinnipeds, and sirenians could be cited as examples, but if we restrict ourselves to land locomotion, they don’t count.
The example of snake locomotion suggests an alternative evolutionary scenario in which the first land vertebrates descended not from a legged and footed ancestor like Tiktaalik, but rather from something like an eel that slithered onto land and moved over land the way a snake does. We know from snakes how efficient this mode of locomotion can be, so we can’t reject this scenario a priori; the most we can do is to say that this was not the scenario by which vertebrates colonized the land on Earth, though vertebrates could conceivably colonize a landmass in this way in some alternative natural history, perhaps a long time ago in a galaxy far, far away. The consequences for the later evolution of vertebrates on land would be significant. We can’t rule out the possibility that a legless clade colonizing a landmass would eventually develop legs, especially if limb formation were part of the deep homology of the clade.
In social formations, when both complexity and functionality are lost, that is a dark age, and we can quantify the depth of the dark age by the quantity of complexity and functionality lost, or by some ratio of the two. However, it is possible that a social formation might simplify some of its institutions and retain or improve functionality (through greater efficiency); it is also possible that a social formation might retain complexity and lose functionality. The latter is the case of a failing but not yet failed (collapsed) society, where institutions remain intact but no longer function. Again, we would expect this kind of failure to admit of degrees, so that a complex institution might lose its functionality so gradually that it is retained even as its functionality nears zero, remaining in place merely due to social inertia. (This would be another way of explaining what Carroll Quigley called the “institutionalization of the instrument.”) In a worst case scenario, the institutions of a given social formation become more complex even as their functionality declines, and it is easy to see how this would become a drain on resources. A society saddled with inefficient institutions of this kind would be less able to compete with other societies, and would eventually lose to societies with more efficient institutions — in precisely the same way that more efficient animal bodies will outcompete less efficient animal bodies, whether in intra-species or inter-species competition.
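As a toy illustration of the kind of quantification gestured at above, here is a minimal sketch in Python. Everything in it is hypothetical: the function name, the idea of scoring complexity and functionality on simple numeric scales, and the particular formula (an average of the fractional losses of the two) are my own choices for illustration, not anything drawn from Garvey or from the literature on societal collapse.

```python
# Toy "dark age depth" metric: the average of the fractional losses of
# complexity and functionality between two moments in a social formation's
# history. The scales and the averaging formula are illustrative only.

def dark_age_depth(complexity_before: float, complexity_after: float,
                   functionality_before: float, functionality_after: float) -> float:
    """Return a value in [0, 1]: 0 means no loss, 1 means total loss of both."""
    complexity_loss = max(0.0, (complexity_before - complexity_after) / complexity_before)
    functionality_loss = max(0.0, (functionality_before - functionality_after) / functionality_before)
    return (complexity_loss + functionality_loss) / 2


# Streamlining: complexity halved, functionality retained -> shallow (0.25).
print(dark_age_depth(100, 50, 100, 100))
# A genuine dark age: both complexity and functionality collapse -> deep (0.85).
print(dark_age_depth(100, 20, 100, 10))
```

A multiplicative variant (taking the product of the two losses rather than their average) would register pure streamlining as zero depth, which is arguably closer to the distinction drawn above between a dark age and a mere simplification.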
The Garvey paper is formulated in terms of the modularity of mind — I find this annoying — but the general idea of evolutionary rollback can be formulated independently of any particular philosophy of mind. We can see the evolutionary rollback of cognition as a logical outgrowth of recent theories of extended cognition (also annoying, since we could just as well call the artifacts of supposed extended cognition tools), where we “outsource” functions of the mind to artifacts in our environment and so no longer need to expend the energy to sustain these mechanisms within ourselves. In this case, we would need to compare the energy required to sustain this functionality within the individual organism with the energy cost of sustaining the functionality outside the organism. Also, any functionality maintained outside the individual organism would be subject to free riders and the tragedy of the commons. I suppose that it could be shown that there is a significant degree of cross-fertilization of ideas between modularity of mind theories and extended cognition theories, if only because both are prominent in contemporary philosophy of mind.
The implied tragedy of the commons for artifacts of extended cognition poses the question of whether a truly important function would ever be “outsourced” in this way, or whether the very fact of the potential problem would mean that those who expend energy to maintain a common but necessary resource would always lose out to those who merely use the resource without contributing to it. One could argue that human language is an outsourced cognitive function held in common. No one individual mind knows the whole of any language. (I have argued elsewhere — newsletter 169 touches on this, but I believe there is a longer discussion somewhere — that this changes with the invention of written language; prior to the invention of written language, the cognitive capacity of an individual was the limit of a language.) Most people use language without contributing anything at all to it. This is fortunate, since if everyone contributed to language, language would rapidly become unwieldy and unusable. However, language isn’t precisely equivalent to a natural resource, since it can’t be used up. (I can imagine the argument being made that a superfluity of publications might exhaust language in an aesthetic sense, but this is a rather elusive sense of exhaustion, not comparable to the exhaustion of natural resources treated as a commons.)
The inexhaustibility of language may be key to its being common property but not being subject to the tragedy of the commons. Any cognitive commons can be added to without limit, drawn from inexhaustibly, and will nevertheless grow in its resources. That isn’t to say that cognitive resources can’t be impoverished. Dark ages involve a considerable loss of cognitive complexity and cognitive functionality. Concepts and clusters of concepts are subject to the same considerations about rollback (or streamlining) as discussed above in relation to organisms, social formations, and technologies. I have often used the example of Playfair’s axiom for parallels replacing Euclid’s parallel axiom as the introduction of an improved intuition; it can also be taken as a simplification of the concept of parallels, whereas Euclid’s original axiom is cumbersome and not especially intuitive. This simplification of geometry retains the full functionality of the axiom of parallels while easing the intuitive grasp of any derivations. The utility of a calculus lies in its being simplified to the point where it still retains its functionality — but only just. A calculus that grew in complexity without growing in functionality would be pointless. But there’s more going on here. Some simplifications are also pointless. It is possible to simplify the propositional calculus to a single operator (the slash or Sheffer stroke, such that at least one of the two sides must be false), but few use the slash operator because it’s more intuitive to allow oneself operators for disjunction, conjunction, negation, and material implication. So there seems to be a lower bound for complexity as well as an upper bound, and these are the boundary conditions of complexity.
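To make the single-operator reduction concrete, here is a minimal sketch (the function names are mine, chosen for illustration) that defines the familiar connectives in terms of the Sheffer stroke alone and checks the definitions by brute force over the truth table.

```python
from itertools import product

# The Sheffer stroke (NAND): true just in case at least one side is false.
def stroke(p: bool, q: bool) -> bool:
    return not (p and q)

# The familiar connectives, each defined using only the stroke.
def not_(p):        return stroke(p, p)
def and_(p, q):     return stroke(stroke(p, q), stroke(p, q))
def or_(p, q):      return stroke(stroke(p, p), stroke(q, q))
def implies(p, q):  return stroke(p, stroke(q, q))

# Brute-force check that the definitions agree with the usual meanings
# of negation, conjunction, disjunction, and material implication.
for p, q in product([True, False], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
    assert implies(p, q) == ((not p) or q)
print("All connectives recovered from the single stroke operator.")
```

The lower bound on complexity shows up immediately: everything is expressible, but even material implication becomes a nested expression that is harder to read than the operator it replaces.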