A Complexity Ladder for Big History

Part of a Series on the Philosophy of History

Nick Nielsen
Apr 20, 2024

A paper of mine, “A Complexity Ladder for Big History,” has just appeared in the Journal of Big History special issue on complexity. This paper isn’t specifically about philosophy of history, but it does touch on some philosophical problems, so I will consider some of these problems in the context of philosophy of history. In particular, I want to discuss definitions and the use of scientific measurement in the increasing formalization of knowledge and what this could portend for history.

What is big history? This is from the big history website:

“Big History seeks to understand the integrated history of the Cosmos, Earth, Life, and Humanity, using the best available empirical evidence and scholarly methods. Beginning about 13.8 billion years ago, the story of the past is a coherent record that includes a series of great thresholds. Beginning with the Big Bang, Big History is an evidence-based account of emergent complexity, with simpler components combining into new units with new properties and greater energy flows.”

One of the pioneers of big history is Fred Spier. In his Big History and the Future of Humanity he gave his one sentence summary: “The shortest summary of big history is that it deals with the rise and demise of complexity at all scales.” (p. 21) The summary I usually give is that big history seeks to tell the story of the universe entire from the big bang through the present and into the distant future. There is a sense, then, in which big history is the alpha and omega of history; any lesser history can be nested within big history.

I don’t remember when or where I first heard about big history, but I attended the second big history conference at Dominican University of California in San Rafael in 2014, so I have been tangentially involved with big history for more than ten years. Back when The Great Courses was still The Teaching Company and distributed its courses on cassette tape, I acquired David Christian’s 48-lecture series on big history. Without knowing that there was anything called big history, I had been converging on similar ideas, so when I found them already laid out by David Christian I recognized something I was already doing. I have been following developments in big history since that time.

A distinctive feature of big history is the use of emergent complexity thresholds as a periodization device. What is emergent complexity? Emergence is when novel properties or entities appear in a context previously characterized by familiar properties or entities. This can occur when a system becomes sufficiently complex. When the hydrogen and helium of the early universe coalesced into stars, new properties and new kinds of entities emerged from a context that had not before known stars. The nucleosynthesis occurring within stars created new elements, and the following generations of stars and planets incorporated these new elements. As this process continues, the universe becomes a little more complex. When a new form of complexity appears out of a familiar form of complexity, this is emergent complexity.

The universe appears to manifest a sequence of complexities, from the formation of the elements, to stars and planets, more complex chemistry and minerals, life, consciousness, and intelligence. The problem my paper deals with is how we can measure complexity across multiple forms of emergent complexity. Is there one metric that will work for all forms of complexity? If we maintain that the sequence of emergent complexities represents qualitatively distinct forms of complexity, and that it is their qualitative distinctiveness that marks them as new emergents, it would seem that each qualitatively distinctive form of complexity would require its own qualitatively distinct form of measurement, and these forms of measurement would therefore be incommensurable.

If this were the case, then we could have one metric of complexity that could span the degrees of complexity of any one qualitative kind of complexity, but this metric would apply only to that one form of complexity, not to the form that immediately preceded or immediately followed it. From a logical point of view this looks like an insuperable problem, but this is partly an artifact of how I have stated the problem. In the real world, the anterior forms of complexity don’t simply disappear when new forms of complexity supervene upon them. Stars and planets form from pre-existing matter, and the matter doesn’t just disappear once the stars and planets form.

My solution to this problem is to formulate what I call a complexity ladder, which is analogous to the cosmological distance ladder employed by astronomers to measure the distance to astronomical objects, and hence to measure the size of the universe. In constructing the cosmological distance ladder, astronomers are always trying to measure the same thing — distance — but they employ different metrics and different techniques of measurement in order to measure different distances. A complexity ladder would always measure the same thing — complexity — but the techniques employed to measure complexity would change over the entire integrated complexity scale. The sciences, as befits methodological naturalism, can afford to remain agnostic on the exact nature of complexity.
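The chaining idea behind such a ladder can be sketched in code. The rung names, applicability tests, and metrics below are hypothetical placeholders meant only to illustrate the structure, not measures proposed in the paper:

```python
# A rung of a hypothetical complexity ladder pairs a test for which
# systems it covers with a metric applicable to those systems, much
# as parallax, Cepheid variables, and supernovae each cover one
# range of the cosmological distance ladder.

def measure_with_ladder(system, ladder):
    """Apply the first rung whose domain covers the system."""
    for name, applies_to, metric in ladder:
        if applies_to(system):
            return name, metric(system)
    raise ValueError("no rung of the ladder applies to this system")

# All rungs below are invented placeholders for illustration.
ladder = [
    ("physical",   lambda s: s["kind"] == "matter",
                   lambda s: s["distinct_elements"]),
    ("chemical",   lambda s: s["kind"] == "molecule",
                   lambda s: s["bond_count"]),
    ("biological", lambda s: s["kind"] == "organism",
                   lambda s: s["genome_length"]),
]

star = {"kind": "matter", "distinct_elements": 26}
print(measure_with_ladder(star, ladder))  # ('physical', 26)
```

As with the distance ladder, the point is that each rung measures the same quantity, while the technique of measurement changes with the domain; overlapping domains would allow adjacent rungs to be cross-calibrated.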

Instead of defining and classifying, science measures and tests. Eventually in the sciences we may get around to definitions and classifications, but measurement is the real key to scientific thought. Alfred North Whitehead wrote in Science and the Modern World, “The popularity of Aristotelian Logic retarded the advance of physical science throughout the Middle Ages. If only the schoolmen had measured instead of classifying, how much they might have learnt!” (Chap. II)

In my paper I discuss this in relation to measuring complexity, but this idea has a far wider application. For example, I think that the study of civilization could greatly benefit from this attitude. If you talk with anyone for any length of time about civilization, they will ask you, probably sooner rather than later, what definition of civilization you are using. This is, of course, entirely reasonable. We don’t want to waste our time talking at cross purposes. But if we can’t agree on a definition, does the conversation end there? Should the conversation end there?

Don’t get me wrong: I love definitions. I have nine different definitions of civilization that I use. The point I am making is that I am unlikely to get many others who study civilization to agree with any or all of my definitions. A definition, then, can be a hindrance to collaboration and to the emergence of a scientific research program studying civilization. Too often differing definitions of civilization have presented an obstacle where, if the parties concerned could simply set aside their preferred definitions, they might still make common cause with other researchers. We can remain ontologically agnostic about the nature of civilization and still study civilization. In my episode on Toynbee I talked about some of the problems of studying civilization, and this is one of those problems. I plan on a future episode that will consider this problem in more detail.

Another philosophical issue that I touch on is rationalizing the complexity ladder. What does this mean? There are four kinds of scales of measurement: nominal, ordinal, interval, and ratio. A nominal scale is essentially a taxonomy, or, if you like, just a list of items. An ordinal scale arranges the nominal list of items in a definite order. An interval scale makes the interval between these ordered items significant, usually by standardizing the interval between them. A ratio scale adds a zero point to an interval scale, which offers us more opportunities for mathematically manipulating that which is ordered by a ratio scale.
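The difference between these four scales can be made concrete by the operations each one licenses. The items and numbers below are invented for illustration:

```python
# Nominal: names admit only tests of identity and membership.
kinds = {"matter", "life", "mind"}
assert "life" in kinds

# Ordinal: order is meaningful, but distances between ranks are not.
rungs = ["matter", "life", "mind"]
assert rungs.index("life") < rungs.index("mind")

# Interval: differences are meaningful (Celsius has standardized
# degrees), but ratios are not, because the zero point is conventional.
assert (30 - 20) == (20 - 10)   # equal intervals of temperature

# Ratio: a true zero makes ratios meaningful, so 200 K really is
# twice the thermodynamic temperature of 100 K.
assert 200 / 100 == 2
```

Each scale inherits the operations of the one before it and adds more, which is why the progression from nominal to ratio represents increasing formalization.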

Measurement is an abstraction used by science: it ignores the properties not being measured in exchange for a more precise account of the properties that are. In the process of measuring anything, information is lost, since there is much we ignore, but other information is gained. The more we can refine our measurements and bring them into systematic relationship with each other through increasingly integrated scales, the more knowledge we can construct with the information we derive from quantification.

In my paper I noted that the complexity ladder I was proposing was an ordinal scale, so an obvious question to ask is whether this ordinal scale can be transformed into an interval scale, and whether an interval scale of complexity can be transformed into a ratio scale. The transition from nominal scale through ordinal and interval scales to ratio scale represents an increasing formalization of knowledge. The increasing formalization of knowledge allows us to manipulate this quantified knowledge more effectively with mathematical tools, and to further integrate this knowledge with existing bodies of knowledge that have been formalized in this way.

Does this process ever lead us astray? Does the increasing formalization of knowledge lead us away from the concreteness of lived experience, and, if it does, does the resulting knowledge then become less relevant to us as human beings? In my paper I said that transforming a complexity ladder into a ratio scale would force us to identify a state of zero complexity, and this seems like an explicitly metaphysical concept and not a concept of the empirical sciences.

Of course, many physical sciences have scales with a zero point. The zero points of the Celsius and Fahrenheit temperature scales are essentially conventional, which makes them interval scales; the Kelvin temperature scale identifies a zero point based on fundamental physics, which makes it a ratio scale. But complexity presents us with problems that temperature does not. The existence of anything can be counted as more complex than nothing at all, hence the zero point of a complexity scale analogous to the Kelvin temperature scale would involve us in a metaphysical claim that the Kelvin scale does not involve. We could argue that this metaphysical problem of measurement is a kind of recovery of the human dimension of what has been transformed into a scientific problem.
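The difference between a conventional and a physical zero point can be checked with a quick calculation: 20 °C looks like “twice” 10 °C, but converted to the Kelvin ratio scale the apparent doubling nearly vanishes.

```python
def celsius_to_kelvin(c):
    """Shift the conventional Celsius zero to the physical zero."""
    return c + 273.15

# On the Celsius interval scale the ratio is an artifact of the
# conventional zero point:
print(20 / 10)  # 2.0
# On the Kelvin ratio scale the same two temperatures are nearly equal:
print(celsius_to_kelvin(20) / celsius_to_kelvin(10))  # ~1.035
```

This is why ratios are only meaningful on a ratio scale, and why a ratio scale of complexity would need a defensible zero of complexity before analogous calculations could be trusted.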

The larger problem here is the use of idealizations in the construction of knowledge. History would seem to be singularly resistant to the use of idealizations in the construction of historical knowledge. Windelband’s distinction between idiographic and nomothetic sciences places history at the idiographic end of the spectrum, in which we are trying to precisely describe singular individuals rather than to formulate predictive laws. But appearances can be deceiving. History makes use of our existing conceptual framework, and this means making use of abstract ideas like government, war, revolution, population, and so on. How we make use of these ideas, how we translate lived human experience into these abstractions, is already a process of the formalization of knowledge. History has largely resisted the formalization of knowledge found in the empirical sciences, but that is changing.

Cliodynamics is a contemporary school of historical thought that seeks to quantify historical knowledge and to treat it much as the empirical sciences treat the data of experiments. Will the body of knowledge constructed by cliodynamics stand in the same relationship to human experience as the traditional discipline of humanistic history does? Can we legitimately formalize historical knowledge? If so, to what extent can we formalize historical knowledge? Ought we to formalize historical knowledge? This is related to my earlier remarks on the use of definitions, as the use of precise definitions is part of the increasing formalization of knowledge.

It isn’t only cliodynamics that is engaged in the formalization of historical knowledge. Big history identifies a zero point of time, the big bang, which transforms the time continuum into a ratio scale not unlike the Kelvin temperature scale. Does a scientific understanding of time in terms of a ratio scale contribute to history as we have historically understood the discipline? Or must we transform history into a new kind of discipline, regardless of what it was in the past?

I believe that answering these questions will define a new periodization in historical knowledge. That is to say, historical knowledge itself is becoming more complex, and, as we saw earlier with emergent complexity, if an underlying context becomes sufficiently complex, new properties and new entities can arise from this complexity. If historical knowledge becomes sufficiently complex, we could see the appearance of novel historical properties and novel historical entities. Of course, we might try to answer these questions and fail. We don’t know if we have yet converged on the boundary conditions for emergent complexity in historical knowledge. But if we don’t try, we’ll never know.