
Addendum to the Chefbot Thought Experiment

The View from Oregon — 351: Friday 25 July 2025

What is a machine? Is AI a machine, or something else? Does it belong to another class of technologies?

In last week’s newsletter I formulated what I called the Chefbot thought experiment, which imagines a cooking robot integrated with an LLM trained on cooking lore, and I used this as a springboard to consider some questions, one of which was, “What kind of an entity is Chefbot?” This question has a personal history behind it. I’ve been corresponding with a friend who shares my interest in philosophy of technology, and who’s very interested in the philosophical implications of AI and LLMs, and in one of my emails I said that I didn’t think these pose any new philosophical problems. That in itself is a philosophical claim, and I realized later, after having already sent the email, that there’s a lot more to this than I had realized at first. I see now that the sweeping claim that AI and LLMs present no new philosophical problems could only be established by a pretty detailed argument. I still hold this view, but defending it is another matter, and there are a lot of issues to be thought through.

A lot of books on philosophy of technology spend time on the problem of a definition of technology, and while I enjoy these discussions as much as I enjoy the problem of the definition of life or the definition of civilization, etc., there are other fundamental questions we can ask, so if we’re not making progress by one avenue, we can try another. One of the fundamental questions of the philosophy of technology is, “What is a machine?” Is this the same question as “What is technology?” Certainly there’s a close relationship between machines and technology, but what exactly is this close relationship? Does technology consist exclusively of machines? What about social technologies like language or banking? Are we obligated to conceptualize these as abstract machines (sort of like how a search engine is an abstract engine), or is it better to acknowledge that machines are only a part of technology?

A Jacquard loom uses encoded information on punched cards (abstract technology) to weave complex patterns in fabric (concrete technology). Is AI just a more complex instance of this?

These questions bear upon my thought experiment of last week, because the Chefbot is, prima facie, a machine. However, it’s a machine that’s mated to a computer program, an LLM, that, if it is a machine at all, is an abstract machine. And maybe it’s not a machine at all. If we use computer programs, from the simplest set of instructions encoded in machine language to the most elaborate AI models constructed to date, to direct the functions of a machine or machines, are we introducing a relationship between two forms of technology, a concrete technology of machines and an abstract technology of instructions? If not, what are we doing? If so, then we can make a tripartite distinction among purely concrete machines, purely abstract instructions, and entities that involve both. That makes three classes of machines (or, better, three classes of technology), and each class of technology represents, ontologically, a distinct form of being.

If we count Chefbot as the third class of technology, mixing abstract and concrete technologies, it’s not the first technology to do so. Nineteenth-century technologies like the Jacquard loom were already doing this, so if, as human beings, we wanted to confront a new form of being that we had ourselves created, we could already have done so by encountering a Jacquard loom. Therefore (under this interpretation), Chefbot is not a new form of technology, and therefore it’s not a new kind of being. There is nothing new here with which human beings might enter into a relationship, which could include the relationship of philosophical schematization. Obviously, that’s not the only possible interpretation of the state of affairs represented by Chefbot.

Would machine consciousness represent a genuinely novel technology, and no longer a machine simpliciter?

A reader has commented on last week’s newsletter that, “There is no intrinsic reason why ChefBot could not ultimately do the same thing with a newer architecture, perhaps one that allows ‘sensations’ and even consciousness.” The same thing, in this context, is what human beings do when they taste. If consciousness were to emerge in Chefbot, then we would have all of Chefbot’s functions plus consciousness on top of these functions, and this would definitely be a new class of technology (with an exception I will note below). Let’s say that Chefbot is built at t0 and becomes conscious at t1. This is as much as to acknowledge that there is a period of time between t0 and t1 when Chefbot was operating but was not conscious, when it constituted the third class of technology. That in turn means that consciousness is distinct from this third class of technology, that the consciousness inhering in Chefbot after t1 transforms it into a fourth class of technology, and that a new form of being appears in the world, now ontologically enriched by the activities of human beings.

The parenthetical exception I mentioned above is that, if consciousness emerges in a machine, it could be argued that the appearance of consciousness means the machine is no longer a machine at all. Under this interpretation, the appearance of consciousness transforms a machine not into a fourth class of technology, but into a kind of being that is not technology at all. In this case, a conscious Chefbot is a new kind of being, and this answers the question I posed in the previous newsletter, namely, “With what kind of a thing are you interacting?” The answer is now, “A new kind of being.” Again, under this interpretation human beings would then establish a relationship with a new kind of being that did not previously exist. This question was motivated, as I mentioned above, by my correspondence about whether interaction with AI or LLMs poses any new philosophical problems. If this is the correct interpretation, then, yes, interaction with a new form of being poses new and unprecedented philosophical problems — problems that are not posed by merely confronting a new kind of machine.

Given the success of chatbots, is the Turing test no longer a big deal?

I wrote above that these questions have a personal history for me, and further context for this history is my paper “Space Philosophy: The Symmetry Hypothesis,” in which I asked whether space philosophy poses any genuinely new philosophical problems. If there is an exhaustive symmetry between human experience on Earth and human experience in space, then there is no new problem with which to engage as a result of space exploration. In that paper I suggested that any new activity or technology can be viewed from the perspective of the symmetry hypothesis, thus our interaction with artificial intelligence and large language models is another paradigm case of a new technology that can be viewed from this perspective. Here there’s an interesting twist, however. A perfect replication of human intelligence or linguistic usage (i.e., passing the Turing test, and possibly crossing the uncanny valley) would be the most exhaustively symmetrical form of AI and LLMs, and seemingly would present no philosophical problems at all, but this would also be the threshold at which the most interesting philosophical question would be posed: whether a machine replication of human intelligence, if indistinguishable from human intelligence, would be an ontological novelty. If the two are indistinguishable, any distinction would seem to be a distinction without a difference.

In any case, I thought of a further permutation of the Chefbot thought experiment. Last week I suggested that a taster could be added to Chefbot, giving Chefbot immediate feedback on its cooking that it could compare with the reports of human tasters, thereby making it possible for Chefbot to experiment with cooking while having some predictive power as to whether a human taster would enjoy the result. Still, human beings remain “in the loop” in this scenario, and insofar as human beings remain part of the process, even if only to judge the results of Chefbot’s culinary creations, the process has not been (exhaustively) automated, since the final editorial selection lies with consciousness and not with mechanism. At this point we could perform a further experiment: train an artificial taster so that it is closely predictive of human tastes, and then allow Chefbot to experiment with new recipes while having only the artificial taster taste the results for several culinary generations. Here the emphasis would be on culinary novelty and building on the “successes” of previous cooking experiments, but with success judged by the artificial taster without human intervention. After several generations of culinary selection by the artificial taster, then, and only then, have a human taster try the derived cuisine to see if the artificial taster has tracked human tastes independently of human correction. How many generations could this continue?
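Stated computationally, this protocol is essentially an evolutionary search loop with a learned reward model standing in for the human judge. Here is a minimal sketch, with the caveat that mutate, artificial_taster, and the Recipe type are all hypothetical stand-ins of my own, not anything specified by the thought experiment itself:

```python
import random
from typing import Callable, List

Recipe = dict  # hypothetical: a recipe here is just a dict of parameters

def run_culinary_generations(
    seed_recipes: List[Recipe],
    artificial_taster: Callable[[Recipe], float],  # trained to predict human ratings
    mutate: Callable[[Recipe], Recipe],            # Chefbot's experimental variation
    generations: int = 10,
    population_size: int = 20,
    survivors: int = 5,
) -> List[Recipe]:
    """Evolve recipes over several culinary generations, with success judged
    only by the artificial taster and no human in the loop until the end."""
    population = list(seed_recipes)
    for _ in range(generations):
        # Chefbot experiments: derive new candidate dishes from the survivors.
        while len(population) < population_size:
            population.append(mutate(random.choice(population)))
        # Culinary selection: the artificial taster alone judges the results.
        population.sort(key=artificial_taster, reverse=True)
        population = population[:survivors]
    return population  # only now would a human taster sample the derived cuisine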

What if our cooking robot is supplemented by a tasting robot, taking human beings out of the loop?

There are several ways this experiment could go wrong. There are both personal and cultural tastes in food. Ethnic cuisines exist because of the availability of given cultivars in a given geographical region, and because of the technologies that have been built to prepare these cuisines, but also because of the taste preferences of the human beings in that region. The human tasters who would eventually judge Chefbot’s derived cuisine would themselves be implicitly “trained” on particular cuisines, and this kind of preference should be controlled for in any such experiment. But part of the charm of culinary experimentation is in fusion foods that combine traditions. Here, human beings are likely to be as idiosyncratic as in any domain of life, with some tasters enjoying fusion foods while others condemn them as a betrayal of tradition. Again, tastes vary, not only in terms of the palate of the taster, but also in terms of the attitude that the individual brings to food and its preparation.

We could control for personal and cultural tastes in food by training several taster bots on several distinct cuisines, some trained narrowly on a particular tradition, and some trained very widely on a variety of cuisines, including fusion foods. Similarly, a number of Chefbots could be trained under similar parameters, so that a battery of Chefbots cooking experimental dishes faces a battery of tasters sampling those dishes. Here the outcome would be something like the restaurant reviews we all know: every major city has a great many restaurants, and every restaurant has many reviews. Sometimes the reviews range widely, from congratulatory to condemnatory. Usually, we trust the judgment of a friend. In the absence of a friend who knows the restaurant scene, our choice may be nearly arbitrary, governed not by the taste of the food but by the proximity of the restaurant and the prices on the menu.
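To make this control concrete, here is a sketch of the review battery, again with hypothetical taster functions of my own invention, some imagined as narrowly trained on one tradition and some widely trained across many:

```python
from statistics import mean
from typing import Callable, Dict, List

Recipe = dict  # hypothetical stand-in, as in the earlier sketch

def review_board(
    dishes: List[Recipe],
    tasters: Dict[str, Callable[[Recipe], float]],  # e.g. narrow vs. widely trained
) -> List[dict]:
    """Have every taster score every experimental dish, reporting each dish's
    mean score and the spread of opinion among the tasters."""
    reviews = []
    for i, dish in enumerate(dishes):
        scores = [taste(dish) for taste in tasters.values()]
        reviews.append({
            "dish": i,
            "mean": mean(scores),
            # A wide spread marks a "controversial" dish, like a restaurant
            # whose reviews range from congratulatory to condemnatory.
            "spread": max(scores) - min(scores),
        })
    return reviews
```

On this sketch, a dish with a high mean and a narrow spread would be the analogue of a restaurant everyone recommends, while a high spread would flag exactly the fusion-versus-tradition disagreements described above.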

Would you follow the restaurant recommendations of a robotic taster?
