Work in Progress: Virtual Reality and Simulation

Friday 01 July 2022

Nick Nielsen

Last Wednesday’s Overview Round Table Zoom call provided much food for thought. Daniel Dyboski-Bryant of Virtual World Society gave a presentation about using virtual reality (VR) and augmented reality (AR) as a way to make the overview effect widely accessible. (I learned that “xR” is used to indicate either VR or AR, with “x” standing in for V or A.) Going into space is expensive and difficult, so very few people are able to go into space and thus to experience the overview effect. VR could potentially make this experience available to the masses, and if one holds that the overview effect constitutes a cognitive shift of significant proportions, which could, if widely experienced, change the direction of human development, then one would want to see the overview effect made widely accessible.

Dyboski-Bryant’s presentation and the ensuing discussion triggered a number of ideas on my part. Everyone agreed regarding the potential use of the technology for education and inspiration. I agree with this also. Moreover, in discussion, Frank White noted that even if the simulation doesn’t represent a high degree of fidelity (in the discussion, Dyboski-Bryant introduced the term “consumer grade VR,” implying that higher degrees of fidelity are available at different grades of the technology), if it gets an individual to experience the overview effect, then it is sufficient — sufficient, at least, for what I have called the overview imperative. This is a good point. Physicians are trained on cadavers, and one might say that a cadaver has a low degree of fidelity in comparison to a living human being, but if that training is sufficient for the physician to save the life of a living patient, then the training was effective and has served its purpose.

All of these considerations are interesting and valid, and yet I had a moment of disquiet — or, rather, several moments of disquiet that I attempted to explain, though not very successfully. I can’t yet fully form my thoughts on VR and simulation, and indeed it has been quite some time since I worked on these ideas, but this new stimulus suggested several thought experiments to me. I regard thought experiments as a tool with which to explore ideas and, hopefully, to clarify one’s formulations of the problem and of any prospective solutions to it.

The presentation included a segment of a live demonstration of the VR technology, which is a bit like a video game, though presumably much more immersive if one possesses the requisite technology (a VR headset, for example). The demonstration freely mixed realistic depictions of experiences in space with computer-enabled fantasies that would be impossible in the real world. There was no boundary marking when one crossed from a reasonably realistic depiction of the world into a fantasy based on the real world but impossible to perform in it. This is what provoked my moment of disquiet, and I was completely unsuccessful in communicating why I see this to be a problem. The ideas involved are still elusive to me. I said at the time that there are social forces at play in the present that facilitate our confusing fantasy and reality (that’s a paraphrase; I don’t remember what I actually said). This is a nebulous claim, but I believe it to be true. It would take some work to flesh this out and to assemble the kind of examples I would want to use in order to demonstrate the danger of passing seamlessly between appearance and reality, which encourages a kind of indifference to the boundary, and thus a conflation of the two. In this sense, my disquiet was a metaphysical disquiet, as the distinction between appearance and reality is fundamental to metaphysics in the western tradition.

One of the dangers is that of understanding VR as a substitute or a replacement for experience. I suggested that VR is in some senses the antithesis of experience. In actual human experience of the world, we come up against hard limits that the world imposes upon us, and much of our experience consists of overcoming these limits — as well as overcoming the limits that others attempt to place upon us. Depending on how the VR scenario is constituted, it can incorporate limits imposed upon us by the world, but ultimately, in a simulated world, anything conceivable is possible. In other words, the only limit we encounter is that of our own imagination. In actual experience, we run up against limits imposed upon us by the actual world, some of which might correspond with imaginative limits, and some of which would not.

Often we are surprised — and often unpleasantly surprised — by the limits that the world places before us (psychodynamic psychologists sometimes call this the reality principle). But even though the world limits us, it nevertheless is unlimited in a way that the imaginary world of the simulation is not, because the limits of our imagination come from within and are our own, while the limits of the world come from without and appear to us as the other. The simulated world is an immersion in ourselves, in which the other is never truly other. This is admittedly an imperfect exposition of what I am trying to say, and it is made all the more difficult by the strange reflexivity that our imagination, which both facilitates and limits the virtual world, is itself among the many limits that the world places upon us.

There is, I think, an implicit connection between the idea of virtual reality and the idea of the possibility of “uploading” one’s mind or consciousness into a computer — a familiar transhumanist scenario — and thus being able to live forever, or at least as long as the hosting computer can endure. One could express this implicit connection by the idea that a virtual experience is equivalent to an actual experience, or that the subject of experience (mind, consciousness, soul, spirit, etc.) is independent of its material implementation. More radically, we could formulate these two ideas as 1) all experiences are fungible, and 2) all bodies are fungible. It is interesting how this prima facie schematic take on the relationship between simulation and actuality falls out so neatly across a mind-body divide. Peter Godfrey-Smith’s book Metazoa: Animal Life and the Birth of the Mind (previously mentioned in newsletters 161 and 163), which can be considered a contribution to the venerable mind-body problem, constitutes a sustained argument against the idea of consciousness as independent of its material substrate, and I am sympathetic to his argument.

I noted above that Dyboski-Bryant mentioned “consumer grade VR,” and there was some discussion of the problem of degrees of fidelity of the simulation to actuality. Degrees of fidelity are routinely recognized and accepted as part of the bargain of technology: low degrees of fidelity are relatively easy and cheap to achieve; higher degrees of fidelity become increasingly difficult and expensive, but the higher the fidelity of the simulation, the more immersive (and thus presumably also the more compelling and the more satisfying) the virtual experience. However, in discussions of mind uploading — which I mentioned above as being implicitly connected to simulation — we don’t generally see this same ready recognition of degrees of fidelity. Robin Hanson’s book The Age of Em goes into excruciating detail about the possibility of whole brain emulation, which implies the possibility of partial brain emulation, but there doesn’t seem to be much interest in discussing how partial or fragmentary a brain emulation might be. Perhaps this is simply my unfamiliarity with the literature, but in the literature with which I am familiar, I see little treatment of the problem of the degree of fidelity of a simulation not of an experience, but of a brain (mind, consciousness, etc.).

This train of thought led me to this thought experiment: Suppose we conceptualize the possibility of mind uploading in terms of degrees of fidelity. Further suppose that mind uploading becomes possible, but only at a level of fidelity roughly similar to a cartoon (applied analogously, again, not to experiences but to minds). If you had the chance to upload your “mind” into a computer at the level of fidelity of a cartoon, would you want to survive at a cartoonish level of existence? Moreover, would you want to live forever as the cartoon equivalent of yourself? I can easily imagine an individual who would respond that some form of survival is better than no survival at all (half a loaf is better than none), but I can equally well imagine another individual who would regard a cartoonish virtual existence as an affront to human dignity, and its indefinite extension in time as a form of never-ending torment. We can increase the stakes of this thought experiment by making adjustments to it, such as making a forced choice between continued existence in the actual world and uploading: upload now and lose the remainder of your actual life, or miss the chance at a virtual life entirely. This is a bit artificial, however, and smacks of William James.

I could make the cartoonish mind-uploading scenario sound horrific, but there might be some advantages to it. For example, in my own personal case, training an expert system in how I think and then giving it access to my hundreds of pages of notebooks and thousands of pages of unfinished manuscripts on my computer would make it possible for all of this material to be edited and eventually made available in a semi-coherent form. That is admittedly very attractive to me, but it isn’t conceived as a replacement or a substitution for my life, only as a kind of reliable and interested secretary that would continue my work beyond the grave.

Another way in which the problems and potentials of virtuality come into play is with the possibility of remote experience (telepresence) facilitated by machines. I have had individuals argue to me that human space exploration is ultimately pointless or unnecessary, because we will eventually have machines that can be sent into space much more cheaply and easily than a human being, and that the experience they will provide to us will be as good as or better than being there. This is clearly an instantiation of the idea that all experiences are fungible.

This kind of argument reminds me of John Rawls’s famous political thought experiment of constituting a society from behind a veil of ignorance, so that one has no idea at what point one will be “born into” the society in question. Rawls assumes that everyone will be as risk averse as he is, and will choose the safest option, which he believes is a perfectly flat society in which there are no privileges or inequities. Needless to say, many would despise the society that would be constructed on risk-averse presuppositions, and many would be willing to risk all for the chance to truly live. And this is equally true for virtual vs. actual experience. The objections that human space travel is too difficult and expensive almost always come with the additional qualification that it is too dangerous. But too dangerous for whom? Many would choose to engage in an activity not in spite of its being dangerous but because it is dangerous. Danger is an attraction to human beings, and the endless potential hardships that space offers, far from being off-putting, are for many precisely the source of its potential value.

Being the first human being to set foot on Mars, or being the first human being to fly through the rings of Saturn, will certainly be a feat accomplished in the face of danger and difficulty, but it is and will be rewarding precisely because it is difficult and dangerous. “We do these things not because they are easy but because they are hard.” These accomplishments will be difficult and dangerous because of the obstacles that nature will place in the way of anyone attempting them, which is what makes actual experience the antithesis of virtual experience. Anyone today could perform all manner of derring-do within an elaborate video game scenario, but we wouldn’t attach much value to these accomplishments because they aren’t “real.” This is the point of Robert Nozick’s Experience Machine thought experiment (though Nozick’s thought experiment was introduced to make a point about utilitarianism).

In the discussion over simulated experience during the Overview Round Table I made a joke about having a robot climb Mount Everest in one’s place, and offering to show friends “your” pictures from the top of Mount Everest. Needless to say, anyone who wants to climb Mount Everest would not be satisfied with the experience of sending a robot in one’s place, tracking its progress and admiring its photographs from the peak. If you just want the view, you can do what some people do, which is to have a helicopter set them down on the top of a mountain, where they can enjoy the view and then leave in as much comfort as they arrived. But this is not the same as climbing a mountain, nor is it a substitute for climbing a mountain. It might be an incredible experience to fly to the top of a mountain and maybe have breakfast served to you on top, but it isn’t the same experience as climbing a mountain; it is a different experience.

One could take this thought experiment further and reproduce (some of) the conditions of climbing Mount Everest while your technological counterpart is actually on the mountain. Thus you could put a stair stepper exercise machine in a freezer, and incrementally reduce the amount of oxygen in the freezer as you climb the stair stepper machine for days on end. Of course, you would sleep in the freezer in a tent, and only eat food that you could carry with you. All the time you would have a screen in front of you (or maybe you would be wearing a VR headset) and you would be looking at the views of the robot that is climbing Mount Everest in your place. Certainly this would be safer than actually climbing Mount Everest. If your robot fell into a crevasse, the robot would be a loss, but the virtual climber could simply start over, like starting a video game over after one has been “killed” and re-playing the scenario until one manages to get to the next level.

This virtual climb of Mount Everest would have a higher degree of fidelity than merely looking at pictures that your climbing robot sends back to you, but I don’t think that anyone would compare it to the actual experience of climbing Mount Everest. Higher degrees of fidelity could be viewed as a kind of convergent series whose partial sums come ever closer to the limit (i.e., the value of the actual experience) but never reach it after any finite number of steps (sketched below). Of course, one could increase the stakes to the point that, if your climbing robot falls into a crevasse, you die too (like the Star Trek holodeck with safety protocols turned off), but then why engage in a life-threatening activity for the reward of a simulated experience? One could argue that, by this means, one could engage in a far greater number of dangerous experiences, and probably at a lower cost, and maybe that would justify the exercise, but I suspect that few would choose the simulated experience, with potentially fatal consequences, over the actual experience, also with potentially fatal consequences.
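To make the converging-series picture concrete, here is a minimal sketch; the particular formula is my own illustrative assumption, not anything from the discussion. Let $F$ be the value of the actual experience and $f_n$ the fidelity achieved by the $n$-th generation of simulation technology, with each generation halving the remaining gap:

$$ f_n = F\left(1 - 2^{-n}\right), \qquad \lim_{n \to \infty} f_n = F, \qquad f_n < F \ \text{for every finite } n. $$

Each generation comes closer to the actual experience, but no finite generation attains it.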


Another interesting facet of virtual experience, one that also touches on my thought experiment above about a cartoonish degree of fidelity to actuality, is the idea of a Bracewell probe. Ronald Bracewell sought a compromise between SETI and actual human exploration. What do I mean by this? In SETI we also get the familiar arguments that interstellar exploration will be expensive, difficult, and dangerous, and that therefore (obviously) no intelligent species will engage in interstellar exploration, but will instead safely broadcast from its homeworld. Again, we see the presumptive risk aversion and the failure to understand that the most interesting thing about exploring the universe would be the dangers that it presents to us. (Even the intrinsic risk aversion of SETI is an insufficient level of risk aversion for those who oppose METI, or messaging ETI.) SETI satisfies some of the risk aversion that many feel the need to build into the actual dangers of life (perhaps as a psychological compensation for dangers that cannot be mitigated), but it means that communication cycles extend over centuries or millennia or longer. In other words, it isn’t going to be a conversation, as the arithmetic below shows. Bracewell proposed sending an automated spacecraft to other worlds (again, cheaper and easier than actually going there), with AI sufficient that it could contact another intelligent species when that species demonstrates its ability or willingness to be in contact with the “galactic club” of advanced civilizations.
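A bit of arithmetic makes “centuries or millennia” concrete; the distances here are my own illustrative choices. A radio signal travels at the speed of light $c$, so a single question-and-answer exchange with a civilization at distance $d$ takes a round-trip time of

$$ t = \frac{2d}{c}, $$

which, with $d$ expressed in light-years, is simply $2d$ years: two centuries for a star 100 light-years away, and two millennia for a star 1,000 light-years away.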

Consider, in light of what I wrote above, that the AI directing the operations of the Bracewell probe is not a perfect emulation of human intelligence, but only represents some degree of fidelity to actual human intelligence. The AI might seem sufficient to us, but what is essentially a cartoonish reproduction of the human mind might be tragically misunderstood by the intelligent species it is intended to contact, unintentionally misleading them. Human beings have a deep understanding of the context, expectations, and intentions in any communication (or, at least, in most communication), so that we would readily understand a cartoonish representation — indeed, cartoons are often used for educational and safety purposes — but another intelligent species, with a different evolutionary history and therefore different assumptions, could radically misinterpret anything and everything that a Bracewell probe might attempt to communicate. There is, of course, no assurance that an actual human presence would do better, but we cannot exclude the possibility that an actual human being might do better, and it will probably be demonstrable that the onboard AI on a Bracewell probe falls short of the full range of human abilities, meaning that the AI is more likely to fail than a human being, all other things being equal (which they never are).

[Image: a Bracewell probe.]
