Let’s do a simple thought experiment. Let’s assume that we currently have the hardware and software to build ourselves a Matrix-like virtual reality simulation that would run on your home PC. It is not an MMO system like the Wachowski Brothers’ Matrix though – it is a single player world, but populated with very advanced, Turing-grade AIs that are almost indistinguishable from real people. In fact, the simulation cheats a little bit by peeking at your long term memories. So for example it can realistically simulate a meeting with your old college buddy – and when you say “remember that time we got drunk and puked all over John’s car” it will actually look up that event, and extrapolate your friend’s reaction based on it.
The whole simulation is built to be 100% realistic. If you cut yourself, you will feel genuine pain. If you die in the simulation, you probably won’t die IRL, but it will feel real right up until the last second. You may even go into shock because of the experience.
Let’s say one day you wake up, and you decide to simulate your life without any modifications. So you plug yourself into your machine, launch the VR program and find yourself back in your room. You walk up to your computer, plug yourself in, launch another VR simulation, and you are back in your room. Assume that the hardware can take it – the simulation optimizes everything, so even though you recursively jacked in 8 or 9 times, it is only running 1 layer (I mean, there is nothing happening in all the other virtual worlds where you are just sitting in a chair with a wire sticking out of your head).
Repeat this a bunch of times, going deeper and deeper into the simulated world. Don’t count how many times you do it. At a random spot stop, and start going back out. In this VR you don’t have an overlay HUD or any user interface – to exit you have to do something specific: for example click your heels 3 times while saying “there is no place like home”. Or just concentrate on the word “QUIT” real hard. Whatever – the point is that it is not a mechanical thing – it is an action or process, and it must be performed very exactly. If you are sloppy, the simulation may ignore the input (too many people were annoyed when they got booted out of their masturbatory virtual fantasies due to falsely interpreted input, so the software was fine tuned to err on the side of caution).
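To give a rough idea of what “err on the side of caution” could look like, here is a minimal sketch of a hypothetical exit parser. Everything in it (the phrase, the threshold, the should_exit helper) is made up for illustration; the only point is that it demands a near-perfect match before it triggers an exit, and silently ignores anything sloppy.

```python
# Hypothetical sketch: the exit routine only fires when the input matches
# almost perfectly; anything ambiguous is ignored rather than risking a
# false exit. The phrase and threshold are illustrative assumptions.
from difflib import SequenceMatcher

EXIT_PHRASE = "there is no place like home"
CONFIDENCE_THRESHOLD = 0.97  # tuned high so sloppy input never triggers an exit

def should_exit(heel_clicks: int, spoken_phrase: str) -> bool:
    """Return True only if the exit gesture was performed almost exactly."""
    phrase_score = SequenceMatcher(None, spoken_phrase.lower().strip(),
                                   EXIT_PHRASE).ratio()
    return heel_clicks == 3 and phrase_score >= CONFIDENCE_THRESHOLD

# A slightly slurred phrase or an extra heel click simply does nothing:
print(should_exit(3, "there is no place like home"))  # True
print(should_exit(3, "theres no place like home"))    # False (close, but not exact enough)
print(should_exit(4, "there is no place like home"))  # False
```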
Here is the question: given the imperfect exit routine, the fact that you were not counting how many times you jacked in, and that there is no user interface whatsoever, how do you know you are out of the simulation and in the real world?
I mean, how many times do you try the exit routine until you assume you are out? What if there is a parser glitch? What if you are somehow messing it up each time? What if someone accessed your machine while you were jacked in, and changed the exit routine to something else?
How do you know if the splitting headache or stomach pain is just an ordinary migraine or indigestion, or a symptom of your body slowly getting dehydrated or starving in the real world? How often do you check and do the exit routine?
I have another scenario for you, which is a variation on the one above. Let’s say we have a Total Recall-like technology that can give you false memories that are nearly indistinguishable from the real ones. It is a bit more advanced than that however, since the memories don’t follow a rigid script but rather are built by a powerful AI director, based on your personality, memories and behavioral traits. Just like the VR technology above, the simulator can dig through all of your memories and pull out useful bits to build a very convincing and very real simulation.
You can of course specify parameters and a desired outcome. It can be anything from spending the night with that girl you have a crush on, to becoming an astronaut and walking on the surface of Mars. The AI will take the starting point and the desired outcome and figure out a realistic scenario in which you end up achieving that outcome. It is not a real time simulation, and you are not actually in control – the AI director manufactures the whole thing and makes the decisions for you to maximize the impact of the simulated scenario. This means you can’t possibly fuck up your dream date, or mess up that big audition that lands you a kick ass role in a great movie and makes you a superstar. When you later recall these memories it will feel like you actually made all the relevant choices, even though you didn’t. The whole process of implanting memories takes only a couple of minutes, even though you may get a whole lifetime of experiences that way.
So you buy yourself a fake memory implant, and you set it up to simulate a whole new life for you. It will start at early childhood, and then proceed until you are about 30 years old. You set the career choice to be random, and maximize random encounters and entropic events to make this experience as different and unexpected from your current life as possible. You also require that the last fake memory to be implanted is one in which you go and purchase a fake memory implant with the exact same set of instructions, and run it.
Assuming that the memory machine is perfectly capable of such recursion and it doesn’t crash or run out of resources, what will happen when you start the implant process? Since you are running this experiment at home, and no one will actually check up on you until sometime the next day, you may actually get hundreds if not thousands of lifetimes recursively implanted. Each one feeds off the experiences and desires of the previous ones, and so the personality you assume in these memories starts diverging from your original one quite rapidly.
What happens when someone finally finds you and disconnects you? Who will you be? Will you even remember your original self, or will you be mostly a product of the AI director that orchestrated your memories?
I guess to answer such a question we would need to know the actual storage capacity of the human brain. Given such a torrent of new experiences, what would happen to the old ones? Would you completely forget your original life? Would you only remember the last few iterations? Let’s assume that there is no way to distinguish the implanted memories from the real ones – they are biologically identical and carry the exact same amount of emotional weight and meaning.
Anyways, those are my two interesting scenarios for you to ponder today.
For the recursive Matrix one, to be sure that the exit process is, in general, working, I would exploit the fact that in every universe up the chain I’m sat in a chair and jacked in, whereas in the simulation I can be doing something else. So you do anything to change your position/the world around you, attempt the exit routine, and if it goes away and you’re back in the chair then you successfully exited that layer of the simulation.
Knowing when to stop would be a harder thing, since you could never quite be certain of the difference between failing the routine and being in the top layer. But if I’d had a reasonable success rate at exiting all the other layers then after a while I’d assume there was no more exiting to be done. If the failure rate was high then it’d get tricky…
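Here is a minimal sketch of that strategy, with obviously hypothetical helpers standing in for “move somewhere else”, “try the exit routine” and “am I back in the chair?”. The patience limit is an arbitrary assumption, and as noted above, hitting it never makes the top layer certain.

```python
# A rough sketch of the commenter's strategy: make this layer look different
# from "sitting in the chair", try the exit routine, and treat snapping back
# to the chair as proof that one layer was peeled away.
MAX_CONSECUTIVE_FAILURES = 10  # arbitrary patience before assuming you're out

def peel_layers(move_somewhere_else, attempt_exit_routine, back_in_the_chair):
    """Keep exiting until repeated failures suggest there is nothing left to exit."""
    layers_exited = 0
    failures = 0
    while failures < MAX_CONSECUTIVE_FAILURES:
        move_somewhere_else()        # make this layer distinguishable from the chair
        attempt_exit_routine()
        if back_in_the_chair():      # environment snapped back: we exited a layer
            layers_exited += 1
            failures = 0
        else:
            failures += 1            # either we fumbled the routine, or this is reality
    return layers_exited             # a best guess, never a certainty at the top layer
```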
I suggest that any full-world Matrix simulations be designed with a simple means of testing whether or not you’re in the simulation; something uncomplicated that lets you know for sure whether you’re in reality or simreality. Could be anything – for example in simland you could always have a big blue dot on the back of your hand, or the word “Sim” hovering on the edge of your vision (although that last one might get annoying).
For the false memories… I’m not sure how such a thing would work – it’s designed to insert a couple of memories of some fun experience, but if you take it to an extreme and insert half a lifetime’s worth of memories then there’s going to be some conflict with your existing memories. If it overwrites conflicting memories then there’d be very little left of you by the time they shut it off.
On the other hand, if it doesn’t then you’d be left with a horrific mess of patchwork/overlapping/conflicting memories, limited only by the number of memories you’re physically capable of storing. You couldn’t sensibly have full memories of 2 completely different lives all at once, much less an even larger number of lives, and I suspect it would do you immense psychological damage and leave you with some kind of dissociative disorder.
We don’t currently have any kind of simulator available, but there’s something that can provoke a similar effect: lucid dreaming. There are even ways to induce lucid dreams and prolong their duration.
The difference is that our brain doesn’t produce a good simulation, so there are some good reality checks, like turning a light switch on or looking at a watch.
@Matt: of course, any simulation should have an indicator, but if someone can alter the EXIT routine they could probably eliminate any kind of reality checker from the simulation.
I don’t think there’s a foolproof way to be certain whether we’re experiencing “real” sensations, and we would probably accept it as real and continue with our lives until some projector falls from the sky.
I would also assume that a Matrix-like environment that is plugged into you would have, built in by default, a couple of routines to check your health and unplug you or call 911 (USA) or 112 (Europe) if you get dehydrated or starve.
Great matrix-world philosophical problem you pose there. With the poor design decisions we currently see everywhere, I’m sure the exit routine would be “kill yourself”… :)
In any case, the only way to be sure that you are on level 0 is to figure out a limitation in the simulation. At least you can safely test any idea you have for such a limitation with the (possibly simulated) matrix machine in your room, on whatever level you are on right now. If your machine has the limitation, then the “outer” machine would have it as well – and if it doesn’t, you are finished.
Here is one thing to try that could *possibly* be one such limitation: find two computers that are on par with your matrix hardware in computational power. Run a hard task on both of them, e.g. video compression. In a simulation, the simulated computers should not be able to keep up the speed when both run at the same time.
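As a rough illustration of that test, here is a sketch that times a heavy CPU task run alone versus run on two “machines” at once, and flags a suspicious slowdown. The stand-in workload and the 1.5x threshold are arbitrary assumptions; on real multi-core hardware you would expect little to no slowdown, which is exactly the point of the test.

```python
# Sketch of the proposed limitation test: if two "full-speed" machines share
# one underlying (simulated) compute budget, running both at once should take
# noticeably longer than running one alone. All numbers here are illustrative.
import time
from multiprocessing import Process

def heavy_task(iterations: int = 5_000_000) -> None:
    """Stand-in for something like video compression: pure CPU grinding."""
    total = 0
    for i in range(iterations):
        total += i * i

def timed(n_processes: int) -> float:
    """Run the task on n 'machines' at once and return wall-clock seconds."""
    procs = [Process(target=heavy_task) for _ in range(n_processes)]
    start = time.perf_counter()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    solo = timed(1)
    together = timed(2)
    print(f"solo: {solo:.2f}s, together: {together:.2f}s")
    # If the concurrent run is much slower, the commenter's argument says the
    # "computers" may be sharing one simulated budget; otherwise, no limitation found.
    print("suspicious slowdown" if together > 1.5 * solo else "no obvious limitation")
```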