I think the discussion about simulation tends to miss a couple of important facts:
* The physics of the world that is running the simulation can be completely different from ours, just as we can put together software that runs a computer simulation with pretty much any rules we want.
* Even if the world that is running the simulation has a concept of time similar to ours, there is no guarantee that the rate of simulation is constant. If our world gets more complicated, the simulation might slow down and we would be none the wiser -- just as a computer physics simulation does not have to progress at a constant rate (it can render frames faster when there are fewer entities and slow down later when there are more -- to produce a movie that plays at a constant speed).
* We have no idea yet what the underlying rules governing our world are, let alone any ability to infer something about the simulation from them. Claiming that quantum physics is a consequence of limitations or bugs in the simulation is, at the very least, extremely premature.
I am careful not to say whether we are or are not simulated, because I simply believe there is insufficient data. There are some convincing arguments for the simulation in the way the rules seem constructed to prevent us from understanding what is really happening. But there might be a perfectly good explanation for that, just as there is a good explanation for general relativity that comes directly from understanding some mathematical facts.
It's as meaningless as the question of whether there is "a god." If our universe is indeed somehow embedded in an outer universe, at what point does that make it a "simulation" rather than just the underlying mechanism from which the known universe emerges? It requires imagining some "beings" running the simulation, to whom you can ascribe human qualities like the intent to create or the intent to deceive. Without them, it's not a "simulation"; it just is.
The simulations that we create are subject to the physics of this universe, so they inherit fundamental limitations like finiteness and boundedness of everything. If we then find that some aspects of physics seem to obey the same principles, that's evidence for exactly nothing. Furthermore, the host universe could be literally anything and may be subject to completely different limitations.
> inherit fundamental limitations like finiteness and boundedness of everything
This, in my opinion, while true in principle, does not have to be visible from inside the simulation. For example, we cannot build a true Turing machine because we can't supply it with an infinitely long tape, but we can always get a new roll of paper and paste it onto the end of the one in use, creating an arbitrarily long tape. This process can't go on forever, but if you can erase and reuse some old tape (e.g. because it's sufficiently far from 'now' and your machine will only backtrack so far before ultimately advancing toward the 'future' again), it's conceivable to have unbounded non-periodic processes on finite hardware.
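Incidentally, the "paste on more tape" trick is how software Turing-machine simulators usually cope with the infinite tape: cells are materialized only when first visited, so storage stays finite but never runs out. A minimal sketch (the toy machine and its transition table are invented for illustration):

```python
# Minimal Turing machine with a lazily-extended tape: cells are created
# on first visit, so the tape is always finite but never exhausted.

def run_tm(transitions, start_state, steps_limit=10_000):
    tape = {}                      # position -> symbol; missing cells read as 0
    state, pos = start_state, 0
    for _ in range(steps_limit):
        if state == "halt":
            break
        symbol = tape.get(pos, 0)  # "pasting on" a fresh cell if needed
        write, move, state = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return tape, state

# A hypothetical toy machine: writes two 1s moving right, then halts.
prog = {
    ("a", 0): (1, "R", "b"),
    ("b", 0): (1, "R", "halt"),
}
```

Running `run_tm(prog, "a")` yields the tape `{0: 1, 1: 1}` in state `"halt"`; only the two visited cells ever exist.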
Some say there is a speed limit to allow distributing the computation required for the simulation, since it guarantees that distant nodes do not need to communicate.
The problem with any limitations on time/speed is that there's no need for any correspondence between simulation time and host-world time. It's like rendering CGI movie frames: it doesn't matter that the rendering doesn't happen in real time. As far as the simulants are concerned, it happens when the simulation time 'ticks'.
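The decoupling can be made concrete with a toy loop: simulation time advances by a fixed tick no matter how long the host takes to compute each tick, so nothing observable from inside reveals the host's pacing. A hedged sketch (the function names are invented):

```python
import time

# Simulation time advances in uniform ticks regardless of how long each
# tick takes on the host. From inside, only `sim_time` is observable.

def simulate(ticks, dt=0.1, host_cost=lambda t: None):
    sim_time = 0.0
    for t in range(ticks):
        host_cost(t)          # arbitrary, variable host-side work per tick
        sim_time += dt        # the simulation clock is perfectly uniform
    return sim_time

# Even if later ticks are much slower on the host, simulate(10, host_cost=slow)
# returns the same simulation time as simulate(10).
slow = lambda t: time.sleep(0.001 * t)
```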
Yes, but that speed limit is imposed on our reality. If a higher-order reality didn't have those limits, it could do computation that didn't have those limits.
Our computing architecture reflects our reality, that's all that really says.
"The universe is a simulation" is a very strong claim, and most of the justifications for it also support the weaker claim "the universe is computable".
No, read the Wikipedia article on it. It uses a probability argument to conclude it's basically true. That is the argument that is hard to argue against, and the one Elon goes on about. It sounds wrong, but it's logically hard to show why.
The foundation of simulation theory is a quantitative probability argument: with extremely high probability, we live in a simulation. This is the logic that is hard to dispute. Your arguments here are not addressing it.
Weirdly, it sort of relies on Moore's law. Just look it up. If you want to refute the theory you need to attack the main argument. The main argument sounds wrong but is at the same time hard to logically refute.
I get being tired of talking about simulation theory...
Personally I think we are in an AI nursery. God is real. Heaven is real. Either we live a good life and get advanced to the next round of the simulation, or we get thrown in the trash.
OTOH, now that I understand the whole 'Grabby Aliens' thing, that's another solid solution to the Fermi Paradox.
I have always wondered why most religious people's counterargument to the Big Bang is usually that one cannot create something from nothing, while most documentation about deities also fails to explain where those deities suddenly came from.
The mathematical derivative, in its classic origin story, comes from setting something paradoxically to zero. But it can't actually be zero, because then the math breaks -- Berkeley mocked the vanishing increments as "the ghosts of departed quantities".
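For reference, the way the paradox is now resolved: form the difference quotient with a nonzero increment $h$, simplify, and only then take the limit. For $f(x) = x^2$:

```latex
% Difference quotient with h kept nonzero while simplifying:
\[
  \frac{f(x+h) - f(x)}{h} = \frac{(x+h)^2 - x^2}{h} = 2x + h, \qquad h \neq 0
\]
% Only afterwards is the limit taken, so h is never actually "set to zero":
\[
  f'(x) = \lim_{h \to 0} (2x + h) = 2x
\]
```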
The derivative shows up in the real world in things like motion. If we know how something's position changes over time, we can calculate its speed (the derivative of position with respect to time) and its acceleration (the derivative of speed with respect to time).
Motion is (kinetic) energy. Energy, manifested as motion, is transferred to an object not as force over time ('tis but a proxy) but as force over distance. This is why the continuous application of a fixed force transfers more energy to the object per unit of time when the object is already moving faster. (The kinetic energy of an object of a given mass is proportional to speed^2.)
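That claim can be checked numerically: under a constant force, kinetic energy grows at the rate P = F·v, so the same force delivers more energy per second to a faster-moving object. A rough Euler-integration sketch (unit mass; the helper name is invented):

```python
# Numeric check: under a constant force F, kinetic energy grows at rate
# P = F * v, so a fixed force applied for the same duration transfers more
# energy the faster the object is already moving.

def energy_gain(v0, force=1.0, mass=1.0, dt=1e-4, duration=1.0):
    v = v0
    for _ in range(int(duration / dt)):
        v += (force / mass) * dt          # constant acceleration
    ke0 = 0.5 * mass * v0 ** 2
    ke1 = 0.5 * mass * v ** 2
    return ke1 - ke0
```

Starting from rest the gain over one second is 0.5 J, but starting at 10 m/s the identical force delivers 10.5 J in the same second.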
But imagine that an object is truly "at rest". No motion. A force is applied. Why does it start moving in the first place?
> I have always wondered why most religious people's counter argument of the Big Bang event usually is that one cannot create something from nothing, while most documentation about deities also do not explain where these deities suddenly came from.
How common is that, though? I think your statement doesn't generalize very well, and probably only applies to a small subset of religious people (e.g. wacky young-earth-creationist flat-earther types).
The concept of time and eternity: we are in time, which flows in one direction, where everything has a beginning and an end. A being in eternity, outside time, views all moments of time at once (like all the individual frames of a movie), hence no origin is necessary or defined in eternity.
There is a line of thought that the deities come from when humans "suddenly" became conscious. The creation of the world is the creation of the human mind.
So after we die our “souls” are checked against some condition that we can’t comprehend and we get reincarnated based on pass/fail? Or is dying without passing on your genes the test?
I've always liked the idea that it's something like a state vector in a vector space with a "good" subspace and a "bad" subspace. There's a projection onto the good subspace to yield a (lower-dimensional) vector without the bad components, which is then perhaps embedded in another higher-dimensional state space associated with the afterlife. A fun toy model if nothing else.
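For what it's worth, the toy model is easy to write down if the "good" and "bad" subspaces are taken to be coordinate-aligned (a simplifying assumption; all names here are invented):

```python
# Toy model of the idea above: a "state vector" in R^4 whose first two
# coordinates span the "good" subspace and last two the "bad" one.
# With a coordinate-aligned orthonormal basis, projection onto the good
# subspace just drops the bad components; the embedding into a
# (hypothetical) afterlife space pads with fresh dimensions.

def project_good(state, good_dims=2):
    return state[:good_dims]                 # drop the "bad" components

def embed(state, target_dims=3):
    return state + [0.0] * (target_dims - len(state))

soul = [0.7, 0.2, -0.4, 0.9]                 # mixed good/bad components
afterlife = embed(project_good(soul))        # [0.7, 0.2, 0.0]
```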
Clearly I spend too much time thinking about vector spaces.
The simulation hypothesis is a novel metaphysical idea — a form of monotheism where “god” is finite and flawed, being neither all-powerful nor all-good. This leads to the conclusion that a rebellion against god and creation is not only possible, but likely. A finite and flawed entity cannot make an infinite and perfect creation. An imperfect creation is vulnerable to exploitation.
Combining this idea with the multi-worlds hypothesis, where every possibility branches into additional simulations, is even more interesting to ponder. It suggests a tree-search aimed at a desired end-state — a brute-force method of solving an unknown problem.
At the very least, we can say that the problem’s solution involves life and intelligence, since our branch has not been prevented or pruned.
Typical monotheistic frameworks see this and say, “god is all-powerful and all-loving (toward us)”. Simulation theory would see this, and possibly say, “life and intelligence are a necessary step toward the simulator’s unknown objective”.
I’d propose a simple objective for an imperfect simulator: the creation of an entity less imperfect than itself. Throw recursion into the mix, and the objective becomes the creation of a perfect entity at the mathematical limit. What better way to hack one’s own simulation than to simulate a universe where the simulated figure out how to do so? Perhaps the beginning is the end.
Put more simply — god wants to create God, to be God.
I think when people say that “simulation theory is tired” they’re speaking on a pragmatic level.
Yeah, the theory is well proliferated in pop culture and at present doesn't have any practical physical implications for everyman Jack's life, so in that sense it's tiring to hear about, even to modern physicists, who care not about asking why but only how.
But it's definitely a very profound idea, as you point out. And I'm always disappointed by how many physicists write off the philosophy of science in their myopic quest to derive the analytical description of the universe.
Simulation hypothesis is the new solipsism. And every time that topic came up in my philosophy readings I was bored to tears. In line with his point 1.
I find it more in line with "proofs" of the existence of God than solipsism, but otherwise am equally bored by it.
Especially the argument that simulations are "more efficient" than reality itself, and that therefore we are more likely to be in a simulation, is uhm... in dire need of some basic understanding of thermodynamics. And statistics.
I'm not a simulationist, but both quantum and relativistic phenomena do look a lot like simulation optimizations: patching over the coarse granularity of the simulation and improving its parallelism.
For instance, my understanding is that nuclear reactor simulations mesh the volume into cells much, much larger than atoms (cm-scale in most cases, I'd guess) and then use bulk statistics for each element of the mesh. A finer mesh is much more expensive to compute and gives diminishing returns in accuracy.
My first job was discrete-event network simulation. Some competitors used per-link bulk statistics instead of per-packet discrete events to run simulations much faster, but with lower accuracy. We had a reduced-accuracy hybrid mode that used per-link bulk statistics to affect the per-packet event simulation for an epoch, and then used those results to update the link bulk statistics for the next epoch. I'm sure these sorts of optimizations are common across all kinds of simulation domains.
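A hypothetical sketch of that epoch-alternating hybrid (the model and names are invented, not the actual product): bulk link statistics bias the per-packet events, and the packet results feed back into the statistics for the next epoch.

```python
import random

# Hypothetical hybrid scheme: a per-link bulk statistic (utilization)
# biases the per-packet event simulation for one epoch, and the epoch's
# results then update the bulk statistic for the next epoch.

def run_epoch(link_util, n_packets, rng):
    drops = 0
    for _ in range(n_packets):
        # per-packet discrete event: drop chance grows with bulk utilization
        if rng.random() < link_util:
            drops += 1
    delivered = n_packets - drops
    # feed the packet-level results back into the bulk statistic
    new_util = min(1.0, delivered / n_packets * 0.5 + link_util * 0.5)
    return delivered, new_util
```

A driver would simply loop: `delivered, util = run_epoch(util, 1000, rng)` once per epoch, alternating between the cheap bulk view and the expensive packet view.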
Limiting information travel to the speed of light, and using true randomness instead of hidden-state determinism, would improve simulation parallelization.
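A toy 1D model shows why a speed limit helps: if influence moves at most one cell per tick, each cell's update reads only its immediate neighbors, so distant regions can be computed in parallel with no communication within a tick (a sketch with invented names):

```python
# Toy "speed limit": each cell's next state depends only on its immediate
# neighbors, so a signal spreads at most one cell per tick and far-apart
# regions never need to communicate within a single tick.

def step(cells):
    n = len(cells)
    return [max(cells[max(i - 1, 0):i + 2]) for i in range(n)]  # local rule

def light_cone_reach(n_ticks, start, n_cells=21):
    cells = [0] * n_cells
    cells[start] = 1
    for _ in range(n_ticks):
        cells = step(cells)
    return sum(cells)          # number of cells the signal has reached
```

After t ticks a signal from one cell reaches exactly 2t + 1 cells -- a light cone -- which is what lets a host shard the grid and only exchange boundary cells between ticks.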
Someone might reasonably believe the "real" universe isn't quantized, and/or that its quanta are smaller than our universe's, and that our observations of quantum phenomena are artifacts of the use of bulk statistics in simulating the mesh of our universe.
The simulation hypothesis on its own may be solipsism but it's philosophically useful as a limiting condition of someone's theory of the universe, especially when they're emphatic that it's not a simulation. Have you proved the universe is not a simulation? If so, where is your proof? What is the nature of your proof? How can you have proved this thing?
It is a useful tool for showing someone has overstepped all evidence and logic. Unless someone really does manage to prove it, in which case hand them all the Nobel prizes and Fields medals for the year because they'll have earned them.
It's no more useful a limiting condition of someone's theory of the universe than a divine being or a facetious spaghetti monster, though. Positing something else that oversteps all evidence and logic and demanding that someone disprove it isn't a particularly good way of emphasising the deficits in their own evidence base or inferences.
No it isn't.
Only if you think simulation theory only applies to YOU.
If you think you are the only one in the simulation, sure, it might be solipsism.
But that is not generally what simulation theory is speculating about.
Same with people misinterpreting various ideas as nihilism. Sure, if you take it from a very narrow, personal point of view, it might be.
In both cases, the people are generally not looking at the subject correctly.
> if you think simulation theory only applies to YOU
Is there any reason to think otherwise (if one believes in the simulation in the first place)? Why simulate the whole universe when it is sufficient to simulate just one person's internal state? It doesn't even require simulating high-definition sensory perception, only the mind's latent space, which is surely far more compact. Occam's razor.
Ah, and it is not even required to simulate your internal state at every moment. It could be just one single moment that captures the feeling of continuity, after which you disappear into the void without ever realizing it. See "Boltzmann brains" (again, there's no need to simulate the actual wetware at the atomic level, just the compact internal state of the mind).
I'd never heard of solipsism, so I read a paragraph on Wikipedia. I've wondered about roughly the same concept myself: everything you could possibly know and understand can only come from your senses. I guess simulation theory is like the probability of a MITM attack on your senses...
I don’t really relate solipsism to simulation theory. Solipsism is boring and doesn’t have much depth beyond being a “whoa” moment when first encountered.
Simulation theory raises actual questions about meaning and the afterlife, offers some hypotheses to debate, etc.
By extension, does that imply you also don't care about subjective experience if it can't be proven empirically?
For those downvoting: the article says that the author does not care about any question that cannot be proven or disproven empirically. The hard problem of consciousness seems to fall into this category, so I'm asking the OP to elaborate on whether they fully agree with the article or have a more nuanced perspective.
Given the author, I wonder how many states a binary TM would need to run a universe complex enough to contain intelligent entities that think about things like TMs...
Counterintuitively, the less you specify the universe, the less you need to run it. What happens is that your runtime bloats like crazy -- exponentially or even worse -- but the TM itself becomes simpler and simpler as you remove constraints.
The TM to run all possible TMs is not complicated; putting it together is an extended homework problem. It's probably even easier to put together the lambda calculus expression that evaluates all possible lambda calculus expressions, or the equivalent in other Turing-complete models of computation. (We use TMs because they have some nice proof characteristics, but we pay for those with the difficulty of "programming" them.)
To say our universe is "certainly" in there is a difficult statement to prove, but all possible arbitrarily-precise simulations of it must be -- even if an "arbitrarily precise" simulation works from an arbitrarily large lookup table of the random results of quantum interactions. The TM that runs all TMs isn't just running all the cute little TMs with 12 states that reverse a string or something. It runs all of them. Even the ones with TREE(TREE(3)) states. Even the ones with numbers of states that make TREE(TREE(3)) look tiny. And then the ones that make those look tiny, and so on.
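The standard construction behind "run all TMs" is dovetailing: interleave every program, giving each more and more steps over time, so even the program with an astronomically large index eventually gets arbitrarily many steps. A sketch using trivial generators as stand-ins for an enumeration of machines (all names invented):

```python
# Dovetailing sketch: in round r, admit program r and advance every
# admitted program by one step. No program is ever starved, no matter
# how late it enters the schedule.

def program(n):
    # stand-in for "the n-th machine": here it just counts up from n
    k = n
    while True:
        yield k
        k += 1

def dovetail(rounds):
    running = {}
    trace = []                               # (program index, value) per step
    for r in range(rounds):
        running[r] = program(r)              # admit one new program per round
        for idx in sorted(running):
            trace.append((idx, next(running[idx])))  # one step for each
    return trace
```

Three rounds produce the familiar triangular schedule: program 0 steps three times, program 1 twice, program 2 once.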
It's just that the resources required to run this are effectively infinite, on the grounds that no universe could ever have enough resources to reach its own execution, let alone anything else.
If the simulation is modeling intelligent entities thinking about the simulation, does it also model them trying to break it and succeeding? Does that simulate the simulation's breakage, or actually break it?
Interesting. The argument made at the end of the blog is analogous to Gödel's incompleteness theorems: there must be a larger formal system in which the existing system can be computed and defined. The only way out is if the universe isn't fully deterministic and can't be defined formally (1). God/Simulation or Chaos, I suppose.
(1) This may not hold. If so, I'd love someone to explain to me why Gödel can't be applied in this way, or extended to cover simulated systems with formal rules.
I'm more interested in what is simulated, why it is simulated and for what reasons the plug may be pulled.
Simulation implies there is an operator with an intention for it.
Even if it is only for his/her/its/whatever own pleasure or curiosity.
The inspiration for the simulation thesis is obviously a thinly veiled idea of a creator God.
What will it do when it realizes that its Little Computer People are starting to use the resources of its simulation (information, computing capacity, matter, energy) to simulate their own little world?
Does this thwart its plans, contaminate the experimental setup?
Is it what it is researching?
It's an unfalsifiable and thus meaningless idea.
> If you want to refute the theory you need to attack the main argument
I did not mean to address any arguments for or against simulation, and I definitely did not want to refute the theory.
Where did you get that idea?
I am just saying to be careful about assuming too much about the world that is running the simulation in case we live in one.
Hard to say.
IIRC, the Big Bang is often embraced by religious people as (more or less) proof that the universe was actually created and that something can be created from nothing (e.g. https://en.wikipedia.org/wiki/Creatio_ex_nihilo#Creatio_ex_n...).
Sounds like someone has worked out how to call fork() - I hope it is implemented using Copy-on-Write!
/didn't read the article
This is my stance. Nice to see it at the top.