I think it’s really interesting to see the similarities between what Wolfram is saying and the work of Julian Barbour on time being an emergent property. Both suggest a similar underlying ontology for the universe: a timeless, all-encompassing realm containing all possible states / configurations of everything. But what’s really fascinating is that they reach this conclusion through different implementations of that same interface. Barbour talks about a static geometric landscape where time emerges objectively from the relational (I won’t say causal) structures between configurations, independent of any observer. On the other hand, Wolfram’s idea of the Ruliad is that there’s a timeless computational structure, but time emerges due to our computational limitations as observers navigating this space.
They’ve both converged on a timeless “foundation” for reality, but they’re completely opposite in how they explain the emergence of time: objective geometry vs. subjective computational experience.
I was literally thinking of the same similarities. Barbour's exposition of the principle of least action as time itself is interesting. There's a section in The Janus Point where he goes into detail about the fact that there are parts of the cosmos that (due to cosmic inflation) are farther apart in light-years than the universe is old in years, and growing in separation faster than c, meaning that they are forever causally separated. There will never be future changes in state in one that result in effects in the other. In a way, this also relates to computation, maybe akin to some kind of undecidability.
Another thing that came to mind when reading the part about how "black holes have too high a density of events inside of them to do any more computation" is Chaitin's incompleteness theorem: if I understand it correctly, that basically says that for any formal axiomatic system there is a constant c beyond which it's impossible to prove in the formal system that the Kolmogorov complexity of a string is greater than c. I get the same kind of vibe with that and the thought of the ruliad not being able to progressively simulate further states in a black hole.
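For what it's worth, Chaitin's result can be stated a bit more compactly (my paraphrase; the symbols $F$, $c_F$, $K$ are the conventional ones, not from the article): for any consistent, sufficiently strong formal system $F$ with a computable set of axioms,

```latex
\exists\, c_F \in \mathbb{N} \;\; \forall s :\quad F \nvdash \; K(s) > c_F
```

That is, $F$ proves "the Kolmogorov complexity of $s$ exceeds $c_F$" for no string $s$ at all, even though all but finitely many strings do have complexity above any fixed bound.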
>There's a section in The Janus Point where he goes into detail about the fact that there are parts of the cosmos that (due to cosmic inflation) are farther apart in terms of light-years than the universe is old, and growing in separation faster than c, meaning that they are forever causally separated. There will never be future changes in state from one that result in effects in the other. In a way, this also relates to computation, maybe akin to some kind of undecidability.
Oh, I love this hint. However, even taking for granted that no faster-than-light travel is indeed an absolute rule of the universe, that doesn't exclude wormholes or entangled particles.
Actually, the parts of the universe receding from us faster than the speed of light can still be causally connected to us. It’s a known “paradox” with the following analogy: an ant walks along an elastic band toward us at speed c, and we stretch the band away from us by pulling on the far end at a speed s > c. Initially the ant, despite walking in our direction, gets farther away, but eventually it does reach us (in exponential time). The same is true for light coming from objects that were receding from us at a speed greater than c when they emitted it. See https://en.m.wikipedia.org/wiki/Ant_on_a_rubber_rope
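The arrival time can be checked numerically. A quick sketch (unit band, ant speed 1, far end pulled at 10x the ant's speed; all numbers are illustrative, not physical):

```python
import math

def time_to_reach(L0, v, c):
    """Ant walks at speed c along a band of initial length L0 whose far
    end is pulled away at speed v, stretching the band uniformly.
    The ant's fraction of the band advances at c / (L0 + v*t), which
    integrates to (c/v) * ln(1 + v*t/L0); setting that to 1 gives the
    arrival time -- always finite, but exponential in v/c."""
    return (L0 / v) * (math.exp(v / c) - 1)

# Band stretching at 10x the ant's walking speed: the ant still arrives.
t = time_to_reach(L0=1.0, v=10.0, c=1.0)

# Sanity-check by numerically integrating the fraction travelled.
steps = 1_000_000
dt = t / steps
fraction = sum(dt / (1.0 + 10.0 * i * dt) for i in range(steps))
print(t, fraction)  # fraction is ~1.0: the ant has just arrived
```

The exponential blow-up is visible in the closed form: at v/c = 10 the trip already takes ~2200 time units on a unit band.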
The universe doesn't need to evolve for us to have experience. We would experience evolution through the state space because its structure is oriented so as to experience evolution through time. Each point in experience-time (the relative time evolution experienced by the structure) is oriented towards the next point in experience-time. Even if all such points happen all at once, the experience of being a point in this structure oriented towards the next point is experienced subjectively as sequential. In other words, a block universe would contain sequences of Boltzmann brains who all subjectively experience time as sequential.
The real question is why would such a universe appear to evolve from a low entropy past following a small set of laws.
I'm not sure why experience requires the arrow of time or location. Your experience does, and it might seem that is a universal rule, but only because you can't possibly intuit a world in which time doesn't flow.
I think Dr. Manhattan is a good fictional reference. He existed in a timeless form. Everything was happening simultaneously for him. Everyone else experienced him in a timelike way, but only as a matter of perspective.
I think that time isn't what we think it is - but I don't think it's all already set; rather I think that the past can be constrained by the future just as the future is constrained by the past.
I don't think that there's spooky action at a distance (it's fundamentally equivalent to retrocausality, and the consequences of the distant foreign event cannot outpace its light cone anyway).
I think it's a superposition of states of a closed time-like curve thing being fleshed out as its contradictions are resolved and interactions are permitted between its colocated non-contradictory aspects.
But I'm not a physicist, so that's probably all just bullshit anyway.
I don't think they are saying anything similar at all. Julian Barbour finds a way to get rid of Time completely (by saying every possible state exists and there must be some law that favours states that _seem_ to be related to _apparently_ previous states). Wolfram is more focused on making sense of 'time is change' through the lens of computation.
Idk, just looking at it now, Barbour seems much, much more rigorous. The linked article is more “using scientific terms to muse about philosophy” than physics, IMHO. For example:
In essence, therefore, we experience time because of the interplay between our computational boundedness as observers, and the computational irreducibility of underlying processes in the universe.
His big insight is literally the starting point of Hegel’s The Science of Logic, namely that we are finite. That in no way justifies all the other stuff (especially multiverse theory), and it’s not enough to build a meaningfully useful conception of time, at all. All it gets you is that “if you were infinite you wouldn’t experience time”, which is a blockbuster-sci-fi-movie-level insight, IMO.
I can’t help but think of Kant as I write this; he wrote convincingly of the difference between mathematical intuition and philosophical conception, a binary Wolfram would presumably, and mistakenly, identify with solid logic vs. meaningless buffoonery. But refusing to acknowledge our limits makes you more vulnerable to mistakes stemming from them, not less.
…the metaphysic of nature is completely different from mathematics, nor is it so rich in results, although it is of great importance as a critical test of the application of pure understanding—cognition to nature. For want of its guidance, even mathematicians, adopting certain common notions—which are, in fact, metaphysical—have unconsciously crowded their theories of nature with hypotheses, the fallacy of which becomes evident upon the application of the principles of this metaphysic, without detriment, however, to the employment of mathematics in this sphere of cognition.
Worth remembering at this point that Aristotle coined “physics” for the mathematical study of physis (nature), which was then followed up by a qualitatively different set of arguments interpreting and building upon that basis in a work simply titled metaphysics (after physics). We’ve learned infinitely more mathematical facts, but IMO “what is time, really?” will forever remain beyond their reach, a fact determined not by the universe but by the question itself.
TL;DR: if you’re gonna try to talk cognition you should at least admit that you’re writing philosophy, and ideally cite some philosophers. We’ve been working on this for a hot minute! Barbour seems to be doing something much less ambitious: inventing the most useful/fundamental mathematical framework he can.
I swear as I get older philosophy feels more and more like religion for intellectuals.
If you want to talk about cognition or time you should study science, not philosophy. You’re not going to learn about the universe in any significant way by studying Hegel or Aristotle or Kant harder.
I have read and appreciated your writings going back to the comp.lang.lisp days, but a blog post that starts with “if you haven’t read the previous post, please do before reading the rest of this one” is not what I would consider accessible. …and that previous post then asks the reader to first read a paper or watch a video before proceeding. While a decade later than what you wrote, Wolfram’s article is much more self contained and complete.
Whenever people criticize Wolfram the comeback is often, he’s just trying to discuss big ideas that mainstream science won’t talk about. Of course that’s not the reason for the criticism at all and I think your work here shows that it’s totally fine to speculate and get a little philosophical. The results can be interesting and thought provoking.
There’s a difference between big ideas and grandiosity. It also shows big ideas can stay scientifically grounded and don’t require making up corny terminology (Ruliad? lol).
More than that, "ruliad" is completely vacuous, too. "All possible rules applied to all possible states infinitely many times": every possible theory, including the right one, is in it. OK... thanks for defining this useless object.
That particular proposal was mathematically wrong for reasons I still find physically perplexing (it turns out that for some events quantum theory allows for stronger memory records - defined via classical mutual information - of entropy-decreasing events!). A simple example is in here:
https://arxiv.org/abs/0909.1726
(I am second author).
It's sort of funny that while the title alludes to the arrow of time, opening with a quote asserting "all measurements are in principle reversible", it pretty quickly gets to a different arrow of time - that of comprehension:
> "If you haven't read the previous post ... this won't make any sense"
Could you have demonstrated, perhaps accidentally, an alternative organising principle allowing temporal ordering to emerge in a computationally oriented ontology? Can the future only "make sense" if it temporally follows the past?
That's actually a great question, and one I've been wrestling with for years. Why do we perceive time as a sort of continuous monotonic flow? And I think it can be explained in terms of perception and comprehension, which I have a gut feeling can be formalized as a kind of preferred basis selection. But rendering that intuition into words (and math) has turned out to be quite challenging, which is why I haven't written about it yet. Maybe in the future :-)
Do physicists think time actually exists? I wonder if someone has reasoned that time is an accounting method that humans have developed to make sense of their experienced change of systems.
Wolfram uses the words progression and computation a lot in his essay, but there’s an implicit bias there of assuming a process is deterministic, or has some state it’s driving towards. But none of these “progressions” mean anything, I think. It seems they are simply reactions subject to thermodynamics.
If no one observed these system changes, then the trends, patterns, and periodicity of these systems would just be a consequence of physics. It seems what we call “time” is more the accumulation of an effect rather than a separate aspect of physics.
For example, I wonder what happens in physics simulations if time is replaced by a measure of effect amplitude. I don’t know, tbh, I am not a physicist so maybe this is all naïve and nonsense.
Time "exists" in physics in the same way everything else in physics does - namely, the value we measure with clocks in the real world satisfies all of the same properties (at least in certain regimes of the universe) as the thing we call "time" in various physics theories like relativity/classical mechanics. And those theories make (reasonably) correct predictions about the values we measure in the real world.
Is it possible that these properties are the result of some other interactions that have very different laws at a lower level? Absolutely! But the discovery of particles didn't cause the sun to disappear, if that makes sense.
Yes, spacetime is important for General Relativity, cosmology and thermodynamics. Whether it's fundamental or emerges from something more fundamental is an open question though.
I don't know the answer to your question but tangentially, many human concepts related to time definitely do not exist in a purely physical sense. Like being "late" or "early", things "taking too long" or "being slow". Being "out of time" or "just in time". These are all human concepts. Physically speaking (classically anyway), things all happen right when they are supposed to.
I find a lot of interesting links between spirituality and physics like this. One idea or message in spirituality is that everything happens exactly as "the universe" intends it to. It's meant to be a comforting thought as events (good and bad) occur in one's life, and to encourage one to detach from outcomes. Yet it's more or less parallel to classical determinism, as you mentioned.
> Physically speaking (classically anyway), things all happen right when they are supposed to.
- In a much larger universe, write down in a log book every event to every particle at every instant, from the Big Bang to the restaurant.
- Put it on the fireplace mantle and leave it there.
This is basically a log of a simulation. It exists in much the same way as an ongoing simulation would, except that its time dimension isn't shared with the simulating universe. But every observer within has had the same observations as if it did.
This assumes that a map, if sufficiently detailed, is identical to the territory.
Maybe it is, maybe it isn’t - but it is a highly debatable metaphysical assumption. I’m not sure how seriously we should take some people’s claims that they “know” that such an assumption is actually true
It's an argument about simulations, not about reality. If reality is a simulation, then arguments about simulations apply to it, but that's the big if.
Except for the randomness introduced in Quantum Mechanics.
If they ever solve the randomness, then if the map is down to every particle, then yes, the map and reality could be the same. But I think at that point you need a computer the size of reality to keep track of every particle.
Or, maybe the entire universe is one giant wave equation. But again, I think you need a computer the size of the universe to solve it.
If I took the binary representation of that log and XOR'ed it with a random binary string, then would the result also have observers with the same observations?
Ok but the act of writing it down would always take longer than the actual unfolding of the universe itself. Just like the halting problem, we can’t skip ahead at any point and we have no idea what will come next.
Sure, but the timebases are different. Maybe it took the butterflypeople a thousand butterflyweeks to write it out.
Let me restate the metaphysics a bit differently. Let's say there's no us, no butterflypeople, nothing at all. Entropy reigns supreme, no information is organized.
Now add the butterflypeople. They write the humanpeople's log book. Information exists in organized form. The humanpeople's bits have been divined out of the great entropic nothing. Maybe that's all it takes?
A fun variant of this is that the log can be taken at variable intervals and as long as it is sufficiently detailed, it can still capture all salient details.
Similarly a simulation run at some "tick rate" can also be run at 2x the rate while taking 1/2 the step per tick. Within the universe nobody would notice, as long as the steps were fine enough to begin with.
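For a simple deterministic update this is easy to check. A sketch (constant-velocity motion with invented numbers; for nonlinear dynamics the two runs only agree in the fine-step limit):

```python
def simulate(total_time, dt, v=1.0, x0=0.0):
    """Advance a particle moving at constant velocity v in ticks of dt."""
    x = x0
    for _ in range(round(total_time / dt)):
        x += v * dt
    return x

# Same simulated interval at 1x and 2x the tick rate (half the step per
# tick): the final state matches up to float rounding, so nothing inside
# the simulation can tell the two tick rates apart.
coarse = simulate(10.0, dt=0.1)
fine = simulate(10.0, dt=0.05)
print(coarse, fine)
```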
I think it was in Diaspora (or Permutation City?) that Greg Egan proposed that any tick rate would be unnoticeable to the simulated beings, including "none".
In other words, the movie Top Gun will continue to exist as-is, no matter how many copies are made of it, including none. Encoded as a digital file it is just a number, a pure timeless concept, it doesn't have to be written down to exist. It always existed on the number line, even before Tom Cruise was born. In fact, every encoding of Top Gun exists on the number line, in every compression format, in every resolution, even a future 16K resolution that was never filmed and has no display devices made for it yet. Its encoding as a 400GB long number is there, already, and will always be there.
In other words, any simulation, any log of events, any experience already exists in mathematics, in every encoding... somewhere on the number line. This includes the entire physical universe. This isn't hypothetical, it's necessarily true! Anything that can be represented by a finite amount of information must be on the number line.
Even if you assume the Universe lasts forever, you can break its history up into a sequence of states, each of which is finite. Then the series will exist on the number line as a set of points heading off to infinity.
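The "a file is just a number" point is literal. A tiny sketch (the byte string is a stand-in, obviously not an actual movie):

```python
# A file is just a number: encode it as an integer, decode it back,
# and nothing is lost.
data = b"Top Gun (stand-in for a movie file)"
n = int.from_bytes(data, "big")

# n sits on the number line whether or not anyone ever writes it down.
restored = n.to_bytes((n.bit_length() + 7) // 8, "big")
print(restored == data)  # True
```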
This kind of thought experiment seems like it breaks down due to the uncertainty principle. We can't exactly specify the full state of every particle in the universe. The universe might also be infinite and you can't enumerate an infinite set even without uncertainty, though you can write a generating function or recurrence relation for it, which seems to be Wolfram's point.
But why bother with this kind of detail? What's the difference between what you're imagining here and a normal reel of film? It can be played back, but even if it isn't, it records the state of events that happened, including observers that once existed and no longer do, experiencing events that once happened but no longer do. It's possible for a record to describe a canonical sequence even if the record itself doesn't change. Somebody outside of the record can view it out of order, speed it up, slow it down, pause it, reverse it. A film reel doesn't share the time dimension of its own universe in that way.
I'm struggling to come up with what this implies and why.
To your first point, if it's a simulated universe, the simulators can just choose to make it finite, and come up with their preferred particle behavior rules.
As observers, we perceive time as passing, but is there anything special in this perception? Looked at another way, everything could be frozen in a 4D log book and we couldn't tell the difference, or could we? In this interpretation, Napoleon is as alive (in 1820) as we are (in 2024.) A film reel is a similar concept, except it's just a 3D projection rather than a complete detailed 4D account.
Now shred that log to particles and scatter them everywhere, and you have the "dust theory". Neither the time dimension or the log are shared with the simulating universe, and yet they are still valid for the observers within the universe.
If the sequence of the log states is entirely deterministic based on the initial state, then you don't even need to actually write down the entire log for it to "exist". This is Greg Egan's Permutation City.
Can we reduce this to an estimate of survivorship bias? If there is only one universe, then our survival is clearly explained: we're in the only reality there is. If all possible universes exist, then we really lucked out in ending up in this one (well, depending on who wins the election I guess.)
In the middle are the permutations selected through the filter of other realities, when they chose which universes to simulate. We lucked out but not as much, because uninteresting universes never made it out of the entropic soup.
It would have to be a conditional estimate of course, because our sentience biases our contemplation.
Is any of what he’s saying here something he hasn’t essentially already said before?
The parts of this which were a little surprising to me (e.g. the bit about comparing time to heat, and the bit about running out of steps to do at an event horizon) iirc all linked to a thing posted a while ago?
I don’t share his enthusiasm for the phrase “computational irreducibility”. I would prefer to talk about e.g. no-speedup theorems.
I like thinking about hypergraphs that continually rewrite themselves. I've thought about it in terms of literary critique, or in "compiling" a novel. It reminds me of petri nets in a sense, where at any given moment, a character has a static model of the world, which can be diagrammed through a causal graph of conclusions and premises. Then, an event happens, which changes their understanding of the world; the hypergraph gets rewritten in response.
I've toyed with this with my own graph software when writing novels. It's of course impossible to fully document every character's model before and after every event that affects them, but even doing so at key moments can help. I've wished more than once that I could "compile" my novel so it could automatically tell me plot holes or a character's faulty leap in logic (at least, one that would be out of character for them).
I've also tried the more common advice of using a spreadsheet where you have a column for each character, and rows indicating the passage of time. There you're not drawing hypergraphs but in each cell you're just writing raw text describing the state of the character at that time. It's helpful, but it falls apart when you start dealing with flashbacks and the like.
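A minimal sketch of the graph-rewriting idea applied to a character's world-model. All the facts and names here are invented for illustration; this is a toy, not the software mentioned above:

```python
# Each character's world-model: conclusion -> list of premises it rests on.
# An event invalidates a premise; we "rewrite" the graph by pruning every
# conclusion whose support is gone, transitively.

AXIOMS = {"butler_was_in_the_house"}  # facts taken as given

def rewrite(model, invalidated):
    """Drop invalidated facts, then prune unsupported conclusions."""
    model = {k: v for k, v in model.items() if k not in invalidated}
    changed = True
    while changed:
        changed = False
        for concl, premises in list(model.items()):
            if any(p not in model and p not in AXIOMS for p in premises):
                del model[concl]
                changed = True
    return model

alice = {
    "knife_was_butlers": [],
    "butler_is_guilty": ["butler_was_in_the_house", "knife_was_butlers"],
    "alice_is_safe": ["butler_is_guilty"],
}

# Event: the knife turns out to belong to someone else.
alice = rewrite(alice, {"knife_was_butlers"})
print(alice)  # {} -- everything resting on the knife is pruned
```

Running the pruning to a fixed point is what makes it feel like hypergraph rewriting: one event can cascade through a character's whole chain of conclusions.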
Every time I read stuff like this I get super drawn to thinking about Sunyata* - In Mahayana Buddhism, my understanding is that Sunyata doesn't mean absolute nothingness or non-existence, but that all things are devoid of intrinsic, independent existence. Everything is empty of inherent nature because everything is interdependent... phenomena exist only in relation to causes and conditions. This relational existence assumes that things do not possess an unchanging essence... in the ultimate sense, there is no fixed reality. What might seem like "everything" is actually permeated by "nothingness" or "emptiness", and phenomena arise dependent on conditions, without an intrinsic, permanent nature.
The all-time-all-space-all-branches brane of the Ruliad we call the Universe is the continuous one-ness, and our selves are just the single-perspective projection models of that universe in our neurons that persist across edits to the neurons, until such a point as we update the model to see the larger picture, and we can call that Nirvana, if we wish.
Indeed. Not only that, but it can be a lived experience. One sees that the need for something called "time" is actually an invention of the mind, and totally unnecessary. I know this sounds bizarre and like mystical woo-woo, but when it's seen, it's the simplest and most obvious thing in the world.
Seems like an appropriate post on a day when the Nobel Prize in Physics was awarded not for physics discoveries but for computer science...
But from Wheeler's "it from bit" to Wolfram's computational universes, the question is: where's the beef?
Now, there might ultimately be something worthwhile in the obsession with digi-physics. Mental models that seemed disparate may merge and become fruitful. It doesn't even have to be a fully formed toolkit. Newton's invention of calculus was kinda sketchy, but he was explaining things with it, things that were not understood before.
Wolfram does offer an interesting alternative to viewing the universe as a manifold with a tensor (the GR view). He believes it's a graph with computational rules. Are they the same? Mathematically, manifolds have a clear notion of dimension. This affects things like the inverse square rule. Wolfram's view of the ruliad, an evolving graph with rules, does bring up the question of dimension.
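The graph notion of dimension can be made concrete with a ball-growth estimate: count the nodes within graph distance r of a point, and if that count grows like r^d, call d the effective dimension. A sketch on an ordinary 2D grid (my toy example, not any actual hypergraph model):

```python
from collections import deque
import math

def ball_size(neighbors, start, r):
    """Count nodes within graph distance r of start (plain BFS)."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist == r:
            continue
        for nb in neighbors(node):
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, dist + 1))
    return len(seen)

def grid(node):
    """Neighbors on an infinite 2D square lattice."""
    x, y = node
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# |ball(r)| grows like r^d, so the log-log slope between two radii
# recovers an effective dimension -- about 2 for a 2D grid.
r1, r2 = 8, 16
d = math.log(ball_size(grid, (0, 0), r2) / ball_size(grid, (0, 0), r1)) / math.log(r2 / r1)
print(round(d, 2))  # close to 2
```

The same measurement on an evolving rewrite graph need not give an integer, or even a constant, which is what makes dimension a nontrivial question in that picture.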
But at the end of the day he needs to make a concrete prediction that differs from the current view in order to have people devote a lot of time studying his world view. He's a brilliant guy and the Wolfram Language is fantastic, but he really needs to humble himself enough to value the work of convincing others.
Worth noting this is ultimately the problem with string theory: String theory does provide a suite of mathematical tools which can solve real physics problems and give valid answers but they're known physics problems that can also be solved with other tools.
To be useful as a theoretical framework it always needed to be able to predict something which only string theory could - as a "more accurate view of reality".
Which is the same problem here: you've got to make a prediction, an accessible prediction, and ideally also not introduce any new incompatibilities.
> But at the end of the day he needs to make a concrete prediction that differs than the current view in order to have people devote a lot of time studying his world view
Even if it doesn't make any different concrete predictions, a new way of thinking about things can attract scientists' attention. The Many Worlds interpretation of QM is an example.
I honestly don't think he cares about 'mainstream acceptance'. He is a prolific publisher of his detailed thoughts, which in the pre-academic-gatekeeping-establishment era, was enough for any serious philosopher.
He's a hobbyist. That doesn't make him any less prestigious if his ideas are neat.
https://scitechdaily.com/faster-than-the-speed-of-light-info...
You are assuming that the Principle of Locality is true and proven. This is far from being the case, from my understanding.
To have experience requires position relative to the all; the traversal of the all is time.
More like a play head on a tape: you’re the play head traversing and animating your own projection.
You’re describing timelike experience. Photons “experience” events as in they are part of causality. But they do so in a non-timelike manner.
Like everything else that we "experience", maybe the perception that reaches our consciousness has nothing to do with what's actually out there.
There are no purple photons.
You being you and you becoming a king might simply not be a combination which is compatible.
https://blog.rongarret.info/2014/10/parallel-universes-and-a...
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.10...
Only half kidding!
We are interested in a peculiar rate of time based on the heartbeat of our experience.
- In a much larger universe, write down in a log book every event to every particle at every instant, from the Big Bang to the restaurant.
- Put it on the fireplace mantle and leave it there.
This is basically a log of a simulation. It exists in much the same way as an ongoing simulation would, except that its time dimension isn't shared with the simulating universe. But every observer within has had the same observations as if it did.
Maybe it is, maybe it isn’t - but it is a highly debatable metaphysical assumption. I’m not sure how seriously we should take some people’s claims that they “know” such an assumption is actually true.
If they ever solve the randomness, then if the map is down to every particle, then yes, the map and reality could be the same. But I think at that point you need a computer the size of reality to keep track of every particle.
Or, maybe the entire universe is one giant wave equation. But again, I think you need a computer the size of the universe to solve it.
*May be subject to entropy over time.
How about an exact copy of the log book, but with one bit flipped. Voila, mostly universal physics.
Let me restate the metaphysics a bit differently. Let's say there's no us, no butterflypeople, nothing at all. Entropy reigns supreme, no information is organized.
Now add the butterflypeople. They write the humanpeople's log book. Information exists in organized form. The humanpeople's bits have been divined out of the great entropic nothing. Maybe that's all it takes?
Similarly, a simulation run at some "tick rate" could also be run at 2x the rate while taking half the step size per tick. Within the universe nobody would notice, as long as the steps were fine enough to begin with.
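A toy sketch of that point (my own hypothetical example, not anything from Egan or Wolfram): step a trivial "universe" - here just exponential decay - over the same span of simulated time at two different tick rates. As long as the steps are fine enough, the resulting history is the same to within numerical noise.

```python
# Toy "universe": Euler-integrate dx/dt = -x over a fixed span of
# simulated time, using a given number of ticks.
def run(initial, total_time, ticks):
    dt = total_time / ticks
    x = initial
    for _ in range(ticks):
        x += -x * dt
    return x

coarse = run(1.0, 4.0, 1000)  # base tick rate
fine = run(1.0, 4.0, 2000)    # 2x the tick rate, half the step per tick

# An "inside observer" (the final state) can't tell the difference.
assert abs(coarse - fine) < 1e-3
```

Nothing inside the integration "knows" how many wall-clock ticks were spent; only the span of simulated time matters.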
I think it was in Diaspora (or Permutation City?) that Greg Egan proposed that any tick rate would be unnoticeable to the simulated beings, including "none".
In other words, the movie Top Gun will continue to exist as-is, no matter how many copies are made of it, including none. Encoded as a digital file it is just a number, a pure timeless concept, it doesn't have to be written down to exist. It always existed on the number line, even before Tom Cruise was born. In fact, every encoding of Top Gun exists on the number line, in every compression format, in every resolution, even a future 16K resolution that was never filmed and has no display devices made for it yet. Its encoding as a 400GB long number is there, already, and will always be there.
In other words, any simulation, any log of events, any experience already exists in mathematics, in every encoding... somewhere on the number line. This includes the entire physical universe. This isn't hypothetical, it's necessarily true! Anything that can be represented by a finite amount of information must be on the number line.
Even if you assume the Universe lasts forever, you can break its history up into a sequence of states, each of which is finite. Then the series will exist on the number line as a set of points heading off to infinity.
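The "every file is already a number" claim is easy to demonstrate concretely. Here's a minimal sketch (with a short byte string standing in for the 400GB movie file): interpret the bytes as one big integer - that's the file's position on the number line - and decode to recover it exactly.

```python
# Any finite digital artifact is literally an integer.
data = b"TOP GUN (any finite file works the same way)"

# Interpret the raw bytes as one big number: its "address" on the number line.
n = int.from_bytes(data, "big")

# Decoding that number recovers the artifact exactly.
recovered = n.to_bytes((n.bit_length() + 7) // 8, "big")
assert recovered == data
```

(One caveat for a real implementation: leading zero bytes would need a stored length to round-trip, but the idea stands.)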
But why bother with this kind of detail? What's the difference between what you're imagining here and a normal reel of film? It can be played back, but even if it isn't, it records the state of events that happened, including observers that once existed and no longer do, experiencing events that once happened but no longer do. It's possible for a record to describe a canonical sequence even if the record itself doesn't change. Somebody outside of the record can view it out of order, speed it up, slow it down, pause it, reverse it. A film reel doesn't share the time dimension of its own universe in that way.
I'm struggling to come up with what this implies and why.
As observers, we perceive time as passing, but is there anything special in this perception? Looked at another way, everything could be frozen in a 4D log book and we couldn't tell the difference, or could we? In this interpretation, Napoleon is as alive (in 1820) as we are (in 2024). A film reel is a similar concept, except it's just a 3D projection rather than a complete detailed 4D account.
If the sequence of the log states is entirely deterministic based on the initial state, then you don't even need to actually write down the entire log for it to "exist". This is Greg Egan's Permutation City.
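To make the "no need to write the log" point concrete, here's a toy sketch (my own illustration, using an arbitrary deterministic update rule): the entire history is implied by the initial state plus the rule, so storing the full log and recomputing any state on demand are interchangeable.

```python
# Arbitrary deterministic update rule (a simple linear congruential map).
def step(state):
    return (state * 6364136223846793005 + 1442695040888963407) % 2**64

# Recompute the state at time t from the initial state alone - no log needed.
def state_at(seed, t):
    s = seed
    for _ in range(t):
        s = step(s)
    return s

# Writing the full log and recomputing on demand give identical histories.
log = [42]
for _ in range(100):
    log.append(step(log[-1]))
assert log[100] == state_at(42, 100)
```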
In the middle are the permutations selected through the filter of other realities, when they chose which universes to simulate. We lucked out but not as much, because uninteresting universes never made it out of the entropic soup.
It would have to be a conditional estimate of course, because our sentience biases our contemplation.
How do you know every event to every particle?
The answer to that will literally change what gets written in the log book.
The parts of this which were a little surprising to me (e.g. the bit about comparing time to heat, and the bit about running out of steps to do at an event horizon) iirc all linked to a thing posted a while ago?
I don’t share his enthusiasm for the phrase “computational irreducibility”. I would prefer to talk about e.g. no-speedup theorems.
The connection between heat/entropy and time is well explored. E.g. https://en.wikipedia.org/wiki/Arrow_of_time and https://en.wikipedia.org/wiki/Entropy_as_an_arrow_of_time
But I don't think that's possible for him.
I've toyed with this with my own graph software when writing novels. It's of course impossible to fully document every character's model before and after every event that affects them, but even doing so at key moments can help. I've wished more than once that I could "compile" my novel so it could automatically tell me plot holes or a character's faulty leap in logic (at least, one that would be out of character for them).
I've also tried the more common advice of using a spreadsheet where you have a column for each character, and rows indicating the passage of time. There you're not drawing hypergraphs but in each cell you're just writing raw text describing the state of the character at that time. It's helpful, but it falls apart when you start dealing with flashbacks and the like.
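One hypothetical way around the flashback problem (just a sketch of the spreadsheet idea as a data structure, not any particular tool): key each entry on in-story time rather than manuscript order, so scenes can be recorded in whatever order they're written and a character's arc read back chronologically.

```python
from collections import defaultdict

# story_time -> {character: state}, the spreadsheet's rows and columns.
timeline = defaultdict(dict)

def record(story_time, character, state):
    timeline[story_time][character] = state

record(10, "Ada", "suspects the letter is forged")
record(3, "Ada", "receives the letter")   # a flashback, written later
record(10, "Bram", "hides the original")

# Read a character's arc in chronological order, regardless of the
# order the scenes appear in the manuscript.
arc = [(t, states["Ada"]) for t, states in sorted(timeline.items())
       if "Ada" in states]
assert arc[0] == (3, "receives the letter")
```

Sorting on story time is what the flat spreadsheet can't easily do once flashbacks break the "rows = passage of time" assumption.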
https://en.wikipedia.org/wiki/%C5%9A%C5%ABnyat%C4%81
The all-time-all-space-all-branches brane of the Ruliad we call the Universe is the continuous one-ness, and our selves are just the single-perspective projection models of that universe in our neurons that persist across edits to the neurons, until such a point as we update the model to see the larger picture. We can call that Nirvana, if we wish.
But from Wheeler's "it from bit" to Wolfram's computational universes, the question is: where is the beef?
Now, there might ultimately be something worthwhile in the obsession with digi-physics. Mental models that seemed disparate may merge and become fruitful. It doesn't even have to be a fully formed toolkit. Newton's invention of calculus was kinda sketchy. But he was explaining things with it, things that were not understood before.
But at the end of the day he needs to make a concrete prediction that differs from the current view in order to have people devote a lot of time studying his worldview. He's a brilliant guy and the Wolfram Language is fantastic, but he really needs to humble himself enough to value the work of convincing others.
To be useful as a theoretical framework it always needed to be able to predict something which only string theory could - as a "more accurate view of reality".
Which is the same problem here: you've got to make a prediction, an accessible prediction, and ideally also not introduce any new incompatibilities.
Even if it doesn't make any different concrete predictions, a new way of thinking about things can attract scientists' attention. The Many Worlds interpretation of QM is an example.
He's a hobbyist. That doesn't make him any less prestigious if his ideas are neat.