What’s wrong with humanity?
If you're interested, you can peruse the C code that was used to generate the tables. Here's the file for sine/cosine:
https://github.com/RandalLinden/DOOM-FX/blob/master/source/m...
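If you just want the flavor of it, a table generator like that is only a few lines. This sketch is purely illustrative - the entry count and fixed-point scale here are made up, and the linked file defines its own format:

    import math

    # Illustrative fixed-point sine table: 256 entries over a full circle,
    # values scaled to 8.8 fixed point. Real generators (like the DOOM-FX
    # one linked above) pick their own entry counts, scales, and rounding.
    ENTRIES = 256
    SCALE = 1 << 8
    sine_table = [round(math.sin(2 * math.pi * i / ENTRIES) * SCALE)
                  for i in range(ENTRIES)]
    print(sine_table[:8])  # [0, 6, 13, 19, 25, 31, 38, 44]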
Is this really the order of events? I imagine the pre-calculated route is what you'd try first, and only go for extra hardware if that failed somehow.
If you traveled at the speed of light then you'd be at the planet in zero time by your own clock, and you would arrive alongside the very first of the broadcasts, with the other 20k years of them still en route behind you. So once on the planet you'd get to watch all 20k years of broadcasts.
If you traveled at 1/2 the speed of light (not focusing on your dilation for the moment), the trip would take 40k years while the light takes only 20k, so you'd arrive just as the last of the broadcasts does - too late to watch any of them live.
I think the question is what fraction of the speed of light corresponds to a gamma factor of 20k/10 = 2000. That works out to about 99.9999875% of the speed of light.
Meaning you would arrive only about a day behind the very first broadcast - before 99.99999% of them had arrived - and you'd be able to watch just about all of them in real time over the next (just less than) 20k years.
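Quick sanity check in Python (using the 20k-light-year distance and 10 traveler-years assumed in this thread):

    import math

    # Gamma for 20k earth-years in 10 traveler-years, and the speed it implies.
    gamma = 20_000 / 10                       # time-dilation factor = 2000
    beta = math.sqrt(1 - 1 / gamma**2)        # v/c for that gamma
    print(f"v/c = {beta:.9f}")                # 0.999999875 -> 99.9999875% of c

    # How far behind the light itself do you arrive over 20k light years?
    distance_ly = 20_000
    lag_years = distance_ly * (1 / beta - 1)  # the light itself takes 20k years
    print(f"arrival lag: {lag_years * 365.25:.2f} days behind the first broadcast")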
Assuming those two planets are roughly in the same reference frame, the only way 20k years could pass on earth w.r.t. your arrival is if the destination planet is 20k light years away.
If the destination planet is moving relativistically away from earth at an appreciable percentage of the speed of light then I couldn't say what the math would be. Maybe still the same, maybe not.
(Always check your assumptions. Sorting has a k log k lower bound if all you can do is compare elements. With integers 1..n, one can do much more.)
I showed up at 5pm at his office, as he was bracing himself for the drive from Swarthmore back to Philly, wondering why he ever took this gig. I proposed a linear algorithm: Break 1..n into k bins, and only sort to detect collisions within each bin. Bins would have an average occupancy of 1, so it didn't matter what sort you used.
(I found this idea later in Knuth, and was briefly disillusioned that adults didn't read everything.)
I looked like George Harrison post-Beatles, I probably smelled of pot smoke, and he dismissed my idea. Then my dorm hall phone rang at 10pm, a professor of mine relaying an apology. For that and other ideas, the second edition of his book opened with thanks to me.
I'm surprised now that I realized then that bubble sort actually had the least overhead in such an application. People don't use Strassen to multiply matrices unless the matrices are really big. For sorting, "really big" is 2 or 3 elements.
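A minimal sketch of the bin idea (my Python and my names, obviously not the 1970s original; assumes the values are drawn from 1..n):

    from typing import List

    # Detect duplicates among values drawn from 1..n in expected linear time:
    # break 1..n into k ranges, toss each value into its range's bin, and
    # look for collisions only within bins. With roughly one bin per element,
    # average occupancy is 1, so even a quadratic per-bin scan (the
    # bubble-sort-level overhead mentioned above) is cheap.
    def has_duplicate(xs: List[int], n: int) -> bool:
        k = max(1, len(xs))                   # about one bin per element
        bins: List[List[int]] = [[] for _ in range(k)]
        for x in xs:
            bins[(x - 1) * k // n].append(x)  # bin covering x's slice of 1..n
        for b in bins:
            for i in range(len(b)):           # bins are tiny, so O(len^2) is fine
                for j in range(i + 1, len(b)):
                    if b[i] == b[j]:
                        return True
        return False

    print(has_duplicate([3, 1, 4, 1, 5], 5))  # True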
Suppose a flyby mission records a couple of these cartridges and flies back to Earth. Assume 3 years round trip (1 there, 1 recording, 1 back).
That's N x 80,000,000,000,000,000 bits (10 petabytes per cartridge) in 3 years.
Or roughly N x 10^17 bits / 9x10^7 seconds.
Or about a gigabit per second effective rate per cartridge.
Not bad. Psyche (our best optical link from deep space) has a peak rate of 2 megabits/s.
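The same arithmetic in code (the 8x10^16-bit cartridge capacity is the assumption from above; N = 2 for "a couple"):

    # Back-of-the-envelope sneakernet bandwidth for the flyby mission.
    bits_per_cartridge = 8e16            # assumed capacity: 10 petabytes
    n = 2                                # "a couple" of cartridges
    seconds = 3 * 365.25 * 24 * 3600     # 3-year round trip, ~9.5e7 s

    rate = n * bits_per_cartridge / seconds
    print(f"{rate / n / 1e9:.2f} Gbit/s per cartridge")   # ~0.85 Gbit/s
    print(f"{rate / 1e9:.2f} Gbit/s for the mission")     # ~1.69 Gbit/s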
I would have imagined that we could upgrade the communication equipment on a space probe much more easily than we could add fuel for a return trip.
Furthermore, a predictive model is not working with a complete picture of the weather, but rather with some limited-resolution measurements. So, even ignoring non-weather factors, there may be local weather phenomena that are detected at time t0, escape detection at time t1, but still affect the weather at time t2.
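A toy illustration of that resolution gap (all numbers invented): a feature narrower than the sensor spacing shows up in one scan, vanishes from the next, and still exists in between.

    import numpy as np

    # "Measure" a narrow bump on a coarse sensor grid.
    def sample(center, grid):
        return np.exp(-((grid - center) / 0.05) ** 2)

    grid = np.linspace(0, 10, 11)            # sensors every 1.0 units
    for t, center in [(0, 3.0), (1, 3.5)]:   # bump drifts half a cell between scans
        detected = sample(center, grid).max() > 0.5
        print(f"t{t}: bump at {center} -> detected: {detected}")  # True, then False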
I want to stress that this is key to my confusion. This is not just some abstract result; this is a computation that we can perform and draw a real value from.
Of course, I'll now fall back on Gödel's second incompleteness theorem and say that one cannot prove, inside ZFC, that ZFC is consistent. But if the above Turing machine halts, then we've proved ZFC is consistent - a contradiction!
Where is the mistake here? My current guess is that there is a technical detail in the proof of Thm 1 which uses a stronger metatheory than ZFC to show that the 748-state Turing machine halts iff ZFC is inconsistent. This is not a contradiction, because yes, we can run the Turing machine for BB(748) steps, but that will only show that some stronger theory ZFC+ proves ZFC is consistent, which is already well known - ZFC plus "there exists an inaccessible cardinal" does the job.
However, I haven't looked at the paper [0] in detail to know whether this is the case. Does anybody who has thought deeply about this problem have insight to offer?
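For concreteness, here's the shape of such a machine as a Python sketch (purely conceptual: the real object is a 748-state Turing machine, and the proof checker below is just a stub):

    from itertools import count

    def is_zfc_proof_of_contradiction(candidate: str) -> bool:
        # Stand-in for a mechanical proof checker: does this string encode
        # a valid ZFC proof of "0 = 1"? That check is decidable for any
        # finite string; it's stubbed out here.
        return False

    def consistency_search(max_steps=None):
        # Halts iff ZFC is inconsistent: enumerate every finite string and
        # test each one as a candidate proof of a contradiction. Running
        # this for BB(748) steps without a hit would certify it never halts.
        steps = count() if max_steps is None else range(max_steps)
        for n in steps:
            if is_zfc_proof_of_contradiction(bin(n)):
                return n
        return None  # no contradiction found within max_steps

    print(consistency_search(max_steps=10_000))  # None, unsurprisingly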
[0] https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-unde...
How would we compute the value of BB(748)?
We discuss the numbers for Go in the introduction of our paper https://matthieuw.github.io/go-games-number/AGoogolplexOfGoG...
I find it slightly odd that the game length is calibrated to "reasonable" games but the branching factor is not.
If the goal is to estimate the number of possible games of go, then the calculation would be dominated by the number of long games rather than the number of short games, and very long games are possible.
If the goal is to estimate the number of "reasonable" games of go, then the branching factor should also be much smaller, as most possible moves are not reasonable. Perhaps the logarithm of the branching factor could be estimated as the entropy of some policy model, such as that of KataGo.
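For what it's worth, the conversion I have in mind is just perplexity - exp(entropy) of the move distribution (the probabilities below are made up; a real estimate would average a model's policy entropy over positions):

    import math

    # Effective branching factor = exp(entropy) of the policy's move
    # distribution. A uniform policy over 250 legal moves gives 250;
    # a sharp policy gives something far smaller.
    policy = [0.4, 0.3, 0.1, 0.1, 0.05, 0.05]   # hypothetical move probabilities
    entropy = -sum(p * math.log(p) for p in policy)
    print(f"entropy = {entropy:.3f} nats")
    print(f"effective branching factor = {math.exp(entropy):.1f}")  # ~4.4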
P.S. I am happy to have received a reply from the mighty Tromp!
                      Chess    Shogi    Go
Board size            8x8      9x9      19x19
Branching factor      35       92       250
Game-tree complexity  10^123   10^226   10^360