Readit News
hakuseki commented on A made-up name is better than no name   mbuffett.com/posts/a-made... · Posted by u/marcusbuffett
hakuseki · 9 months ago
The article is about (position, move) pairs. Why not call these objects "steps"?
hakuseki commented on Children should be allowed to get bored (2013)   bbc.com/news/education-21... · Posted by u/xj
teekert · a year ago
Screw getting bored and “fixing” it with a screen. I’ve been seeing a new trend: parents with kids on the backs of their bikes (something everyone does here), but now with the kids glued to a screen… So, instead of learning (without getting bored at all!) about the reality that they will one day have to navigate by themselves, they watch some cartoon, blaring annoying audio at the people around them in the process.

What’s wrong with humanity?

hakuseki · a year ago
In what sense is looking at a screen a failure to prepare for the world that adults navigate? Adults also look at screens.
hakuseki commented on How do computers calculate sine?   androidcalculator.com/how... · Posted by u/vegesm
staplung · a year ago
I recently learned how Doom was ported to the SNES. It's quite impressive. The SNES hardware was nowhere near fast enough to do all the trig calculations needed for the game, but cartridge-based games had a trick up their sleeve: they could include actual hardware inside the cart that the game code could make use of. It was more expensive, but if you expected to sell a boatload of copies, it could be worth it. However, even the extra hardware wasn't enough in this case. So they pre-calculated lookup tables for sine, cosine, tangent, etc. for every angle at the necessary precision. They were helped by the fact that the game resolution was fairly low.

If you're interested, you can peruse the C code that was used to generate the tables. Here's the file for sine/cosine:

https://github.com/RandalLinden/DOOM-FX/blob/master/source/m...
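For illustration, the table-generation idea can be sketched in Python. The entry count and fixed-point scale below are assumptions for the sketch, not the actual DOOM-FX parameters:

```python
import math

# Precompute a fixed-point sine table once, so no trig runs at game time.
# ANGLES and SCALE are assumed values for this sketch, not DOOM-FX's.
ANGLES = 1024        # table resolution: 1024 steps per full turn
SCALE = 1 << 16      # 16.16 fixed-point scale

SINE_TABLE = [round(math.sin(2 * math.pi * i / ANGLES) * SCALE)
              for i in range(ANGLES)]

def fixed_sin(angle):
    """sin(angle) via table lookup; angle measured in table units."""
    return SINE_TABLE[angle % ANGLES]

def fixed_cos(angle):
    """cos is sine shifted by a quarter turn, so one table serves both."""
    return fixed_sin(angle + ANGLES // 4)
```

At a quarter turn (`angle = ANGLES // 4`) the lookup returns exactly `SCALE`, i.e. 1.0 in fixed point; runtime cost is a modulo and an array index.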

hakuseki · a year ago
> However, even using extra hardware wasn't enough in this case. So they pre-calculated lookup tables for sine, cosine, tangent etc. for every angle at the necessary precision.

Is this really the order of events? I imagine the pre-calculated route is what you'd try first, and only go for extra hardware if that failed somehow.

hakuseki commented on Relativistic Spaceship   dmytry.github.io/space/... · Posted by u/thunderbong
BoiledCabbage · 2 years ago
I think you may have that wrong (If I'm understanding you correctly).

If you traveled at the speed of light then you'd be at the planet in zero time (your time), and you would arrive along with the first of the broadcasts sent over those 20k years. So once on the planet you'd get to watch all 20k years of broadcasts.

If you traveled at 1/2 the speed of light (not focusing on your dilation for the moment), then you'd still beat 50% of the transmissions and have 10k earth years of broadcasts to watch.

I think the question is what % of the speed of light corresponds to a gamma factor of 20k/10 = 2000. That's something like 99.9999875% of the speed of light.

Meaning you would get there before 99.99999% of the broadcasts had arrived and you'd be able to watch just about all of them in real time over the next (just less than) 20k years.

Assuming those two planets are roughly in the same reference frame, the only way 20k years could pass on earth w.r.t. your arrival is if the destination planet is 20k light years away.

If the destination planet is moving relativistically away from earth at an appreciable percentage of the speed of light then I couldn't say what the math would be. Maybe still the same, maybe not.
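As a quick numeric check of the gamma-factor estimate above (a sketch using the thread's figures of 20,000 light years and 10 years of proper time):

```python
import math

# gamma = (distance in light years) / (proper time in years) for a
# near-light-speed trip: 20,000 ly experienced as 10 years of proper time.
gamma = 20_000 / 10                      # = 2000
beta = math.sqrt(1 - 1 / gamma**2)       # v/c that yields this gamma
print(f"v/c = {beta:.9f}")               # 0.999999875
```

So a shade under seven nines, i.e. about 99.9999875% of c.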

hakuseki · 2 years ago
Are you planning to decelerate by crashing into the planet at relativistic speed?
hakuseki commented on When would you ever want bubblesort?   buttondown.email/hillelwa... · Posted by u/BerislavLopac
Syzygies · 2 years ago
In college in the 1970's, I took a combinatorial algorithms class with Herb Wilf. He explained that choosing a subset of k distinct elements from 1..n uniformly at random was likely k log k, because you needed to detect collisions, and "sorting" k elements was k log k.

(Always check your assumptions. Sorting has a k log k lower bound if all you can do is compare elements. With integers 1..n, one can do much more.)

I showed up at 5pm at his office, as he was bracing himself for the drive from Swarthmore back to Philly, wondering why he ever took this gig. I proposed a linear algorithm: Break 1..n into k bins, and only sort to detect collisions within each bin. Bins would have an average occupancy of 1, so it didn't matter what sort you used.

(I found this idea later in Knuth, and was briefly disillusioned that adults didn't read everything.)

I looked like George Harrison post-Beatles, I probably smelled of pot smoke, and he dismissed my idea. Then my dorm hall phone rang at 10pm, a professor of mine relaying an apology. For that and other ideas, the second edition of his book opened with thanks to me.

I'm surprised now that I realized then that bubble sort actually had the least overhead in such an application. People don't use Strassen to multiply matrices unless the matrices are really big. For sorting, "really big" is 2 or 3 elements.
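A sketch of the bin idea described above (the function name and the retry-on-collision policy are my own choices for the illustration):

```python
import random

def sample_distinct(k, n, rng=random):
    """Draw k distinct values from 1..n in expected linear time.

    Values are hashed into k bins of width ~n/k; equal values always land
    in the same bin, so collision detection only compares within a bin.
    Expected occupancy per bin is 1, so any sort (even bubble sort) is fine.
    """
    bin_width = -(-n // k)                  # ceil(n / k)
    while True:                             # retry the whole draw on collision
        draws = [rng.randint(1, n) for _ in range(k)]
        bins = [[] for _ in range(k)]
        ok = True
        for x in draws:
            b = bins[(x - 1) // bin_width]
            if x in b:                      # bin is tiny, so a scan is cheap
                ok = False
                break
            b.append(x)
        if ok:
            return draws
```

Each attempt costs O(k) total, since only same-bin elements are ever compared.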

hakuseki · 2 years ago
I guess you mean k log k, not n log n.
hakuseki commented on 10,000TB storage cartridges could become mainstream by 2030   techradar.com/pro/video-o... · Posted by u/Brajeshwar
jvanderbot · 2 years ago
It's about a year to the outer planets on falcon heavy. Less on SLS, but I digress.

Suppose a flyby mission records a couple of these cartridges and flies back to Earth. Assume 3 years round trip (1 there, 1 record, 1 back).

That's Nx80,000,000,000,000,000 bits in 3 years.

Or Nx8E+16 bits / 9E+7 seconds.

Or about a gigabit per second effective rate per cartridge.

Not bad. Psyche (our best optical link from deep space) has a 2 megabit/s peak rate.
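The arithmetic above can be checked in a couple of lines (a sketch, taking the comment's 3-year round trip at face value):

```python
# One 10,000 TB cartridge returned physically over a 3-year round trip.
bits_per_cartridge = 10_000e12 * 8          # 10,000 TB -> 8e16 bits
seconds = 3 * 365.25 * 24 * 3600            # ~9.5e7 s
rate = bits_per_cartridge / seconds         # effective bits per second
print(f"{rate / 1e9:.2f} Gbit/s per cartridge")
```

About 0.85 Gbit/s per cartridge, some 400x Psyche's quoted 2 Mbit/s peak.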

hakuseki · 2 years ago
What's so difficult about optical links from deep space compared to low earth orbit, where 200 gigabit throughput has been achieved? Is it just the attenuation?

I would have imagined that we could upgrade the communication equipment on a space probe much more easily than we could add fuel for a return trip.

hakuseki commented on GraphCast: AI model for weather forecasting   deepmind.google/discover/... · Posted by u/bretthoerner
broast · 2 years ago
Weather is markovian
hakuseki · 2 years ago
That is not strictly true. The weather at time t0 may affect non-weather phenomena at time t1 (e.g. traffic), which in turn may affect weather at time t2.

Furthermore, a predictive model is not working with a complete picture of the weather, but rather some limited-resolution measurements. So, even ignoring non-weather, there may be local weather phenomena detected at time t0, escaping detection at time t1, but still affecting weather at time t2.
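The partial-observation point can be made concrete with a toy chain (a deliberately artificial sketch, not a weather model):

```python
# Hidden state cycles deterministically 0 -> 1 -> 2 -> 0, but the observer
# only records whether the state equals 0 (limited "resolution").
sequences = []
for s0 in range(3):                              # uniform over starting states
    states = [s0, (s0 + 1) % 3, (s0 + 2) % 3]    # three steps of the cycle
    sequences.append([int(s == 0) for s in states])

print(sequences)   # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

After observing (1, 0) the next symbol is always 0, but after (0, 0) it is always 1: the next observation depends on more than the most recent one, so the observed process is not Markovian even though the hidden state is.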

hakuseki commented on BB(3, 3) is Hard   sligocki.com//2023/10/16/... · Posted by u/todsacerdoti
tybug · 2 years ago
I'm hoping someone can enlighten me here. My understanding is that there is a turing machine of 748 states [0], which halts iff ZFC is inconsistent (Thm 1). But this machine is a "physical" object, in the sense that we can materialize it on a computer and run it. Though we don't have the computing power for this currently, there is nothing in principle stopping us from running this machine for BB(748) steps: if it halts, we have proven by Thm 1 that ZFC is inconsistent. If not, we have similarly proven that ZFC is consistent.

I want to stress that this is key to my confusion. This is not just some abstract result; this is a computation that we can perform and draw a real value from.

Of course, I'll now fall back on Gödel's second incompleteness theorem and say that one cannot prove, inside ZFC, that ZFC is consistent. But if the above turing machine halts, then we proved ZFC is consistent - a contradiction!

Where is the mistake here? My current guess is there is a technical detail in the proof of Thm 1 which uses a stronger metatheory than ZFC to show that the 748-state turing machine halts iff ZFC is inconsistent. This is not a contradiction, because yes, we can run the turing machine for BB(748) steps, but that will only show that ZFC+ proves ZFC is consistent, which is already well known - ZFC + there exists an inaccessible cardinal does the job.

However, I haven't looked at the paper in detail to know whether this is the case. Does anybody who has thought deeply about this problem have insight to offer?

[0] https://www.ingo-blechschmidt.eu/assets/bachelor-thesis-unde...

hakuseki · 2 years ago
> there is nothing in principle stopping us from running this machine for BB(748) steps

How would we compute the value of BB(748)?

hakuseki commented on Why isn't chess popular in Japan?   lichess.org/@/datajunkie/... · Posted by u/cushpush
tromp · 2 years ago
I think the numbers are taken from https://en.wikipedia.org/wiki/Game_complexity

We discuss the numbers for Go in the introduction of our paper https://matthieuw.github.io/go-games-number/AGoogolplexOfGoG...

hakuseki · 2 years ago
So to summarize, the game tree complexity is estimated by estimating the branching factor and the game length, and raising the former to the power of the latter.

I find it slightly odd that the game length is calibrated to "reasonable" games but the branching factor is not.

If the goal is to estimate the number of possible games of go, then the calculation would be dominated by the number of long games rather than the number of short games, and very long games are possible.

If the goal is to estimate the number of "reasonable" games of go, then the branching factor should also be much smaller, as most possible moves are not reasonable. Perhaps the logarithm of the branching factor could be estimated as the entropy of some policy model such as that of KataGo.

P.S. I am happy to have received a reply from the mighty Tromp!
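For reference, the estimate summarized above can be reproduced in a few lines. The game lengths used here are assumed typical values chosen to match the quoted exponents, not measured data:

```python
import math

def log10_tree_size(branching_factor, game_length):
    """log10 of branching_factor ** game_length."""
    return game_length * math.log10(branching_factor)

# Assumed typical game lengths: chess ~80 plies, shogi ~115, go ~150.
for name, b, d in [("chess", 35, 80), ("shogi", 92, 115), ("go", 250, 150)]:
    print(name, f"~10^{log10_tree_size(b, d):.1f}")
```

This lands on the 10^123, 10^226, and 10^360 figures quoted in the thread, to within rounding.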

hakuseki commented on Why isn't chess popular in Japan?   lichess.org/@/datajunkie/... · Posted by u/cushpush
29athrowaway · 2 years ago

                      Chess    Shogi    Go
    Board Size        8x8      9x9      19x19
    Branching Factor  35       92       250
    Complexity        10^123   10^226   10^360

hakuseki · 2 years ago
How were these numbers arrived at?

u/hakuseki

Karma: 105 · Cake day: December 24, 2020