Achieved a new world record in verifying the Goldbach Conjecture using grid computing, by extending the verification up to 4 quintillion (4×10¹⁸) + 70 trillion (7×10¹³).
My grid computing system, Gridbach, is a cloud-based distributed computing system accessible from any PC or smartphone. It requires no login or app installation. A high-performance WASM (WebAssembly) binary is downloaded as ordinary browser content, enabling computation in the user's browser.
[Website] https://gridbach.com/
[Medium] https://medium.com/@jay_gridbach/grid-computing-shatters-wor...
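In rough terms, the browser-side flow looks like the sketch below (illustrative only, not the actual Gridbach source; the module path and the export name are placeholders):

    // Minimal sketch of the pattern (placeholder names, not Gridbach's code):
    // the main thread spawns a Web Worker, which fetches the WASM binary like
    // any other asset and runs the heavy loop off the main thread.

    // main thread:
    const worker = new Worker("/js/worker.js");
    worker.postMessage({ lo: 4000075534400000000n, hi: 4000075534500000000n });
    worker.onmessage = (e) => console.log("range done:", e.data);

    // /js/worker.js (sketch):
    self.onmessage = async (e: MessageEvent) => {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("/wasm/goldbach.wasm")  // placeholder module path
      );
      // placeholder export: true iff every even n in [lo, hi) is a sum of two primes
      const checkRange = instance.exports.checkRange as
        (lo: bigint, hi: bigint) => boolean;
      self.postMessage({ ok: checkRange(e.data.lo, e.data.hi) });
    };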
A. No validation is done on the server side to confirm that workers are reporting correct results.
B. Increasing the limit by less than two-thousandths of a percent does not make this a "world record"! If we go by that logic, I only have to validate one more example than you and claim a world record. And then you'd do the same. And then I'd do the same and we'll be playing "world record" ping pong all day!
But "B" isn't the big problem here, because we have a worse problem: "A"! Nobody (not even the OP) can tell whether the results are accurate!
No, I'm not simply dissing on a Show HN post. There are many comments here that explain these problems much better than I could.
This is egregious clickbait!
Why doesn't it?
"If we go by that logic, I only have to validate one more example than you and claim a world record."
Yes. You can argue that it's not difficult enough or interesting enough, but you can't argue that an N+1 result is not a world record.
He slightly pushed the computation past the previous world record, and he’s continuing to push it forward with a clear goal. It’s well within the spirit of a world record.
Besides, a world record is still a world record — it’s up to you to decide how interesting it is. You are indeed just dissing on a Show HN post.
Server-side validation would be trivial. What makes you believe it is not happening? The server code is not available.
If you'd read the article carefully, you'd see he hasn't done any such validation. For all we know, one client (or worse, several) found counterexamples but didn't report them back to the server. Without verification on the server side, there's no way to claim the entire range has been reliably checked.
What he's shown is that many volunteers have checked a large portion of the numbers up to a certain bound and found no counterexamples. But he hasn't shown that all numbers within that range have actually been verified. It's entirely possible that some block results were falsely reported by bad clients, meaning counterexamples could still be hiding in those falsely reported gaps, however improbable! This kind of lapse in rigor matters in math, and it invalidates the OP's entire claim!
> Server-side validation would be trivial. What makes you believe it is not happening? The server code is not available.
Please read the full thread. This has all already been discussed at length.
https://news.ycombinator.com/item?id=43735397
https://news.ycombinator.com/item?id=43735498
https://news.ycombinator.com/item?id=43735483
https://news.ycombinator.com/item?id=43735555
From the OP himself, an admission that there's no mechanism to ensure clients aren't submitting false results:
https://news.ycombinator.com/item?id=43736281
https://news.ycombinator.com/item?id=43736558
Don't get me wrong; as I've said before, this is a good project. But the claims made in the post don't hold up to scrutiny.
> a world record is still a world record
This isn't particularly relevant at the moment, since OP can't confirm the correctness of the results!
What Hiroaki wants here is "no one has proven it mathematically". Full stop.
Or "no one has proven it mathematically to this day", or "no one has proven it mathematically so far".
Alternatively, "it has yet to be proved mathematically".
To request a new batch, the client calls the server, which returns a response designating the range the client should check, e.g. 4000075534400000000-4000075534500000000. Once done, the client reports back a payload containing _client_hash, _p and _q.
Here, _client_hash is generated by wasmHash(`{"method":"Hash"}`) in /js/worker.js (yes, the hashed payload is a fixed string). While I didn't try to disassemble the wasm, one can pause execution and repeatedly call wasmHash() to observe that it's basically a TOTP that changes every 10s, so it carries no mathematical information.
Therefore, all the information the server can use for verification is a single pair _p and _q adding up to one number in the entire range. That's the extent of the verification.
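Concretely, the exchange reduces to something like the sketch below (the endpoint paths and the "..." values are my guesses; _client_hash, _p and _q are the actual field names, and wasmHash is the function from /js/worker.js mentioned above):

    // Sketch of the observed exchange; endpoint paths and the "..." values
    // are assumptions, while _client_hash, _p and _q are the actual fields.
    const batch = await fetch("https://gridbach.com/api/batch").then(r => r.json());
    // batch designates a range, e.g. 4000075534400000000..4000075534500000000

    // ...the WASM code checks every even number in the range...

    await fetch("https://gridbach.com/api/report", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        _client_hash: wasmHash('{"method":"Hash"}'), // rotating ~10 s token, no math content
        _p: "...",  // one prime
        _q: "...",  // its partner: a single witness for one number in the range
      }),
    });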
One can of course patch the code to check a single number before reporting that the entire range has been checked. Pretty sure it's impossible for the server to detect that.
Correct me if I made a mistake somewhere.
Edit: On second thought, maybe the specific number reported back is deterministically chosen in a way that relies on finishing all the checks in the range, and thus can be compared with other reported results for the same range?
Even in that case, the server can't verify the work without repeating it. mersenne.org hands out a double-checking job about 8 years later, presumably to thwart determined attackers.[1]
[1] https://www.mersenne.org/various/math.php
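A double-checking scheme in that spirit might look like the sketch below (hypothetical, not anything Gridbach implements): issue every range to two unrelated clients, and count it as verified only when both report the same deterministic witness.

    // Hypothetical double-checking in the GIMPS spirit: a range is verified
    // only when two different clients report the same deterministic witness
    // (e.g. the (p, q) pair for the "hardest" number in the range, which can
    // only be known by actually finishing the whole range).
    type Report = { range: string; clientId: string; witness: string };

    const pending = new Map<string, Report>();  // first report per range
    const verified = new Set<string>();         // ranges confirmed twice

    function accept(r: Report): void {
      const first = pending.get(r.range);
      if (!first) {
        pending.set(r.range, r);                // wait for a second opinion
      } else if (first.clientId !== r.clientId && first.witness === r.witness) {
        verified.add(r.range);                  // independent agreement
        pending.delete(r.range);
      } else {
        pending.set(r.range, r);                // mismatch or same client: retry
      }
    }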
As highlighted by @tromp and @oefrha (https://news.ycombinator.com/item?id=43734877), it is clear that clients can cheat, so we can't be 100% sure that none of them did. What if a counterexample to the conjecture exists, but a dishonest client simply failed to report it? Math results require rigor, and without rigor no claim can be trusted; this bold assertion remains just that: a claim, not a fact.
OP, on top of that, you're being evasive in threads where you're asked how your validation works, and you went so far as to flag a pertinent thread. That definitely doesn't inspire confidence. Addressing the validation questions is absolutely necessary if you want this to be seen as more than just a claim.
And this project has increased that number to 4,000,070,000,000,000,000.
Increasing the limit by 0.00175%
Not totally sure this is a good use of the compute/brainpower...
I don't know about making a judgement call on what a good use of compute or brainpower is; it seems like a fun enough project in many ways. But in terms of the claims worth headlining about the project, I agree wholeheartedly.
So this tickles my nostalgia bone strongly. And maybe makes me feel a tiny bit jealous. But more excited than envious, really, to see people are still working on this problem.
The time required to compute the next range grows very slowly, and this project has only computed the incremental part from 4*10^18 to 4*10^18 + 7*10^13. It would have taken the previous record holder an extra 0.002% of their time to get those additional 7*10^13 numbers.
A meaningful record needs to either reproduce the old one or beat it by a significant margin. Otherwise you get a meaningless +1 like this.
By my estimates (~7 s to compute a 10^8-sized chunk), the new "record" represents ~60 days' worth of single-core compute. Run it on multiple threads and you essentially get 3-4 days' worth of compute on a single modern computer.
And it does so at a rate much worse than the previous record achieved on 2012/2013 hardware. The previous record software was able to do a 10^12 window in 48 minutes on a single i3 core from 2013. That's roughly 24x faster for the old software on a 10-year-old low-end computer than the new software on new hardware. The previous record represents ~133,000 days of single-core compute, probably less, since the majority of it likely ran on something better than an i3.
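For reference, here is the arithmetic behind those estimates, as a quick sketch using the figures quoted above:

    // Back-of-envelope check of the figures above.
    const increment = 7e13;                      // numbers verified beyond 4e18
    const newRate = 1e8 / 7;                     // this project: ~7 s per 1e8 chunk
    const oldRate = 1e12 / (48 * 60);            // 2013 software: 1e12 per 48 min
    console.log(increment / newRate / 86400);    // ~56.7 single-core days
    console.log(oldRate / newRate);              // ~24.3, old software ~24x faster
    console.log((4e18 / 1e12) * 48 / (60 * 24)); // ~133,333 days for the full 4e18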
Unless the author gets it to run maliciously on a popular website with at least 10^5 users (concurrent every minute, not 10^5 unique over a day), 5*10^18 doesn't seem reachable this way. Getting a data center to donate computing hours would also work, but in that case you could use more efficient native software like the one from 2013 (which was an order of magnitude faster even then), or a rewrite of it optimized for modern hardware. The current WebAssembly version only makes sense if you can get random individual volunteers to donate compute.
That is not to say this isn't a cool project. The distributed nature and running so seamlessly directly in the browser are definitely cool, and they let people contribute compute easily.
It may be that grandiose claims of new records are needed to get people to donate their computational resources, but I am not a fan of deceptive claims like this.
Well, maybe they do on their resume.
Great intuitive metaphor, btw.
---
Burj Khalifa - 828m
US Penny - 1.52mm (0.00152m)
Adding a US penny to the Burj Khalifa would therefore make it about 0.000184% taller.
---
Original work - 4,000,000,000,000,000,000
OP's work - 70,000,000,000,000
OP's work added 0.00175% to the current record.
---
Conclusion: proportionally, this effort amounts to adding roughly ten pennies to the Burj Khalifa.
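Spelling out both proportions:

    // The two ratios behind the comparison.
    const pennyRatio = 1.52e-3 / 828;       // ~1.84e-6, i.e. ~0.000184%
    const recordRatio = 7e13 / 4e18;        // = 1.75e-5, i.e. 0.00175%
    console.log(recordRatio / pennyRatio);  // ~9.5 pennies' worth of height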
I tried running it in 4 tabs on Firefox, and it did slow down a bit (maybe 16 seconds). All 4 tabs reported the same count, and the count seemed not to increment for all the tabs. Also, the initializing step was very fast in the subsequent tabs, as if it was reusing some data. Each tab used 100% of a CPU core and was doing different calculations. Same for Chrome.
Maybe it is not designed to be run in parallel in the same browser? For now I'm just running it in two separate browsers, one tab each. I'll probably stop later today when I need the computer for something else.
(Edit: Got a bit over 100B in 3.5 hours; stopping now. The machine is running a tad warm at 25% CPU use and feels normal to use, but I think the fans are working a bit harder than normal.)