It's not particularly hard (for someone who knows the language rules, which are genuinely difficult for a language like C++) to get a widely used compiler to erroneously accept or reject code.
What's much more difficult (it "never" happens) is getting the compiler to accept valid code and then generate an incorrect executable. It's possible (I run into it maybe once a year doing unusual things), but it's really rare. If you think that's what's going on, it's very unlikely to be the case.
I've also found 3-4 JavaScript JIT compiler bugs in major browsers, all confirmed. I was a developer on what was, for its time, quite a complicated JavaScript solution, so we tended to run into obscure JavaScript bugs before others did.
It should be considered a distinct field. At some level there is overlap (information theory, Kolmogorov complexity, etc.), but prompt optimization and model distillation are far removed from computability, formal language theory, etc. The analytical methods, the techniques for creating new architectures, and so on are very different beasts.
Interactive zero-knowledge proofs are also technically non-interactive. They are defined in terms of the verifier evaluating a transcript of the protocol. If the verifier accepts certain causal assumptions about the provenance of the transcript, they will accept the proof. If they disagree with the assumptions, the proof is indistinguishable from random noise they could have generated themselves. An interactive commitment-challenge-response protocol is one possible causal assumption. A source of randomness could replace the challenges, making the protocol non-interactive. Or there could be a pre-committed secret, making a single-round protocol effectively non-interactive.
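To make that concrete, here's a minimal sketch (toy parameters, nothing remotely secure; the group and hash choices are my own assumptions, not anything from the above) of a Schnorr-style commitment-challenge-response protocol, and of how hashing the commitment in place of a fresh verifier challenge turns the same transcript check into a non-interactive one:

    # Minimal sketch of a Schnorr-style identification protocol (toy group, not secure).
    # Interactive: the verifier supplies the challenge. Non-interactive: the challenge
    # is derived by hashing the commitment, so anyone can check the transcript later.
    import hashlib
    import secrets

    p, q, g = 23, 11, 2          # toy group: g has order q in Z_p* (illustration only)
    x = secrets.randbelow(q)     # prover's secret
    y = pow(g, x, p)             # prover's public key

    # Interactive round: commitment - challenge - response.
    r = secrets.randbelow(q)
    t = pow(g, r, p)             # prover -> verifier: commitment
    c = secrets.randbelow(q)     # verifier -> prover: fresh random challenge
    s = (r + c * x) % q          # prover -> verifier: response
    assert pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier checks the transcript

    # Non-interactive variant: a hash of the commitment stands in for the challenge.
    r = secrets.randbelow(q)
    t = pow(g, r, p)
    c = int.from_bytes(hashlib.sha256(f"{g},{y},{t}".encode()).digest(), "big") % q
    s = (r + c * x) % q
    # The verifier only sees the transcript (t, s); accepting it means accepting the
    # causal assumption that c was derived from t, not chosen after the fact.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p

The verification equation is identical in both cases; the only thing that changes is the story the verifier has to believe about where the challenge came from.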
These are things a sufficiently interested CS undergraduate can prove and understand. Public-key cryptography, on the other hand, remains magical. There are many things people assume to be true, and which need to be true for public-key cryptography to function (for example, that factoring large integers or computing discrete logarithms is computationally hard). Empirically these things seem to be true, but nobody has been able to prove them, and I don't think anyone really understands them.
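For a flavour of what those assumptions look like in practice, here's a tiny sketch (my own illustration, with an arbitrary toy modulus) of the asymmetry everything hangs on: modular exponentiation is fast even at enormous sizes, while nobody knows a fast way to invert it, and nobody has proved that no fast way exists:

    # One-way-ness we rely on but cannot prove: computing y = g^x mod p is cheap;
    # recovering x from y is believed (not proven) to be infeasible at real sizes.
    p = 2**127 - 1               # a Mersenne prime, used here only as a toy modulus
    g = 3
    x = 123456789012345678901234567890
    y = pow(g, x, p)             # forward direction: effectively instant

    def discrete_log_brute_force(g, y, p, limit=10**6):
        # Naive search; at cryptographic sizes the search space is astronomically
        # larger than this limit, and no known algorithm closes the gap.
        acc = 1
        for k in range(limit):
            if acc == y:
                return k
            acc = (acc * g) % p
        return None              # gives up long before reaching x

    print(discrete_log_brute_force(g, y, p))   # almost certainly None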
It's a fantastic bit of algorithmic magic that will always impress me to see it.
It's striking to imagine a fully functional fusion reactor that could benefit humanity, yet its creator now focuses on fintech payment systems. This highlights the importance of a strong middle class, which seems to be declining globally. A thriving middle class, with disposable income and free time, creates the conditions for innovation. Without it, even brilliant minds like Einstein might spend their entire careers working on immediate economic needs rather than pursuing breakthrough discoveries.
https://newsforkids.net/articles/2024/09/04/16-year-old-stud...
https://online.kidsdiscover.com/quickread/arkansas-teen-buil...
https://interestingengineering.com/energy/nuclear-fusion-rea... ...
Another portion of the article says more explicitly:
Limitations:
Maximum number of nodes: 200.
The structure is stored only in memory (no disk saving).
The later part of the article says the opposite: that the original implementation had "No ability to save progress" and that saving is new in the C++ implementation.
I can't help but wonder (also because of other features of the writing) if the author ran the article through an AI to 'tidy it up' before posting... because I've often found ChatGPT and the like to introduce changes in meaning like this, rather than just rewriting. This is not to dismiss either the article or the power of LLMs, just a curious observation :)