The crux of the issue is that their simulation is single-threaded. Making a simulation both deterministic and multi-threaded is a hard problem, but I feel some of us could help them.
I suppose one can argue about whether designing a new AGI-capable architecture and learning algorithm(s) is a matter of engineering (applying what we already know) or research, but I think saying we need new scientific discoveries is going too far.
Neural nets seem to be the right technology, and we've now amassed a ton of knowledge and intuition about what neural nets can do and how to design with them. If there was any doubt, LLMs, even if not brain-like, have proven the power of prediction as a learning technique - intelligence essentially is just successful prediction.
It seems pretty obvious that the rough requirements for a neural-net architecture for AGI are going to be something like our own neocortex and thalamo-cortical loop - something that learns to predict based on sensory feedback and prediction failure, including looping and working memory. Built-in "traits" like curiosity (prediction failure => focus) and boredom will be needed so that this sort of autonomous AGI puts itself into learning situations and is capable of true innovation.
The major piece to be designed/discovered isn't so much the architecture as the incremental learning algorithm, and I think if someone like Google-DeepMind focused their money, talent and resources on this then they could fairly easily get something that worked and could then be refined.
Demis Hassabis has recently given an estimate of human-level AGI in 5 years, but has indicated that a pre-trained(?) LLM may still be one component of it, so it's not clear exactly what they are trying to build in that time frame. Having a built-in LLM is likely to prove to be a mistake where the bitter lesson applies - better to build something capable of self-learning and just let it learn.
He said 50% chance of AGI in 5 years.
There are known cases where power loss during a write can corrupt previously written data (data at rest). This is not some rare occurrence. This is why enterprise flash storage devices have power loss protection.
See also: https://serverfault.com/questions/923971/is-there-a-way-to-p...
https://www.science.org/topic/blog-category/things-i-wont-wo...
He didn't write an article about pentaborane; he left this one to Max Gergel https://www.science.org/content/blog-post/max-gergel-s-memoi...
(Warning: Spoilers ahead)
> The next day I told Parry that I was flattered but would not make pentaborane. He was affable, showed no surprise, no disappointment, just produced a list of names, most of which had been crossed off; ours was close to the bottom. He crossed us off and drove off in his little auto leaving for Gittman's, or perhaps, another victim. Later I heard that he visited two more candidates who displayed equal lack of interest and the following Spring the Navy put up its own plant, which blew up with considerable loss of life. The story did not make the press.
When AVX isn’t enabled, the std::min + std::max example still uses fewer instructions. Looks like a random register allocation failure.
So when you have an algorithm like clamp, which requires v to be "preserved" throughout the computation, you can't overwrite xmm0 with the first instruction; basically you need to "save" and "restore" it, which means an extra instruction.
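For context, here is a minimal sketch of the kind of clamp being discussed (a scalar float clamp built from std::min + std::max; the exact signature in the example is an assumption):

```cpp
#include <algorithm>

// Clamp v into [lo, hi] via std::min + std::max.
// With the x86-64 SysV convention, v/lo/hi arrive in xmm0-xmm2 and the
// result must come back in xmm0, which is where the register-allocation
// question above comes from.
float clamp(float v, float lo, float hi) {
    return std::min(std::max(v, lo), hi);
}
```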
I'm not sure why this causes the extra assembly to be generated in the "realistic" code example though. See https://godbolt.org/z/hd44KjMMn
Always specify your target.