Intelligence must be built from a first-principles theory of what intelligence actually is.
The missing science needed to engineer intelligence is composable program synthesis (CPS). Aloe (https://aloe.inc) recently released a GAIA score demonstrating that CPS dramatically outperforms other generalist agents (OpenAI's deep research, Manus, and Genspark) on tasks similar to those a knowledge worker would perform.
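To make the idea concrete, here is a minimal toy sketch of the composability intuition: a synthesizer that searches over compositions of small, reusable primitives until one fits a set of input/output examples. The primitives and the synthesize function below are illustrative assumptions for this sketch, not Aloe's actual method or API.

```python
# Toy sketch of composable program synthesis: enumerate compositions of small,
# reusable primitives until one matches all given input/output examples.
# Hypothetical example only; not Aloe's implementation.
from itertools import product

# Primitive building blocks the synthesizer is allowed to compose.
PRIMITIVES = {
    "double": lambda x: x * 2,
    "increment": lambda x: x + 1,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return a composition of primitives consistent with every (input, output) pair."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(program(i) == o for i, o in examples):
                return names, program
    return None

if __name__ == "__main__":
    # Finds the pipeline 2 -> 25, 3 -> 49, i.e. square(increment(double(x))).
    result = synthesize([(2, 25), (3, 49)])
    if result:
        names, program = result
        print(" -> ".join(names), "| program(4) =", program(4))
```

The point of the sketch is that the synthesized artifact is itself a program built from named, reusable parts, so solutions can be inspected, verified, and recombined, which is the property the comment attributes to CPS.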
So I don't buy the engineering angle, and I also don't think LLMs will scale up to AGI as imagined by Asimov or any of the usual sci-fi tropes. There is something more fundamental missing: missing science, not missing engineering.
I'd argue it's because intelligence has been treated as an ML/NN engineering problem that we've had such a hyper-focus on improving LLMs rather than pursuing the approach articulated in the essay.