Why would you require an LLM to have proof for the things it says? I mean, that would be nice, and I am actually working on that, but it's not something we require of humans and/or HN commenters, is it?
Symbols, by definition, only represent a thing. They are not the same as the thing. The map is not the territory; the description is not the described; you can't get wet in the word "water".
They only have meaning to sentient beings, and that meaning is heavily subjective and contextual.
But there appear to be some who think that we can grasp truth through mechanical symbol manipulation. Perhaps we just need to add a few million more symbols, they think.
If we accept the incompleteness theorem, then there are true propositions that even a super-intelligent AGI would not be able to express, because all it can do is output a series of placeholders. Not to mention the obvious fallacy of assuming we would know super-intelligence when we see it. Can you write a test suite for it?
It would be interesting to know what percentage of the people who invoke the incompleteness theorem have no clue what it actually says.
Most people don't even know what a proof is, so that cannot be a hindrance on the path to AGI ...
Second: ANY world model that can be digitally represented would be subject to the same argument (if stated correctly), not only LLMs.
The actual management of memory (allocating, reclaiming, etc.) is handled automagically for you.
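A minimal sketch of what that looks like in practice, assuming a garbage-collected language such as Go (the comment doesn't name one):

    package main

    import "fmt"

    // newBuffer allocates a slice; note there is no free/delete call anywhere.
    func newBuffer(n int) []byte {
        return make([]byte, n)
    }

    func main() {
        for i := 0; i < 1000; i++ {
            b := newBuffer(1 << 20) // roughly 1 MiB per iteration
            b[0] = 42
            // b becomes unreachable at the end of the iteration;
            // the garbage collector reclaims it, no manual cleanup needed.
        }
        fmt.Println("done, never called free once")
    }

You just allocate and move on; reclaiming the memory is the runtime's problem, not yours.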
I mean, THAT IS WHAT TECHNOLOGY IS SUPPOSED TO BE THERE FOR. Nobody wants a job, everybody just wants to live their lives.
Really good examples will be rather domain-specific, so it’s perfectly understandable why Alexis would trust her readers to be able to imagine uses that suit their needs.
"It's just an example!" Well, if you cannot come up with a good example, maybe you don't have a point.