In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.
This is separate from directly answering the question "is a singularity coming?"
One camp says "well, it was built from a bunch of pieces, so it can only do the things those pieces can do", which is reasonably dismissed by noting that basically the only people who correctly predicted current LLM capabilities are the ones who are remarkably worried about a singularity occurring.
The other camp says "we can evaluate capabilities and notice that LLMs keep gaining new features at an exponential, now bordering on hyperbolic, rate", like the OP link. And those people are also fairly worried about a singularity occurring.
So mainly you get people using "here's how LLMs actually work" to argue against the singularity if-and-only-if they are also arguing that LLMs can't do things they provably can do today, or are otherwise making arguments that would equally declare humans incapable of intelligence, reasoning, etc.
But that problem is MUCH MUCH MUCH harder than people make it out to be.
For example, you can reliably train an LLM to produce accurate output for assembly code that fits into its context window. However, let's say you give it a terabyte of assembly code: it won't be able to produce correct output, because it will run out of context.
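As a rough back-of-envelope check (assuming ~4 bytes of text per token, which is only a ballpark, and a generously large context window), a terabyte of assembly overshoots the window by several orders of magnitude:

```python
# Rough scale comparison: 1 TiB of assembly text vs. a large context window.
# The ~4 bytes/token figure is a ballpark assumption, not a tokenizer measurement.
corpus_bytes = 1024**4              # 1 TiB of assembly source
bytes_per_token = 4                 # rough average for code/text
context_window_tokens = 1_000_000   # roughly the largest windows on offer today

corpus_tokens = corpus_bytes / bytes_per_token
print(f"corpus: ~{corpus_tokens:.2e} tokens")          # ~2.75e+11
print(f"window:  {context_window_tokens:.0e} tokens")  # 1e+06
print(f"overshoot: ~{corpus_tokens / context_window_tokens:,.0f}x")
```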
You can get around that with agentic frameworks, but all of those right now are manually coded.
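A minimal sketch of what such a hand-coded agentic loop looks like, assuming a generic `llm_complete(prompt)` call (not any particular framework's API) and a crude fixed-size chunking strategy:

```python
# Minimal hand-rolled "agentic" loop: chunk the input so each call fits the
# context window, and carry a running summary/state between calls.
# llm_complete() is a stand-in for whatever completion API you use.

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def process_assembly(asm_text: str, chunk_chars: int = 100_000) -> str:
    state = "No analysis yet."
    for start in range(0, len(asm_text), chunk_chars):
        chunk = asm_text[start:start + chunk_chars]
        prompt = (
            "You are analyzing a large assembly program in pieces.\n"
            f"Analysis so far:\n{state}\n\n"
            f"Next chunk:\n{chunk}\n\n"
            "Update the analysis to account for this chunk."
        )
        state = llm_complete(prompt)
    return state
```

The chunk size, the carried state, and the prompt wiring are all decisions a programmer makes by hand, which is what "manually coded" means here.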
So how do you train an LLM to take assembly code of any length and produce the correct result? The only way is essentially to train the network's internal structure to behave like a computer, but the problem is that you can't back-propagate through discrete 0/1 values unless you explicitly build a CPU-like architecture into it. So obviously, error correction on inputs and outputs is not the way we get to intelligence.
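To make the back-propagation point concrete: a hard 0/1 threshold has zero derivative almost everywhere, so gradient descent gets no signal through it. A tiny illustration using PyTorch autograd (just a sketch of the failure mode, not a claim about any particular training setup):

```python
import torch

x = torch.randn(5, requires_grad=True)

# Squash to (0, 1), then round to hard 0/1 values. round() is defined for
# autograd, but its gradient is zero everywhere, so no signal reaches x.
y = torch.round(torch.sigmoid(x))
y.sum().backward()
print(x.grad)  # all zeros: the discrete step blocks any learning signal
```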
It may be that the answer is essentially a stochastic search: spin up many instances of trillion-parameter nets and have them operate in environments under some form of genetic algorithm until you get something that behaves like a human, with no real shortcut possible because of essentially chaotic effects.
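For what it's worth, the population-based search being gestured at looks roughly like this in miniature (a toy sketch: `evaluate_in_environment` is a placeholder for the expensive "behaves like a human" fitness test, and the parameter counts here are nowhere near trillion-scale):

```python
import numpy as np

def evaluate_in_environment(params: np.ndarray) -> float:
    """Placeholder fitness: stands in for 'how human-like is the behaviour of a
    net with these parameters'. Defining this well is the genuinely hard part."""
    return -float(np.sum(params ** 2))  # dummy objective so the sketch runs

def evolve(n_params=1_000, pop_size=32, generations=50,
           mutation_scale=0.02, seed=0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    population = [rng.normal(size=n_params) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top quarter as parents.
        population.sort(key=evaluate_in_environment, reverse=True)
        parents = population[: pop_size // 4]
        # Refill the population with mutated copies of random parents.
        children = [
            parents[rng.integers(len(parents))]
            + rng.normal(scale=mutation_scale, size=n_params)
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=evaluate_in_environment)
```

Whether anything like this scales to trillion-parameter nets is exactly the open question; the sketch only shows the shape of the loop.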
Fascinating reasoning. Should we conclude that humans are also incapable of intelligence? I don't know any human who can fit a terabyte of assembly into their context window.
You explicitly said: "the excuse that 'it's not plagiarizing, it thinks!!!!1'", and it seems rather relevant that they've never actually used that excuse.
The term AGI so obviously means something way smarter than what we have. We do have something impressive but it’s very limited.
Playing loud music your neighbours can hear => you’re the problem
Smoking and letting the smoke pollute your neighbours’ air => you’re the problem
Plenty of times the fault lies with the apartment itself: if the reasonable noise of my everyday living disrupts my neighbors, that's bad design. Different people work different shifts. I don't see why the morning person should have to hold off on a morning shower just because the plumbing wakes up their neighbor, nor why the night-shift worker should have to hold off on doing laundry just because that wakes the morning person up.
We can't blindly trust Waymo's PR releases or apples-to-oranges comparisons. That's why the bar is higher.