Ukv commented on Why AGI Will Not Happen   timdettmers.com/2025/12/1... · Posted by u/dpraburaj
jqpabc123 · 4 days ago
What logic and physics are being defied

The logic and physics that make a computer what it is --- a binary logic playback device.

By design, this is all it is capable of doing.

Assuming a finite, inanimate computer can produce AGI is to assume that "intelligence" is nothing more than a binary logic algorithm. Currently, there is no logical basis for this assumption --- simply because we have yet to produce a logical definition of "intelligence".

Of all people, programmers should understand that you can't program something that is not defined.

Ukv · 4 days ago
> By design, this is all it is capable of doing. Assuming a finite, inanimate computer can produce AGI is [...]

Humans are also made up of a finite number of tiny particles moving around that would, on their own, not be considered living or intelligent.

> [...] we have yet to produce a logical definition of "intelligence". Of all people, programmers should understand that you can't program something that is not defined.

There are multiple definitions of intelligence, some mathematically formalized, usually centered around reasoning and adapting to new challenges.

There are also a variety of definitions for what makes an application "accessible", most not super precise, but that doesn't prevent me from improving the application so that it gradually meets more and more people's definitions of accessible.

Ukv commented on Why AGI Will Not Happen   timdettmers.com/2025/12/1... · Posted by u/dpraburaj
jqpabc123 · 4 days ago
TLDR;

No amount of fantastical thinking is going to coax AGI out of a box of inanimate binary switches --- aka, a computer as we know it.

Even with billions and billions of microscopic switches operating at extremely high speed consuming an enormous share of the world's energy, a computer will still be nothing more than a binary logic playback device.

Expecting anything more is to defy logic and physics and just assume that "intelligence" is a binary algorithm.

Ukv · 4 days ago
The article doesn't say anything along those lines as far as I can tell - it focuses on scaling laws and diminishing returns ("If you want to get linear improvements, you need exponential resources").
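
A rough numeric illustration of that quoted relationship (hypothetical numbers, assuming quality scales roughly logarithmically with resources):

    import math

    # If quality ~ k * log(resources), each additional unit of quality
    # multiplies the resources required by a constant factor (e^(1/k)).
    k = 1.0
    for quality in range(1, 6):
        resources = math.exp(quality / k)
        print(f"quality {quality}: ~{resources:.0f}x resources")
    # quality 1: ~3x, 2: ~7x, 3: ~20x, 4: ~55x, 5: ~148x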

I generally agree with the article's point, though I think "Will Never Happen" is too strong a conclusion. The idea that simple components ("a box of inanimate binary switches") fundamentally cannot combine to produce complex behaviour, on the other hand, doesn't seem well-founded to me.

Ukv commented on I successfully recreated the 1996 Space Jam website with Claude   theahura.substack.com/p/i... · Posted by u/theahura
thecr0w · 6 days ago
Oh you're right. I read it a bit too quickly this morning and thought it had just done that initially to compare planet placement. Too bad.
Ukv · 6 days ago
The index_tiled.html version correctly positions the original assets, and to me it looks as close to the screenshot as you can get while using those assets (except for the red text).

The version with the screenshot as a background is where it was asked to create an exact match for a screenshot that had been scaled/compressed, which isn't really possible any other way. The article acknowledges this one as cheating.

Better, I think, would've been to retake the screenshot without the scaling/compression, to see if it can create a site that is both an exact match and built from the original assets.

Ukv commented on I successfully recreated the 1996 Space Jam website with Claude   theahura.substack.com/p/i... · Posted by u/theahura
toroszo · 6 days ago
I have, and I couldn't believe what it was saying and had to go see the code to verify. I'm really struggling to believe that anyone would consider this a "coding success".
Ukv · 6 days ago
> I'm really struggling to believe that anyone would consider this a "coding success".

The index_tiled.html version later in the article is what justifies the success claim IMO, and is the version I think it would've made more sense to host.

The currently hosted index.html just feels like a consequence of the author taking a scaled/compressed screenshot and asking Claude to produce an exact match.

Ukv commented on I successfully recreated the 1996 Space Jam website with Claude   theahura.substack.com/p/i... · Posted by u/theahura
Aldipower · 6 days ago
Does not render correctly here. It does not zoom properly, and a window resize also has weird effects. Recreation not finished, I guess.
Ukv · 6 days ago
From the article, Claude asked:

> The screenshot shows viewport-specific positioning - should we match at a specific viewport size or make it responsive?

And the author responded:

> exact screenshot dimensions

So it's only intended to replicate the screenshot, but I do agree that making it center/zoom properly would've been more interesting.

Ukv commented on I successfully recreated the 1996 Space Jam website with Claude   theahura.substack.com/p/i... · Posted by u/theahura
Palmik · 6 days ago
That does not make the title any less clickbaity. Moreover, it does not seem like a vindication of johnfn's original comment.
Ukv · 6 days ago
index_tiled.html is what justifies the title IMO - it's not using a screenshot as the background like index.html, and is as close as you can get using the original assets given the screenshot's scaling and compression artifacts (minus the red text being off).

But I feel it'd make more sense to just retake the screenshot properly and see if it can create a pixel-perfect replica.

Ukv commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
ACCount37 · 7 days ago
If you want actionable intuition, try "a human with almost zero self-awareness".

"Self-awareness" used in a purely mechanical sense here: having actionable information about itself and its own capabilities.

If you ask an old LLM whether it's able to count the Rs in "strawberry" successfully, it'll say "yes". And then you ask it to do so, and it'll say "2 Rs". It doesn't have the self-awareness to know the practical limits of its knowledge and capabilities. If it did, it would be able to work around the tokenizer and count the Rs successfully.
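
A minimal sketch of that limitation, assuming the tiktoken library and the GPT-4-era cl100k_base encoding (token boundaries vary by model):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # BPE encoding used by GPT-4-era models
    tokens = enc.encode("strawberry")

    # The model is fed integer IDs for multi-character chunks, never individual
    # letters, so counting the Rs relies on memorized trivia or a workaround
    # (e.g. spelling the word out one character at a time first).
    print(tokens)
    print([enc.decode([t]) for t in tokens])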

That's a major pattern in LLM behavior. They have a lot of capabilities and knowledge, but not nearly enough knowledge of how reliable those capabilities are, or meta-knowledge that tells them where the limits of their knowledge lie. So, unreliable reasoning, hallucinations and more.

Ukv · 6 days ago
Agree that's a better intuition, with pretraining pushing the model towards saying "I don't know" in the kinds of situations where people write that, rather than through introspection of its own confidence.

Ukv commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
voidhorse · 7 days ago
When you have a thought, are you "predicting the next thing"—can you confidently classify all mental activity that you experience as "predicting the next thing"?

Language and society constrain the way we use words, but when you speak, are you "predicting"? Science allows human beings to predict various outcomes with varying degrees of success, but much of our experience of the world does not entail predicting things.

How confident are you that the abstractions "search" and "thinking" as applied to the biological neurological machine called the human brain, nervous system, and sensorium, and the machine called an LLM, are really equatable? On what do you base your confidence in their equivalence?

Does an equivalence of observable behavior imply an ontological equivalence? How does Heisenberg's famous principle complicate this when we consider the role observers play in founding their own observations? How much of your confidence is based on biased notions rather than direct evidence?

The critics are right to raise these arguments. Companies with a tremendous amount of power are claiming these tools do more than they are actually capable of, and they actively mislead consumers in this manner.

Ukv · 7 days ago
> can you confidently classify all mental activity that you experience as "predicting the next thing"? [...] On what do you base your confidence in their equivalence?

To my understanding, bloaf's claim was only that the ability to predict seems to be a requirement for acting intentionally, and thus that LLMs may "end up being a component in a system which actually does think" - not necessarily that all thought is prediction or that an LLM would be the entire system.

I'd personally go further and claim that correctly generating the next token is already a sufficiently general task to embed pretty much any intellectual capability. To complete `2360 + 8352 * 4 = ` for unseen problems is to be capable of arithmetic, for instance.
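
Worked out (multiplication before addition), the completion it would have to produce is:

    >>> 8352 * 4
    33408
    >>> 2360 + 33408
    35768

Getting those tokens right means actually carrying out the multiplication and addition, not replaying a memorized string.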

Ukv commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
akomtu · 7 days ago
Spoken Query Language? Just like SQL, but for unstructured blobs of text as a database and unstructured language as a query? Also known as Slop Query Language or just Slop Machine for its unpredictable results.
Ukv · 7 days ago
> Spoken Query Language? Just like SQL, but for unstructured blobs of text as a database and unstructured language as a query?

I feel that's more a description of a search engine. Doesn't really give an intuition of why LLMs can do the things they do (beyond retrieval), or where/why they'll fail.

Ukv commented on Bag of words, have mercy on us   experimental-history.com/... · Posted by u/ntnbr
Ukv · 7 days ago
I'm not convinced that "It's just a bag of words" would do much to sway someone who is overestimating an LLM's abilities. It feels so abstract/disconnected from what their experience using the LLM will be that it'll just sound obviously mistaken.

u/Ukv

Karma: 1822 · Cake day: August 15, 2022