Readit News
handoflixue commented on The Singularity will occur on a Tuesday   campedersen.com/singulari... · Posted by u/ecto
wavemode · a day ago
False dichotomy. One can believe that LLMs are capable of more than their constituent parts without necessarily believing that their real-world utility is growing at a hyperbolic rate.
handoflixue · 21 hours ago
Fair - I meant there are two major clusters in the mainstream debate, but like all debates there are obviously a few people off in all sorts of other positions.
handoflixue commented on The Singularity will occur on a Tuesday   campedersen.com/singulari... · Posted by u/ecto
dimitri-vs · a day ago
On the other hand the average human has a context window of 2.5 petabytes that's streaming inference 24/7 while consuming the energy equivalent of a couple sandwiches per day. Oh and can actually remember things.
handoflixue · 21 hours ago
Citation desperately needed? Last I checked, humans could not hold the entirety of Wikipedia in working memory, and that's a mere 24 GB. Our GPU might handle "2.5 petabytes", but we're not writing all that to disk - in fact, most people have terrible memory of basically everything they see and do. A one-trick visual-processing pony is hardly proof of intelligence.
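
A rough sanity check on the energy side of that exchange (the 2000 kcal/day intake and ~20% brain share are standard ballpark assumptions, not figures from either comment):

    # Sanity check: is "a couple sandwiches per day" the right order
    # of magnitude for the brain's power draw?
    KCAL_TO_JOULES = 4184
    SECONDS_PER_DAY = 86_400

    daily_intake_kcal = 2000   # assumed whole-body intake
    brain_fraction = 0.20      # commonly cited share taken by the brain

    body_watts = daily_intake_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
    brain_watts = body_watts * brain_fraction

    print(f"whole body: ~{body_watts:.0f} W")   # ~97 W
    print(f"brain:      ~{brain_watts:.0f} W")  # ~19 W

So the sandwich half of the claim is roughly right, whatever one thinks of the petabyte figure.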
handoflixue commented on The Singularity will occur on a Tuesday   campedersen.com/singulari... · Posted by u/ecto
wavemode · a day ago
You're putting a bunch of words in the parent commenter's mouth, and arguing against a strawman.

In this context, "here’s how LLMs actually work" is what allows someone to have an informed opinion on whether a singularity is coming or not. If you don't understand how they work, then any company trying to sell their AI, or any random person on the Internet, can easily convince you that a singularity is coming without any evidence.

This is separate from directly answering the question "is a singularity coming?"

handoflixue · a day ago
The problem is, there are two groups:

One says "well, it was built as a bunch of pieces, so it can only do the things the pieces can do", which is reasonably dismissed by noting that basically the only people who predicted current LLM capabilities are the ones who are remarkably worried about a singularity occurring.

The other says "we can evaluate capabilities and notice that LLMs keep gaining new features at an exponential, now bordering on hyperbolic, rate", like the OP link. And those people are also fairly worried about the singularity occurring.

So mainly you get people using "here's how LLMs actually work" to argue against the Singularity if-and-only-if they are also the ones arguing that LLMs can't do things they provably can do today, or are otherwise making arguments that would also declare humans incapable of intelligence, reasoning, etc.

handoflixue commented on The Singularity will occur on a Tuesday   campedersen.com/singulari... · Posted by u/ecto
ActorNightly · a day ago
It's pretty clear that solving AI is a software problem; I don't think anyone would disagree.

But that problem is MUCH MUCH MUCH harder than people make it out to be.

For example, you can reliably train an LLM to produce accurate output for assembly code that fits into its context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.
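
Back-of-envelope on the scale gap being described (the bytes-per-token figure and window size below are rough assumptions, not from the comment):

    # How far does 1 TB of assembly overshoot a frontier context window?
    corpus_bytes = 10**12        # 1 TB of assembly
    bytes_per_token = 4          # rough assumption for code-like text
    context_window = 200_000     # tokens, a typical large window today

    corpus_tokens = corpus_bytes // bytes_per_token
    print(f"corpus: ~{corpus_tokens:.2e} tokens")               # ~2.5e+11
    print(f"window: {context_window:.1e} tokens")               # 2.0e+05
    print(f"gap:    ~{corpus_tokens / context_window:,.0f}x")   # ~1,250,000x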

You can get around that with agentic frameworks, but all of those right now are manually coded.

So how do you train an LLM to correctly take any length of assembly code and produce the correct result? The only way is to essentially train the structure of the neurons inside it to behave like a computer, but the problem is that you can't do back-propagation with discrete 0 and 1 values unless you explicitly code the architecture of a CPU into it. So obviously, error correction on inputs/outputs is not the way we get to intelligence.
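
A minimal sketch of the zero-gradient problem being pointed at here (a toy, not anyone's actual training setup; PyTorch defines round's gradient as zero, so a hard 0/1 step passes no learning signal back):

    import torch

    # Toy illustration: a hard 0/1 step in the forward pass kills the gradient.
    w = torch.tensor([0.3], requires_grad=True)
    x = torch.tensor([1.0])

    bit = torch.round(torch.sigmoid(w * x))  # discrete 0/1 value; round's grad is 0
    loss = (bit - 0.0) ** 2                  # try to push the bit toward 0
    loss.backward()

    print(w.grad)  # tensor([0.]) -- no learning signal ever reaches w

This is the same reason binarized-network training falls back on tricks like straight-through estimators rather than plain backprop.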

It may be that the answer is pretty much a stochastic search, where you spin up X instances of trillion-parameter nets and make them operate in environments with some form of genetic algorithm until you get something that behaves like a human, and any shortcutting of this is not really possible because of essentially chaotic effects.
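
For concreteness, a minimal toy version of that select-and-mutate loop (the fitness function, population sizes, and mutation rate are all illustrative stand-ins, nothing from the comment):

    import random

    def fitness(params):
        # Toy stand-in objective: closer to all-ones is better.
        return -sum((p - 1.0) ** 2 for p in params)

    def mutate(params, rate=0.1):
        # Gaussian perturbation of every parameter.
        return [p + random.gauss(0, rate) for p in params]

    # Random initial population of small "parameter vectors".
    population = [[random.gauss(0, 1) for _ in range(8)] for _ in range(50)]

    for _ in range(200):
        population.sort(key=fitness, reverse=True)  # selection pressure
        survivors = population[:10]
        offspring = [mutate(random.choice(survivors)) for _ in range(40)]
        population = survivors + offspring

    best = max(population, key=fitness)
    print(f"best fitness after search: {fitness(best):.4f}")  # approaches 0

Scaled up, "params" becomes a trillion-weight net and "fitness" becomes survival in an environment - which is exactly where the cost of this approach explodes.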

handoflixue · a day ago
> For example, you can reliably train an LLM to produce accurate output for assembly code that fits into its context window. However, let's say you give it a terabyte of assembly code - it won't be able to produce correct output, as it will run out of context.

Fascinating reasoning. Should we conclude that humans are also incapable of intelligence? I don't know any human who can fit a terabyte of assembly into their context window.

handoflixue commented on Billing can be bypassed using a combo of subagents with an agent definition   github.com/microsoft/vsco... · Posted by u/napolux
piker · 3 days ago
+1. I see all these posts about tokens, and I'm like "who's paying by the token?"
handoflixue · 3 days ago
Anthropic pushes you to use the API for anything "third party", such as running OpenClaw.
handoflixue commented on Claude Opus 4.6   anthropic.com/news/claude... · Posted by u/HellsMaddy
dev1ycan · 5 days ago
Did using LLMs too much remove your ability to critically think too?
handoflixue · 3 days ago
Just to be clear: you're mad because your "critical thinking" led you to a spurious argument that you disagree with, and that they never actually made?

You explicitly said: "the excuse that "it's not plagiarizing, it thinks!!!!1"", and it seems rather relevant that they've never actually used that excuse.

handoflixue commented on Claude Opus 4.6   anthropic.com/news/claude... · Posted by u/HellsMaddy
dev1ycan · 6 days ago
Their "constitution" is just garbage meant to defend them ripping off copyrighted material with the excuse that "it's not plagiarizing, it thinks!!!!1" which is, false.
handoflixue · 6 days ago
I don't recall them ever offering that legal reasoning - I'm sure you can provide a citation?
handoflixue commented on Why do people still talk about AGI?    · Posted by u/cermicelli
dangus · 10 days ago
This is a great analogy.

The term AGI so obviously means something way smarter than what we have. We do have something impressive but it’s very limited.

handoflixue · 10 days ago
The term AGI explicitly refers to something as smart as us: humans are the baseline for what "General Intelligence" means.
handoflixue commented on Teaching my neighbor to keep the volume down   idiallo.com/blog/teaching... · Posted by u/firefoxd
ornornor · 10 days ago
The rule is simple IMO: whatever you’re doing, if it impacts people beyond your own sphere then you’re the problematic one.

Playing loud music, your neighbours can hear it => you’re the problem

Smoking and having the smoke pollute your neighbours air => you’re the problem

handoflixue · 10 days ago
Your comment impacted me, so I assume you'll post an apology for this deeply problematic behavior of yours?

Plenty of times the fault lies with the building itself: if the reasonable noise of me living disrupts my neighbors, that's bad design. Different people work different shifts - I don't see why the morning person should have to hold off on a morning shower just because the plumbing wakes their neighbor, nor why the night-shift worker should have to hold off on doing laundry just because that wakes the morning person up.

handoflixue commented on Waymo robotaxi hits a child near an elementary school in Santa Monica   techcrunch.com/2026/01/29... · Posted by u/voxadam
kj4211cash · 13 days ago
But we have no way of knowing whether robotaxis are safer. See, for example, the arguments raised here: https://www.bloomberg.com/news/features/2026-01-06/are-auton...

We can't blindly trust Waymo's PR releases or apples-to-oranges comparisons. That's why the bar is higher.

handoflixue · 13 days ago
You may not have any way of knowing, but the rest of society has developed all sorts of systems of knowing: the "scientific method", "Bayesian reasoning", etc. Or start with the Greek philosophy classics.

u/handoflixue

Karma: 1248 · Cake day: March 21, 2019