Readit News
tsunamifury commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
ssl-3 · 2 days ago
Have you ever employed anyone?

People, when tasked with a job, often get it right. I've been blessed to work with many great people who really do an amazing job of generally getting things right -- or at least, right-enough.

But in any line of work: Sometimes people fuck it up. Sometimes, they forget important steps. Sometimes, they're sure they did it one way when in fact they did it some other way, and have to fix it themselves. Sometimes, they even say they did the job and did it as-prescribed and actually believe themselves, when they've done neither -- and they're perplexed when they're shown this. They "hallucinate" and do dumb things for reasons that aren't real.

And sometimes, they just make shit up and lie. They know they're lying and they lie anyway, doubling down over and over again.

Sometimes they even go all spastic and deliberately throw monkey wrenches into the works, just because they feel something that makes them think that this kind of willfully-destructive action benefits them.

All employees suck some of the time. They each have their own issues. And all employees are expensive to hire, and expensive to fire, and expensive to keep going. But some of their outputs are useful, so we employ people anyway. (And we're human; even the very best of us are going to make mistakes.)

LLMs are not so different in this way, as a general construct. They can get things right. They can also make shit up. They can skip steps. They can lie, and double down on those lies. They hallucinate.

LLMs suck. All of them. They all fucking suck. They aren't even good at sucking, and they persist at doing it anyway.

(But some of their outputs are useful, and LLMs generally cost a lot less to make use of than people do, so here we are.)

tsunamifury · 2 days ago
As far as I can tell (as someone who worked on the early foundation of this tech at Google for 10 years) making up “shit” then using your force of will to make it true is a huge part of the construction of reality with intelligence.

Will to reality through forecasting possible worlds is one of our two primary functions.

tsunamifury commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
balder1991 · 2 days ago
It doesn’t really solve it as a slight shift in the prompt can have totally unpredictable results anyway. And if your prompt is always exactly the same, you’d just cache it and bypass the LLM anyway.

What would really be useful is if a very similar prompt always gave a very, very similar result.
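A minimal sketch of the caching point above, assuming a hypothetical generate callable (not any particular API): an exact-match cache only helps byte-identical prompts, and nothing about sampling guarantees a near-identical prompt lands anywhere near the cached output.

    # Hypothetical exact-match prompt cache; illustrative only.
    cache: dict[str, str] = {}

    def answer(prompt: str, generate) -> str:
        if prompt in cache:
            return cache[prompt]  # byte-identical prompt: deterministic and free
        out = generate(prompt)    # any edit misses, forcing a fresh, unpredictable sample
        cache[prompt] = out
        return out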

tsunamifury · 2 days ago
That’s a way different problem my guy.
tsunamifury commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
sheeshe · 2 days ago
In essence it is a thing that is actually prompting your own brain… seems counterintuitive, but that's how I believe this technology should be used.
tsunamifury · 2 days ago
This technology (which I had a small part in inventing) was not based on intelligently navigating the information space; it's fundamentally based on forecasting your own thoughts by weighting your pre-linguistic vectors and feeding them back to you. Attention layers, in conjunction with later layers, allowed that to be grouped at a higher order and to scan a wider beam space to reward higher-complexity answers.

When trained on chatting (a reflection system on your own thoughts) it mostly just uses a false mental model to pretend to be a separate intelligence.

Thus the term stochastic parrot (which for many of us is actually pretty useful).
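As a toy illustration of what "forecasting your own thoughts" cashes out to mechanically: one attention layer mixes context vectors into a next-token distribution, which then gets sampled. All shapes and weights below are random stand-ins, not any real model's configuration.

    # Toy next-token predictor; every shape and weight here is a random
    # stand-in, not any production model's configuration.
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Scaled dot-product attention: each position re-weights every
        # position's vector before the next-token prediction is made.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        return softmax(scores) @ V

    rng = np.random.default_rng(0)
    d, vocab, seq = 16, 50, 5
    x = rng.normal(size=(seq, d))            # stand-in "pre-linguistic vectors"
    Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
    W_out = rng.normal(size=(d, vocab))

    h = attention(x @ Wq, x @ Wk, x @ Wv)    # context-mixed representations
    probs = softmax(h[-1] @ W_out)           # distribution over the next token
    next_token = rng.choice(vocab, p=probs)  # sampled: hence "stochastic parrot"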

tsunamifury commented on GPT-5.2   openai.com/index/introduc... · Posted by u/atgctg
EastLondonCoder · 2 days ago
I’m with the people pushing back on the “confidence scores” framing, but I think the deeper issue is that we’re still stuck in the wrong mental model.

It’s tempting to think of a language model as a shallow search engine that happens to output text, but that metaphor doesn’t actually match what’s happening under the hood. A model doesn’t “know” facts or measure uncertainty in a Bayesian sense. All it really does is traverse a high‑dimensional statistical manifold of language usage, trying to produce the most plausible continuation.

That’s why a confidence number that looks sensible can still be as made up as the underlying output, because both are just sequences of tokens tied to trained patterns, not anchored truth values. If you want truth, you want something that couples probability distributions to real world evidence sources and flags when it doesn’t have enough grounding to answer, ideally with explicit uncertainty, not hand‑waviness.

People talk about hallucination like it’s a bug that can be patched at the surface level. I think it’s actually a feature of the architecture we’re using: generating plausible continuations by design. You have to change the shape of the model or augment it with tooling that directly references verified knowledge sources before you get reliability that matters.
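To make the "confidence numbers are just tokens" point concrete, here is a hedged sketch; the log-prob values, the sample string, and the grounded_answer check are all illustrative assumptions, not any shipped system. The only quantities a model natively has are per-token probabilities, which measure plausibility of the continuation, not truth; a "confidence: 0.92" appearing in the output is itself just more sampled tokens.

    import math

    # Hypothetical per-token log-probs: plausibility of continuations,
    # not probabilities that the statements are true.
    logprobs = {"Paris": -0.2, "Lyon": -1.8}
    plausibility = {tok: math.exp(lp) for tok, lp in logprobs.items()}

    # This string can be generated with high token probability even though
    # the 0.92 corresponds to nothing the model actually measured:
    stated = "The capital of France is Lyon (confidence: 0.92)"

    def grounded_answer(answer: str, evidence: list[str], min_support: int = 1) -> str:
        # Illustrative grounding check: require the claim to appear in
        # retrieved evidence, and say so explicitly when it doesn't.
        support = sum(answer in snippet for snippet in evidence)
        return answer if support >= min_support else "insufficient grounding to answer"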

tsunamifury · 2 days ago
Hallucinations are a feature of reality that LLMs have inherited.

It’s amazing that experts like yourself who have a good grasp of the manifold MoE configuration don’t get that.

LLMs, much like humans, weight high dimensionality across the entire model manifold, then string together the best-weighted attentive answer.

Just as your doctor occasionally gives you wrong advice too quickly, this sometimes gets confused, either by lighting up too much of the manifold or by having insufficient expertise.
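A toy top-k mixture-of-experts router makes the "lighting up too much of the manifold" intuition concrete; the gate weights, expert count, and choice of k below are illustrative assumptions. With small k a few specialized experts dominate the answer; with large k many experts each contribute a little and expertise gets diluted.

    import numpy as np

    def route(x, gate_W, k):
        # Gate scores decide which experts "light up" for this input.
        logits = x @ gate_W
        topk = np.argsort(logits)[-k:]
        w = np.exp(logits[topk] - logits[topk].max())
        return topk, w / w.sum()

    rng = np.random.default_rng(1)
    d, n_experts = 8, 16
    x = rng.normal(size=d)                    # one token's hidden state
    gate_W = rng.normal(size=(d, n_experts))  # made-up gating weights

    for k in (2, 12):                         # sparse routing vs. "too much lit up"
        experts, weights = route(x, gate_W, k)
        print(k, sorted(experts), weights.round(2))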

tsunamifury commented on Show HN: Gemini Pro 3 imagines the HN front page 10 years from now   dosaygo-studio.github.io/... · Posted by u/keepamovin
sallveburrpi · 4 days ago
It’s actually referencing Nietzsche referencing Empedocles, but your point works as well I guess
tsunamifury · 3 days ago
haha, that's both not true and still works as drunk nonsense.

But good job googling this and getting fooled by an LLM

tsunamifury commented on Show HN: Gemini Pro 3 imagines the HN front page 10 years from now   dosaygo-studio.github.io/... · Posted by u/keepamovin
sallveburrpi · 5 days ago
Time is a flat circle
tsunamifury · 5 days ago
FYI, this quote was meant to be the ramblings of a drunk who says something that sounds deep but is actually meaningless.
tsunamifury commented on Human hair grows through 'pulling' not pushing, study shows   phys.org/news/2025-12-hum... · Posted by u/pseudolus
tokai · 10 days ago
You don't say. Seeing that Munchausen is a byword for telling tall tales.
tsunamifury · 10 days ago
Look at his handle… not that YC is the place for these novelty accounts
tsunamifury commented on Google, Nvidia, and OpenAI   stratechery.com/2025/goog... · Posted by u/tambourine_man
vikinghckr · 12 days ago
Advertisement is unquestionably a net positive for society and humanity. It's one of the few true positive sum business models where everyone is better off.
tsunamifury · 12 days ago
You are shadow blocked fyi
tsunamifury commented on Google, Nvidia, and OpenAI   stratechery.com/2025/goog... · Posted by u/tambourine_man
raw_anon_1111 · 12 days ago
At at least $5 million in paid subscriptions annually, and living between Wisconsin and Taiwan, do you really think that as an independent writer he needs to juice his subscriptions by advocating that other people run ads on an LLM?

Any use of LLMs by other people reduces his value.

tsunamifury · 12 days ago
None of this proves anything other than he writes what audiences want to hear.

Which as we know has nothing to do with reality.

u/tsunamifury

Karma: 6679 · Cake day: June 2, 2008
About
Guy who helped computers predict your next word