Readit News
uh_uh commented on AWS CEO says using AI to replace junior staff is 'Dumbest thing I've ever heard'   theregister.com/2025/08/2... · Posted by u/JustExAWS
moi2388 · 5 days ago
I completely agree.

On a side note.. y'all must be prompt wizards if you can actually use the LLM code.

I use it for debugging sometimes to get an idea, or a quick sketch of a UI.

As for actual code.. the code it writes is a huge mess of spaghetti code, overly verbose, with serious performance and security risks, and a complete misunderstanding of pretty much every design pattern I give it..

uh_uh · 5 days ago
Which model?
uh_uh commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
dgfitz · 13 days ago
>> LLMs don't think. At all.

>How can you so confidently proclaim that?

Do you know why they're called 'models' by chance?

They're statistical, weighted models. They use statistical weights to predict the next token.

They don't think. They don't reason. Math, weights, and turtles all the way down. Calling anything an LLM does "thinking" or "reasoning" is incorrect. Calling any of this "AI" is even worse.
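The "predict the next token" claim above can be made concrete. At each step, a language model assigns a score (logit) to every vocabulary token, converts the scores into probabilities with a softmax, and samples one token. A minimal sketch of that sampling step, using a toy vocabulary and hand-picked logits purely for illustration (not a real model):

```python
import math
import random

def softmax(logits):
    # Subtract the max logit for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, probs, rng=random):
    # A weighted draw over the vocabulary: this is all that
    # "prediction" means at the sampling step.
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy vocabulary and made-up logits for a context like "The cat sat on the".
vocab = ["mat", "dog", "roof", "idea"]
logits = [3.2, 0.1, 1.5, -2.0]

probs = softmax(logits)
print(sample_next_token(vocab, probs))  # usually "mat", since p("mat") ≈ 0.81
```

Whether a system built from this mechanism "thinks" is exactly the dispute in the thread; the sketch only shows what the mechanism itself does.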

uh_uh · 13 days ago
Do you think Hinton and Ilya haven’t heard these arguments?
uh_uh commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
MarkusQ · 13 days ago
LLMs don't think. At all. They do next token prediction.

If they are conditioned on a large data set that includes lots of examples of the results of people thinking, what they produce will look sort of like the results of thinking. But if they were conditioned on a large data set of people repeating the same seven knock knock jokes over and over in some complex pattern (e.g. every third time, in French), what they produce will look like that, and nothing like thinking.

Failing to recognize this is going to get someone killed, if it hasn't already.

uh_uh · 13 days ago
> LLMs don't think. At all.

How can you so confidently proclaim that? Hinton and Ilya Sutskever certainly seem to think that LLMs do think. I'm not saying that you should accept what they say blindly due to their authority in the field, but their opinions should give your confidence some pause at least.

uh_uh commented on GPT Blind Voting: GPT-5 vs. 4o   gptblindvoting.vercel.app... · Posted by u/findhorn
dmd · 16 days ago
19/20 GPT-5. I’m impressed.
uh_uh · 16 days ago
Same result.
uh_uh commented on Persona vectors: Monitoring and controlling character traits in language models   anthropic.com/research/pe... · Posted by u/itchyjunk
devmor · 23 days ago
> I've started to think of LLM's as a form of lossy compression of available knowledge which, when prompted, produces "facts".

That is almost exactly what they are and what you should treat them as.

A lossy compressed corpus of publicly available information with a weight of randomness. The most fervent skeptics like to call LLMs "autocorrect on steroids" and they are not really wrong.

uh_uh · 23 days ago
An LLM is an autocorrect inasmuch as humans are replicators. Something seriously gets lost in this "explanation".
uh_uh commented on Why you should choose HTMX for your next web-based side project (2024)   hamy.xyz/blog/2024-02_htm... · Posted by u/kugurerdem
sgt · a month ago
Htmx can scale. It's very basic and Htmx isn't the only technology to use that approach.
uh_uh · a month ago
It cannot scale because it doesn't have a solution for reusable components. That's why I have abandoned it. Frameworks like React solve this in a much saner way.
uh_uh commented on OpenAI claims gold-medal performance at IMO 2025   twitter.com/alexwei_/stat... · Posted by u/Davidzheng
gellybeans · a month ago
Making an account just to point out how these comments are far more exhausting, because they don't engage with the subject matter. They are just agreeing with a headline and saying, "See?"

You say, "explaining away the increasing performance" as though that were a good faith representation of arguments made against LLMs, or even this specific article. Questioning the self-congratulatory nature of these businesses is perfectly reasonable.

uh_uh · a month ago
But don't you think this might be a case where there is both self-congratulation and actual progress?
uh_uh commented on Reflections on OpenAI   calv.info/openai-reflecti... · Posted by u/calvinfo
lz400 · a month ago
I mean, they're wrong? LLMs don't have agency, don't learn, don't do anything except react to prompts really.
uh_uh · a month ago
What agentic tools have you tried?
uh_uh commented on Reflections on OpenAI   calv.info/openai-reflecti... · Posted by u/calvinfo
lz400 · a month ago
Engineers thinking they're building god is such a good marketing strategy. I can't overstate it. It's even difficult to be rational about it. I don't actually believe it's true; I think it's pure hype and LLMs won't even approximate AGI. But this idea is sort of half-immune to criticism or skepticism: you can always respond with "but what if it's true?". The stakes are so high that the potentially infinite payoff snowballs over any probabilities. 0.00001% multiplied by infinity is an infinite EV, so you have to treat it like that. Best marketing, it writes itself.
uh_uh · a month ago
> I don't actually believe it's true, I think it's pure hype and LLMs won't even approximate AGI.

Not sure how you can say this so confidently. Many would argue they're already pretty close, at least on a short time horizon.

u/uh_uh

Karma: 695 · Cake day: December 7, 2020