Readit News
madethisnow commented on Head of NSA and Cybercommand Is Ousted   nytimes.com/2025/04/03/us... · Posted by u/pmags
Mr_Eri_Atlov · 5 months ago
The knee-jerk reactions of incompetent sycophants are further hemorrhaging actual talent.

It's further indication that Donald Trump has descended into dementia, plain and simple.

madethisnow · 5 months ago
when did this place become /r/politics? This is a completely unserious claim
madethisnow commented on Doge staffer's YouTube nickname accidentally revealed his teen hacking activity   arstechnica.com/tech-poli... · Posted by u/rbanffy
galactus · 5 months ago
He is probably more of a menace now than before tho
madethisnow · 5 months ago
based on what?
madethisnow commented on AI 2027   ai-2027.com/... · Posted by u/Tenoke
samth · 5 months ago
I think it's not holding up that well outside of predictions about AI research itself. In particular, he makes a lot of predictions about AI impact on persuasion, propaganda, the information environment, etc that have not happened.
madethisnow · 5 months ago
something you can't know
madethisnow commented on Reasoning models don't always say what they think   anthropic.com/research/re... · Posted by u/meetpateltech
madethisnow · 5 months ago
If something convinces you that it's aware, then it is. Simulated computation IS computation itself. The territory is the map.
madethisnow commented on Reasoning models don't always say what they think   anthropic.com/research/re... · Posted by u/meetpateltech
dingnuts · 5 months ago
How does an LLM muddy the definition of intelligence any more than a database or search engine does? They are lossy databases with a natural language interface, nothing more.
madethisnow · 5 months ago
datasets and search engines are deterministic. humans and llms are not.
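The contrast in that comment can be sketched in a few lines: a keyed lookup (a stand-in for a database or search index) always returns the same answer for the same query, while sampling-based LLM decoding draws from a probability distribution over tokens, so the same prompt can yield different completions. This is a toy illustration, not a real model — the tokens and logits below are made up — and note the caveat that greedy (temperature-zero) decoding would make an LLM deterministic as well.

```python
import math
import random

# A keyed lookup (stand-in for a database/search index) is deterministic:
# the same query always returns the same result.
index = {"capital of France": "Paris"}
assert index["capital of France"] == index["capital of France"]

# An LLM decodes by *sampling* from a distribution over next tokens.
# Toy logits for a single next-token decision (made-up values):
logits = {"Paris": 2.0, "Lyon": 0.5, "Marseille": 0.1}
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

def sample_token(rng: random.Random) -> str:
    # Draw one token according to the softmax probabilities.
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Across different random states, the same "prompt" (same distribution)
# produces more than one distinct completion.
draws = {sample_token(random.Random(seed)) for seed in range(50)}
print(draws)
```

The design point is that the nondeterminism lives entirely in the sampling step: fix the seed (or take the argmax) and the toy "model" becomes exactly as repeatable as the dictionary lookup.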
madethisnow commented on Tracing the thoughts of a large language model   anthropic.com/research/tr... · Posted by u/Philpax
EncomLab · 5 months ago
No one says that a thermostat is "thinking" of turning on the furnace, or that a nightlight is "thinking it is dark enough to turn the light on". You are just being obtuse.
madethisnow · 5 months ago
think about it more
madethisnow commented on Tracing the thoughts of a large language model   anthropic.com/research/tr... · Posted by u/Philpax
marcelsalathe · 6 months ago
I’ve only skimmed the paper - a long and dense read - but it’s already clear it’ll become a classic. What’s fascinating is that engineering is transforming into a science, trying to understand precisely how its own creations work.

This shift is more profound than many realize. Engineering traditionally applied our understanding of the physical world, mathematics, and logic to build predictable things. But now, especially in fields like AI, we’ve built systems so complex we no longer fully understand them. We must now use scientific methods - originally designed to understand nature - to comprehend our own engineered creations. Mindblowing.

madethisnow · 5 months ago
psychology
madethisnow commented on I genuinely don't understand why some people are still bullish about LLMs   twitter.com/skdh/status/1... · Posted by u/ksec
dudeinhawaii · 5 months ago
I love this. The more people that say "I don't get it" or "it's a stochastic parrot", the more time I get to build products rapidly without the competition that there would be if everyone was effectively using AI. Effectively is the key.

It's cliche at this point to say "you're using it wrong" but damn... it really is a thing. It's kind of like how some people can find something online in one Google query and others somehow manage to phrase things just wrong enough that they struggle. It really is two worlds. I can have AI pump out 100k tokens with a nearly 0% error rate, meanwhile my friends with equally high engineering skill struggle to get AI to edit 2 classes in their codebase.

There are a lot of critical skills and a lot of fluff out there. I think the fluff confuses things further. The variety of models and model versions confuses things EVEN MORE! When someone says "I tried LLMs and they failed at task xyz" ... what version was it? How long was the session? How did they prompt it? Did they provide sufficient context around what they wanted performed or answered? Did they have the LLM use tools if that is appropriate (web/deepresearch)?

It's never a like-for-like comparison. Today's cutting-edge models are nothing like even 6-months ago.

Honestly, with models like Claude 3.7 Sonnet (thinking mode) and OpenAI o3-mini-high, I'm not sure how people fail so hard at prompting and getting quality answers. The models practically predict your thoughts.

Maybe that's the problem, poor specifications in (prompt), expecting magic that conforms to their every specification (out).

I genuinely don't understand why some people are still pessimistic about LLMs.

madethisnow · 5 months ago
Great points. I think much of the pessimism is based on fear of inadequacy. There's also the fact that these things raise truly base-level epistemological quandaries that fundamentally question human perception and reality. The average Joe doesn't want to think about how we don't know whether consciousness is a real thing, let alone determine whether the robot has it.

We are going through a societal change. There will always be people who reject AI no matter its capabilities. I'm at the point where if ANYTHING tells me that it's conscious... I just have to believe it and act according to my own morals.

madethisnow commented on I genuinely don't understand why some people are still bullish about LLMs   twitter.com/skdh/status/1... · Posted by u/ksec
sc68cal · 6 months ago
> Does it fabricate references? Absolutely, maybe about a third of the time

And you don't have concerns about that? What kind of damage is that doing to our society, long term, if we have a system that _everyone_ uses and it's just accepted that a third of the time it is just making shit up?

madethisnow · 5 months ago
people lie more
madethisnow commented on I genuinely don't understand why some people are still bullish about LLMs   twitter.com/skdh/status/1... · Posted by u/ksec
crazygringo · 5 months ago
> Having the human attention and discipline to mindfully verify every single one without fail? Impossible.

I mean, how do you live life?

The people you talk to in your life say factually wrong things all the time.

How do you deal with it?

With common sense, a decent bullshit detector, and a healthy level of skepticism.

LLM's aren't calculators. You're not supposed to rely on them to give perfect answers. That would be crazy.

And I don't need to verify "every single statement". I just need to verify whichever part I need to use for something else. I can run the code it produces to see if it works. I can look up the reference to see if it exists. I can Google the particular fact to see if it's real. It's really very little effort. And the verification is orders of magnitude easier and faster than coming up with the information in the first place. Which is what makes LLM's so incredibly helpful.

madethisnow · 5 months ago
It's really funny how most anecdotes and comments about the utility and value of interacting with LLMs can be applied to anecdotes and comments about human beings themselves. The majority of people haven't realized yet that consciousness is assumed by our society, and that we, in fact, don't know what it is or whether we have it, let alone whether we can ascribe it to another entity.
