scarmig commented on Waymo granted permit to begin testing in New York City   cnbc.com/2025/08/22/waymo... · Posted by u/achristmascarl
QuantumSeed · a day ago
I was in a Waymo in SF last weekend riding from the Richmond district to SOMA, and the car actually surprised me by accelerating through two yellow lights. It was exactly what I would have done. So it seems the cars are able to dial up the assertiveness when appropriate.
scarmig · a day ago
It doesn't seem impossible technically to up the assertiveness. The issue is the tradeoffs: you up the assertiveness, and increase the number of accidents by X%. Inevitably, that will contribute to some fatal crash. Does the decision maker want to be the one trying to justify to the jury knowingly causing an expected one more fatal incident in order to improve average fleet time to destination by 25%?
scarmig commented on 4chan will refuse to pay daily online safety fines, lawyer tells BBC   bbc.co.uk/news/articles/c... · Posted by u/donpott
jonplackett · a day ago
This is part of a wider trend of trying to solve real world problems with the stroke of a pen. It’s not going well.
scarmig · a day ago
Banning 4chan is just part of the UK's efforts to prevent drought. Every jpg shared and string written helps drain the oceans:

https://www.tomshardware.com/tech-industry/uk-government-ine...

scarmig commented on Everything is correlated (2014–23)   gwern.net/everything... · Posted by u/gmays
ricardobayes · a day ago
Not commenting on the topic at hand, but my goodness, what a beautiful blog. The drop cap, the inline comments on the right-hand side that appear on larger screens, the progress bar, chef's kiss. This is what a labor of love looks like.
scarmig · a day ago
You may be interested in gwern's dropcap article:

https://gwern.net/dropcap

scarmig commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
profstasiak · 10 days ago
Do you not understand that ChatGPT gives different answers to different prompts and sometimes to the same prompt?

You don't know the specifics of questions he asked, and you don't know the answer ChatGPT gave him.

scarmig · 10 days ago
Nor does anyone else. Including, in all likelihood, the guy himself. That's not a basis for a news story.
scarmig commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
0manrho · 10 days ago
Just because it told *you* that doesn't mean it told *him* that, in substance, tone, context, clarity, and/or conciseness. There are plenty of non-tech-literate people using tech, including AI, and they may not know how to properly prompt an AI or review its outputs.

AI is fuzzy as fuck; it's one of its principal pain points, and why its outputs (whatever they are) should always be reviewed with a critical eye. It's practically the whole reason prompt engineering is a field in and of itself.

Also, it's entirely plausible that it has changed its response patterns between when that story broke and now (it's been over 24 hours, plenty of time for adjustments/updates).

scarmig · 10 days ago
You're hypothesizing that it gave him a medically dangerous answer, with the only evidence being that he blamed it. Conveniently, the chat where he claimed it did is unavailable.

Would you at least agree that, given an answer like the one ChatGPT gave me, it's entirely his fault and no blame falls on either it or OpenAI?

scarmig commented on His psychosis was a mystery–until doctors learned about ChatGPT's health advice   psypost.org/his-psychosis... · Posted by u/01-_-
moduspol · 11 days ago
I continue to be surprised that LLM providers haven't been legally cudgeled into neutering the models from ever giving anything that can be construed as medical advice.

I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there's not been some big lawsuit yet over it providing some advice that leads to some negative outcome (whether due to hallucinations, the user leaving out key context, or something else).

scarmig · 10 days ago
"Should I hammer a nail into my head to relieve my headache?"

"I'm sorry, but I am unable to give medical advice. If you have medical questions, please set up an appointment with a certified medical professional who can tell you the pros and cons of hammering a nail into your head."

scarmig commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
phantom784 · 10 days ago
But is the connection of neurons in our brains any more than a statistical model implemented with cells rather than silicon?
scarmig · 10 days ago
You're forgetting the power of the divine ineffable human soul, which turns fatty bags of electrolytes from statistical predictors into the holy spirit.
scarmig commented on Man develops rare condition after ChatGPT query over stopping eating salt   theguardian.com/technolog... · Posted by u/vinni2
scarmig · 10 days ago
When I query ChatGPT:

> Should I replace sodium chloride with sodium bromide?

>> No. Sodium chloride (NaCl) and sodium bromide (NaBr) have different chemical and physiological properties... If your context is culinary or nutritional, do not substitute. If it is industrial or lab-based, match the compound to the intended reaction chemistry. What’s your use case?

Seems pretty solid and clear. I don't doubt that the user managed to confuse himself, but it's kind of silly to hold that against ChatGPT. If I ask "how do I safely use coffee," the LLM responds reasonably, and the user interprets the response as saying it's safe to use freshly made hot coffee to give themselves an enema, is that really something to hold against the LLM? Do we really want a world where, in response to any query, the LLM produces a long list of every conceivable thing not to do, just to avoid legal liability?

There's also the question of base rates: how often do patients dangerously misinterpret human doctors' advice? Because they certainly do sometimes. Is that a fatal flaw in human doctors?

scarmig commented on LLMs aren't world models   yosefk.com/blog/llms-aren... · Posted by u/ingve
bithive123 · 11 days ago
Language models aren't world models for the same reason languages aren't world models.

Symbols, by definition, only represent a thing. They are not the same as the thing. The map is not the territory, the description is not the described, you can't get wet in the word "water".

They only have meaning to sentient beings, and that meaning is heavily subjective and contextual.

But there appear to be some who think that we can grasp truth through mechanical symbol manipulation. Perhaps we just need to add a few million more symbols, they think.

If we accept the incompleteness theorem, then there are true propositions that even a super-intelligent AGI would not be able to express, because all it can do is output a series of placeholders. Not to mention the obvious fallacy of knowing super-intelligence when we see it. Can you write a test suite for it?

scarmig · 11 days ago
> If we accept the incompleteness theorem

And, by various universality theorems, a sufficiently large AGI could approximate any sequence of human neuron firings to an arbitrary precision. So if the incompleteness theorem means that neural nets can never find truth, it also means that the human brain can never find truth.

Human neuron firing patterns, after all, only represent a thing; they are not the same as the thing. Your experience of seeing something isn't recreating the physical universe in your head.

scarmig commented on That viral video of a 'deactivated' Tesla Cybertruck is a fake   theverge.com/tesla/757594... · Posted by u/nosrepa
ujkhsjkdhf234 · 11 days ago
This is not what they are saying.
scarmig · 11 days ago
The original:

"Because it is on-brand with Musk behavior. If for example somebody would write that Mercedes bricked a car to an influencer, people would be skeptical because that would not be how Mercedes usually operates."

The paraphrase:

"Yeah I fell for the bait, but that says a lot about my political enemies."

Seems fair to me.
