https://www.tomshardware.com/tech-industry/uk-government-ine...
You don't know the specifics of the questions he asked, and you don't know the answers ChatGPT gave him.
AI is fuzzy as fuck, it's one of its principal pain points, and why its outputs (whatever they are) should always be reviewed with a critical eye. It's practically the whole reason prompt engineering is a field in and of itself.
Also, it's entirely plausible that it has changed its response patterns between when that story broke and now (it's been over 24 hours, plenty of time for adjustments/updates).
Would you at least agree that, given an answer like the one ChatGPT gave me, it's entirely his fault and there is no blame on either it or OpenAI?
I'm glad--I think LLMs are looking quite promising for medical use cases. I'm just genuinely surprised there's not been some big lawsuit yet over it providing some advice that leads to some negative outcome (whether due to hallucinations, the user leaving out key context, or something else).
"I'm sorry, but I am unable to give medical advice. If you have medical questions, please set up an appointment with a certified medical professional who can tell you the pros and cons of hammering a nail into your head."
> Should I replace sodium chloride with sodium bromide?
>> No. Sodium chloride (NaCl) and sodium bromide (NaBr) have different chemical and physiological properties... If your context is culinary or nutritional, do not substitute. If it is industrial or lab-based, match the compound to the intended reaction chemistry. What’s your use case?
Seems pretty solid and clear. I don't doubt that the user managed to confuse himself, but that's kind of silly to hold against ChatGPT. If I ask "how do I safely use coffee," the LLM responds reasonably, and the user interprets the response as saying it's safe to use freshly made hot coffee to give themself an enema, is that really something to hold against the LLM? Do we really want a world where, in response to any query, the LLM creates a long list of every conceivable thing not to do to avoid any legal liability?
There's also the question of base rates: how often do patients dangerously misinterpret human doctors' advice? Because they certainly do sometimes. Is that a fatal flaw in human doctors?
Symbols, by definition, only represent a thing. They are not the same as the thing. The map is not the territory, the description is not the described, you can't get wet in the word "water".
They only have meaning to sentient beings, and that meaning is heavily subjective and contextual.
But there appear to be some who think that we can grasp truth through mechanical symbol manipulation. Perhaps we just need to add a few million more symbols, they think.
If we accept the incompleteness theorem, then there are true propositions that even a super-intelligent AGI would not be able to prove, because all it can do is output a series of placeholders. Not to mention the obvious fallacy of assuming we'd know super-intelligence when we see it. Can you write a test suite for it?
And, by various universality theorems, a sufficiently large AGI could approximate any sequence of human neuron firings to an arbitrary precision. So if the incompleteness theorem means that neural nets can never find truth, it also means that the human brain can never find truth.
Human neuron firing patterns, after all, only represent a thing; they are not the same as the thing. Your experience of seeing something isn't recreating the physical universe in your head.
"Because it is on-brand with Musk behavior. If for example somebody would write that Mercedes bricked a car to an influencer, people would be skeptical because that would not be how Mercedes usually operates."
The paraphrase:
"Yeah I fell for the bait, but that says a lot about my political enemies."
Seems fair to me.