Readit News
shagie · 3 months ago
I read this not as "you can't ask ChatGPT about {medical or legal topic}" but rather "you can't build something on ChatGPT that provides {medical or legal} advice or support to someone else."

For example, Epic couldn't embed ChatGPT into their application to have it read your forms for you. You can still ask it - but Epic can't build it.

That said, I haven't found the specific terms and conditions that are mentioned but not quoted in context.

junon · 3 months ago
(For anyone misunderstanding the reference to Epic, it's the name of an electronic healthcare record system widely used in American hospitals, not the game company Epic Games)
CapitaineToinon · 3 months ago
I was picturing a doctor AI NPC that gives you medical advice in the middle of your Fortnite game and now I'm disappointed
tylervigen · 3 months ago
And for anyone who wants to be nerd sniped by this one comment chain, the Acquired podcast recently did an episode on Epic and it is great.

Title: "Epic Systems (MyChart)."

bee_rider · 3 months ago
AI tends to be overly positive, so either AI might congratulate you on your kill streak.
evv · 3 months ago
I'm confused how you would "build something on ChatGPT" in the first place. Does ChatGPT have an API?

Of course OpenAI's GPT-5 and family are available as an API, but this is the first time I'm hearing about the ability to build on top of ChatGPT (the consumer product). I'm guessing this is a mistake by the journalist, who didn't realize that you can use GPT-5 without using ChatGPT?

It seems that they have a unified TOS for the APIs and ChatGPT: https://openai.com/policies/row-terms-of-use/

The seemingly-relevant passage:

> You must not use any Output relating to a person for any purpose that could have a legal or material impact on that person, such as making credit, educational, employment, housing, insurance, legal, medical, or other important decisions about them.
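
To be clear about the distinction: the ChatGPT consumer product and the developer API are separate things even though GPT-5 sits behind both, and the linked terms appear to cover them together. A minimal sketch of the API path, assuming the official openai Python client (the model name and prompt are just placeholders):

  # Minimal sketch: calling GPT-5 through the developer API rather than
  # the ChatGPT consumer app. Assumes the official `openai` Python package
  # and an OPENAI_API_KEY in the environment; model name is a placeholder.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-5",
      messages=[{"role": "user", "content": "Summarize this intake form in plain language."}],
  )
  print(response.choices[0].message.content)

Either way, the Output restriction quoted above would seem to apply, since the terms are unified across both.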

jeromegv · 3 months ago
Don't assume a general news journalist on a deadline, dropping a quick unimportant article, would fully understand the difference between OpenAI's API access and ChatGPT itself.
teratron27 · 3 months ago
Call centers, support chats etc. Lots of companies are now basically telling their support staff to "ChatGPT it" before replying.
siva7 · 3 months ago
wait... so you can't use openai models for educational content? so they can kick out any competi.. ehm company they don't like as this list is quite... extensive?
simsla · 3 months ago
I think it's simpler than that, and they're just trying to avoid liability.

With all of their claims about how GPT can pass the bar exam, medical licensing exams, etc., I wouldn't be surprised if they're eventually held accountable for some of the advice their model gives out.

This way, they're covered. Similarly, Epic could still build that feature, but they'd have to add a disclaimer like "AI generated, this does not constitute legal advice, consult a professional if needed".

freejazz · 3 months ago
Hiding it in the TOS isn't going to be a good defense from a products liability standpoint
xp84 · 3 months ago
And this seems actually wildly reasonable. It’s actually pretty scammy to take people’s money (or whatever your business model is) for legal or medical advice just to give them whatever AI shits out. If I wanted ChatGPT’s opinion (as I actually often do) I’d just ask ChatGPT for free or nearly-free. Anyone repackaging ChatGPT’s output as fit for a specialized purpose is scamming their customers.
left-struck · 3 months ago
Your comment actually hints at this towards the end but yeah this doesn’t just apply to medical and legal topics.
miki123211 · 3 months ago
What if Chat GPT is used as a "power tool" for actual legal / medical professionals?

E.g.: Here's the set of documents uploaded by the customer. Here are my most important highlights (notice that document #68 is an official letter from last year informing the person that form 68d doesn't need to be filled out in their specific situation). Here are my takeaways. Here's my recommendation. Approve or change?
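
Concretely, I'm imagining a flow something like this rough sketch (the prompt, document handling, and approval step are all made up for illustration; it just assumes the standard openai Python client):

  # Rough sketch of a "power tool" flow: the model drafts highlights and a
  # recommendation, but nothing goes out until the licensed professional
  # approves or rewrites it. All names here are hypothetical.
  from openai import OpenAI

  client = OpenAI()

  def draft_review(documents: list[str]) -> str:
      """Ask the model for highlights, takeaways and a draft recommendation."""
      joined = "\n\n---\n\n".join(documents)
      response = client.chat.completions.create(
          model="gpt-5",
          messages=[
              {"role": "system", "content": (
                  "You assist a licensed professional. Produce highlights, "
                  "takeaways and a draft recommendation for their review. "
                  "Your output is never shown directly to the client.")},
              {"role": "user", "content": joined},
          ],
      )
      return response.choices[0].message.content

  def review_case(documents: list[str]) -> str:
      draft = draft_review(documents)
      print(draft)
      verdict = input("Approve or change? ")  # the human stays in the loop
      return draft if verdict.lower().startswith("a") else verdict

The point is that the model output is a draft for the professional, not advice delivered to the client.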

VBprogrammer · 3 months ago
You are failing to recognise all of the hard work which goes into "prompt engineering" to get AI to magically work!
altcognito · 3 months ago
> Anyone repackaging ChatGPT’s output as fit for a specialized purpose is scamming their customers.

This is genuinely hilarious given what LLMs are.

amelius · 3 months ago
Wait, so we cannot use the API anymore?
agentcoops · 3 months ago
Indeed. Similarly, I now like having ChatGPT as the absolute lowest bar of service I'm willing to accept. If you aren't a better therapist, lawyer, or programmer than it, why would I think about paying you?
SwtCyber · 3 months ago
OpenAI seems to be trying to cover liability without fully neutering the utility of the tool
PeterStuer · 3 months ago
I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses. It would be hilarious if it was not so disastrous. People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on.

Tbh, I usually do not like this way of thinking, but these are lawsuits waiting to happen.

dotancohen · 3 months ago

  > I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses.
I'm working with a company that writes AI tools for mental health professionals. The output of the models _never_ goes to the patient.

I've heard quite a few mouthfuls of how harmful LLM advice is for mental health patients - with specific examples. I'm talking on the order of suicide attempts (and at least one success) and other harmful effects to the patient and their surroundings as well. If you know of someone using an LLM for this purpose, please encourage them to seek professional help. Or just point them at this comment - anybody is encouraged to contact me. I'm not a mental health practitioner, but I work with them and I'll do my best to answer their questions or point them to someone who can. My Gmail username is the same as my HN username.

ninetyninenine · 3 months ago
Professionals have confirmation bias too, and the LLM will hook into that regardless of whether it's talking to a patient or a professional. It's so good at it that even the "training" the professional received can be subtly bypassed.

Basically there needs to be an LLM that can push back.

johnny_canuck · 3 months ago
It is awful, but at the same time this isn't new. People have for a long time used Google searches to self-diagnose their issues. ChatGPT just makes that process even easier.

From my viewpoint it speaks more to a problem of the healthcare system.

ninetyninenine · 3 months ago
I agree with everything you said, but ChatGPT does have an insidious thing where it confirms your biases. It kind of senses what you want and runs with it. For truly unbiased responses you literally have to hide your intention from ChatGPT. But even so, ChatGPT can often still sense your intent and give you a biased answer.
bluGill · 3 months ago
Even before google people got books to self-diagnose problems.
LeafItAlone · 3 months ago
Completely agree. Let’s trust the experts we’ve reported on for years: put your symptoms into WebMD. Now please excuse me; according to WebMD apparently this stubbed toe means I have cancer so I have to get that treated.
SwtCyber · 3 months ago
The model's default mode is to be helpful and agreeable, which is exactly the wrong dynamic when someone's dealing with mental health issues or looking for a diagnosis
stocksinsmocks · 3 months ago
You’re absolutely right! Just given that behavior, I’d say there’s a pretty good chance that there is some real mental illness there.
andsoitis · 3 months ago
> I have seen more than once people using ChatGPT as a psychologist and psychiatrist and self diagnosing mental illnesses. It would be hilarious if it was not so disastrous.

What is the disaster?

eecc · 3 months ago
"People with real questions get roped in by an enthusiastically confirming sycophantic model that will never question or push back, just confirm and lead on."

That wasn't too buried IMHO

kakacik · 3 months ago
It can help and also hurt, depends. You have some mild situation that is quite common? Good chance it will give you good advice.

Something more complex, more towards clinical psychiatry than a shoulder to cry on or complaining to a friend over coffee (psychologists)? You are playing Russian roulette on whether the model will hallucinate something harmful in your case, while acting more confidently than any relevant doctor would.

There have been cases of models suggesting suicide, for example, or other generally harmful behavior. There is no responsibility, no oversight, no expert to confirm or refute claims. It's just a faster version of reddit threads.

sans_souse · 3 months ago
Article has since been updated for some clarity:

  > Correction: An earlier version of this story suggested OpenAI had ended medical and legal advice. However, the company said "the model behaviour has also not changed." [0]

[0] https://www.ctvnews.ca/sci-tech/article/chatgpt-users-cant-u...

odyssey7 · 3 months ago
Marketing masterpiece
realitydrift · 3 months ago
The real issue isn’t the policy change, it’s that AI has gotten better at sounding credible than at staying grounded. That creates a kind of performativity drift where the tone of expertise scales faster than the underlying accuracy.

So even when the model is wrong, it’s wrong with perfect bedside manner, and people anchor on that. In high stakes domains like medicine or law, that gap between confidence and fidelity becomes the actual risk, not the tool itself.

zelias · 3 months ago
Wasn't there a recent OAI dev day in which they had some users come on stage and discuss how helpful ChatGPT was in parsing diagnoses from different doctors?

I guess the legal risks were large enough to outweigh this

fluidcruft · 3 months ago
I'd wager it's probably more that there's an identifiable customer and specific product to be sold. Doctors, hospitals, EHR companies and insurers all are very interested in paying for a validated version of this thing.
gxs · 3 months ago
Or simple threats of lawsuits, directly or indirectly; there's a lot of money at stake here in the end.
lupire · 3 months ago
But validation is theoretically impossible.
mk89 · 3 months ago
I was wondering when they would start with the regulations.

Next will be: you're allowed to use it, if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.

This way they can craft an instance of GPT for your specific purposes (law, medicine, etc) and you know it's "safe" to use.

This way they sell EE licenses, which is where the big $$$ are.

dotancohen · 3 months ago

  > Next will be: you're allowed to use it, if your company/product is "HippocraticGPT"-certified, a bit like SOC or PCI compliance.
That's exactly how it should be. That's why one needs a license to practice medicine.

I work with a company in this space. All the AI tools are for the professional to use - the patient never interacts with them. The inferred output (I prefer the word prediction) is only ever seen by a mental health professional.

We reduce the professionals' workloads and help them be more efficient. But it's _not_ better care. That absolutely must be stressed. It is more _efficient_ care for the mental health practitioner and for society that can not produce enough mental health practitioners.

liampulles · 3 months ago
What is the value of the certification on the AI output? If OpenAI says they'll guarantee the output and be accountable for errors, then that's of value and would justify the certification. Is that the bar?
stalfie · 3 months ago
I'm working on a PhD concerning LLMs and somatic medicine, as an MD, and I must admit that my perspective is the complete opposite.

Medical care, at the end of the day, has nothing to do with having a license or not. It's about making the correct diagnosis, in order to administer the correct treatment. Reality does not care about who (or what) made a diagnosis, or how the antibiotic you take was prescribed. You either have the diagnosis, or you do not. The antibiotic helps, or it does not.

Doing this in practice is costly and complicated, which is why society has doctors. But the only thing that actually matters is making the correct decision. And actually, when you test LLMs (in particular o3/gpt-5 and probably gemini 2.5), they are SUPERIOR to individual doctors in terms of medical decision-making, at least on benchmarks. That does not mean that they are superior to an entire medical system, or to a skillful attending in a particular speciality, but it does seem to imply that they are far from a bad source of medical information. Just like LLMs are good at writing boilerplate code, they are good at boilerplate medical decisions, and the fact is that there is so much medical boilerplate that this skill alone makes them superior to most human doctors. There was one study which tested LLM-assisted doctors (I think it was o3) vs. LLMs alone (and doctors alone) on a set of cases, and the unassisted LLM did BETTER than the doctors, assisted or not.

And so all this medicolegal pearl-clutching about how LLMs should not provide medical advice is entirely unfounded when you look at the actual evidence. In fact, the evidence seems to suggest that you should ignore the doctor and listen to ChatGPT instead.

And frankly, as a doctor, it really grinds my gears when anyone implies that medical decisions should be a protected domain to our benefit. The point of medicine is not to employ doctors. The point of medicine is to cure patients, by whatever means best serve them. If LLMs take our jobs because they do a better job than we do, that is a good thing. It is an especially good thing if the general, widely available LLM is the one that does so, and not the expensively licensed "HippocraticGPT-certified" model. Can you imagine anything more frustrating, as a poor kid in the boonies of Bangladesh trying to understand why your mother is sick, than getting told "As a language model I cannot dispense medical advice, please consult your closest healthcare professional"?

Medical success is not measured in employment, profits, or legal responsibilities. It is measured in reduced mortality. The means to achieve this is completely irrelevant.

Of course, mental health is a little bit different, and much more nebulous overall. However, from the perspective of someone on the somatic front, overregulation of LLMs is unnecessary, and in fact unethical. On average, an LLM will dispense better medical advice than an average person with access to Google would find, which is what it was competing with to begin with. It is an insult to personal liberty and to the Hippocratic oath to argue that this should be taken away simply because of some medicolegal bs.

gblargg · 3 months ago
Or you'll need a prescription to be able to ask it health questions.
liampulles · 3 months ago
I think this would apply if they sought out government regulation that applies to all AI players, not just their own company.
mk89 · 3 months ago
...or they want to be the first to do it, as others don't have it.

OpenAI is the leading company; if they provide an instance you can use for legal advice, with the relevant certification etc., it'd be easier to trust them than another random player. They create the software, the certification, and the need for it.

entropi · 3 months ago
Hmm. So OpenAI doesn't care about other people's terms, copyrights or any sort of IP; but they get to have their own terms.
exasperaited · 3 months ago
Controlling other people's things and gaining from the control is kind of the point of being a thief.
sabatonfan · 3 months ago
Welcome to the internet! Have a look around

(https://www.youtube.com/watch?v=k1BneeJTDcU)

Another obligatory song, "Jeff Bezos" by Bo Burnham (https://www.youtube.com/watch?v=lI5w2QwdYik)

C'mon Jeffrey you can do it! (I mean you can break half the internet with AWS us-east-1, but in all seriousness both songs are really nice lol)

sfn42 · 3 months ago
I don't understand this outrage. People put things on the internet for all to see but now you're mad someone saw it and made use of it.

If you didn't want others to read your information you shouldn't have published it on the internet. That's all they're doing at the end of the day, reading it. They're not publishing it as their own, they just used publicly available data to train a model.

It's quite the same as if I read an article and then tell someone about it. If I'm allowed to learn from your article then why isn't openai?

Also the terms are just for liability. Nobody gives a shit what you use ChatGPT for, the only thing those terms do is prevent you from turning around and suing OpenAI after it blows up in your face.

entropi · 3 months ago
And I feel like the difference between, say:

- Paying for a ticket/dvd/stream to see a Ghibli movie

- Training a model on their work without compensating them, then enabling everyone to copy their art style with ~zero effort, flooding the market and diluting the value of their work. And making money in the process.

should be rather obvious. My only hypothesis so far is that a lot of the people in here have a vested interest in not understanding the outrage, so they don't.

freejazz · 3 months ago
You ever see warning labels on products? That's because putting them in terms and conditions isn't enough to avoid liability in a products liability case.
snoman · 3 months ago
Quite the over-simplified straw man you have there.
Etherlord87 · 3 months ago
People publish stuff on the Internet for various reasons. Sometimes they want to share information, but if the information is shared, they want to be *attributed* as the authors.

> If you didn't want others to read your information you shouldn't have published it on the internet. That's all they're doing at the end of the day, reading it. They're not publishing it as their own, they just used publicly available data to train a model.

There is some nuance here that you fail to notice or you pretend you don't see it :D I can't copy-paste a computer program and resell it without a license. I can't say "Oh I've just read the bits, learned from it and based on this knowledge I created my own computer program that looks exactly the same except the author name is different in the »About...« section" - clearly, some reason has to be used to differentiate reading-learning-creating from simply copying...

What if instead of copy-pasting the digital code, you print it onto a film, pass light through the film onto ants, let the light kill the exposed ants, wait for the rest of the ants to eventually go away, and then use the dead ants as another film to somehow convert that back to digital data? You can now argue you didn't copy: you taught ants, the ants learned, and they created a new program. But you will fool no one. AI models don't actually learn; they are a different way the data is stored. I think when a court decides whether a use is fair and transformative enough, it investigates how much effort was put into the transformation: there was a lot of effort put into creating the AI, but once it was created, the effort put into any single work is nearly null, just the electricity, bandwidth, and storage.

modeless · 3 months ago
I wouldn't be surprised to see new products from OpenAI targeted specifically at doctors and/or lawyers. Forbidding them from using the regular ChatGPT with legal terms would be a good way to do price discrimination.
mongol · 3 months ago
Definitely. And in the long run, that is the only way those occupations can operate. From that point on, you are locked into an AI dependency just to operate.
District5524 · 3 months ago
Read their paper on GDPval (https://arxiv.org/abs/2510.04374). In section 3, it's quite clear that their marketing strategy is now "to cooperate with professionals" and augment them. (Which does not rule out replacing them later, when the regulatory situation is more favorable, e.g. once AGI is a well-accepted fact, if ever.) But this will take a lot of time and a local presence which they do not have.
mgr86 · 3 months ago
I have seen "AI" in my Dr's office. They have been using it to summarize visits and write after visit notes.
devsda · 3 months ago
Can it become a proxy for AI companies to collect patient data and medical history, or "train" on the data and sell that as a service to insurance companies?

There's HIPAA, but AI firms have ignored copyright laws, so ignoring HIPAA or making consent mandatory is not a big leap from there.

ncraig · 3 months ago
That's likely DAX Copilot, which doesn't provide medical advice.
bradydjohnson · 3 months ago
OpenEvidence is free for anyone with an NPI

Deleted Comment