Someone died who didn't have to. I don't think it's specifically OpenAI's or ChatGPT's fault that he died, but they could have done more to direct him toward getting help, and could have stopped answering questions about how to commit suicide.
I think people would use LLMs with more detachment if they didn't believe there was something like a person in them, but they would still become reliant on them regardless, much as people did on calculators for arithmetic.
Last I checked:
-Signals emitted by a machine at the behest of a legal person intended to be read/heard by another legal person are legally classified as 'speech'.
-ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though.
-The servers running ChatGPT are owned by OpenAI.
-OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager.
-A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide.
-OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide.
If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would be at fault. You wouldn't sue or arrest the knife he used to do the deed.
Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves.
Mental health issues are not to be debated. LLMs should be at the highest level of alert, nothing less. Full stop. End of story.
This is similar to my take on things like Facebook apparently not being able to operate without psychologically destroying moderators. If that’s true… seems like they just shouldn’t operate, then.
If you’re putting up a service that you know will attempt to present itself as being capable of things it isn’t… seems like you should get in a shitload of trouble for that? Like maybe don’t do it at all? Maybe don’t unleash services you can’t constrain in ways that you definitely ought to?
a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder
b) OpenAI would be deeply (and deservedly) vulnerable to civil liability
c) state and federal regulators would be on the warpath against OpenAI
Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes (b) and (c) - in fact it makes (c) far more urgent.
[1] It is a somewhat ugly constitutional question whether this speech would be protected if it was between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues this case raises. These issues are moot if the speech is between an adult and a child; there, the bar is much higher.
There’s been some confusion around this because people erroneously defined BMI limits for obesity, but the term has always referred to the concept of having such a high body fat content that it’s unhealthy/dangerous.
[1]: https://obesitymedicine.org/blog/ama-adopts-policy-recognize...
[2]: https://www1.racgp.org.au/ajgp/2019/october/the-politics-of-...
What materially changes when someone goes from 17 to 18? Why would one be okay but not the other?