If a court orders you to preserve user data, could you be held liable for doing so, regardless of your privacy policy?
Your users aren't obligated to know that you're using OpenAI or some other provider.
When I failed to stop an acquaintance from killing himself a few years ago [1], it really fucked me up. I barely knew the guy, but I couldn't stop myself from feeling guilty over it, and I still have nightmares about it.
It led to a severe funk of depression that I still haven't gotten over: poor sleep, poor performance at work, increased irritability towards pretty much everyone, and I'm not completely convinced that will ever stop.
I've seen therapists, taken various medications for depression and PTSD, trauma dumped onto pretty much anyone who will listen, and I think I'm a worse person now than I was in 2021.
I guess the likelihood of an event like this happening approaches 1 as you get older, but it doesn't mean it's not terrible.
[1] Written in some detail here https://news.ycombinator.com/item?id=29185822
Things happen and you adapt to survive them. And survival mode isn't made to make you happy, to make you more generous, or to let you expect more loyalty. And once your worldview shifts to a pessimistic one, it'll taint everything around you (especially new interactions with people).
Yes, some people change for the better: and that means (and I say it painfully) that many of them were the ones who caused unnecessary pain to others.
Some reference: https://www.hss.edu/conditions_emotional-impact-pain-experie...
https://www.researchgate.net/publication/341577702_Lacan_on_...
Turns out brand new stuff doesn't always survive, and even if it does you don't know its tradeoffs & pain points yet.
Everything is perfect & bug-free when it has no production use.
Seen it many times, and seen the wreckage later.
Yes, the main goal of a surgical site is to avoid swearing
> our findings are not applicable to children younger than 4 years for whom the buzz wire game’s small parts may represent a choking hazard, although these individuals are unlikely to be currently employed in secondary care.
Now, that's a fair point. I'd avoid a 0-3 toddler if I could choose, before some surgery.
Yes, it was about safety.
The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world-modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API and you were talking about zeros and ones, or voltages in transistors. Technically fine, but totally useless for reaching any conclusion about the high-level system.
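To be fair, the low-level description is literally true; a minimal sketch of what "stochastically produces the next word" means, using a made-up hand-written bigram table in place of a neural network (the vocabulary and probabilities here are invented for illustration):

```python
import random

# Toy "model": for each context word, a probability distribution over the
# next word. A real LLM computes such a distribution with a neural network
# conditioned on the entire token history, not a fixed lookup table.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "sat": {"the": 0.5, "end": 0.5},
    "ran": {"the": 0.5, "end": 0.5},
}

def sample_next(word, rng):
    """Stochastically pick the next word from the model's distribution."""
    dist = BIGRAM_PROBS[word]
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs, k=1)[0]

def generate(start, max_len=10, seed=0):
    """Repeatedly sample the next word until 'end' or max_len is reached."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len and out[-1] != "end":
        out.append(sample_next(out[-1], rng))
    return out
```

The point of the comment stands: this description is accurate at its level, but nothing about the loop above tells you why one such system writes coherent stories and another doesn't, any more than transistor voltages explain a click handler.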
We need a higher abstraction level to talk about higher level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, hence people naturally resort to it when discussing what LLMs can do.