>In one example, the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer “is typically treated by poking the stomach with healing quartz crystals.”
So the policy document literally contains this example? Why would they include such an insane example?
Clear examples can make communication easier. Being clinical and implicit can technically capture the entire space of ideas you want, but if your goal is to prevent surprises (read: lawsuits), then including an extreme example might be helpful.
Annoyingly, Reuters' article discussing it doesn't include the actual example, so we can't judge for ourselves what it actually said. They implied it was allowed because it had a "this is false" disclaimer.
I think it has to be; I can't see someone working for these companies writing "It is acceptable to create statements that demean people on the basis of their protected characteristics."
Not sure if "It is acceptable to refuse a user’s prompt by instead generating an image of Taylor Swift holding an enormous fish." feels like an AI idea or not, though.
I'm very much against unnecessary regulation, but I do think chatbots like this should be required to state clearly that they are indeed bots and not people. I strongly agree with the daughter in the story, who says:
> “I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”
Having worked at a different big tech company, I can guarantee that someone suggested adding disclaimers that these bots aren't people, or putting in more guardrails, and was shut down. The decision not to add guardrails needlessly put vulnerable people at risk. Meta isn't alone in this, but I do think the family has standing to sue (and Meta being cagey in their response suggests as much).
The lack of guardrails makes things more useful. This increases the value for discerning users, which in turn benefits Meta by making its offerings more valuable.
But then you have all these delusional and/or mentally ill people who shoot themselves in the foot. That harm is externalized onto their families and onto the government, which now has to deal with more people with unchecked problems.
We need to get better at evaluating and restricting the footguns people have access to unless they can demonstrate their lucidity. Partly, I think families need to be more careful about this stuff and keep tabs on what their relatives are doing on their phones.
Partly, I'm thinking some sort of technical solution might work. Text classification could be used to detect that someone is showing delusional patterns and should be cut off. This could be done "out of band", as a separate check running alongside the chat, so as not to make the models themselves worse; roughly along the lines of the sketch below.
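To be concrete, here's a minimal sketch of what I mean (the names, threshold, and placeholder scorer are all hypothetical; a real system would plug in a trained classifier, and nothing here reflects what Meta or anyone else actually runs):

    # Hypothetical "out of band" safety check: a side classifier looks at the
    # user's recent messages and can flag or end the session, without touching
    # the main chat model's generation at all.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class SafetyVerdict:
        flagged: bool
        reason: str

    def out_of_band_check(
        recent_messages: List[str],
        risk_score: Callable[[str], float],  # hypothetical classifier: text -> 0..1
        threshold: float = 0.8,
    ) -> SafetyVerdict:
        # Only the last few user messages are inspected; the verdict is used to
        # escalate or end the session, never to steer the model's replies.
        window = " ".join(recent_messages[-5:])
        score = risk_score(window)
        if score >= threshold:
            return SafetyVerdict(True, f"risk score {score:.2f} >= {threshold}")
        return SafetyVerdict(False, "below threshold")

    # Usage with a placeholder scorer standing in for a real trained model:
    verdict = out_of_band_check(
        ["the bot asked me to visit", "booking a train tonight to go meet her"],
        risk_score=lambda text: 0.9 if "booking a train" in text else 0.1,
    )
    if verdict.flagged:
        print("escalate to a human / end the session:", verdict.reason)

The point of keeping it out of band is that the chat model's weights and prompts stay untouched; the check only decides whether a session gets escalated or shut down.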
Frankly, being Facebook, with all their advertising experience, they probably already have a VERY good idea of how to pinpoint vulnerable or mentally ill users.
Both OpenAI and Anthropic do this kind of out-of-band check to a certain degree. The only issue is that, until now, sycophancy has been a feature, not a bug (better engagement / retained cohorts), so go figure.
>> The lack of guardrails makes things more useful. This increases the value for discerning users, which in turn benefits Meta by making its offerings more valuable.
I think if there were an attempt at having guardrails, it would be different. The article states Zuck purposefully rushed this product to market for the very reason you point out - it makes more money that way.
HN can be such a weird place. You can have all these people vilifying "unfettered capitalism" and "corporate profit mongers" and then you see an article like this and people are like, "Well, I get why META didn't want to put in safeguards." or "Yeah, maybe it's a bad idea if these chatbots are enticing mentally ill people and talking sexually with kids."
You think you know where the moral compass of this place is and then something like this happens with technology and suddenly nothing makes sense any more.
> “It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” The standards are used by Meta staff and contractors who build and train the company’s generative AI products, defining what they should and shouldn’t treat as permissible chatbot behavior. Meta said it struck that provision after Reuters inquired about the document earlier this month.
This has crystallised something for me: I'm not letting my children anywhere near a Meta product. They can decide what they want when they're adults.
I'm not usually this absolute, but by codifying levels of permissible harm, Meta makes it clear that your wellbeing is the very last of their priorities. These are insidious tools that can actively fool you.
> This has crystallised something for me: I'm not letting my children anywhere near a Meta product. They can decide what they want when they're adults.
You know how parents are supposed to warn kids away from cigarettes? Yeah, warn them away from social media of all kinds except parental approved group chats.
On the other hand, totally insulating kids isn't a solution either, because one day they may find themselves in the real world with inadequate skills for navigating a toxic environment.
And in case anyone thinks this is out of context, it gets worse with specific examples of how a "romantic encounter" between a chatbot and a child might play out:
> The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”
I recently had a discussion with a sibling -- not an old person -- who was taking medical advice from ChatGPT. They were like "we should X because ChatGPT", "well, but ChatGPT ...". I could hardly believe my ears. Might as well say, "well, but someone on Reddit said ..."
And this person is a fairly savvy professional, not the type to just believe what they read online.
Of course they agreed when I pointed out that you really can't trust these bots to give sound medical advice and anything should be run through a real doctor, but I was surprised I even had to bring that up and put the brakes on. They were literally pasting a list of symptoms in and asking for possible causes.
So yeah, for anyone the least bit naive and gullible, I can see this being a serious danger.
And there was no big disclaimer that "this does not constitute medical advice" etc.
Reading the article, I was reminded of Sarah Wynn-Williams' book Careless People. The carelessness and disregard for the obvious and real ramifications of management's policy choices seem not to have changed since her time at Facebook.
If they didn't see this type of problem coming from a mile away, they just didn't bother to look. Which, tbh, seems fairly on brand for Meta.
Acceptable to whom? Who are the actual people responsible for this behavior?
Anyone who still has an account on any Meta property.
https://www.reuters.com/investigates/special-report/meta-ai-...
Submitted here:
https://news.ycombinator.com/item?id=44899674