I'm not much on X anymore due to the vitriol, and visiting now kinda proved it. Beneath almost every trending post made by a woman is someone using Grok to sexualize a picture of her.
(And whatever my timeline has become now is why I don't visit more often, wtf, used to only be cycling related)
I left when they started putting verified (paid) comments at the top of every conversation. Having the worst nazi views front and center on every comment isn't really a great experience.
I've got to imagine that Musk fired literally all of the product people. Pay-for-attention was just such an obviously bad idea, with a very long history of destroying social websites.
They also don’t take down overt Nazi content anymore: accounts with all the standard, unambiguous Nazi symbols, hate content about their usual targets complete with slurs, imagery of Hitler and praise of his policies, and calls to exterminate their perceived enemies while dehumanizing them as subhuman vermin. I’ve tried reporting many such accounts and posts. It’s all protected now, and boosted via payment.
I normally stay away too, but just decided to scroll through grok’s replies to see how widespread it really is. It looks like a pretty big problem, and not just for women. Though I must say that Xi Jinping in a bikini made me laugh.
I’m not sure if this is much worse than the textual hate and harassment being thrown around willy nilly over there. That negativity is really why I never got into it, even when it was twitter I thought it was gross.
Before Elon bought it out it was mostly possible to contain the hate with a carefully curated feed. Afterward the first reply on any post is some blue check Nazi and/or bot. Elon amplifying the racism by reposting white supremacist content, no matter how fabricated/false/misleading, is quite a signal to send to the rest of the userbase.
X wrote in offering to pay something for my OG username, because fElon wanted it for one of his Grok characters. I told them to make an offer, only for them to invoke their Terms of Service and steal it instead.
Hmm, I have an old Twitter account. Elon promised that he was going to make it the best site ever, let's see what the algorithm feeds me today, January 5, 2026.
1. Denmark taxes its rich people and has a high standard of living.
2. Scammy looking ad for investments in a blood screening company.
3. Guy clearing ice from a drainpipe, old video but fun to watch.
4. Oil is not actually a fossil fuel, it is "a gift from the Earth"
5. Elon himself reposting a racist fabrication about black people in Minnesota.
6. Climate change is a liberal lie to destroy western civilization. CO2 is plant food, liberals are trying to starve the world by killing off the plants.
7. Something about an old lighthouse surviving for a long time.
8. Vaccine conspiracy theories
9. Outright racism against Africans, claiming they are too dumb to sustain civilized society without white men running it.
10. One of those bullshit AI videos where the AI doesn't understand how pouring resin works.
11. Microsoft released an AI that is going to change everything, for real this time, we promise.
12. Climate change denialism
13. A post claiming that Africa and South America aren't poor because they were robbed of resources during the colonial era and beyond, but because they are too dumb to run their countries.
14. A guy showing how you can pack fragile items using expanding foam and plastic bags. He makes it look effortless, but glosses over how he measures out the amount of foam to use.
15. Hornypost asking Grok to undress a young Asian lady standing in front of a tree.
16. Post claiming that the COVID-19 vaccine caused a massive spike (from 5 million to 150 million) in cases of myocarditis.
17. A sad post from a guy depressed that a survey of college girls said that a large majority of them find MAGA support to be a turn off.
18. Some film clip with Morgan Freeman standing on an X and getting sniped from an improbable distance
19. AI bullshit clip about people walking into bottomless pits
20. A video clip of a woman being confused as to why financial aid forms now require you to list your ethnicity when you click on "white", with the only suboptions being German, Irish, English, Italian, Polish, and French.
Special bonus post: Peter St Ogne, Ph.D. claims "The Tenth Amendment says the federal government can only do things expressly listed in the Constitution -- every other federal activity is illegal." Are you wondering what federal activity he is angry about? Financial support for daycare.
So yeah, while it wasn't a total and complete loss it is obvious that the noise far exceeds the signal. It is maybe a bit of a shock just how much blatant climate change denialism, racism, and vaccine conspiracies are front page material. I'm saddened that there are people who are reading this every day and taking it to heart. The level of outright racism is quite shocking too. It's not even up for debate that black people are just plain inferior to the glorious aryan race on Twitter. This is supposedly the #1 news source on the Internet? Ouch.
Edit: Got the year wrong at the top of the post, fixed.
Makes me laugh when people say Twitter is "better than ever." Not sure they understand how revealing that statement is about them, and how the internet always remembers.
The best use of generative AI is as an excuse for everyone to stop posting pictures of themselves (or of their children, or of anyone else) online. If you don't overshare (and don't get overshared), you can't get Grok'd.
The number of people saying it is not worthy of intervention that every single woman who posts on Twitter has to worry about somebody saying "hey grok, take her clothes off" and being made into a public sex object is maybe the most acute example of rape culture that I've seen in decades.
This thread is genuinely enraging. The people making false appeals to higher principles (e.g. Section 230) in order to absolve X of any guilt are completely insane if you take the situation at face value. Here we have a new tool that allows you to make porn of users, including minors, in an instant. None of the other new AI platforms seem to be having this problem. And yet, there are still people here making excuses.
I am not a lawyer but my understanding of section 230 was that platforms are not responsible for the content their users post (with limitations like “you can’t just host CSAM”). But as far as I understand, if the platform provides tools to create a certain type of harmful content, section 230 doesn’t protect it. Like there’s a difference between someone downloading a photo off the internet and then using tools like photoshop to make lewd content before reuploading it, as compared to the platform just offering a button to do all of that without friction.
At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything. Lots of people are asking if it might, and whether that's what X is relying on here.
I brought up Section 230 because it used to be that removal of Section 230 was an active discussion in the US, particularly for Twitter, pre-Elon, but seems to have fallen away.
With content generated by the platform itself, it certainly seems reasonable to ask how Section 230 applies, if at all, and I in particular think that Section 230 protections should probably be removed for X.
> None of the other new AI platforms seem to be having this problem
The very first AI code generators had this issue: users could make illegal content by making specific requests. A lot of people, me included, saw this as a problem, and there were a few copyright lawsuits arguing it. The courts, however, did not seem very sympathetic to this argument, putting the blame on the user rather than the platform.
Here's hoping that Grok forces regulators to settle this subject once and for all.
Elon Musk mentioned multiple times that he doesn't want to censor. If someone does or says something illegal on his platform, it has to be solved by law enforcement, not by someone on his platform. When asked to "moderate" it, he calls that censorship. Literally everything he does and says is about Freedom - no regulations, or as little as possible, and no moderation.
I believe he thinks the same applies to Grok or whatever is done on the platform. The fact that "@grok do xyz" makes it instantaneous doesn't mean you should do it.
This weekend has made me explicitly decide that my kids' photos will never be allowed on the internet, especially social media. It was just absolutely disgusting.
The same argument could be made of photoshopping someone's face on to a nude body. But for the most part, nobody cares (the only time I recall it happening was when it happened to David Brent in The Office).
"For a Linux user, you can already build such a system yourself quite trivially ..."
Convincingly photoshopping someone's face onto a nude body takes time, skills, effort, and access to resources.
Grok lowers the barrier to less effort than it took either of us to write our comments.
It is now a social phenomenon where almost every public image of a woman or girl on the site is modified in this manner. Revenge porn photoshops happened before, but not at this scale or with this kind of normalization.
And there is safety in numbers. If one person photoshops a high school classmate nude, they might find themselves on a registry. But if, for all we know, myriad people are doing it around the country, do you expect every one of them to be litigated that extensively?
IMO, the fact that you would say this is further evidence of rape culture infecting the world. I assure you that people do care about this.
And friction and quality matter. When you make it easier to generate this content and make the content more convincing, the number of people who do this will go up by orders of magnitude. And when social media platforms make it trivial to share this content you've got a sea change in this kind of harassment.
How is "It's acceptable because people perform a lesser form of the same behavior" an argument at all? Taken to its logical extreme, you could argue that you shouldn't be prevented from punching children in the face because there are adults in the world who get punched in the face. Obviously, this is an insane take, but it applies the same logic you've outlined here.
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
How about not enabling generating such content, at all?
Given X can quite simply control what Grok can and can't output, wouldn't you consider it a duty upon X to build those guardrails in for a situation like CSAM? I don't think there's any grey area here to argue against it.
Yes, every image generation tool can be used to create revenge porn. But there are a bunch of important specifics here.
1. Twitter appears to be making no effort to make this difficult. Even if people can evade guardrails, that does not make the guardrails worthless.
2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also the distribution and boosting of this content. The reason a ton of people are doing this is not that they personally want to jack it to somebody, but that they want to humiliate them in public.
3. Decision makers at Twitter are laughing about what this does to the platform and its users while a "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women weren't also odious.
> but output is directly connected to its input and blame can be proportionally shared
X can actively work to prevent this. They aren't. We aren't saying we should blame the person entering the input. But, we can say that the side producing CSAM can be held responsible if they choose to not do anything about it.
> Isn't this a problem for any public tool? Adversarial use is possible on any platform
Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."
Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.
It is trivially easy to filter this with an LLM or even just a basic CLIP model. Will it be 100% foolproof? Not likely. Is it better than doing absolutely nothing and then blaming the users? Obviously. We've had this feature in the image generation tools since the first UI wrappers around Stable Diffusion 1.0.
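For what it's worth, the scoring step behind such a filter is genuinely simple. Below is a minimal sketch of CLIP-style zero-shot classification, assuming the image and text-label embeddings have already been produced by an actual model (e.g. via open_clip); all function names, label strings, and the threshold are illustrative, not any platform's real implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot_scores(image_emb, label_embs, temperature=100.0):
    """CLIP-style zero-shot classification: softmax over scaled cosine
    similarities between one image embedding and a dict of text-label
    embeddings (labels like "a photo of a person in underwear" vs
    "a photo of a fully clothed person"). The temperature mirrors
    CLIP's learned logit scale."""
    sims = [cosine(image_emb, e) for e in label_embs.values()]
    scaled = [temperature * s for s in sims]
    m = max(scaled)                      # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return {label: e / total for label, e in zip(label_embs, exps)}

def should_block(image_emb, label_embs, unsafe_labels, threshold=0.8):
    """Gate an image before posting: block if the probability mass on
    any unsafe label exceeds the threshold."""
    scores = zero_shot_scores(image_emb, label_embs)
    return any(scores[l] >= threshold for l in unsafe_labels)
```

It won't be foolproof (adversarial prompting and borderline cases remain), but it is exactly the kind of cheap pre-publish gate that Stable Diffusion front-ends have shipped for years.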
How about policing CSAM at all? I can still vividly remember firehose API access and all the horrible stuff you would see on there. And if you look at sites like tk2dl you can still see most of the horrible stuff that does not get taken down.
It's on X, not some fringe website that many people in the world don't access.
Regardless of how fringe, I feel like it should be in everyone's best interests to stop/limit CSAM as much as they reasonably can without getting into semantics of who requested/generated/shared it.
> How about not enabling generating such content, at all?
Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.
This is probably harder because it's synthetic and doesn't exist in PhotoDNA database.
Also, since Grok is really good at getting context, something akin to "remove their T-shirt" would be enough to generate the picture someone wanted, but very hard to find using keywords.
IMO they should mass-hide ALL the images created since that specific moment, and use some sort of AI classifier to flag/ban the accounts.
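The PhotoDNA point above is worth unpacking: hash matching only flags images close to an entry in a database of known material, so freshly synthesized images never match. PhotoDNA itself is proprietary, but a difference hash shows the idea; this is a minimal sketch operating on an already-downscaled grayscale grid, with the resize step and the distance threshold being illustrative assumptions:

```python
def dhash_bits(gray):
    """Difference hash over an already-downscaled grayscale image,
    given as a grid of pixel intensities (rows of equal length).
    Each bit records whether a pixel is brighter than its right
    neighbour. Real pipelines first resize the image to a small
    grid (commonly 9x8); that resize step is omitted here."""
    bits = []
    for row in gray:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def is_known(candidate_hash, known_hashes, max_distance=4):
    """Database lookup: an image is flagged only if its hash falls
    within max_distance bits of some previously catalogued hash.
    A novel generated image has no nearby entry, so it passes."""
    return any(hamming(candidate_hash, k) <= max_distance
               for k in known_hashes)
```

Because the bits encode relative brightness, re-encoded or brightness-shifted copies of a known image still match, which is what makes this useful against re-uploads and useless against brand-new synthetic content.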
Willing to bet that X premium signups have shot up because of this feature. Currently this is the most convenient tool to generate porn of anything and everything.
I don’t think anyone can claim that it’s not the user’s fault. The question is whether it’s the machine’s fault (and the creator and administrator - though not operator) as well.
Getting off to images of child abuse (simulated or not) is a deep violation of social mores. This itself does indeed constitute a type of crime, and the victim is taken to be society itself. If it seems unjust, it's because you have a narrow view of the justice system and what its job actually is (hint: it's not about exacting controlled vengeance)
It may shock you to learn that bigamy and sky-burials are also quite illegal.
Any lawyers around? I would assume (IANAL) that Section 230 does not apply to content created by an agent owned by the platform, as opposed to user-uploaded content. Also it seems like their failure to create safeguards opens up the possibility of liability.
And of course all of this is narrowly focused on CSAM (not that it should be minimized) and not on the fact that every person on X, the everything app, has been opened up to the possibility of non-consensual sexual material being generated of them by Grok.
The CSAM aspects aren't necessarily as affected by 230: to the extent that you're talking about it being criminal, 230 doesn't apply at all there.
For civil liability, 230 really shouldn't apply; as you say, 230's shield is about avoiding vicarious liability for things other people post. This principle stretches further than you might expect in some ways but here Grok just is X (or xAI).
Not much is set in stone in how the law treats LLMs, but an attempt to say that Grok is an independent entity sufficient to trigger 230 yet incapable of being sued itself? I don't see that flying. On the other hand, the big AI companies wield massive economic and political power, so I wouldn't be surprised to see them push for and get explicit liability carveouts that they claim are necessary for America to maintain its lead in innovation etc., whether those come through legislation or court decisions.
> non-consensual sexual material being generated of them by Grok
They should disable it in the Netherlands in that case, since this really sounds like a textbook slander case, and the spreader can also be held liable. (Note: it's not the same as in the US despite using the same word.) Deepfakes have already been ruled slander here, and this is no different, especially when you know it's fake because it was made with "AI". There have been several cases of pornographic deepfakes, all taken down quickly, in which the poster/creator was sentenced. The unfortunate problem with even quick takedowns is the rule that whatever is on the internet stays on the internet. The publisher always went free for acting quickly and for not having created the content. I would like to see what happens when publisher and creator are the same entity, and they do nothing to prevent it.
Yeah this is pretty funny. Seeing all these discussions about section 230 and the American constitution...
Nobody in the Netherlands gives one flying fuck about American laws; what Grok is doing violates many Dutch laws. Our parliament actually did its job and wrote some stuff about revenge porn, deepfakes, and artificial CP.
Jokes on xAI. Europe doesn't have a Section 230 and the responsibility falls squarely on the platform and its owners. In Europe, AI-generated or photoshopped CSAM is treated the same as actual abuse-backed CSAM if the depiction is realistic. Possession and distribution are both serious crimes.
The person(s) ultimately in charge of removing (or preventing the implementation of) Grok guardrails might find themselves being criminally indicted in multiple European countries once investigations have concluded.
I'm not sure Grok output is even covered by Section 230. Grok isn't a separate person posting content to a platform, it's an algorithm running on X's servers publishing on X's website. X can't reasonably say "oh, that image was uploaded by a user, they're liable, not us" when the post was performed by Grok.
Suppose, if instead of an LLM, Grok was an X employee specifically employed to photoshop and post these photos as a service on request. Section 230 would obviously not immunize X for this!
It could be argued that generating a non-real child might not count. However, that's not a given.
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old.

(source: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...)
That is broad enough to cover anything obviously young.
But when it comes to "nude-ifying" a real image of a known minor, I strongly doubt you can use the defense that it's not a real child.
Therefore you're knowingly generating and distributing CSAM, which is out of scope for Section 230.
> Europe doesn't have a Section 230 and the responsibility fall squarely on the platform and its owners.
They have something like Section 230 in the E-Commerce Directive 2000/31/EC, Articles 12-15, updated in the Digital Services Act. The particular protections for hosts are different but it is the same general idea.
Is Europe actually going to do anything? They currently appear to be puckering their assholes and cowering in the face of Trump, and his admin are already yelling about how the EU is "illegally" regulating American companies.
They might just let this slide so as not to rock the boat: either out of fear, in which case they will do nothing, or to buy time, if they are actually divesting from the alliance with and economic dependence on the US.
There are so many of these nonsense views of the EU here. Not being vocal about a mental case of a president doesn't mean politicians are "puckering their assholes". The EU is not afraid to moderate and fine tech companies. These things take time.
They are able to change how Grok is prompted to deny certain inputs, or to say certain things. They decided to do so to praise Musk and Hitler. That was intentional.
They decided not to do so to prevent it from generating CSAM. X offering CSAM is intentional.
Grok will shit-talk Elon Musk, and it will also put him in a bikini for you. I've always found it a bit surprising how little control they seem to have there.
Edit: just to bring receipts, 3 instances in a few scrolls: https://x.com/i/status/2007949859362672673 https://x.com/i/status/2007945902799941994 https://x.com/i/status/2008134466926150003
I haven't seen Xi, but I am unfortunate enough to know that such an animated depiction of Maduro also exists.
These people are clearly doing it largely for shock value.
It's become a bit of a meme to do this right now on X.
FWIW (very little), it's also on a lot of male posts, as well. None of that excuses this behavior.
Fuck X.
Also, this always existed in one form or another: draw, photoshop, imagine, discuss imaginary intercourse with a popular person online or IRL.
It's not worthy of intervention because it will happen anyway, and it doesn't fundamentally change much.
I understand everyone pouncing when X won't own Grok's output, but output is directly connected to its input and blame can be proportionally shared.
Isn't this a problem for any public tool? Adversarial use is possible on any platform, and consistent law is far behind tech in this space today.
Do yourself a favor and not Google that.
Also, where are all the state attorneys general?
Surprising, since usually the system automatically bans people who post CSAM and Elon personally intervenes to unban them.
https://mashable.com/article/x-twitter-ces-suspension-right-...