Actually, “we”, collectively, do know, because the SEC maintains an “XKEYSCORE for equities” called CAT (the Consolidated Audit Trail).
If there were interest, the government could know exactly who placed these trades. But the call (options) is coming from inside the house.
https://catnmsplan.com/sites/default/files/2025-04/04.01.25-...
Also, CAT is run by CAT NMS, LLC, which was created in response to SEC Rule 613; however, it is operated by the same consortium of SROs it purports to provide oversight of...
All these layers of diffused responsibility, plus the notable absence of penalties for failing to meet Rule 613's requirements, mean the rule is largely for show.
> This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes. Based on the timeline of this case, it appears that the patient consulted either ChatGPT 3.5 or 4.0 when considering how he might remove chloride from his diet. Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs.
However, I don't see this single negative instance of a vast, society-scale issue as much more than fear/emotion-mongering, at least not without MENTIONING that LLMs also have positive effects. It certainly doesn't seem like science to me. Unless these models are subtly leading otherwise healthy and well-adjusted users toward unhealthy behavior, I don't see how this interaction with artificial intelligence is any different from the billions of confirmation-bias pitfalls that already occur daily via Google and natural stupidity. From the article:
> The case also raises broader concerns about the growing role of generative AI in personal health decisions. Chatbots like ChatGPT are trained to provide fluent, human-like responses. But they do not understand context, cannot assess user intent, and are not equipped to evaluate medical risk. In this case, the bot may have listed bromide as a chemical analogue to chloride without realizing that a user might interpret that information as a dietary recommendation.
It just seems they've got an axe to grind and no technical understanding of the tool they're criticizing.
To be fair, I feel there's much to study and discuss about the pernicious effects of LLMs on mental health. I just don't think this article frames those topics constructively.