There have been multiple times now where I wanted the code to look a certain way, but it kept pulling back toward the way it wanted to do things. If I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier to just quit fighting it and let it do things the way it wanted.
What I've seen is that after the initial dopamine rush of being able to do things that would have taken much longer manually, a few iterations of this kind of interaction have slowly led to disillusionment with the whole project, as the AI keeps pushing it in a direction I didn't want.
I think this is especially true if you're trying to experiment with new approaches to things. LLMs are, by definition, biased toward what was in their training data. You can shock them out of it momentarily, which is awesome for a few rounds, but over time the gravitational pull of what's already in their latent space becomes inescapable. (I picture it as working like a giant Sierpinski triangle.)
I want to say the end result is very akin to doom scrolling. Doom tabbing? It's like, yeah, I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what it will do next is so low, so....
"Emotional Manipulation by AI Companions" https://www.hbs.edu/faculty/Pages/item.aspx?num=67750
https://www.pbs.org/newshour/show/what-to-know-about-ai-psyc...
https://www.youtube.com/watch?v=uqC4nb7fLpY
> The rapid rise of generative AI systems, particularly conversational chatbots such as ChatGPT and Character.AI, has sparked new concerns regarding their psychological impact on users. While these tools offer unprecedented access to information and companionship, a growing body of evidence suggests they may also induce or exacerbate psychiatric symptoms, particularly in vulnerable individuals. This paper conducts a narrative literature review of peer-reviewed studies, credible media reports, and case analyses to explore emerging mental health concerns associated with AI-human interactions. Three major themes are identified: psychological dependency and attachment formation, crisis incidents and harmful outcomes, and heightened vulnerability among specific populations including adolescents, elderly adults, and individuals with mental illness. Notably, the paper discusses high-profile cases, including the suicide of 14-year-old Sewell Setzer III, which highlight the severe consequences of unregulated AI relationships. Findings indicate that users often anthropomorphize AI systems, forming parasocial attachments that can lead to delusional thinking, emotional dysregulation, and social withdrawal. Additionally, preliminary neuroscientific data suggest cognitive impairment and addictive behaviors linked to prolonged AI use. Despite the limitations of available data, primarily anecdotal and early-stage research, the evidence points to a growing public health concern. The paper emphasizes the urgent need for validated diagnostic criteria, clinician training, ethical oversight, and regulatory protections to address the risks posed by increasingly human-like AI systems. Without proactive intervention, society may face a mental health crisis driven by widespread, emotionally charged human-AI relationships.
https://www.mentalhealthjournal.org/articles/minds-in-crisis...
I think I kind of have an idea what the author was doing, but not really.
But it was happy to tell me all sorts of extremely vulgar historical graffiti, or to translate my own attempts.
What was illegal here, it seemed, was not the sexual content, but creativity in a sexual context, which I found very interesting. (I think this is designed to stop sexual roleplay. Although I think OpenAI is preparing to release a "porn mode" for exactly that scenario, but I digress.)
Anyway, I was annoyed because I wasn't trying to make porn, I was just trying to make my friend laugh (he is learning Latin). I switched to Claude and had the opposite experience: shocked by how vulgar the responses were! That's exactly what I asked for, of course, and that's how it should be imo, but I was still taken aback because every other AI had trained me to expect "pg-13" stuff. (GPT literally started its response to my request for humorous sexual graffiti with "I'll keep it PG-13...")
I was a little worried that if I published the results, Anthropic might change that policy though ;)
Anyway, my experience with Claude's ethics is that it's heavily guided by common sense and context. For example, much of what I discuss with it (spirituality and unusual experiences in meditation) gets the "user is going insane, initiate condescending lecture" mode from GPT. Whereas Claude says "yeah, I can tell from context that you're approaching this stuff in a sensible way" and doesn't need to treat me like an infant.
And if I were actually going nuts, I think as far as harm reduction goes, Claude's approach of actually meeting people where they are makes more sense. You can't help someone navigate an unusual worldview by rejecting it entirely. That just causes more alienation.
Whereas blanket bans on anything borderline come across not as harm reduction, but as a cheap way to cover your own ass.
So I think Anthropic is moving even further in the right direction with this one, focusing on deeper underlying principles rather than a bunch of surface-level rules. Just from my experience so far interacting with the two approaches, that definitely seems like the right way to go.
Just my two cents.
(Amusingly, Claude and GPT have changed places here — time was when for years I wanted to use Claude but it shut down most conversations I wanted to have with it! Whereas ChatGPT was happy to engage on all sorts of weird subjects. At some point they switched sides.)
But no, there's none to be found; it's a four-year-old, two-generations-old machine at this point, and you can't buy one used for cheaper than new.