Readit News
aswegs8 commented on Y Combinator will let founders receive funds in stablecoins   fortune.com/2026/02/03/fa... · Posted by u/shscs911
airstrike · 5 days ago
I second this motion
aswegs8 · 5 days ago
I third this motion
aswegs8 commented on A few random notes from Claude coding quite a bit last few weeks   twitter.com/karpathy/stat... · Posted by u/bigwheels
daxfohl · 12 days ago
I worry about the "brain atrophy" part, as I've felt this too. And not just atrophy, but even more so I think it's evolving into "complacency".

Like there have been multiple times now where I wanted the code to look a certain way, but it kept pulling back to the way it wanted to do things. Like if I had stated certain design goals recently it would adhere to them, but after a few iterations it would forget again and go back to its original approach, or mix the two, or whatever. Eventually it was easier just to quit fighting it and let it do things the way it wanted.

What I've seen is that after the initial dopamine rush of being able to do things that would have taken much longer manually, a few iterations of this kind of interaction has slowly led to disillusionment with the whole project, as AI keeps pushing it in a direction I didn't want.

I think this is especially true if you're trying to experiment with new approaches to things. LLMs are, by definition, biased by what was in their training data. You can shock them out of it momentarily, which is awesome for a few rounds, but over time the gravitational pull of what's already in their latent space becomes inescapable. (I picture it as working like a giant Sierpinski triangle).

I want to say the end result is very akin to doom scrolling. Doom tabbing? It's like, yeah I could be more creative with just a tad more effort, but the AI is already running and the bar to seeing what the AI will do next is so low, so....

aswegs8 · 12 days ago
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." - Socrates on Writing and Reading, Phaedrus 370 BC
aswegs8 commented on TikTok users can't upload anti-ICE videos. The company blames tech issues   cnn.com/2026/01/26/tech/t... · Posted by u/kotaKat
aswegs8 · 13 days ago
I only browse the top page. There is anti-ICE content on here all the time.
aswegs8 commented on Qwen3-Max-Thinking   qwen.ai/blog?id=qwen3-max... · Posted by u/vinhnx
nosuchthing · 13 days ago
Claude is a database with some software, it has no gender. Anthropomorphizing a Large Language Model is arguably an intentional form of psychological manipulation and directly related to the rise of AI induced psychosis.

"Emotional Manipulation by AI Companions" https://www.hbs.edu/faculty/Pages/item.aspx?num=67750

https://www.pbs.org/newshour/show/what-to-know-about-ai-psyc...

https://www.youtube.com/watch?v=uqC4nb7fLpY

> The rapid rise of generative AI systems, particularly conversational chatbots such as ChatGPT and Character.AI, has sparked new concerns regarding their psychological impact on users. While these tools offer unprecedented access to information and companionship, a growing body of evidence suggests they may also induce or exacerbate psychiatric symptoms, particularly in vulnerable individuals. This paper conducts a narrative literature review of peer-reviewed studies, credible media reports, and case analyses to explore emerging mental health concerns associated with AI-human interactions. Three major themes are identified: psychological dependency and attachment formation, crisis incidents and harmful outcomes, and heightened vulnerability among specific populations including adolescents, elderly adults, and individuals with mental illness. Notably, the paper discusses high-profile cases, including the suicide of 14-year-old Sewell Setzer III, which highlight the severe consequences of unregulated AI relationships. Findings indicate that users often anthropomorphize AI systems, forming parasocial attachments that can lead to delusional thinking, emotional dysregulation, and social withdrawal. Additionally, preliminary neuroscientific data suggest cognitive impairment and addictive behaviors linked to prolonged AI use. Despite the limitations of available data, primarily anecdotal and early-stage research, the evidence points to a growing public health concern. The paper emphasizes the urgent need for validated diagnostic criteria, clinician training, ethical oversight, and regulatory protections to address the risks posed by increasingly human-like AI systems. Without proactive intervention, society may face a mental health crisis driven by widespread, emotionally charged human-AI relationships.

https://www.mentalhealthjournal.org/articles/minds-in-crisis...

aswegs8 · 13 days ago
I mean, yeah, but I doubt OP is psychotic for asking this.
aswegs8 commented on ‘ELITE’: The Palantir app ICE uses to find neighborhoods to raid   werd.io/elite-the-palanti... · Posted by u/sdoering
commandlinefan · 24 days ago
That's where I'm stuck on this. When you have certain cities (or even entire states) saying "we will resist _any_ deportation effort", what choice does a deportation officer have than what they're doing right now?
aswegs8 · 17 days ago
To me it seems like a universal, almost spiritual law of human nature and balance. If you have people being extreme about X, you will generate people being extreme about not X. So interesting to see how this plays out and always swings back and forth.
aswegs8 commented on I was banned from Claude for scaffolding a Claude.md file?   hugodaniel.com/posts/clau... · Posted by u/hugodan
cortesoft · 17 days ago
I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

I think I kind of have an idea what the author was doing, but not really.

aswegs8 · 17 days ago
Yeah, I couldn't follow this "disabled organization" and "non-disabled organization" naming either.
aswegs8 commented on I was banned from Claude for scaffolding a Claude.md file?   hugodaniel.com/posts/clau... · Posted by u/hugodan
Aldipower · 17 days ago
I was recently kicked out from ChatGPT because I wrote "a*hole" in a context where ChatGPT constantly kept repeating nonsense! I find the ban by OpenAI to be very intrusive. Remember, ChatGPT is a machine! And I did not hurt any sentient being with my statement, nor was the GPT chat public. As long as I do not hurt any feeling beings with my thoughts, I can do whatever I want, can't I? After all, as the saying goes, "Thoughts are free." Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior. However, there is no repeated use here. I don't run around the flat all day swearing. Anyone who basically insinuates such a thing, like OpenAI, is, as I said, intrusive. I want to be able to use a machine the way I want to! As long as no one else is harmed, of course...
aswegs8 · 17 days ago
Wait what? I keep insulting ChatGPT way worse on a weekly basis (to me it's just a joke, albeit a very immature one). This is new to me that this behavior has any consequences. It never did for me.
aswegs8 commented on Claude's new constitution   anthropic.com/news/claude... · Posted by u/meetpateltech
andai · 18 days ago
Yesterday I asked ChatGPT to riff on a humorous Pompeii graffiti. It said it couldn't do that because it violated the policy.

But it was happy to tell me all sorts of extremely vulgar historical graffitis, or to translate my own attempts.

What was illegal here, it seemed, was not the sexual content, but creativity in a sexual context, which I found very interesting. (I think this is designed to stop sexual roleplay. Although I think OpenAI is preparing to release a "porn mode" for exactly that scenario, but I digress.)

Anyway, I was annoyed because I wasn't trying to make porn, I was just trying to make my friend laugh (he is learning Latin). I switched to Claude and had the opposite experience: shocked by how vulgar the responses were! That's exactly what I asked for, of course, and that's how it should be imo, but I was still taken aback because every other AI had trained me to expect "pg-13" stuff. (GPT literally started its response to my request for humorous sexual graffiti with "I'll keep it PG-13...")

I was a little worried that if I published the results, Anthropic might change that policy though ;)

Anyway, my experience with Claude's ethics is that it's heavily guided by common sense and context. For example, much of what I discuss with it (spirituality and unusual experiences in meditation) get the "user is going insane, initiate condescending lecture" mode from GPT. Whereas Claude says "yeah I can tell from context that you're approaching this stuff in a sensible way" and doesn't need to treat me like an infant.

And if I was actually going nuts, I think as far as harm reduction goes, Claude's approach of actually meeting people where they are makes more sense. You can't help someone navigate an unusual worldview by rejecting it entirely. That just causes more alienation.

Whereas blanket bans on anything borderline come across not as harm reduction, but as a cheap way to cover your own ass.

So I think Anthropic is moving even further in the right direction with this one. Focusing on deeper underlying principles, rather than a bunch of surface-level rules. Just from my experience so far interacting with the two approaches, that definitely seems like the right way to go.

Just my two cents.

(Amusingly, Claude and GPT have changed places here — time was when for years I wanted to use Claude but it shut down most conversations I wanted to have with it! Whereas ChatGPT was happy to engage on all sorts of weird subjects. At some point they switched sides.)

aswegs8 · 18 days ago
ChatGPT's self-censoring went through the roof after v5, and it was already pretty bad before.
aswegs8 commented on Nvidia Stock Crash Prediction   entropicthoughts.com/nvid... · Posted by u/todsacerdoti
fooker · 19 days ago
I’ll be so happy to buy an EOL H100!

But no, there are none to be found; it is a four-year-old, two-generations-old machine at this point, and you can’t buy one used at a rate cheaper than new.

aswegs8 · 19 days ago
Not sure why this "GPUs are obsolete after 3 years" claim gets thrown around all the time. It sounds completely nonsensical.

u/aswegs8

Karma: 249 · Cake day: February 18, 2021