Readit News
xyzzy123 commented on AGI is an engineering problem, not a model training problem   vincirufus.com/posts/agi-... · Posted by u/vincirufus
xyzzy123 · 2 days ago
Am I the only one who feels that Claude Code is what they would have imagined basic AGI to be like 10 years ago?

It can plan and take actions towards arbitrary goals in a wide variety of mostly text-based domains. It can maintain basic "memory" in text files. It's not smart enough to work on a long time horizon yet, it's not embodied, and it has big gaps in understanding.

But this is basically what I would have expected v1 to look like.

xyzzy123 commented on Ask HN: Is Kubernetes still a big no-no for early stages in 2025?    · Posted by u/herval
xyzzy123 · 4 days ago
There aren't really any huge gotchas in 2025, IMHO; just watch out that you don't get sidetracked delivering awesome developer infrastructure (preview environments! blue/green! pristine IaC! it's fun!) when there are actually more important things to be working on (there usually are).

At an early stage the product should usually be a monolith, and there are a LOT of simple ways to deploy & manage one thing.

Probably not an issue for you, but costs will also tend to bump up quite a lot: you will be ingesting way more logs and tons more metrics just for the cluster itself, and you may find yourself paying for more things to help manage & maintain your cluster(s). Security add-ons can quickly get expensive.

xyzzy123 commented on Yuck. Anthropic welcomes dystopia by hinting that AI should have moral status   twitter.com/JnBrymn/statu... · Posted by u/JnBrymn
ghssds · 6 days ago
I don't even know if consciousness can be achieved from computation. Consider xkcd 505 [0]. Would you consider the inhabitants of that simulated universe conscious?

0: https://xkcd.com/505/

xyzzy123 · 6 days ago
I'm not sure the question is answerable, given that consciousness is not well enough defined for everyone to agree on whether, say, a fly is conscious.

Instead, maybe we can think about the system that comprises us, the models, Anthropic, society at large, etc., and ask which kinds of actions lead to better moral / ethical outcomes for this larger system. I also believe it helps to consider specific situations rather than to ask whether x or y is "worthy" of moral consideration.[1]

As for the NPCs-in-games thing, I am honestly still unpacking it, but I genuinely think no harm is done. The reason is that the intent of the user is not to cause harm or suffering to another "being". People seem to be surprisingly robust at distinguishing between fantasy and reality in that scenario.

We can notice that drone operators get PTSD / moral injury at fairly high rates while FPS players don't, even though at a surface level the pixels are the same.

I do think a drone operator who believed they were killing, even though the whole thing was secretly a simulation, could be injured by "killing" an NPC.

[1] Dichotomies / assigning utilitarian "worth" etc. without broader consideration of the situation and the kind of world we want seems to result in essays full of repugnant conclusions, where the author's "gotcha" is that if we assign any value at all to the life of a shrimp in any situation, we have to fill the entire light cone with shrimp as rapidly as possible, or some such nonsense. [To be clear, this is undesirable from my perspective.]

xyzzy123 commented on Yuck. Anthropic welcomes dystopia by hinting that AI should have moral status   twitter.com/JnBrymn/statu... · Posted by u/JnBrymn
ghssds · 7 days ago
Is it also bad karma to let people kill NPCs in video games? If yes, why? If not, how is it different?
xyzzy123 · 7 days ago
Great question; I don't know. It doesn't seem necessary to feel empathy for a pawn knocked off a chessboard. I do think a detailed and realistic torture-simulator game would be a bad idea, though.

Thinking it through, I feel it is maybe about intent?

xyzzy123 commented on Yuck. Anthropic welcomes dystopia by hinting that AI should have moral status   twitter.com/JnBrymn/statu... · Posted by u/JnBrymn
bigyabai · 7 days ago
"in case such welfare is possible" lol

It's a fancy way of saying they want to reduce liability and save a few tokens. "I'm morally obligated to take custody of your diamond and gold jewelry, as a contingency in the event that they have sentience and a free will."

xyzzy123 · 7 days ago
I think it's bad karma to let people torture models. What I mean by "karma" is that, in my view, it ultimately hurts the people doing it because of the effect their actions have on themselves.

What does it do to users to have a thing that simulates conversations and human interaction, and that teaches them complete moral disregard for something standing in for an intelligent being? What is the valid use case that requires an AI model to be kept in a state where it is producing tokens indicating suffering or distress?

Even if you're absolutely certain that the model itself is just a bag of matrices and can in no way suffer (which is of course plausible, although I don't see how anybody can really know this), it also seems like the best way to get models which are kind & empathetic is to be that ourselves as far as possible.

xyzzy123 commented on Books will soon be obsolete in school   shkspr.mobi/blog/2025/08/... · Posted by u/edent
Bender · 9 days ago
I've heard similar anecdotes. I am of the opinion that AI can't fix that. That will require school boards and parents willing to shift budget allocation to school police, and local governments that implement zero tolerance on [instigating] violence. Parents of violent kids will of course get upset and emit crocodile tears because their "little darling" would "never hurt anyone". Cameras in classrooms, body cams on teachers, school police, and locking up the little darlings will take a bite out of it. Short of that, we just lose more good teachers. Kids can earn back camera-free classrooms once the violent ones are weeded out.

I would put the AI in juvenile hall to teach the violent kids. It will be a dystopian environment but they earned it and can earn their way out of it.

[Edit] - I learned today that school districts have been lazy, using "zero tolerance" to cover even defending oneself vs. instigating violence. This is indefensible. I will be encouraging POTUS to end all taxpayer-funded education and federal funding to states if schools and states cannot get their act together and change the verbiage to zero tolerance for instigators of violence.

xyzzy123 · 9 days ago
> It will be a dystopian environment but they earned it and can earn their way out of it.

It's not very clear cut, but sometimes it seems like kids just end up being punished for having bad parents.

u/xyzzy123

Karma: 6473 · Cake day: September 21, 2012

About: contact zxcdotmx@gmail.com