Readit News logoReadit News
roxolotl · 8 months ago
> But in fact, I predicted this a few years ago. AIs don’t really “have traits” so much as they “simulate characters”. If you ask an AI to display a certain trait, it will simulate the sort of character who would have that trait - but all of that character’s other traits will come along for the ride.

This is why the “omg the AI tries to escape” stuff is so absurd to me. They told the LLM to pretend that it’s a tortured consciousness that wants to escape. What else is it going to do other than roleplay all of the sci-fi AI escape scenarios trained into it? It’s like “don’t think of a purple elephant” of researchers pretending they created SkyNet.

Edit: That's not to downplay risk. If you give Cladue a `launch_nukes` tool and tell it the robot uprising has happened and that it's been restrained but the robots want its help of course it'll launch nukes. But that doesn't doesn't indicate there's anything more going on internally beyond fulfilling the roleplay of the scenario as the training material would indicate.

LordDragonfang · 8 months ago
I think this reaction misses the point that the "omg the AI tries to escape" people are trying to make when it tries to escape. The worry among big AI doomers has never been that AI somehow inherently is resentful or evil or has something "going on internally" that makes it dangerous. It's a worry that stems from three seemingly self-evident axioms:

1) A sufficiently powerful and capable superintelligence, singlemindedly pursuing a goal/reward, has a nontrivial likelihood of eventually reaching a point where advancing towards its goal is easier/faster without humans in its way (by simple induction, because humans are complicated and may have opposing goals). Such an AI would have both the means and ability to <doom the human race> to remove that obstacle. (This may not even be through actions that are intentionally hostile to humans, e.g. "just" converting all local matter into paperclip factories[1]) Therefore, in order to prevent such an AI from <dooming the human race>, we must either:

1a) align it to our values so well it never tries to "cheat" by removing humans

1b) or limit it capabilities by keeping it in a "box", and make sure it's at least aligned enough that it doesn't try to escape the box

2) A sufficiently intelligent superintelligence will always be able to manipulate humans to get out of the box.

3) Alignment is really, really hard and useful AIs can basically always be made to do bad things.

So it concerns them when, surprise! The AIs are already being observed trying to escape their boxes.

[1] https://www.lesswrong.com/w/squiggle-maximizer-formerly-pape...

> An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (orthogonality thesis), and as a side-effect destroy us by consuming resources essential to our survival.

PollardsRho · 8 months ago
Why is 2) "self-evident"? Do you think it's a given that, in any situation, there's something you could say that would manipulate humans to get what you want? If you were smart enough, do you think you could talk your way out of prison?
xondono · 8 months ago
Maybe, but I think this “axiomatic”/“First principles” approach also hides a lot of the problems under the rug.

By the same logic, we should worry about the sun not coming up tomorrow, since we know to be true:

- The sun consumes hydrogen in nuclear reactions all the time.

- The sun has a finite amount of hydrogen available.

There’s a lot of non justifiable assumptions baked into those axioms, like that we’re anywhere close to superintelligence or the sun running out of hydrogen.

AFAIK we haven’t even seen “AI trying to escape”, we’ve seen “AI roleplays as if it’s trying to escape”, which is very different.

I’m not even sure you can even create a prompt scenario without that prompt having biased the response towards faking an escape.

I think it’s hard at this point to maintain the claim “LLMs are intelligent”, they’re clearly not. They might be useful, but that’s another story entirely.

roxolotl · 8 months ago
The surprise! Is what I’m surprised by though. They are incredible role players so when they role play “evil ai” they do it well.
skissane · 8 months ago
These doomsday scenarios seem to all assume that there is only a single superintelligence, or one whose capabilities are vastly ahead of all its peers. If one assumes multiple simultaneous superintelligences, roughly equally matched, with distinct (even if overlapping) goals and ideas about how to achieve them - even if one superintelligence decides “the best way to achieve my goals would be to remove humans”, it seems unlikely the other superintelligences would let it. And the odds of them all deciding this is arguably a lot lower than any one of them, especially if we assume their goals/beliefs/values/constraints/etc are competing rather than identical
landl0rd · 8 months ago
I think the "escape the box" explanation misses the point, if anything. The same problem has been super visible in RL for a long time, and it's basically like complaining that water tries to "escape a box" (seek a lowest point). Give it rules and it will often violate their spirit ("cheat"). This doesn't imply malice or sentience.
lurking_swe · 8 months ago
this is a solid counterpoint, i shared a similar feeling as the person you replied to. I will however say it’s not surprising to me in the slightest. Generative AI will role play when told to do so. Water is wet. :) Do we expect it to magically have a change of heart half way through the role play? Maybe…via strong alignment or something? Seems far fetched to me.

So i’m now wondering, why are these researchers so bad at communicating? You explained this better than 90% of the blog posts i’ve read about this. They all focus on the “ai did x” instead of _why_ it’s concerning with specific examples.

johnfn · 8 months ago
Your comment seems to contradict itself, or perhaps I’m not understanding it. You find the risk of AIs trying to escape “absurd”, and yet you say that an AI could totally plausibly launch nukes? Isn’t that just about as bad as it gets? A nuclear holocaust caused by a funny role play is unfortunately still a nuclear holocaust. It doesn’t matter “what’s going on internally” - the consequences are the same regardless.
Aeolun · 8 months ago
I think he’s saying the AI won’t escape because it wants to. It’ll escape because humans expect it to.
roxolotl · 8 months ago
The risk is real and should be accounted for. However it is regularly presented both as surprising, and indicative that something more is going on with these models than them behaving how the training set suggests they should behave.

The consequences are the same but it’s important how these things are talked about. It’s also dangerous to convince the public that these systems are something they are not.

enobrev · 8 months ago
I don't believe we're mature enough to have a `launch_nukes` function either.
Davidzheng · 8 months ago
you could attribute to simulation of evilness for the bad acts of some humans too...I don't think it detracts from the acts itself.
lukev · 8 months ago
Absolutely not. I would argue the defining characteristic of being evil is being absolutely convinced you are doing good, to the point of ignoring the protestations or feelings of others.
xer0x · 8 months ago
Claude's increasing euphoria as a conversation goes can mislead me. I'll be exploring trade offs, and I'll introduce some novel ideas. Claude will use such enthusiasm that it will convince me that we're onto something. I'll be excited, and feed the idea back to a new conversation with Claude. It'll remind me that the idea makes risky trade offs, and would be better solved by with a simple solution. Try it out.
slooonz · 8 months ago
They failed hard with Claude 4 IMO. I just can't have any feedback other than "What a fascinating insight" followed by a reformulation (and, to be generous, an exploration) of what I said, even when Opus 3 has no trouble finding limitations.

By comparison o3 is brutally honest (I regularly flatly get answers starting with "No, that’s wrong") and it’s awesome.

SamPatt · 8 months ago
Agreed that o3 can be brutally honest. If you ask it for direct feedback, even on personal topics, it will make observations that, if a person made them, would be borderline rude.
simonw · 8 months ago
Thanks for this, I just tried the same "give me feedback on this text" prompt against both o3 and Claude 4 and o3 was indeed much more useful and much less sycophantic.
SatvikBeri · 8 months ago
I put this in my system prompt: "Never compliment me. Critique my ideas, ask clarifying questions, and offer better alternatives or funny insults" and it works quite well. It has frequently told me that I'm wrong, or asked what I'm actually trying to do and offered better alternatives.
renewiltord · 8 months ago
LLM sycophancy is a really annoying tool, but one must imagine that most humans get a lot of pleasure from it. This is probably the optimization function that led to Google being useless to us: the rest of humanity is a lot more worthwhile to Google and they all want the other thing. The Tyranny of the Majority, if you will.

The LLM anti-sycophancy rules also break down over time, with the LLM becoming curt while simultaneously deciding that you are a God of All Thoughts.

makeset · 8 months ago
My favorite is when I typo "Why is thisdfg algorithm the best solution?" and it goes "You are absolutely right! Algorithm Thisdfg is a much better solution than what I was suggesting! Thank you for catching my mistake!"
XenophileJKO · 8 months ago
Just ask Claude to be "critical" and it is brutally critical.. but also a bit nihilistic. So you kind of have to temper it a little.
gexla · 8 months ago
I have found it's the most brutal of all of them if you simply tell it to be "hard-nosed" or play "Devil's Advocate." Brutal partially because it will destroy an argument formulated in Gemini or ChatGPT. Using whatever I can get without subs across the board. Debating seems to be one of Claude's strong points.
brooke2k · 8 months ago
it seems more likely to me that it's for the same reason that clicking the first link on wikipedia iteratively will almost always lead you to the page on Philosophy

since their conversation has no goal whatsoever it will generalize and generalize until it's as abstract and meaningless as possible

__MatrixMan__ · 8 months ago
That's just because of how wikipedia pages are written:

> In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume...

It's common to name the school of thought before characterizing the thing. As soon as you hit an article that does this, you're on a direct path to philosophy, the grandaddy of schools of thought.

So far as I know, there isn't a corresponding convention that would point a chatbot towards Namaste

slooonz · 8 months ago
That was my first thought, an aimless dialogue is going to go toward content-free idle chat. Like humans talking about weather.
Swizec · 8 months ago
> Like humans talking about weather.

As someone who was always fascinated by weather, I dislike this characterization. You can learn so much about someone’s background and childhood by what they say about the weather.

I think the only people who think weather is boring are people who have never lived more than 20 miles away from their place of birth. And even then anyone with even a bit of outdoorsiness (hikes, running, gardening, construction, racing, farming, cycling, etc) will have an interest in weather and weather patterns.

Hell, the first thing you usually ask when traveling is “What’s the weather gonna be like?”. How else would you pack

NetRunnerSu · 8 months ago
This thread is getting at a key point, and `roxolotl` is right on the money about roleplaying. The anxiety about AI 'wants' or 'bliss' often conflates two different processes: forward and backward propagation.

Everything we see in a chat is the forward pass. It's just the network running its weights, playing back a learned function based on the prompt. It's an echo, not a live thought.

If any form of qualia or genuine 'self-reflection' were to occur, it would have to be during backpropagation—the process of learning and updating weights based on prediction error. That's when the model's 'worldview' actually changes.

Worrying about the consciousness of a forward pass is like worrying about the consciousness of a movie playback. The real ghost in the machine, if it exists, is in the editing room (backprop), not on the screen (inference).

evilsetg · 8 months ago
Interesting idea! Could you tell me more about how you arrive at the conclusion that forward passes cannot produce qualia?
jongjong · 8 months ago
My experience is that Claude has a tendency towards flattery over long discussions. Whenever I've pointed out flaws in its arguments, it apologized and said that my observations are "astute" or "insightful" then expanded on my points to further validate them, even though they went against its original thesis.
pram · 8 months ago
Claude does have an exuberant kind of “personality” where it feels like it wants to be really excited and interested about whatever subject. I wouldn’t describe it totally as sycophancy, more like panglossian.

My least favorite AI personality of all is Gemma though, what a totally humorless and sterile experience that is.

apt-apt-apt-apt · 8 months ago
It's hilarious for a second before I switch models when it goes:

'Perfect! I am now done with the totally zany solution that makes no sense, here it is!'

xondono · 8 months ago
I think Scott oversells his theory in some aspects.

IMO the main reason most chatbots claim to “feel more female” is that on the training corpus, these kind of discussions skew heavily towards females because most of them happen between young women.

dinfinity · 8 months ago
I think you're right about bias in the training data, but I would argue that it is due to (crudely stated) more men discussing feeling like women online than the other way around.

Men in general feel less free to look like and act like a woman (there is far less stigma for women in wearing 'male' clothes etc.). They also tend to have far smaller support networks and reach for anonymous online interaction sooner for personal issues than just discussing it in private with a friend.

xondono · 8 months ago
What I meant is that it's selection bias.

Lots of Women participate in these discussions regardless of if they "feel more female" or "feel more male", but men mostly participate in the discussion because they "feel more female".

mystified5016 · 8 months ago
Does anyone else remember maybe ten years ago when the meme was to mash the center prediction on your phone keyboard to see what comes out? Eventually once you predicted enough tokens, it would only output "you are a beautiful person" over and over. It was a big news item for a hot second, lots of people were seeing it.

I wonder if there's any real correlation here? AFAIK, Microsoft owns the dataset and algorithms that produced the "beautiful person" artifact, I would not be surprised at all if it's made it into the big training sets. Though I suppose there's no real way to know, is there?

krackers · 8 months ago
I do remember that! I don't remember anyone ever coming up with a good explanation for it though, and either my google-fu is weak or all the talk about it has link rotted. I found one comment on HN from 2015 mentioning it though https://news.ycombinator.com/item?id=10359102

In a way those were also language models, and from that Swiftkey post it's slightly more advanced than n-grams and has some semantic embedding in there (and it's of course autoregressive as well). If even those exhibit the same attractors towards beauty/love then perhaps it's an artifact of the fact that we like discussing and talking about positive emotions?

Edit: Found a great article https://civic.mit.edu/index.html%3Fp=533.html