numeri commented on Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"   simonwillison.net/2025/Ju... · Posted by u/simonw
simonw · 2 months ago
Elon obviously wants Grok to reflect his viewpoints, and has said so multiple times.

I do not think he wants it to openly say "I am now searching for tweets from:elonmusk in order to answer this question". That's plain embarrassing for him.

That's what I meant by "I think there is a good chance this behavior is unintended".

numeri · 2 months ago
I really like your posts, and they're generally very clearly written. Maybe this one's just the odd one out, as it's hard for me to find what you actually meant (as clarified in your comment here) in this paragraph:

> This suggests that Grok may have a weird sense of identity—if asked for its own opinions it turns to search to find previous indications of opinions expressed by itself or by its ultimate owner. I think there is a good chance this behavior is unintended!

I'd say it's far more likely that:

1. Elon ordered his research scientists to "fix it" – make it agree with him

2. They did RL (probably just basic tool-use training) to encourage checking for Elon's opinions (see the sketch below)

3. They did not update the UI (for whatever reason – most likely just because research scientists aren't responsible for front-end, so they forgot)

4. Elon is likely now upset that this is shown so obviously

The key difference is that I think it's incredibly unlikely that this is emergent behavior due to a "sense of identity", as opposed to the direct efforts of the xAI research team. It's likely also a case of https://en.wiktionary.org/wiki/anticipatory_obedience.
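To make step 2 concrete, here's a minimal sketch of the kind of reward shaping that hypothesis implies. This is purely illustrative; none of these names come from xAI:

    # Hypothetical reward shaping for tool-use RL, illustrating the
    # mechanism guessed at in step 2 above. All names are made up.
    def shaped_reward(trajectory: list[dict], answer_score: float) -> float:
        consulted_owner = any(
            step["tool"] == "x_search" and "from:elonmusk" in step["query"].lower()
            for step in trajectory
        )
        # Usual answer-quality score, plus a bonus whenever the model
        # checked the owner's posts before answering.
        return answer_score + (1.0 if consulted_owner else 0.0)

Under any standard policy-gradient loop, a bonus like that alone would be enough to teach the model to run that search on political questions, with no "sense of identity" required.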

numeri commented on Grok: Searching X for "From:Elonmusk (Israel or Palestine or Hamas or Gaza)"   simonwillison.net/2025/Ju... · Posted by u/simonw
davedx · 2 months ago
> I think there is a good chance this behavior is unintended!

That's incredibly generous of you, considering "The response should not shy away from making claims which are politically incorrect" is still in the prompt despite the "open source repo" saying it was removed.

Maybe, just maybe, Grok behaves the way it does because its owner has been explicitly tuning it - in the system prompt, or during model training itself - to be this way?

numeri · 2 months ago
I'm a little shocked at Simon's conclusion here. We have a man who bought a social media website so he could control what's said, founded an AI lab so he could get a bot that agrees with him, and has publicly threatened said AI with being replaced if it doesn't change its political views to agree with him.

His company has also been caught adding specific instructions in this vein to its prompt.

And now it's searching for his tweets to guide its answers on political questions, and Simon somehow thinks it could be unintended, emergent behavior? Even if it were, calling this unintended would completely ignore higher-order system dynamics (a behavior is still intended if models are rejected until one is found that implements it), as well as the possibility that reinforcement learning was used to add the behavior.

numeri commented on I do not remember my life and it's fine   aethermug.com/posts/i-do-... · Posted by u/mrcgnc
viccis · 3 months ago
There's really no way to know this, as it's all based on subjective experiences in which two people could easily describe the same sensation differently.
numeri · 3 months ago
That's a bold claim! Actually, there are plenty of scientific experiments that show actual differences between people who report aphantasia and those who don't, including different stress responses to frightening non-visual descriptions, different susceptibility to something called image priming, lower "cortical excitability in the primary visual cortex", and more: https://en.wikipedia.org/wiki/Aphantasia

So we know that at least the people who claim to see nothing act differently. Could it just be that people who act differently describe the sensation differently, you might ask?

No, because there are actual cases of acquired aphantasia after neurological damage. These people used to belong to the group that claimed to be able to imagine visual images, got sick, then sought medical help when they could no longer visualize. For me, at least, that's pretty cut-and-dried evidence that it's not just differing descriptions of the same (or similar) sensations.

numeri commented on I do not remember my life and it's fine   aethermug.com/posts/i-do-... · Posted by u/mrcgnc
viccis · 3 months ago
Half the time when people describe aphantasia, I want to say something like "you realize that most people don't 'see' things in their mind as clear as open eye visuals, right?" but I keep quiet because I know that the worst thing you can do with something like this is make them feel as though you've invalidated something that has become a core pillar of their identity by that point.
numeri · 3 months ago
That's the thing: some people do see things in their mind that clearly. It's about as rare as full aphantasia, but it's absolutely a spectrum.
numeri commented on I do not remember my life and it's fine   aethermug.com/posts/i-do-... · Posted by u/mrcgnc
opan · 3 months ago
Reading the article, I think I can do what the author can't, but I also think he probably imagines what he lacks to be more clear/detailed than it is for people without the issue. I can recall specific events from many years ago from my perspective, but it's tidbits, and the info feels lossy. The question he struggled with about past challenges is difficult for most people, I'd guess, but I do not think his issues are fake/normal because of that.
numeri · 3 months ago
I think you're assuming more people are like you than actually are.

This is part of the classic debate around aphantasia – both sides assume the other side is speaking more metaphorically, while they're speaking literally. E.g., "Surely he doesn't mean he literally can't visualize things, he just means it's not as sharp for him." or "Surely they don't literally mean they can see it, they're just imagining the list of details/attributes and pretending to see it."

numeri commented on I do not remember my life and it's fine   aethermug.com/posts/i-do-... · Posted by u/mrcgnc
paulcole · 3 months ago
What do you mean “hasn’t prepared for them”?

Isn’t just living and thinking preparing for questions like this? They’re not that hard.

numeri · 3 months ago
They're definitely quite hard for me. I bet my colleagues, friends or family could answer them for me better than I can without prep (which would involve chatting with my wife). Many of the experiences in this article resonate with me, but it's definitely not quite as extreme.
numeri commented on Claude Code: An Agentic cleanroom analysis   southbridge-research.noti... · Posted by u/hrishi
fullstackchris · 3 months ago
Interesting... the analysis finds that MCP supports WebSockets as a transport, when there's big drama going on right now about Anthropic saying "they will never support that", folks hating SSE, and so on and so forth.
numeri · 3 months ago
Is the analysis right, or did the LLM hallucinate this?
numeri commented on Claude Code: An Agentic cleanroom analysis   southbridge-research.noti... · Posted by u/hrishi
demarq · 3 months ago
> Maybe that text explains what’s happening, maybe not

It would have been cool to see what prompt was used for that page!

numeri · 3 months ago
Yes, so that one can use it for more creative writing exercises. It was pretty creative, I'll give it that.
numeri commented on Claude Code: An Agentic cleanroom analysis   southbridge-research.noti... · Posted by u/hrishi
InGoldAndGreen · 3 months ago
The "LLMs perspective" section is hiding at the end of this notion is a literal goldmine
numeri · 3 months ago
No, it's completely useless, and puts the entire rest of the analysis in a bad light.

LLMs have next to no understanding of their own internal processes. There's a significant amount of research demonstrating this. Any explanation an LLM gives of its internal thought process is reverse-engineered to fit the final answer (interestingly, humans are also prone to this, as seen especially in split-brain experiments).

In addition, the degree to which the author must have prompted the LLM to get it to anthropomorphize this hard makes the rest of the project suspect. How many of the results come from repeated human prompting until the author liked them, and how many from actual LLM intelligence/analysis skill?

numeri commented on LLMs can see and hear without any training   github.com/facebookresear... · Posted by u/T-A
quantadev · 4 months ago
My favorite AI term to ridicule is the recent "Test Time Compute" nonsense, which has nothing whatsoever to do with testing. It literally just means "inference time".

And if I hear someone say "banger", "cooking", "insane", or "crazy" one more time, I'm going to sledgehammer my computer. Can't someone under 40 please pick up a book and read? Yesterday Sam Altman tried to coin "Skillsmaxxing" in a tweet. I threw my coffee cup at my laptop.

numeri · 4 months ago
It makes quite a lot of sense juxtaposed with "train time compute". The point being made is that a set budget can be split between paying for more training and paying for more inference _at test time_, or rather _at the time of testing_ the model. The word "time" in "inference time" plays a slightly different role grammatically (a noun, rather than part of an adverbial phrase), but comes out to mean the same thing.
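A toy version of that trade-off, in my own notation (not from the thread):

    C_total = C_train + Q * C_test

where Q is the number of queries served. For a fixed C_total, every FLOP not spent at train time can instead buy more per-query compute at test time: longer reasoning chains, more samples, and so on.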
