Readit News
Posted by u/johnnyApplePRNG 9 months ago
Ask HN: Can anybody clarify why OpenAI reasoning now shows non-English thoughts?
People have noticed for a while now that Google's Bard/Gemini often inserts random Hindi/Bengali words. [0]

I just caught this in an o3-pro thought process: "and customizing for low difficulty. কাজ করছে!"

That last set of chars is apparently Bengali for "working!".

I just find it curious that similar "errors" are appearing across multiple different models... what is it about the training method or the reasoning process that lets these other languages creep in? Does anyone know?

[0] https://www.reddit.com/r/Bard/comments/18zk2tb/bard_speaking_random_languages/

mindcrime · 9 months ago
LLMs aren't humans, and there's no reason to expect their "thinking"[1] to behave exactly - or even much - like human thinking. In particular, they don't need to "think" in one language. More concretely, the DeepSeek R1 paper[2] observed this "thought language mixing" and ran some experiments on suppressing it... and the model results got worse. So I wouldn't personally think of it as an "error", but rather as just an artifact of how these things work.

[1]: By this I mean "whatever it is they do that can be thought of as sorta kinda roughly analogous to what we generally call thinking." I'm not interested in getting into a debate (here) about the exact nature of thinking and whether or not it's "correct" to refer to LLMs as "thinking". It's a colloquialism that I find useful in this context, nothing more.

[2]: https://arxiv.org/pdf/2501.12948

learningstud · 9 months ago
I really don't get why people would want AI to think like humans even remotely, especially when we don't even know how humans think. Most people simply cannot provide justification for whatever comes out of their mouths, e.g. try explaining how planimeters work for contours that are not differentiable at every point. This question will bring out the LLM-like behavior in people.
puttycat · 9 months ago
Multilingual LLMs don't have a clear boundary between languages. They only appear to have one because they maximize likelihood: asking something in English will most likely produce an English continuation, and so on.

In other circumstances, the decoding path may pass through other character sets, if the output probabilities justify it.
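A toy sketch of that decoding argument (every token name and probability here is invented for illustration; a real model has a vocabulary of ~100k tokens, not four): when the model's probability mass for the next token is split across scripts, ordinary sampling will occasionally pick the non-English continuation even though English tokens are individually more likely.

```python
import random

# Hypothetical next-token distribution for a context like
# "... and customizing for low difficulty. "
next_token_probs = {
    "Working":    0.40,  # English continuation
    "Done":       0.25,
    "কাজ":        0.20,  # Bengali "work" -- same meaning, different script
    "Fonctionne": 0.15,  # French
}

def sample(probs, rng):
    """Sample one token from a categorical distribution via inverse CDF."""
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # numerical safety for r very close to 1.0

rng = random.Random(0)
draws = [sample(next_token_probs, rng) for _ in range(1000)]
frac = draws.count("কাজ") / len(draws)
# The Bengali token shows up roughly a fifth of the time, even though
# each individual English token out-ranks it on no single draw.
print(frac)
```

Greedy decoding would always emit "Working" here; it's the sampling (and the fact that training never penalized script-switching per se) that lets the other scripts through.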

daeken · 9 months ago
This has been really interesting to me. I've been learning Spanish for a while and will mix un poco español en my sentences with ChatGPT all the time, and it's cool to see the same thing reflected back to me. It's not uncommon for a response to be 75% in English with 25% Spanish at the beginnings and ends especially. All of my conversation titles are in Spanish because I always start them with "Hola", so whatever model sets the title just assumes Spanish for it, regardless of what the rest of the message is.
pixl97 · 9 months ago
I was on vacation last week and was waiting at a restaurant for a while. The lady behind me kept switching between English and Spanish every few sentences, which caught my attention; I can only assume the person on the other end was bilingual too. Someone in their family had a medical emergency and was in the hospital. What's interesting is that the sentences about medical matters were in very good English, while the Spanish sentences were about other things (I can't interpret very fast at all). Given the speed and fluency of the conversation, there seemed to be no cost to them in using either language.
outside1234 · 9 months ago
I’m so glad I am not the only one that does this!
ASalazarMX · 9 months ago
I usually interact with LLMs in English. A few weeks ago I made a Gemini gem that tries to consider two opposite sides, moderator included. Somehow it started including bits of Spanish in some of its answers, which I actually don't mind because that's my primary language.

I assumed it knew I speak Spanish from other conversations, my Google profile, geolocation, etc. Maybe my English has enough hints that it was learned by a native Spanish speaker?

johnnyApplePRNG · 9 months ago
I understand that, but how common could it really be to mix a single Bengali word or phrase like that into a larger English passage?

Perhaps it's more common in the parts of the world where Bengali and English are both commonly spoken?

Why so much Bengali/Hindi, then, and why not other languages?

epa · 9 months ago
There are many users in India training these models. There is also a lot more content out there in those languages that the models are consuming.
hiAndrewQuinn · 9 months ago
There have been a nonzero number of times when asking Gemini in Finnish about the demoscene or early-1990s tech returned much more... colorful answers than the equivalent questioning in English did.
Vilian · 9 months ago
Colorful answers?
yen223 · 9 months ago
I have no idea what's going on with ChatGPT, but I can say it's pretty common for multilingual people to be thinking about things in a different language from what they are currently speaking.
latentsea · 9 months ago
Language itself structures how you think about things, too. Some thoughts are easier to have in one language than in another, because one language naturally expresses an idea in a way that is possible, but less natural, to express in the other.
johnnyApplePRNG · 9 months ago
Interesting, thanks! Yeah, I forgot that even I used to be able to think in another language long ago!
Bjorkbat · 9 months ago
I don't actually think this is the case, but nonetheless I think it would be kind of funny if LLMs somehow "discovered" linguistic relativity (https://en.wikipedia.org/wiki/Linguistic_relativity).
diwank · 9 months ago
This isn’t entirely surprising. Language-model “reasoning” is basically the model internally exploring possibilities in token-space. These models are trained on enormous multilingual datasets and optimized purely for next-token prediction, not language purity. When reasoning traces or scratchpads are revealed directly (as OpenAI occasionally does with o-series models or DeepSeek-R1-zero), it’s common to see models slip into code-switching or even random language fragments, simply because it’s more token-efficient in their latent space.

For example, the DeepSeek team explicitly reported this behavior in their R1-zero paper, noting that purely unsupervised reasoning emerges naturally but brings some “language mixing” along. Interestingly, they found a small supervised fine-tuning (SFT) step with language-consistency rewards slightly improved readability, though it came with trade-offs (DeepSeek blog post).

My guess is OpenAI has typically used a smaller summarizer model to sanitize reasoning outputs before display (they mentioned summarization/filtering briefly at Dev Day), but perhaps lately they’ve started relaxing that step, causing more multilingual slips to leak through. It’d be great to get clarity from them directly on whether this is intentional experimentation or just a side-effect.

[1] DeepSeek-R1 paper that talks about poor readability and language mixing in R1-zero’s raw reasoning https://arxiv.org/abs/2501.12948

[2] OpenAI “Detecting misbehavior in frontier reasoning models” — explains use of a separate CoT “summarizer or sanitizer” before showing traces to end-users https://openai.com/index/chain-of-thought-monitoring/
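The display-time sanitizing speculated about above can be caricatured in a few lines. This is only a toy script-matching heuristic under the assumption that some filter sits between the raw trace and the user; the actual pipeline, per the comment, reportedly uses a separate summarizer model, not a regex-level filter.

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Return the Unicode script bucket most letters in `text` fall into.

    Unicode character names start with the script, e.g. "BENGALI LETTER KA",
    so the first word of the name is a crude script label.
    """
    counts = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch, "UNKNOWN").split()[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "NONE"

def sanitize(trace: str, expected_script: str = "LATIN") -> str:
    """Drop sentences whose dominant script doesn't match the conversation."""
    kept = [s for s in trace.split(". ")
            if dominant_script(s) in (expected_script, "NONE")]
    return ". ".join(kept)

raw = "and customizing for low difficulty. কাজ করছে!"
print(sanitize(raw))  # the Bengali fragment is filtered out
```

If a filter like this were recently relaxed or removed, previously hidden code-switching would suddenly become visible to users, which matches what the thread is observing.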

ipsum2 · 9 months ago
Models like o3 are rewarded for the final output, not the intermediate thinking steps. So whatever it generates as "thoughts" that leads to a better answer gets a higher score.

The DeepSeek-R1 paper has a section on this, where they 'punish' the model if it thinks in a different language, in order to make the thinking tokens more readable. Anthropic probably does this too.
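A minimal sketch of that reward structure, in the spirit of what the paper describes. The weight `lam` and the Latin-script regex are invented stand-ins: a real setup would use a learned verifier and a language-ID model, not a character-class check.

```python
import re

def answer_reward(answer: str, expected: str) -> float:
    """Outcome-only reward: 1 if the final answer is right, else 0."""
    return 1.0 if answer.strip() == expected.strip() else 0.0

def language_consistency(thoughts: str) -> float:
    """Fraction of word-like tokens written in basic Latin script.

    A crude proxy for the paper's language-consistency signal.
    """
    words = thoughts.split()
    if not words:
        return 1.0
    latin = sum(1 for w in words if re.fullmatch(r"[A-Za-z.,!?'\"-]+", w))
    return latin / len(words)

def total_reward(thoughts: str, answer: str, expected: str,
                 lam: float = 0.2) -> float:
    """Outcome reward plus a weighted language-consistency term.

    With lam = 0 the model is free to think in any mix of languages;
    with lam > 0, mixed-script thoughts score lower even when the final
    answer is identical -- the trade-off the paper reports.
    """
    return answer_reward(answer, expected) + lam * language_consistency(thoughts)

mixed = "and customizing for low difficulty. কাজ করছে!"
pure = "and customizing for low difficulty. Working!"
print(total_reward(mixed, "42", "42"))  # penalized for the Bengali span
print(total_reward(pure, "42", "42"))   # full consistency bonus
```

The dilution mentioned above is visible in the shape of `total_reward`: every unit of weight spent on readability is weight not spent on getting the answer right.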

drivingmenuts · 9 months ago
I see this as a problem. You can't make an LLM "unlearn" something; once it's in there, it's in there. With a huge database I can easily delete swathes of useless data, but I cannot do the same with an LLM. It's not a living, thinking being - it's a program running on a computer, a device that we can, in other circumstances, add information to or remove it from. We can suppress certain things, but that information is still in there, taking up space, and can still potentially be accessed.

We are intentionally undoing one of the things that makes computers useful.

janalsncm · 9 months ago
Others have mentioned that the DeepSeek R1 team also noticed this "problem". I believe there are two things going on here.

One, the model is no longer being trained to output likely tokens, or tokens likely to satisfy pairwise preferences, so the model doesn't care. You have to explicitly punish the model for language switching, which dilutes the reasoning reward.

Two, I believe there has been some research showing that models represent similar ideas in multiple languages in similar regions of their activation space; sparse autoencoders have shown this. So if the translated text makes sense, I think this is why. If not, I have no idea.
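A toy illustration of that shared-representation idea. Every vector here is fabricated; the point is only the geometry the interpretability work suggests: if a "concept" direction dominates the hidden state and the "language" direction is a smaller additive component, then swapping the surface language barely moves the vector, so same-concept states in different languages sit closer together than different-concept states in the same language.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden dimension

# Invented feature directions for two concepts and two languages.
concept = {"work": rng.normal(size=d), "food": rng.normal(size=d)}
language = {"en": rng.normal(size=d), "bn": rng.normal(size=d)}

def hidden(concept_name, lang_name, scale=0.3):
    """Fake hidden state: concept dominates, language is a weaker additive term."""
    return concept[concept_name] + scale * language[lang_name] \
        + 0.05 * rng.normal(size=d)

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

same_concept = cos(hidden("work", "en"), hidden("work", "bn"))   # high
same_language = cos(hidden("work", "en"), hidden("food", "en"))  # low
print(same_concept, same_language)
```

Under this geometry, "working!" and "কাজ করছে!" are near-interchangeable continuations from the model's point of view, which would explain why the mixed output still reads as a sensible translation.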