Iman Mirzadeh on Machine Learning Street Talk (Great podcast if you haven’t already listened!) put into words a thought I had - LLM labs are so focused on making those scores go up it’s becoming a bit of a perverse incentive.
If your headline metric is a score, and you constantly test on that score, it becomes very tempting to do anything that makes that score go up - i.e. train on the test set.
I believe all the major ML labs are doing this now because:
- No one talks about their data set
- The scores are front and center of big releases, but there is very little discussion or nuance other than the metric.
- The repercussions of not having a higher or comparable score are massive: the release is seen as a failure and your budget will get cut.
More in-depth discussion of capabilities - while harder - is a good signal for a release.
> LLM labs are so focused on making those scores go up it’s becoming a bit of a perverse incentive.
This seems like an odd comment to post in response to this article.
This is about showing that a new architecture can match the results of more established architectures in a more efficient way. The benchmarks are there to show this. Of course they aren’t going to say “It’s just as good – trust us!”.
Unfortunately, I'm not sure what a solution that can't be gamed may even look like (which is what gp is asking for).
Being _perceived_ as having the best LLM/chatbot is a billion dollar game now. And it is an ongoing race, at breakneck speeds. These companies are likely gaming the metrics in any and all ways that they can.
Of course there are probably many working on genuine improvements also. And at the frontier it can be very difficult to separate "hack" from "better generalized performance". But that is much harder, so might be the minority in terms of practical impact already.
It is a big problem, for researchers at least, that we/they do not know what is in the training data or how that training process works. Figuring out whether (for example) data leaks or overeager preference tuning caused performance to improve on a given task is extremely difficult with these giganormous black boxes.
You have potentially billions of dollars to gain, no way to be found out… it’s a good idea to initially assume there’s cheating and work back from there.
Intelligence is so vaguely defined and has so many dimensions that it is practically impossible to assess. The only approximation we have is the benchmarks we currently use. It is no surprise that model creators optimize their models for the best results in these benchmarks. Benchmarks have helped us drastically improve models, taking them from a mere gimmick to "write my PhD thesis." Currently, there is no other way to determine which model is better or to identify areas that need improvement.
That is to say, focusing on scores is a good thing. If we want our models to improve further, we simply need better benchmarks.
Current AI lacks:
- First-person perspective simulation
- Continuous self-monitoring (metacognition error <15%)
- Episodic future thinking (>72h horizon)
- Episodic binding (memory integration), which depends on: theta-gamma cross-frequency coupling (40Hz phase synchronization), dentate gyrus pattern separation (1:7000 distinct memory encoding), and posterior cingulate cortex (reinstatement of distributed patterns)
AI's failure manifests in:
- Inability to distinguish similar-but-distinct events (conceptual blending rate ~83%)
- Failure to update prior memories (persistent memory bias >69%)
- No genuine recollection (only pattern completion)
Non-Essential (Emotional Valence) - while emotions influence human storytelling:
- 65% of narrative interpretations vary culturally
- Affective priming effects decay exponentially (<7s half-life)
- Neutral descriptions achieve 89% comprehension accuracy in controlled studies
The core computational challenge remains bridging:
- Symbolic representation (words/syntax)
- Embodied experience (sensorimotor grounding)
- Self-monitoring (meta-narrative control)
Current LLMs simulate 74% of surface narrative features but lack the substrate for genuine meaning-making. It's like generating symphonies using only sheet music - technically accurate, but devoid of the composer's lived experience.
Benchmark scores are table stakes - necessary but not sufficient to demonstrate the capabilities of a model. Casual observers might just look at the numbers, but anyone spending real money on inference will run their own tests on their own problems. If your model doesn't perform as it should, you will be found out very quickly.
A comparison of testing criticality across countries would be interesting to read if someone knows a decent reference. My sense (which I don't trust) is that test results matter at-least-as much or more in other places than they do in the US. For example, are England's A-levels or China's gaokao tests or Germany's Abitur tests more or less important than US SATs/ACTs?
A large amount of work in the last few years has gone into building benchmarks because models have been going through and beating them at a fairly astonishing rate. It's generally accepted as true that passing any one of them does not constitute fully general intelligence, but the difficult part has been finding things that they cannot do. Benchmark designers keep giving them more and more difficult tasks. The ARC prize in particular was designed to be focused on reasoning more than knowledge. The 87.5% score achieved in such a short time by throwing lots of resources at conventional methods was quite a surprise.
You can at least have a degree of confidence that they will perform well in the areas covered by the benchmarks (as long as they weren't contaminated) and with enough benchmarks you get fairly broad coverage.
> It's generally accepted as true that passing any one of them does not constitute fully general intelligence but the difficult part has been finding things that they cannot do.
It's pretty easy to find things they can't do. They lack a level of abstraction that even small mammals have, which is why you see them constantly failing when it comes to things like spatial awareness.
The difficult part is creating an intelligence test that they score badly on. But that's more of an issue with treating intelligence tests as if they're representative of general intelligence.
It's like having difficulty finding a math problem that Wolfram Alpha would do poorly on. If a human were able to solve all of these problems as well as Wolfram Alpha, they would be considered a genius. But Wolfram Alpha being able to solve those questions doesn't show that it has general intelligence, and trying to come up with more and more complicated math problems to test it with doesn't help us answer that question either.
> does not constitute fully general intelligence but the difficult part has been finding things that they cannot do
I am very surprised when people say things like this. For example, the best ChatGPT model continues to lie to me on a daily basis for even basic things. E.g. when I ask it to explain what code is contained on a certain line on github, it just makes up the code and the code it's "explaining" isn't found anywhere in the repo.
From my experience, every model is untrustworthy and full of hallucinations. I have a big disconnect when people say things like this. Why?
The trick is that the benchmarks must have a wide enough distribution so that a well scoring model is potentially useful for the widest span of users.
There would also need to be a guarantee (or some way of checking the model) that providers don't just train on the benchmarks. Possible solutions are dynamic components (random names, numbers, etc.) or keeping parts of the benchmark private.
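To make the "dynamic components" idea concrete, here is a toy sketch (my own illustration, not any benchmark's actual implementation) of a templated item that is re-instantiated with fresh names and numbers on every evaluation run, so a model that memorized a leaked copy gains nothing. The template, name list and scoring harness are all made up:

```python
import random

# Hypothetical illustration of a "dynamic" benchmark item: same skill, fresh surface form each run.
NAMES = ["Ada", "Bjarne", "Grace", "Linus", "Margaret"]

def make_item(rng: random.Random) -> dict:
    """Instantiate one arithmetic word problem with random names and numbers."""
    name = rng.choice(NAMES)
    a, b = rng.randint(12, 97), rng.randint(12, 97)
    question = f"{name} has {a} apples and buys {b} more. How many apples does {name} have now?"
    return {"question": question, "answer": str(a + b)}

def score(model_answer_fn, n_items: int = 100, seed: int = 0) -> float:
    """model_answer_fn is a stand-in for whatever completion call you actually use."""
    rng = random.Random(seed)
    items = [make_item(rng) for _ in range(n_items)]
    correct = sum(model_answer_fn(it["question"]).strip() == it["answer"] for it in items)
    return correct / n_items
```

The obvious trade-off is that a template only probes one narrow skill, which is why held-back private splits (mentioned just below) are the other common mitigation.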
A common pattern is for benchmark owners to hold back X% of their set so they can independently validate that models perform similarly on the holdback set. See: the FrontierMath / OpenAI brouhaha.
Typically you train it on one set and test it on another set. If you see that the differences between the two sets are significant enough and yet it has maintained good performance on the test set, you claim that it has done something useful [alongside gaming the benchmark that is the train set]. That "side effect" is always the useful part in any ML process.
If the test set is extremely similar to the train set then yes, it's Goodhart's law all around. For modern LLMs, it's hard to make a test set that is different from what the model has trained on, because of the sheer expanse of the training data used. Note that the two sets are different only if they are statistically different; it is not enough that they simply don't repeat verbatim.
The hard part is making the benchmark meaningful in the first place.
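To illustrate the "not enough that they simply don't repeat verbatim" point a couple of comments up: the crude n-gram overlap check below is the kind of contamination test that only catches verbatim or near-verbatim reuse, and paraphrased or merely statistically similar test items sail right past it. This is a sketch, not any lab's actual pipeline:

```python
def ngrams(text: str, n: int = 8) -> set:
    """Set of word n-grams in a document (whitespace tokenization, purely illustrative)."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contamination_rate(train_docs: list, test_docs: list, n: int = 8) -> float:
    """Fraction of test documents sharing at least one n-gram with the training corpus."""
    train_grams = set()
    for doc in train_docs:
        train_grams |= ngrams(doc, n)
    flagged = sum(bool(ngrams(doc, n) & train_grams) for doc in test_docs)
    return flagged / max(len(test_docs), 1)
```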
We've been able to pass the Turing test on text, audio, and short-form video (think AIs on video passing coding tests). I think there's an important distinction now with AI streamers, where people eventually notice they are AIs. Soon there may be AI streamers where you don't know they're an AI. However, there's a ceiling on how far digital interactions can take the Turing test. The next big hurdle towards AGI is physical interaction, like entering a room.
Yeah, and if anything, RL has a rep of being too good at this job, because of all the cases where it gamed a benchmark by picking up on some environmental factor the supervisors hadn't thought of (numerical instabilities, rounding, bugs, etc.).
No, that is patently false. Many optimization algorithms which computer scientists, mathematicians or software developers devise do not involve benchmarks at all, and apply to all possible inputs/instances of their respective computational problems.
The romanization of these names is always confusing because, stripped of the characters and tones, it's just gibberish. "Hunyuan" or 混元 in Chinese means "Primordial Chaos" or "Original Unity".
This helps as more Chinese products and services hit the market and makes the names easier to remember. The naming is similar to the popularity of Greek mythology in Western products (e.g. all the products named "Apollo").
I think it's particularly egregious that they use such a lossy encoding. I can't read the hanzi, but at least "Hùn yuán" would have been more helpful, or even "Hu4n yua1n" would have enabled me to pronounce it or look it up without having the context to guess which characters it was representing.
Yes, this is very annoying, because of how Pinyin works. A lot of mistakes are made when using Pinyin in English content. Pinyin is supposed to break at the character level: Pinyin = Pin Yin. You can easily write it as Pin-Yin or Pin Yin, but writing it run together as "Pinyin" is just wrong.
Tone marks are also of limited use to non-Chinese readers, who don't understand the tone system and probably can't even audibly distinguish tones.
So, it makes sense that we get this weird system even though it's strictly worse.
Hun Yuan is a lot better. I agree, with Unicode we can easily incorporate the tone.
Agreed. We all have a duty to respect languages and their official transcription. Pinyin with tones does not look much different from French with accents. In both cases, most people aren’t likely to pronounce it correctly, though.
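On the "with Unicode we can easily incorporate the tone" point above, here is a minimal sketch that converts numbered pinyin such as "hun4 yuan2" into tone-marked "hùn yuán". It implements only the simplified mark-placement rule and ignores capitalization and other edge cases:

```python
# Tone mark variants for each vowel, indexed by tone number (index 0 = bare / neutral tone).
TONES = {
    "a": "aāáǎà", "e": "eēéěè", "i": "iīíǐì",
    "o": "oōóǒò", "u": "uūúǔù", "v": "üǖǘǚǜ",  # "v" is the usual ASCII stand-in for ü
}

def mark_syllable(syllable: str) -> str:
    """Convert one numbered-pinyin syllable ("yuan2") to tone-marked form ("yuán")."""
    if not syllable or not syllable[-1].isdigit():
        return syllable
    tone = int(syllable[-1]) % 5          # tone 5 (neutral) keeps the bare vowel
    body = syllable[:-1].lower()
    # Simplified placement rule: "a" or "e" takes the mark; in "ou" it's "o";
    # otherwise the last vowel takes it.
    for target in ("a", "e"):
        if target in body:
            return body.replace(target, TONES[target][tone], 1)
    if "ou" in body:
        return body.replace("o", TONES["o"][tone], 1)
    for ch in reversed(body):
        if ch in TONES:
            idx = len(body) - 1 - body[::-1].index(ch)
            return body[:idx] + TONES[ch][tone] + body[idx + 1:]
    return body

print(" ".join(mark_syllable(s) for s in "hun4 yuan2".split()))  # -> hùn yuán
```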
The irony is not lost on me that Tencent themselves did that.
> The naming is similar to the popularity of Greek mythology in Western products (e.g. all the products named "Apollo")
Popular? So you’re saying that all the VPs who have come up with the mind bendingly unique and creative name Prometheus didn’t do so out of level 10 vision?
> 好的，用户发来消息：“hello do you speak english” (Hunyuan-T1 thinking response; roughly: "Okay, the user sent the message: 'hello do you speak english'")
It's kind of wild that even a Chinese model replies "好的" as the first tokens, which basically means "Ok, so..." like R1 and the other models respond. Is this RL'ed or just somehow a natural effect of the training?
If anything I feel like “Ok, so…” is wasted tokens so you’d think RL that incentivizes more concise thought chains would eliminate it. Maybe it’s actually useful in compelling the subsequent text to be more helpful or insightful.
There was a paper[1] from last year where the authors discovered that getting the model to output anything during times of uncertainty improved the generations overall. If all of the post-training alignment reasoning starts with the same tokens, then I could see how it would condition the model to continue the reasoning phase.
[1] https://arxiv.org/abs/2404.15758
This is not the case -- it's actually the opposite. The more of these tokens it generates, the more thinking time it gets (very much like humans going "ummm" all the time.) (Loosely speaking) every token generated is an iteration through the model, updating (and refining) the KV cache state and further extending the context.
If you look at how post-training works for logical questions, the preferred answers are front-loaded with "thinking tokens" -- they consistently perform better. So, if the question is "what is 1 + 1?", they're post-trained to prefer "1 + 1 is 2" as opposed to just "2".
Ok, so I'm thinking here that.. hmm... maybe.. just maybe... there is something that, kind of, steers the rest of the thought process into a, you know.. more open process? What do you think? What do I think?
As opposed to the more literary, authoritative prose from textbooks and papers, where the model output has to commit to a chain of thought from the get-go. Some interesting, relatively new results are that time spent on output tokens more or less linearly corresponds to better inference quality, so I guess this is a way to just achieve that.
The tokens are inserted artificially in some inference setups: when the model wants to end the sentence, you swap out the end token for "hmmmm" and it will happily continue.
This seems backwards. Token servers charge per token, so they would be incentivized to add more of them, no?
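As a rough sketch of the end-token swap described a couple of comments up: the decode loop below replaces the end-of-thinking token with a filler token until a minimum budget is reached. The token ids, the threshold, and the model.next_token interface are all hypothetical and made up for illustration; this is not any particular lab's implementation:

```python
THINK_END = 128003        # hypothetical id of the end-of-thinking token
FILLER = 55555            # hypothetical id of a filler token such as "hmm"
MIN_THINK_TOKENS = 256    # force at least this much "thinking" before allowing a stop

def generate_with_forced_thinking(model, prompt_ids: list) -> list:
    """Greedy decode that refuses to stop thinking before a minimum token budget."""
    ids = list(prompt_ids)
    emitted = 0
    while True:
        nxt = model.next_token(ids)   # one forward pass, extending the KV cache
        if nxt == THINK_END and emitted < MIN_THINK_TOKENS:
            nxt = FILLER              # swap the stop for a filler; the model keeps thinking
        ids.append(nxt)
        emitted += 1
        if nxt == THINK_END:
            return ids
```

Roughly the same effect as a minimum-new-tokens constraint, except a specific filler token is injected instead of merely suppressing the stop.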
The only metric I really care about, and the one that I think shows the fundamental failure of LLMs as a technology, is this one here [1]. The fact that o1 fails a non-zero amount of the time on the question, "what is 6*1?" means that the models just do not "understand" _anything_ and are still just fancy stochastic parrots. Now, stochastic parrots are still useful! Just not the digital god a lot of people seem to think we're heading towards.
[1] https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
I'm not seeing anything in that graph that implies that o1 ever fails on "what is 6*1?" The chart is graphing the number of digits on each axis; it fails on "what is (some 6 digit number) * (some 1 digit number)"
I don't think this will or necessarily should ever be fixed. The eventual solution (I imagine) will be to simply plug in a calculator. All the MCP talk on HN pushed me to try MCP out, and I'm sold. Having a Swiss army knife of tools available, like a calculator, would let a brain do what a brain is best at, and a calculator do what a calculator is best at.
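A minimal sketch of the "plug in a calculator" idea, with a made-up CALC(...) tool-call convention standing in for MCP or any real function-calling API: the model is prompted to emit a tool call for exact arithmetic, and the harness evaluates it instead of letting the model guess digits:

```python
import ast
import operator as op
import re

# Operators the "calculator tool" is allowed to evaluate.
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv, ast.USub: op.neg}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression without handing arbitrary code to eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(model_output: str) -> str:
    """If the model emitted a CALC(...) call (hypothetical convention), compute it exactly."""
    m = re.fullmatch(r"\s*CALC\((.+)\)\s*", model_output)
    return str(safe_eval(m.group(1))) if m else model_output

print(answer("CALC(123456 * 7)"))  # -> 864192
```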
The chart you show is about the accuracy of x*y where x and y have an increasing number of digits.
This graph shows that both o1 and o3-mini are better at calculating in one’s head than any human I have known. It only starts to break down towards calculating the product of two eight digit factors etc.
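For context, the kind of grid eval behind charts like the one linked is roughly the sketch below: sample n-digit by m-digit products and measure exact-match accuracy per cell. model_answer_fn is a stand-in for whatever completion call you'd use, and the trial count and prompt wording are arbitrary:

```python
import random

def mult_accuracy(model_answer_fn, n_digits: int, m_digits: int,
                  trials: int = 50, seed: int = 0) -> float:
    """Exact-match accuracy on random n-digit by m-digit multiplication problems."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x = rng.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        y = rng.randint(10 ** (m_digits - 1), 10 ** m_digits - 1)
        reply = model_answer_fn(f"What is {x}*{y}? Reply with just the number.")
        correct += reply.strip().replace(",", "") == str(x * y)
    return correct / trials
```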
So many models coming out these days, so many developments happening in the AI space in general, it's kinda hard to keep up with it all. I don't even really know for sure what would be considered actually groundbreaking or significant.
I try to generally keep up with the overall trends, but I’m an engineer at a resource-constrained startup, not a research scientist. I want to see real-world application, at least mid-term value, minimum lock-in, and strong supportability. Until then, I just don’t have time to think about it.
https://nlp.elvissaravia.com/t/ai
For me, nothing has been groundbreaking or significant. What we are seeing is the same as with every new innovation: a suite of micro-innovations which improve efficiency and reduce cost.
But LLMs are still fundamentally a stochastic parrot that depends heavily on source data to produce useful results. So we will go through a lull until there is some new groundbreaking research which moves everything forward. And then the cycle repeats.
Trying to drink from the firehose of ML research is only valuable for extremely active research participants. Can be fun though :)
As someone who frequently thinks in both English and Chinese, I wonder if this "proves" that the Whorfian hypothesis is correct, or maybe at least more efficient?
Saving others a web search for some random name...
> Linguistic relativity asserts that language influences worldview or cognition. [...] Various colloquialisms refer to linguistic relativism: the Whorf hypothesis; the Sapir–Whorf hypothesis; the Whorf-Sapir hypothesis; and Whorfianism. [...] Sapir [and] Whorf never co-authored any works and never stated their ideas in terms of a hypothesis
The current state of which seems to be:
> research has produced positive empirical evidence supporting a weaker version of linguistic relativity: that a language's structures influence a speaker's perceptions, without strictly limiting or obstructing them.
From https://en.wikipedia.org/wiki/Linguistic_relativity
It also appears to be intentional:
> [Q:] Do you understand English?
> [A:] 您好！我是由腾讯开发的腾讯元宝(Tencent Yuanbao)，当前基于混元大模型(Hunyuan-T1)为您服务。我主要使用中文进行交互，但也具备一定的英文理解能力。您可以用中文或英文随时与我交流，我会尽力为您提供帮助~ 若有特定需求，也可以随时告知我切换更适配的模型哦！
In relevant part:
> I mainly use Chinese to interact, but also have a certain ability to understand English. You can use Chinese or English to communicate with me at any time, [and] I will do my utmost to offer you assistance~