ggm · a month ago
"Breakthrough" is marketing. Come back with some peer review; in the meantime I'm internally translating this as an incremental improvement, like most things these last 40 years or more.

The tables of scores strongly speak to increments.

[Edit: it's what the original article says. Not the OP's fault]

cyp0633 · a month ago
This is my direct translation from the subtitle of the Chinese article. Apologies if there's any inaccuracy.
ggm · a month ago
I should have said it's the original article's fault and not yours.
SilverElfin · a month ago
Some people have claimed that LLMs that aren’t from the big foundational model providers (OpenAI, Anthropic, Gemini) are basically gaming benchmarks to get great results. Does anyone know if that’s actually true? I don’t understand this entire post but from the tables of benchmark scores, it seems like this model performs well in a large variety of things. It feels to me like the diversity of benchmarks may mean it’s not just something built to game a benchmark, right?
viraptor · a month ago
Why not just check on your real tasks? I'm quite happy with the k2.5 and glm5 performance in practice. Whether they also gamed the benchmarks is not as relevant.
ne0phyt3 · a month ago
Is it the LLM model weights or the training data that's important and confidential?
cyp0633 · a month ago
No translation yet
9864247888754 · a month ago
Trained with the trash produced by their braindead underclass clientele.

And they'll eat the slop right up.