antirez · a year ago
Note the massive context length (130k tokens). Also because it would be kinda pointless to generate a long CoT without enough context to contain it and the reply.

EDIT: Here we are. My first prompt created a CoT so long that it catastrophically forgot the task (though I don't believe I was near 130k -- using ollama with the fp16 model). I asked a coding question from my test set, totally unrelated to what the model says here:

<QwQ output> But the problem is in this question. Wait perhaps I'm getting ahead of myself.

Wait the user hasn't actually provided a specific task yet. Let me check again.

The initial instruction says:

"Please act as an AI agent that can perform tasks... When responding, first output a YAML data structure with your proposed action, then wait for feedback before proceeding."

But perhaps this is part of a system prompt? Wait the user input here seems to be just "You will be given a problem. Please reason step by step..." followed by a possible task? </QwQ>

Note: Ollama "/show info" shows that the context size set is correct.

anon373839 · a year ago
> Note: Ollama "/show info" shows that the context size set is correct.

That's not what Ollama's `/show info` is telling you. It just means the model is capable of processing the context size displayed.

Ollama's behavior around context length is very misleading. There is a default context length limit parameter unrelated to the model's capacity, and I believe that default is a mere 2,048 tokens. Worse, when the prompt exceeds it, there is no error -- Ollama just silently truncates it!

If you want to use the model's full context window, you'll have to execute `/set parameter num_ctx 131072` in Ollama chat mode, or if using the API or an app that uses the API, set the `num_ctx` parameter in your API request.
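As a sketch of the API route described above (payload shape per Ollama's generate endpoint; the model name and prompt are just placeholders):

```python
import json

# Payload for Ollama's POST /api/generate endpoint. Per-request
# "options" override model defaults; num_ctx raises the context
# window from Ollama's 2,048-token default to the model's full size.
payload = {
    "model": "qwq",
    "prompt": "Why is the sky blue?",
    "options": {"num_ctx": 131072},
}

# e.g. send with: requests.post("http://localhost:11434/api/generate", json=payload)
print(json.dumps(payload))
```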

antirez · a year ago
Ok, this explains why QwQ is working great on their chat. Btw, I've seen this multiple times: ollama inference, for one reason or another, even without quantization, has issues with actual model performance. In one instance the same model, at the same quantization level, was great when run with MLX, while I got terrible results with ollama. The point here is not ollama itself, but that there is no testing at all for these models.

I believe that models should be released with test vectors at t=0, specifying the expected output for a given prompt at full precision and at different quantization levels. And also, for specific prompts, the full output logits for a few tokens, so that it's possible to compute the error introduced by quantization or inference bugs.

wizee · a year ago
Ollama defaults to a context of 2048 regardless of model unless you override it with /set parameter num_ctx [your context length]. This is because long contexts make inference slower. In my experiments, QwQ tends to overthink and question itself a lot and generate massive chains of thought for even simple questions, so I'd recommend setting num_ctx to at least 32768.

In my experiments with a couple of mechanical engineering problems, it did fairly well on the final answers, correctly solving problems that even DeepSeek R1 (full size) and GPT-4o got wrong in my tests. However, the chain of thought was absurdly long, convoluted, circular, and all over the place. This also made it very slow, maybe 30x slower than comparably sized non-thinking models.

I used a num_ctx of 32768, top_k of 30, temperature of 0.6, and top_p of 0.95. These parameters (other than context length) were recommended by the developers on Hugging Face.

zamadatix · a year ago
I always see:

  /set parameter num_ctx <value>
Explained but never the follow up:

  /save <custom-name>
So you don't have to do the parameter change every load. Is there a better way or is it kind of like setting num_ctx in that "you're just supposed to know"?
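For reference, the full in-chat sequence would look something like this (the saved name is arbitrary):

```
ollama run qwq
>>> /set parameter num_ctx 32768
>>> /save qwq-32k
>>> /bye
```

Afterwards, `ollama run qwq-32k` loads the model with the saved parameter, so the /set step doesn't need repeating.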

flutetornado · a year ago
My understanding is that top_k and top_p are two different methods of picking tokens during inference. top_k=30 considers the top 30 tokens when selecting the next token to generate, and top_p=0.95 considers the smallest set of tokens whose cumulative probability reaches 0.95. You should only need to select one.

https://github.com/ollama/ollama/blob/main/docs/modelfile.md...

Edit: Looks like both work together. "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)"

Not quite sure how this is implemented - maybe one is preferred over the other when there are enough interesting tokens!
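A toy sketch of how the two filters can compose (illustrative only, not Ollama's actual implementation): top-k first caps the candidate set, then top-p trims it to the smallest subset whose cumulative probability reaches p.

```python
import math

def sample_filter(logits, top_k=30, top_p=0.95):
    """Return the tokens that survive top-k then top-p (nucleus)
    filtering. Illustrative sketch, not any engine's real code."""
    # softmax over the raw logits
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    probs = {t: e / z for t, e in exps.items()}
    # top-k cut: keep only the k most likely tokens
    kept = sorted(probs.items(), key=lambda kv: -kv[1])[:top_k]
    # top-p cut over the survivors: stop once cumulative prob reaches p
    out, cum = [], 0.0
    for tok, p in kept:
        out.append(tok)
        cum += p
        if cum >= top_p:
            break
    return out

logits = {"a": 5.0, "b": 4.0, "c": 1.0, "d": 0.5, "e": 0.1}
print(sample_filter(logits, top_k=4, top_p=0.9))  # ['a', 'b'] for this toy distribution
```

So both settings do act at once; whichever cut is tighter dominates.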

hbbio · a year ago
"My first prompt created a CoT so long that it catastrophically forgot the task"

Many humans would do that

AustinDev · a year ago
I tried the 'Strawberry' question which generated nearly 70k words of CoT.
moffkalast · a year ago
I think you guys might be using too low of a temperature, it never goes beyond like 1k thinking tokens for me.
nicman23 · a year ago
lol did it at least get it right?
ignorantguy · a year ago
Yeah, it did the same in my case too. It did all the work in the <think> tokens but did not spit out the actual answer. I was not even close to 100k tokens.
freehorse · a year ago
If you did not change the context length, it is certain that it is not 2k or so. In "/show info" there is a field "context length" which is about the model in general, while "num_ctx" under "parameters" is the context length for the specific chat.

I use modelfiles because I only use ollama for its easy integration with other tools (e.g. Zed), and this way I can directly pick models with a preset context size.

Nothing fancy here, just

    FROM qwq
    PARAMETER num_ctx 100000
Save this somewhere as a text file, then run

    ollama create qwq-100k -f path/to/that/modelfile
and you now have "qwq-100k" in your list of models.

smallerize · a year ago
From https://huggingface.co/Qwen/QwQ-32B

> Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
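If I'm reading the model card right, enabling YaRN means adding a rope_scaling block like this to the model's config.json (values as given by Qwen; only add it when you actually need long contexts):

```
{
  ...,
  "rope_scaling": {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn"
  }
}
```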

GTP · a year ago
Sorry, could you please explain what this means? I'm not into machine learning, so I don't get the jargon.
tsunego · a year ago
Can’t wait to see if my memory can even accommodate this context
k_sze · a year ago
Oddly, the Chinese LLM host SiliconFlow only makes it available with 32k context, which is even smaller than their DeepSeek-R1 offering.
codelion · a year ago
that's interesting... i've been noticing similar issues with long context windows & forgetting. are you seeing that the model drifts more towards the beginning of the context or is it seemingly random?

i've also been experimenting with different chunking strategies to see if that helps maintain coherence over larger contexts. it's a tricky problem.

orbital-decay · a year ago
Neither lost-in-the-middle nor long context performance have seen a lot of improvement in the recent year. It's not easy to generate long training examples that also stay meaningful, and all existing models still become significantly dumber after 20-30k tokens, particularly on hard tasks.

Reasoning models probably need some optimization constraint put on the length of the CoT, and also some priority constraint (only reason about things that need it).

gagan2020 · a year ago
China's strategy is to open-source the software part and earn on the robotics part. And they are already ahead of everyone in that game.

These things are pretty interesting as they develop. What will the US do to retain its power?

BTW I am Indian and we are not even in the race as country. :(

nazgulsenpai · a year ago
If I had to guess, more tariffs and sanctions that increase the competing nation's self-reliance and harm domestic consumers. Perhaps my peabrain just can't comprehend the wisdom of policymakers on the sanctions front, but it just seems like all it does is empower the target long-term.
h0l0cube · a year ago
The tariffs are for the US to build its own domestic capabilities, but this will ultimately shift the rest of the world's trade away from the US and toward each other. It's a trade-off – no pun intended – between local jobs/national security and downgrading their own economy/geopolitical standing/currency. Anyone who's been making financial bets on business as usual for globalization is going to see a bit of a speed bump over the next few years, but in the long term it's the US taking an L to undo decades of undermining their own people's prospects by offshoring their entire manufacturing capability. Their trump card – still no pun intended – is their military capability, which the world will have to wean itself off first.
bugglebeetle · a year ago
Unitree just open-sourced their robot designs:

https://sc.mp/sr30f

China’s strategy is to prevent any one bloc from achieving dominance and cutting off the others, while being the sole locus for the killer combination of industrial capacity + advanced research.

aurareturn · a year ago

  China’s strategy is to prevent any one bloc from achieving dominance and cutting off the others, while being the sole locus for the killer combination of industrial capacity + advanced research.
You're acting like these startups are controlled by the Chinese government. In reality, they're just like any other American startup. They make decisions on how to make the most money - not what the Chinese government wants.

asadm · a year ago
Not really. It seems unitree didn't open source anything. Not anything useful.

Dead Comment

dtquad · a year ago
>BTW I am Indian and we are not even in the race as country

Why are you surprised?

India was on a per capita basis poorer than sub-Saharan Africa until 2004.

The only reason India is no longer poorer than Africa is because the West (the IMF and World Bank) forced India to do structural reforms in 1991 that stopped the downward trajectory of the Indian economy since its 1947 independence.

aurareturn · a year ago

  The only reason India is no longer poorer than Africa is because the West (the IMF and World Bank) forced India to do structural reforms in 1991 that stopped the downward trajectory of the Indian economy since its 1947 independence.
India had the world's largest GDP at some point in its history. Why did India lose its status?

Deleted Comment

holoduke · a year ago
Also part of their culture/identity. A good thing i believe.
dcreater · a year ago
India is absolutely embarrassing. Could have been an extremely important 3rd party that obviates the moronic US vs China, us or them, fReEdOm vs communism narrative with all the talent it has.
esalman · a year ago
Turns out conservatism and far right demagoguery is not great for progress.
dr_dshiv · a year ago
I love that emphasizing math learning and coding leads to general reasoning skills. Probably works the same in humans, too.

20x smaller than Deep Seek! How small can these go? What kind of hardware can run this?

daemonologist · a year ago
It needs about 22 GB of memory after 4 bit AWQ quantization. So top end consumer cards like Nvidia's 3090 - 5090 or AMD's 7900 XTX will run it.
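A back-of-envelope way to reach numbers in that ballpark (the 1.1 overhead factor is my assumption for embeddings and buffers; KV cache comes on top of this):

```python
def approx_weight_gb(params_b, bits_per_weight, overhead=1.1):
    """Rough VRAM needed for the weights alone of a params_b-billion
    parameter model at a given quantization width (KV cache excluded).
    The overhead factor is an assumed fudge for higher-precision
    embeddings and runtime buffers."""
    bytes_total = params_b * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 2**30

print(round(approx_weight_gb(32, 4), 1))  # 4-bit: ~16 GB of weights
print(round(approx_weight_gb(32, 8), 1))  # 8-bit: ~33 GB of weights
```

Context (the KV cache) then pushes the 4-bit figure toward the ~22 GB reported above.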
be_erik · a year ago
Just ran this on a 4000RTX with 24 GB of VRAM; it struggles to load, but it's very fast once the model is loaded.
Ey7NFZ3P0nzAe · a year ago
A mathematician once told me that this might be because math teaches you to hold different representations of the same thing; you then have to manipulate those abstractions and wander through their hierarchy until you find an objective answer.
samstave · a year ago
>I love that emphasizing math learning and coding leads to general reasoning skills

It's only logical.

Leary · a year ago
To test: https://chat.qwen.ai/ and select Qwen2.5-plus, then toggle QWQ.
bangaladore · a year ago
They baited me into typing in a query and then asked me to sign up to submit it. They even have a "Stay Logged Out" button that I thought would bypass it, but no.

I get running these models is not cheap, but they just lost a potential customer / user.

zamadatix · a year ago
Running this model is dirt cheap, they're just not chasing that type of customer.
mrshu · a year ago
You can also try the HuggingFace Space at https://huggingface.co/spaces/Qwen/QwQ-32B-Demo (though it seems to be fully utilized at the moment)

Deleted Comment

doublerabbit · a year ago
Check out venice.ai

They're pretty up to date with latest models. $20 a month

Alifatisk · a year ago
They have an option specifically for QwQ-32B now
cubefox · a year ago
How do you know this model is the same as in the blog post?
Leary · a year ago
One of the people on the Qwen team tweeted this instruction.
attentive · a year ago
it's on groq now for super fast inference
fsndz · a year ago
Super impressive. We won't need that many GPUs in the future if we can get the performance of DeepSeek R1 with even fewer parameters. NVIDIA is in trouble. We are moving towards a world of very cheap compute: https://medium.com/thoughts-on-machine-learning/a-future-of-...
holoduke · a year ago
Have you heard of Jevons paradox? It says that whenever new tech makes something more efficient, usage just scales up to push product quality higher. Same here: DeepSeek made algorithmic improvements that reduce the resources needed for the same output quality, but increasing resources (which are available) will increase quality. There will always be a need for more compute. Nvidia is not in trouble; they have a monopoly on high-performing AI chips, for which demand will rise by at least a factor of 1000 in the upcoming years (my personal opinion).
pzo · a year ago
Surprisingly, those open models might be a savior for Apple, and a gift for Qualcomm too. They can fine-tune them to their liking, catch up to the competition, and sell more of their devices in the future. Long term, even better vision models will have trouble competing with the latency of smaller models that are good enough but very fast. This will be important in robotics; it's the reason Figure AI dumped OpenAI and started using its own AI models based on open source (the founder mentioned this recently in an interview).

Dead Comment

daemonologist · a year ago
It says "wait" (as in "wait, no, I should do X") so much while reasoning it's almost comical. I also ran into the "catastrophic forgetting" issue that others have reported - it sometimes loses the plot after producing a lot of reasoning tokens.

Overall though quite impressive if you're not in a hurry.

huseyinkeles · a year ago
I read somewhere (which I can't find now) that for the reasoning models they trained heavily on saying "wait", so the model keeps reasoning and doesn't return early.
rahimnathwani · a year ago
Is the model using budget forcing?
Szpadel · a year ago
I do not understand why you'd force "wait" when the model wants to output </think>.

Why not just decrease the </think> probability? If the model really wants to finish, maybe it could overpower the bias in cases where it's a really simple question, and it would definitely allow the model to express its next thought more freely.
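A toy sketch of that idea (the token names and the -5.0 bias value are made up): add a negative bias to the </think> logit before the softmax, so ending the chain of thought stays possible but becomes less likely.

```python
import math

def biased_softmax(logits, bias=None):
    """Apply a per-token logit bias, then softmax. A negative bias on
    </think> makes ending the chain of thought less likely without
    forbidding it outright. Illustrative sketch only."""
    bias = bias or {"</think>": -5.0}
    adj = {t: v + bias.get(t, 0.0) for t, v in logits.items()}
    m = max(adj.values())
    exps = {t: math.exp(v - m) for t, v in adj.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

# Even though </think> has the highest raw logit here, the bias
# makes "wait" the most likely continuation.
probs = biased_softmax({"</think>": 3.0, "wait": 2.0, "so": 1.0})
```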

rosspackard · a year ago
I have a suspicion it does use budget forcing. The word "alternatively" also frequently shows up, and it happens at points where a </think> tag could logically have been placed.
manmal · a year ago
I guess I won’t be needing that 512GB M3 Ultra after all.
UncleOxidant · a year ago
I think the Framework AI PC will run this quite nicely.
Tepix · a year ago
I think you want a lot of speed to make up for the fact that it's so chatty. Two 24GB GPUs (so you have room for context) will probably be great.
seanmcdirmid · a year ago
A max with 64 GB of ram should be able to run this (I hope). I have to wait until an MLX model is available to really evaluate its speed, though.
pickettd · a year ago
mettamage · a year ago
Yep, it does that. I have 64 GB and was actually running 40 GB of other stuff.
rpastuszak · a year ago
How much vram do you need to run this model? Is 48 gb unified memory enough?
zamalek · a year ago
39 GB if you use an fp8-quantized model.[1] Remember that your OS might be using some of that itself.

As far as I recall, Ollama/llama.cpp recently added a feature to page in parameters, so you'll be able to go arbitrarily large soon enough (at a performance cost). Obviously more in RAM = more speed = more better.

[1]: https://token-calculator.net/llm-memory-calculator

dulakian · a year ago
I am using the Q6_K_L quant and it's running at about 40G of vram with the KV cache.

Device 1 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||20.170Gi/23.988Gi]

Device 2 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||19.945Gi/23.988Gi]

brandall10 · a year ago
It's enough for 6 bit quant with a somewhat restricted context length.

Though based on the responses here, it needs sizable context to work, so we may be limited to 4 bit (I'm on an M3 Max w/ 48gb as well).

daemonologist · a year ago
The quantized model fits in about 20 GB, so 32 would probably be sufficient unless you want to use the full context length (long inputs and/or lots of reasoning). 48 should be plenty.

Dead Comment

iamronaldo · a year ago
This is insane. Matching DeepSeek but 20x smaller?
Imnimo · a year ago
I wonder if having a big mixture of experts isn't all that valuable for the type of tasks in math and coding benchmarks. Like my intuition is that you need all the extra experts because models store fuzzy knowledge in their feed-forward layers, and having a lot of feed-forward weights lets you store a longer tail of knowledge. Math and coding benchmarks do sometimes require highly specialized knowledge, but if we believe the story that the experts specialize to their own domains, it might be that you only really need a few of them if all you're doing is math and coding. So you can get away with a non-mixture model that's basically just your math-and-coding experts glued together (which comes out to about 32B parameters in R1's case).
mirekrusin · a year ago
MoE is likely a temporary local optimum, one that resembles the bitter-lesson path. With time we'll likely distill what's important, shrink it, and keep it always active. There may be some dynamic retrieval of knowledge (but not intelligence) in the future, but it probably won't be anything close to MoE.
littlestymaar · a year ago
> , but if we believe the story that the experts specialize to their own domains

I don't think we should believe anything like that.

7734128 · a year ago
It has roughly the same number of active parameters as R1, which is a mixture-of-experts model. Still extremely impressive, but not unbelievable.
kmacdough · a year ago
I understand the principles of MoE, but clearly not enough to make full sense of this.

Does each expert within R1 have 37B parameters? If so, is QwQ only truly competing against one expert in this particular benchmark?

Generally, I don't think I follow how MoE "selects" an expert during training or usage.
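To sketch the selection mechanism being asked about (illustrative numbers, not R1's actual gate): a small learned gating network scores every expert for each token, and only the top-k experts execute. "Active parameters" counts just those experts plus the shared layers, which is why a huge MoE can be cheap per token.

```python
import math

def route(token_vec, gate_weights, k=2):
    """Minimal MoE routing sketch: score each expert for this token
    via a linear gate, keep the top-k, and softmax their scores into
    mixing weights. Selection happens per token, not per conversation."""
    scores = [sum(w * x for w, x in zip(row, token_vec)) for row in gate_weights]
    # pick the k best-scoring experts for this token
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    # softmax over only the selected scores gives the mixing weights
    m = max(scores[i] for i in top)
    exps = {i: math.exp(scores[i] - m) for i in top}
    z = sum(exps.values())
    return {i: e / z for i, e in exps.items()}

# 4 toy experts, 3-dim token embedding
gates = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.5, 0.5, 0]]
print(route([2.0, 1.0, 0.0], gates, k=2))
```

During training, the gate is learned jointly with the experts (usually with a load-balancing loss so no expert is starved), which is how specialization emerges.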

WiSaGaN · a year ago
I think it will be more akin to o1-mini/o3-mini instead of r1. It is a very focused reasoning model good at math and code, but probably would not be better than r1 at things like general world knowledge or others.
nycdatasci · a year ago
Wasn't this released in Nov 2024 as a "preview" with similarly impressive performance? https://qwenlm.github.io/blog/qwq-32b-preview/
yorwba · a year ago
The benchmark scores in the new announcement are significantly higher than for the preview model.
samus · a year ago
That's good news, I was highly impressed already by what that model could do, even under heavy quantization.