The KV cache won't soften the blow the first time they paste a code sample into a chat and end up waiting 10 minutes, with absolutely no interactivity, before they even get the first token.
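To put rough numbers on that (a back-of-envelope sketch; every constant below is an assumption, not a benchmark): prefill is compute-bound at roughly 2 FLOPs per active parameter per token, so a long pasted prompt has to grind through the whole model before anything streams back, and caching earlier turns doesn't help with the newly pasted tokens.

```python
# Back-of-envelope time-to-first-token for prefill.
# Every number is an illustrative assumption, not a measurement.

active_params = 37e9    # DeepSeek-V3-class MoE: ~37B active params per token
prompt_tokens = 8_000   # a pasted code file plus chat history (assumed)
flops_per_token = 2 * active_params  # ~2 FLOPs per active param per token

cpu_tflops = 1.0        # optimistic sustained CPU throughput (assumed)
gpu_tflops = 70.0       # rough dense FP16 throughput of a 3090-class card (assumed)

def prefill_seconds(tflops: float) -> float:
    """Seconds to process the whole prompt before the first output token."""
    return prompt_tokens * flops_per_token / (tflops * 1e12)

print(f"CPU prefill: ~{prefill_seconds(cpu_tflops) / 60:.0f} min")  # ~10 min
print(f"GPU prefill: ~{prefill_seconds(gpu_tflops):.0f} s")         # ~8 s
```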
You'll get an infinitely more useful build out of a single 3090 running models like Gemma 27B than out of trying to run DeepSeek on a CPU-only build. Even a GH200 struggles to run DeepSeek at realistic speeds at bs=1, and that's with an entire H100 attached to the CPU: there just isn't a magic way to get affordable, fast, effective AI out of a CPU-offloaded model right now.
How close are we talking?
I’m not calling you a liar, OP, but in general I wish people perpetuating such broad claims would be more rigorous.
Unsloth does amazing work; however, as far as I’m aware, even they themselves don’t publish head-to-head evals against the original unquantized models.
I have sympathy here because very few people and companies can afford to run the original models, let alone engineer rigorous evals.
However, I felt compelled to comment because my experience does not match. For relatively simple usage the differences are hard to notice, but they become much more apparent in high-complexity, long-context tasks.
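For what it's worth, a head-to-head doesn't have to be elaborate. Here's a minimal sketch of the kind of A/B harness I mean, assuming the quant and the original are each served behind an OpenAI-compatible endpoint; the URLs, model name, and tasks.jsonl file are placeholders, not anything Unsloth ships:

```python
# Minimal A/B harness: send identical prompts to two OpenAI-compatible
# endpoints (original vs quant) and compare exact-match accuracy.
# Endpoints, model name, and the tasks file are placeholders.
import json
import requests

ENDPOINTS = {
    "original": "http://host-a:8000/v1/chat/completions",
    "quant":    "http://host-b:8000/v1/chat/completions",
}

def ask(url: str, prompt: str) -> str:
    resp = requests.post(url, json={
        "model": "whatever-is-served",  # placeholder
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,               # greedy decoding so runs are comparable
        "max_tokens": 64,
    }, timeout=600)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# tasks.jsonl: one {"prompt": ..., "answer": ...} per line, ideally
# long-context items, since that's where the quant damage shows up.
with open("tasks.jsonl") as f:
    tasks = [json.loads(line) for line in f]

scores = dict.fromkeys(ENDPOINTS, 0)
for task in tasks:
    for name, url in ENDPOINTS.items():
        # Crude substring match; swap in whatever grading the task needs.
        if task["answer"] in ask(url, task["prompt"]):
            scores[name] += 1

for name, score in scores.items():
    print(f"{name}: {score}/{len(tasks)}")
```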
Thank you.