Downloaded the 14B, 32B, and 70B variants to my Ollama instance. All three are very impressive, subjectively much more capable than QwQ, the 70B especially, unsurprisingly. I gave them some coding problems, and even the 14B did a pretty good job. I wish I could collapse the "thinking" section in Open-WebUI, and the chat title is currently generated incorrectly: by default the same model is used for the title as for generation, so the title begins with "<thinking>". Be that as it may, I think these will be the first "locally usable" reasoning models for me. URL for the checkpoints: https://ollama.com/library/deepseek-r1
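For anyone who wants to try the same setup, here is a minimal sketch that prints the `ollama pull` commands for each distilled variant mentioned above (the `deepseek-r1:<size>` tags are assumed from the linked library page; pipe the output to `sh` to actually run the downloads, which are large):

```shell
#!/bin/sh
# Print the pull commands for the three distilled R1 variants tried above.
# Tags assumed from https://ollama.com/library/deepseek-r1
for size in 14b 32b 70b; do
  printf 'ollama pull deepseek-r1:%s\n' "$size"
done
```

After pulling, a model can be run interactively with e.g. `ollama run deepseek-r1:14b`.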
Thanks for sharing your experience with the 14B, 32B, and 70B variants! I'm curious: what hardware setup are you using to run these models on your Ollama instance?
Here is another video, from the Pentagon five years ago, showing a plasma technology they developed that looks very similar to the glowing orb. On another note, I'm wondering why Hacker News seems to ignore this topic. I don't see any drones or orbs on the front page.