If you are interested, you can read about how it's removed[4].
[1] https://huggingface.co/huihui-ai
[2] https://huggingface.co/collections/huihui-ai/gpt-oss-abliter...
[3] https://ollama.com/huihui_ai
[4] https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in...
I'm comfy, but some of the cutting-edge local LLMs have been a little slow to become available recently; maybe this frontend focus is why.
I will now go and look at other options like Ollama that have either been fully UI-integrated since the start, or that are committed to being just a headless backend. If any of them seems better, I'll consider switching; I probably should have done this sooner.
I hope this isn't the first step toward Ollama dropping its local CLI focus, offering a subscription, and becoming a generic LLM interface, as so many of these tools seem to converge on.
You can pull models directly from Hugging Face: ollama pull hf.co/google/gemma-3-27b-it
The initial government response can be read as “lol, no”.
See `man systemd.exec`, `systemd-analyze security`, https://wiki.archlinux.org/title/Systemd/Sandboxing
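As a minimal sketch (the service name and paths are illustrative, the directives themselves are documented in `man systemd.exec`), a sandboxed unit might carry something like:

```
[Service]
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/lib/myservice
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```

Running `systemd-analyze security <unit>` afterwards scores the unit's remaining exposure and suggests further directives to tighten.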
deno can do this via --allow-read/--deny-read and --allow-write/--deny-write flags for the file system. You can do the same for the network via --allow-net/--deny-net, too.
https://docs.deno.com/runtime/fundamentals/security/#permiss...
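As a sketch of how those flags combine (the script name, directory, and host here are hypothetical), an invocation restricting both filesystem and network access might look like:

```shell
# Allow reading only ./data, deny all writes, and
# permit network access only to a single host.
deno run \
  --allow-read=./data \
  --deny-write \
  --allow-net=api.example.com \
  main.ts
```

Any access outside these grants triggers a runtime permission prompt (or a hard failure with --no-prompt), rather than being silently allowed.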
I think it’s something that even Google should consider: publishing open-source models with the possibility of grounding their replies in Google Search.