I use it on macOS primarily, and have basically stopped using docker mode in favor of the native sandboxing because features like image pasting Just Work™.
> Load it on the fly by pasting this snippet to your Python interpreter
The idea of having a project's readme include a full copy of the project itself as base64'd compressed data is pretty ingenious! It's especially handy for a project like this, where you may not have had the foresight to preload it into the environment where you most need it.
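A minimal sketch of how that trick works, using `bz2` and a hypothetical one-function "project" (the real readme would generate the payload from the actual source tree):

```python
import base64
import bz2
import types

# Hypothetical single-file project standing in for the real source tree.
source = "def greet(name):\n    return f'hello {name}'\n"

# What the readme generator does: compress the source, then base64 it
# so it survives as plain ASCII inside a markdown file.
payload = base64.b64encode(bz2.compress(source.encode())).decode()

# What you paste into the interpreter: decode, decompress, and exec the
# source into a fresh module object -- no pip, no filesystem needed.
mod = types.ModuleType("tinyproject")
exec(bz2.decompress(base64.b64decode(payload)).decode(), mod.__dict__)

print(mod.greet("world"))  # → hello world
```

The round trip is lossless, so the pasted snippet reconstructs the module byte-for-byte; the only real cost is the readme carrying the compressed blob.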
https://huggingface.co/fblgit/una-xaberius-34b-v1beta
https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16
I mention this because it could theoretically be applied to Mistral MoE. If the uplift is the same as for regular Mistral 7B, and Mistral MoE is good, the end result is a scary good model.
This might be an inflection point where desktop-runnable OSS is really breathing down GPT-4's neck.
If you have ollama installed you can try it out with `ollama run nollama/una-cybertron-7b-v2`.
[1]: https://huggingface.co/TheBloke/una-cybertron-7B-v2-GGUF