Why not fork it like Microsoft did with Edge?
For example, I'd like to know which sites are being visited on different devices. Is anything like that possible?
https://github.com/ggerganov/llama.cpp/pull/11016#issuecomme...
So Ollama are basically forking a little bit of everything to try to achieve vendor lock-in. Some examples:
The Ollama transport protocol is just a slightly forked version of the OCI protocol (they are ex-Docker guys), forked just enough that one can't use Docker Hub, quay.io, Helm, etc. (so people will have to buy Ollama Enterprise servers or whatever).
They have forked llama.cpp (I would much rather we upstreamed to llama.cpp than forked, like upstreaming to Linus's kernel tree).
They don't use Jinja templates like everyone else; instead they use this:
https://ollama.com/library/granite-code/blobs/977871d28ce4etc.
So we started a project called RamaLama to unfork all these bits.
"Humans experiencing long-COVID with cognitive symptoms (48 subjects) similarly demonstrate elevated CCL11 levels compared to those with long-COVID who lack cognitive symptoms (15 subjects)."