This library was created to provide an easy and efficient solution for embeddings and vector search, making it a good fit for small to medium-scale projects. It's built around a simple idea: if your dataset is small enough, brute-force search gives you exact results, and with optimizations like SIMD you can keep things fast and lean.
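To make the idea concrete, here's a minimal sketch of brute-force vector search in Go. The types and the `cosine`/`search` names are illustrative, not this library's actual API:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

type doc struct {
	ID    string
	Vec   []float32
	Score float32
}

// cosine returns the cosine similarity of two equal-length vectors.
// With unit-normalized vectors this reduces to a plain dot product,
// which is the hot loop that SIMD implementations accelerate.
func cosine(a, b []float32) float32 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return float32(dot / (math.Sqrt(na) * math.Sqrt(nb)))
}

// search scans every document (brute force) and returns the k best matches.
func search(docs []doc, query []float32, k int) []doc {
	for i := range docs {
		docs[i].Score = cosine(docs[i].Vec, query)
	}
	sort.Slice(docs, func(i, j int) bool { return docs[i].Score > docs[j].Score })
	if k > len(docs) {
		k = len(docs)
	}
	return docs[:k]
}

func main() {
	docs := []doc{
		{ID: "a", Vec: []float32{1, 0}},
		{ID: "b", Vec: []float32{0.7, 0.7}},
		{ID: "c", Vec: []float32{0, 1}},
	}
	for _, d := range search(docs, []float32{1, 0.1}, 2) {
		fmt.Printf("%s: %.3f\n", d.ID, d.Score)
	}
}
```

The whole "index" is just a slice, so inserts and deletes are trivial; the trade-off is that every query is O(n·d) over n vectors of dimension d, which is why this approach only makes sense at small to medium scale.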
I love that you chose to wrap the C++ with purego instead of requiring CGO! I wrapped Microsoft's Lightgbm library and found purego delightful. (To make deployment easier, I embed the compiled library into the Go binary and extract it to a temp directory at runtime. YMMV.)
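For anyone curious, the embed-and-extract trick looks roughly like this. It's a sketch assuming the `github.com/ebitengine/purego` API (`Dlopen`/`RegisterLibFunc` are available on Linux and macOS), with a hypothetical library filename and a hypothetical `add_floats` symbol:

```go
package mylib

import (
	_ "embed"
	"os"
	"path/filepath"

	"github.com/ebitengine/purego"
)

// The compiled shared library, baked into the Go binary at build time.
//go:embed liblightgbm.so
var libBytes []byte

// addFloats is a stand-in for whatever C function the library exports;
// RegisterLibFunc binds it to the symbol by name at runtime.
var addFloats func(a, b float32) float32

// Load extracts the embedded library to a temp directory and opens it
// with purego (no CGO involved).
func Load() error {
	dir, err := os.MkdirTemp("", "embedded-lib-")
	if err != nil {
		return err
	}
	path := filepath.Join(dir, "liblightgbm.so")
	if err := os.WriteFile(path, libBytes, 0o700); err != nil {
		return err
	}
	lib, err := purego.Dlopen(path, purego.RTLD_NOW|purego.RTLD_GLOBAL)
	if err != nil {
		return err
	}
	purego.RegisterLibFunc(&addFloats, lib, "add_floats")
	return nil
}
```

In a real build you'd embed one library per platform (e.g. behind build tags), since the `.so`/`.dylib`/`.dll` differs per target.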
This post led me to purego, and I've just finished moving my toy project that uses PKCS#11 libraries from cgo to it. It's so much better now! No need to jump through hoops for cross-compilation.
Nice work! I wrote a similar library (https://github.com/stillmatic/gollum/blob/main/packages/vect...) and similarly found that exact search (with the same simple heap + SIMD optimizations) is quite fast: with 100k objects, retrieval queries complete in <200ms on an M1 Mac. No need for a fancy vector DB :)

That library used `viterin/vek` for SIMD math: https://github.com/viterin/vek/
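The heap + exact-scan pattern mentioned here can be sketched as follows. This assumes unit-normalized vectors (so a dot product stands in for cosine similarity); in practice a SIMD routine such as vek's would replace the scalar `dot`, and the package name is arbitrary:

```go
package vect

import "container/heap"

// hit pairs a vector index with its similarity score.
type hit struct {
	idx   int
	score float32
}

// minHeap holds the k best hits seen so far; the worst of them sits at
// the root so it can be evicted cheaply.
type minHeap []hit

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].score < h[j].score }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(hit)) }
func (h *minHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// TopK scans all vectors exactly once and keeps the k highest-scoring ones.
func TopK(vectors [][]float32, query []float32, k int) []hit {
	h := &minHeap{}
	heap.Init(h)
	for i, v := range vectors {
		s := dot(v, query)
		if h.Len() < k {
			heap.Push(h, hit{i, s})
		} else if s > (*h)[0].score {
			// Replace the current worst and restore the heap property.
			(*h)[0] = hit{i, s}
			heap.Fix(h, 0)
		}
	}
	// Pop yields ascending scores; fill the result back-to-front
	// so the best match comes first.
	out := make([]hit, h.Len())
	for i := len(out) - 1; i >= 0; i-- {
		out[i] = heap.Pop(h).(hit)
	}
	return out
}

func dot(a, b []float32) float32 {
	var s float32
	for i := range a {
		s += a[i] * b[i]
	}
	return s
}
```

Replacing the heap root and calling heap.Fix keeps the scan at O(n log k) rather than sorting all n scores, which matters once n is in the hundreds of thousands.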
Interesting choice to call llama.cpp directly, instead of relying on a server like Ollama. Nice!
I wrote a similar library which calls Ollama (or OpenAI, Vertex AI, Cohere, ...), with one benefit being zero library dependencies: https://github.com/philippgille/chromem-go
Do you happen to know the reason to use Ollama rather than the built-in server? How much work is required to get similar functionality? It looks like it's just a matter of downloading the models. I find it odd that Ollama took off so quickly if llama.cpp had the same functionality built in.

Llama.cpp does have a built-in server [0]. But among servers, Ollama seems to be more popular, so it's worth mentioning when talking about support for local LLMs.

[0] https://github.com/ggerganov/llama.cpp#web-server
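To give a sense of how little work the built-in server route is, here is a sketch of a Go client for llama.cpp's /embedding endpoint, assuming a server started locally with the --embedding flag on port 8080. The request/response field names follow the server README and may differ across versions:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Shapes for llama.cpp's /embedding endpoint (server started with
// --embedding); field names assumed from the server docs.
type embedRequest struct {
	Content string `json:"content"`
}

type embedResponse struct {
	Embedding []float32 `json:"embedding"`
}

// embed sends one text to the local llama.cpp server and returns its vector.
func embed(text string) ([]float32, error) {
	body, err := json.Marshal(embedRequest{Content: text})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post("http://localhost:8080/embedding", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out embedResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Embedding, nil
}

func main() {
	vec, err := embed("hello world")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(vec), "dimensions")
}
```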
I've used the Sentence Transformers Python library successfully for this: https://www.sbert.net/
My own LLM CLI tool and Python library include plugin-based support for embeddings (or you can use API-based embeddings like those from Jina or OpenAI) - here's my list of plugins that enable new embeddings models: https://llm.datasette.io/en/stable/plugins/directory.html#em...

More about that in my embeddings talk from last year: https://simonwillison.net/2023/Oct/23/embeddings/
The languagemodels[1] package that I maintain might meet your needs.
My primary use case is education, as I and others use this for short student projects[2] related to LLMs, but there's nothing preventing this package from being used in other ways. It includes a basic in-process vector store[3].

[1] https://github.com/jncraton/languagemodels

[2] https://www.merlot.org/merlot/viewMaterial.htm?id=773418755

[3] https://github.com/jncraton/languagemodels?tab=readme-ov-fil...
Nope. This looks cool, but Git submodules are cursed.
It's fairly easy to get into an irrecoverably broken state using an intermediate-level Git operation such as an interactive rebase (as of a couple of years ago, at least). It's probably recoverable by reaching into the guts of the repo, but given that you can't complete the rebase either way, I'm still taking off a point. The distinguished remote URLs thing is pointlessly awkward: I've never gotten a push that updates a submodule reference to work properly when pushing to places where those remotes are inaccessible. I believe it's possible, but given the amount of effort I've put into unsuccessfully figuring it out, I'm comfortable taking off a point here as well.
I like Git submodules; I think they're fundamentally the right way to do things. But despite their age, they aren't in what I'd call a properly working state, even compared to Git's usual number of sharp edges.