kelindar · a year ago
This library was created to provide an easy and efficient solution for embeddings and vector search, making it a good fit for small to medium-scale projects that still need vector search. It's built around a simple idea: if your dataset is small enough, you can get accurate results with brute force, and with optimizations like SIMD you can keep things fast and lean.
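
To make that concrete, here's a rough sketch of the brute-force approach (illustrative names, not the library's actual API; the scalar loop in cosine is the part a SIMD kernel would replace):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine computes cosine similarity with a plain scalar loop; this is
// the hot path that SIMD accelerates in practice.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na)*math.Sqrt(nb) + 1e-12)
}

// search scores every vector against the query and returns the indices
// of the k best matches: pure brute force, no index structure at all.
func search(corpus [][]float32, query []float32, k int) []int {
	scores := make([]float64, len(corpus))
	idx := make([]int, len(corpus))
	for i, v := range corpus {
		scores[i] = cosine(v, query)
		idx[i] = i
	}
	sort.Slice(idx, func(i, j int) bool { return scores[idx[i]] > scores[idx[j]] })
	if k > len(idx) {
		k = len(idx)
	}
	return idx[:k]
}

func main() {
	corpus := [][]float32{{1, 0}, {0.9, 0.1}, {0, 1}}
	fmt.Println(search(corpus, []float32{1, 0}, 2)) // [0 1]
}
```
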
biomcgary · a year ago
I love that you chose to wrap the C++ with purego instead of requiring CGO! I wrapped Microsoft's Lightgbm library and found purego delightful. (To make deployment easier, I embed the compiled library into the Go binary and extract it to a temp directory at runtime. YMMV.)
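
Roughly, that deployment trick looks like this (a sketch assuming Linux; the embedded filename is whatever you ship, and LGBM_GetLastError just stands in for whichever C functions you actually bind):

```go
package lightgbm

import (
	_ "embed"
	"os"
	"path/filepath"

	"github.com/ebitengine/purego"
)

//go:embed liblightgbm.so
var libBytes []byte

// Bound at load time; purego handles converting the C char* return
// value into a Go string.
var lastError func() string

func Load() error {
	// Extract the embedded shared library to a temp location...
	path := filepath.Join(os.TempDir(), "liblightgbm.so")
	if err := os.WriteFile(path, libBytes, 0o700); err != nil {
		return err
	}
	// ...then dlopen it and bind symbols, with no CGO involved.
	lib, err := purego.Dlopen(path, purego.RTLD_NOW|purego.RTLD_GLOBAL)
	if err != nil {
		return err
	}
	purego.RegisterLibFunc(&lastError, lib, "LGBM_GetLastError")
	return nil
}
```
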
cyberax · a year ago
This post led me to purego, and I've just finished moving my toy project that uses PKCS#11 libraries from cgo to it. It's so much better now! No need to jump through hoops for cross-compilation.
jerrygenser · a year ago
Have you considered using HNSW instead of brute force?
huac · a year ago
Nice work! I wrote a similar library (https://github.com/stillmatic/gollum/blob/main/packages/vect...) and similarly found that exact search (with the same simple heap + SIMD optimizations) is quite fast: with 100k objects, retrieval queries complete in <200ms on an M1 Mac. No need for a fancy vector DB :)

That library uses `viterin/vek` for SIMD math: https://github.com/viterin/vek/
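
For the curious, the heap half of that is just stdlib bookkeeping: keep a min-heap of size k so the current worst match can be evicted in O(log k). A sketch (the scores would come from a SIMD dot-product kernel like vek's, not shown here):

```go
package main

import "container/heap"

type result struct {
	id    int
	score float32
}

// minHeap keeps the *worst* of the current top-k at the root, so we
// can cheaply test whether a new candidate belongs in the results.
type minHeap []result

func (h minHeap) Len() int           { return len(h) }
func (h minHeap) Less(i, j int) bool { return h[i].score < h[j].score }
func (h minHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x any)        { *h = append(*h, x.(result)) }
func (h *minHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// topK returns the k highest-scoring ids (unsorted).
func topK(scores []float32, k int) []result {
	h := &minHeap{}
	for id, s := range scores {
		if h.Len() < k {
			heap.Push(h, result{id, s})
		} else if s > (*h)[0].score {
			(*h)[0] = result{id, s}
			heap.Fix(h, 0)
		}
	}
	return *h
}
```
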

neonsunset · a year ago
Look what Go needs to mimic even a fraction of .NET’s SIMD power… ;)
PhilippGille · a year ago
Interesting choice to call llama.cpp directly, instead of relying on a server like Ollama. Nice!

I wrote a similar library which calls Ollama (or OpenAI, Vertex AI, Cohere, ...), with one benefit being zero library dependencies: https://github.com/philippgille/chromem-go

milansuk · a year ago
No need to use Ollama. llama.cpp has its own OpenAI-compatible server[0] and it works great.

[0] https://github.com/ggerganov/llama.cpp#web-server
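
A minimal sketch of calling that server's embeddings endpoint from Go (assumes a llama.cpp server running locally with embeddings enabled; the port and error handling here are illustrative):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The server embeds with whatever model it was started with.
	body, _ := json.Marshal(map[string]any{"input": "hello world"})
	resp, err := http.Post("http://localhost:8080/v1/embeddings",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode just the fields we need from the OpenAI-style response.
	var out struct {
		Data []struct {
			Embedding []float32 `json:"embedding"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(len(out.Data[0].Embedding), "dimensions")
}
```
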

citizenpaul · a year ago
Thanks, I didn't know that.

Do you happen to know the reason to use Ollama rather than the built-in server? How much work is required to get similar functionality? It looks like it's just a matter of downloading the models? I find it odd that Ollama took off so quickly if llama.cpp had the same built-in functionality.

PhilippGille · a year ago
Yes, I'm aware. I was contrasting the general use of an inference server vs. calling llama.cpp directly (not via HTTP request).

And among servers Ollama seems to be more popular, so it's worth mentioning when talking about support for local LLMs.

kohlerm · a year ago
Nice! I would have needed something like this last year.
ashvardanian · a year ago
USearch has had Go bindings for a long time, but it's more low-level and you'd have to use something else for embeddings: https://github.com/unum-cloud/usearch/tree/main/golang
ausbah · a year ago
Could anyone recommend a similar library for Python?
simonw · a year ago
I've used the Sentence Transformers Python library successfully for this: https://www.sbert.net/

My own LLM CLI tool and Python library includes plugin-based support for embeddings (or you can use API-based embeddings like those from Jina or OpenAI) - here's my list of plugins that enable new embeddings models: https://llm.datasette.io/en/stable/plugins/directory.html#em...

More about that in my embeddings talk from last year: https://simonwillison.net/2023/Oct/23/embeddings/

jncraton · a year ago
The languagemodels[1] package that I maintain might meet your needs.

My primary use case is education, as I and others use it for short student projects[2] related to LLMs, but there's nothing preventing this package from being used in other ways. It includes a basic in-process vector store[3].

[1] https://github.com/jncraton/languagemodels

[2] https://www.merlot.org/merlot/viewMaterial.htm?id=773418755

[3] https://github.com/jncraton/languagemodels?tab=readme-ov-fil...

fjuafhwasd · a year ago
Do these queries complete within 10ms?
38 · a year ago
> git submodule update --init --recursive

Nope. This looks cool, but Git submodules are cursed.

IncreasePosts · a year ago
I think you mean recursed
snovv_crash · a year ago
What's a better option for linking 3rd-party code?
38 · a year ago
Is this a joke? Go has built-in support for importing 3rd-party code.
o11c · a year ago
Dip it in a blessed clear potion.
pipe01 · a year ago
Why?
mananaysiempre · a year ago
Poor integration, mostly.

It’s fairly easy to get into an irrecoverably broken state using an intermediate-level Git operation such as an interactive rebase (as of a couple of years ago). (It’s probably recoverable by reaching into the guts of the repo, but given you can’t do the rebase either way I’m still taking off a point.) The distinguished remote URLs thing is pointlessly awkward—I’ve never gotten pushing to places where those remotes are inaccessible to work properly when the pushed commit updates the submodule reference. (I believe it’s possible, but given the amount of effort I’ve put into unsuccessfully figuring that out, I’m comfortable taking off a point here as well.)

I like git submodules, I think they’re fundamentally the right way to do things. But despite their age they aren’t in what I’d call a properly working state, even compared to Git’s usual amount of sharp edges.