simonw · a year ago
Since this doesn't have documentation yet I piped the code through Claude 3 Opus and asked it to write some.

https://gist.github.com/simonw/9ff9a0ab8ab64e8aa8d160c4294c0...

You don't have a license on the code yet (weirdly Claude hallucinated MIT).

Notes on how I generated this in the comments on that Gist.

CGamesPlay · a year ago
What was the total token count? The output is impressive (if accurate). I'm also curious whether doing things like omitting function bodies would alter the output (it obviously makes the process cheaper and would enable larger projects, but may lead to worse analyses).
simonw · a year ago
From the Anthropic logs looks like 1827 input tokens and 1038 output tokens.

I'm still on the free trial API plan, but at Opus price of $15 per million input tokens and $75 per million output tokens that comes to about 10.5 cents.
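That estimate can be checked directly from the numbers quoted above (1827 input tokens at $15/M, 1038 output tokens at $75/M):

```rust
fn main() {
    // Opus pricing quoted above: $15 per million input tokens,
    // $75 per million output tokens.
    let input_tokens = 1827.0_f64;
    let output_tokens = 1038.0_f64;
    let cost = input_tokens / 1e6 * 15.0 + output_tokens / 1e6 * 75.0;
    // Comes out to roughly $0.105, i.e. about 10.5 cents.
    println!("${:.4}", cost);
}
```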

kaycey2022 · a year ago
Do you think it's possible to play a game of 20 questions and go from not knowing what HNSW is or in what context it's used to becoming a sophisticated user of any library that implements it?
rkwz · a year ago
Thanks for the notes, I thought it would be a more detailed prompt :) Any reason why you chose Opus over other LLMs? Was it because of the 200k context window?
simonw · a year ago
I'm defaulting to Opus at the moment partly because it's brand new and I need to spend time with it to get a feel for it - but also because so far it seems to be better than GPT-4 for code. I've seen a bunch of cases where it wrote mistake-free code for tasks where GPT-4's output had small bugs.
leod · a year ago
Happy to see people working on vector search in Rust. Keep it up!

As far as HNSW implementations go, this one appears to be almost entirely unfinished. Node insertion logic is missing (https://github.com/swapneel/hnsw-rust/blob/b8ef946bd76112250...) and so is the base layer beam search.

jallmann · a year ago
How does this compare to hsnwlib - is it faster? https://github.com/nmslib/hnswlib
esafak · a year ago
Also compare with qdrant's Rust implementation; they tout their performance. https://github.com/qdrant/qdrant/tree/master/lib/segment/src...
Chio · a year ago
From a quick survey of the implementation, probably not very well: for example, it uses dynamic dispatch for all distance calculations, and there are a lot of allocations in the hot path.
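To illustrate the dispatch point (a generic sketch with illustrative names, not code from this repo): calling the distance function through a `&dyn` trait object forces a vtable lookup per call and blocks inlining, while a generic bound monomorphizes the search loop per metric:

```rust
// Illustrative sketch of the two dispatch styles, not code from the repo.
trait Distance {
    fn dist(&self, a: &[f32], b: &[f32]) -> f32;
}

struct Euclidean;
impl Distance for Euclidean {
    fn dist(&self, a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum::<f32>().sqrt()
    }
}

// Dynamic dispatch: every dist() call goes through a vtable, blocking inlining.
fn nearest_dyn(metric: &dyn Distance, q: &[f32], points: &[Vec<f32>]) -> usize {
    points
        .iter()
        .enumerate()
        .min_by(|(_, a), (_, b)| metric.dist(q, a).total_cmp(&metric.dist(q, b)))
        .map(|(i, _)| i)
        .unwrap()
}

// Static dispatch: monomorphized per metric, so dist() can be inlined.
fn nearest_static<D: Distance>(metric: &D, q: &[f32], points: &[Vec<f32>]) -> usize {
    points
        .iter()
        .enumerate()
        .min_by(|(_, a), (_, b)| metric.dist(q, a).total_cmp(&metric.dist(q, b)))
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    let points = vec![vec![0.0, 0.0], vec![3.0, 4.0]];
    let q = [1.0, 1.0];
    // (0, 0) is closer to (1, 1) than (3, 4) is, under either dispatch style.
    assert_eq!(nearest_dyn(&Euclidean, &q, &points), 0);
    assert_eq!(nearest_static(&Euclidean, &q, &points), 0);
}
```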

Maybe it would be better to post this repository as a reference / teaching implementation of HNSW.

dochtman · a year ago
This is in pretty early stages. You might consider instant-distance, which I wrote a few years ago and which is in production use at instantdomainsearch.com:

https://github.com/instant-labs/instant-distance

There are Python bindings, too:

https://pypi.org/project/instant-distance/

dureuill · a year ago
Hello, I have a few questions:

- how long does it take to insert 15 million vectors of 768 f32 components?

- how much RAM is needed for this operation?

- when inserting one more vector, how incremental is the insertion? Is it faster than reindexing the 15M + 1 vectors from scratch?

- does the structure need to stay in RAM, or can it be efficiently queried from a serialized representation?

- how fast is search over the 15M vectors on average?

teaearlgraycold · a year ago
I can answer #3. HNSW allows for incremental index rebuilding, so each additional insert is a sublinear (but greater than constant-time) operation.
andre-z · a year ago
I can answer how it would work in Qdrant, if you're interested. The index will take around 70GB of RAM. New vectors are first placed in a non-indexed segment and are immediately available for search while the index is being built. The vectors and the index can be offloaded to disk. Search takes a few milliseconds.
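The ballpark is easy to sanity-check with back-of-envelope arithmetic (a sketch; presumably the 70GB figure includes HNSW link lists and other overhead on top of the raw vectors):

```rust
fn main() {
    // Raw storage for 15M vectors of 768 f32 components.
    let n: u64 = 15_000_000;
    let dim: u64 = 768;
    let bytes = n * dim * 4; // 4 bytes per f32
    let gb = bytes as f64 / 1e9;
    // ~46 GB of raw vectors; graph links and per-vector bookkeeping
    // plausibly account for the rest of the ~70 GB estimate above.
    println!("raw vectors: {:.1} GB", gb);
}
```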
random42 · a year ago
Great job shipping. The README / repo could definitely benefit from a benchmarking report, though.
sirfz · a year ago
I recently tried out LanceDB and it looks great. It lets you index vectors directly on disk and include metadata with them as well.
jbarrow · a year ago
I put together a slow (but readable!) HNSW implementation in python to really understand how it works: https://github.com/jbarrow/tinyhnsw

Indexing time isn't great, but query time is surprisingly good given that it's written in unoptimized Python and NumPy.