simonw · 17 days ago
My advice for building something like this: don't get hung up on a need for vector databases and embedding.

Full text search or even grep/rg are a lot faster and cheaper to work with - no need to maintain a vector database index - and turn out to work really well if you put them in some kind of agentic tool loop.

The big benefit of semantic search was that it could handle fuzzy searching - returning results that mention dogs if someone searches for canines, for example.

Give a good LLM a search tool and it can come up with searches like "dog OR canine" on its own - and refine those queries over multiple rounds of searches.

Plus it means you don't have to solve the chunking problem!
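
To make that tool loop concrete, here's a rough sketch in Python. Nothing here is "the" way to do it: the `llm_call` argument is a stand-in for whatever chat-completions client you use, and ripgrep does the actual searching.

    import subprocess

    def run_search(query, path="."):
        # Run ripgrep and hand back the matching lines as the tool result.
        proc = subprocess.run(
            ["rg", "--ignore-case", "--max-count", "5", query, path],
            capture_output=True, text=True,
        )
        return proc.stdout[:4000] or "no matches"

    def answer(question, llm_call, max_rounds=5):
        # llm_call(messages) -> dict is a stand-in for your chat client; it is
        # expected to return {"search": "<pattern>"} to search again, or
        # {"answer": "<text>"} once it has enough context.
        messages = [
            {"role": "system", "content": (
                "You have a full-text search tool over local files. Reply with "
                'JSON: {"search": "<pattern>"} to search, or {"answer": "<text>"} '
                "once you can answer the question.")},
            {"role": "user", "content": question},
        ]
        for _ in range(max_rounds):
            reply = llm_call(messages)
            if "answer" in reply:
                return reply["answer"]
            results = run_search(reply["search"])
            messages.append({"role": "user", "content": "Search results:\n" + results})
        return "No answer found within the search budget."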

navar · 16 days ago
I created a small app that shows the difference between embedding-based ("semantic") and bm25 search:

http://search-sensei.s3-website-us-east-1.amazonaws.com/

(warning! It will download ~50MB of data for the model weights and onnx runtime on first load, but should otherwise run smoothly even on a phone)

It runs a small embedding model in the browser and returns search results in "real time".

It has a few illustrative examples where semantic search returns the intended results. For example, bm25 does not understand that "j lo" or "jlo" refers to Jennifer Lopez. Similarly, embedding-based methods can better deal with things like typos.

EDIT: search is performed over 1000 news articles randomly sampled from 2016 to 2024
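
If you want to poke at the same effect outside the browser, here's a rough Python sketch along the same lines (assumes the rank_bm25 and sentence-transformers packages; the toy documents and query are made up, not the ones from the app):

    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer, util

    docs = [
        "Jennifer Lopez announces a new world tour",
        "City council approves funding for a public swimming pool",
        "Stock markets rally after surprise rate cut",
    ]
    query = "jlo concert"

    # Lexical: neither "jlo" nor "concert" appears as a token, so BM25 scores everything 0.
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    print("BM25 scores:", bm25.get_scores(query.split()))

    # Semantic: cosine similarity between embeddings should still surface the Lopez article.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = model.encode(docs, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)
    print("cosine similarities:", util.cos_sim(query_vec, doc_vecs))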

andai · 16 days ago
https://www.anthropic.com/engineering/contextual-retrieval

Anthropic found embeddings + BM25 (keyword search) gave the best results. (Well, after contextual summarization, and fusion, and reranking, and shoving the whole thing into an LLM...)

But sadly they didn't say how BM25 did on its own, which is the really interesting part to me.

In my own (small scale) tests with embeddings, I found that I'd be looking right at the page that contained the literal words in my query and embeddings would fail to find it... Ctrl+F wins again!

bredren · 16 days ago
FWIW, the org decided against vector embeddings for Claude Code due in part to maintenance. See 41:05 here: https://youtu.be/IDSAMqip6ms
noobcoder · 16 days ago
No cross encoders?
whakim · 16 days ago
In my experience the semantic/lexical search problem is better understood as a precision/recall tradeoff. Lexical search (along with boolean operators, exact phrase matching, etc.) has very high precision at the expense of lower recall, whereas semantic search sits at a higher recall/lower precision point on the curve.
simonw · 16 days ago
Yeah, that sounds about right to me. The most effective approach does appear to be a hybrid of embeddings and BM25, which is worth exploring if you have the capacity to do so.

For most cases, though, sticking with BM25 is likely to be "good enough" and a whole lot cheaper to build and run.
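
If you do go hybrid, the usual glue is reciprocal rank fusion over the two result lists. A minimal sketch (plain Python, no particular search library assumed):

    def reciprocal_rank_fusion(ranked_lists, k=60):
        # Combine several ranked lists of doc ids into one.
        # Each doc scores sum(1 / (k + rank)) over the lists it appears in;
        # k=60 is the constant commonly used in the RRF literature.
        scores = {}
        for ranking in ranked_lists:
            for rank, doc_id in enumerate(ranking, start=1):
                scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
        return sorted(scores, key=scores.get, reverse=True)

    # e.g. fuse a BM25 result list with a vector search result list:
    bm25_hits = ["doc3", "doc1", "doc7"]
    vector_hits = ["doc1", "doc9", "doc3"]
    print(reciprocal_rank_fusion([bm25_hits, vector_hits]))  # docs found by both float to the top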

cwmoore · 16 days ago
I recently came across a “prefer the most common synonym” problem in Google Maps while searching for a pool hall: even the literal query ‘billiards’ returned results for swimming pools and chlorine. I wonder if some more NOTs aren’t necessary. I’m interested in learning about RAG, though I’m a little behind the curve.
mips_avatar · 16 days ago
In my app, the best lexical search approaches completely broke my agent. For my RAG system, the LLM would take 2.1 lexical searches on average to get the results it needed. That wasn’t terrible, but it sometimes needed up to 5 searches to find what it wanted, which blew up user latency. Now that I have hybrid semantic + lexical search, it only requires 1.1 searches per result.
nostrebored · 16 days ago
The problem is not using parallel tool calling or not returning an array of searches. We do this across large data sets and don’t see much of a problem. It also means you can swap algorithms on the fly. Building a BM25 index over a few thousand documents is not very expensive locally; rg and grep are free-ish. If you have information on folder contents, you can let your agent decide at execution time based on information need.

Embeddings just aren’t the most interesting thing here if you’re running a frontier fm.

froobius · 17 days ago
Hmm, it can capture more than just single words though, e.g. meaningful phrases or paragraphs that could be written in many ways.
leetrout · 17 days ago
Simon, have you ever given a talk or written about this sort of pragmatism? A spin on how to achieve this with Datasette is an easy thing to imagine, IMO.
simonw · 16 days ago
I did a livestream thing about building RAG against FTS search in Datasette last year: https://simonwillison.net/2024/Jun/21/search-based-rag/
scosman · 16 days ago
Alternative advice: just test and see what works best for your use case. Totally agreed that embeddings are often overkill. However, sometimes they really help. The flow is something like:

- Iterate over your docs to build eval data: hundreds of pairs of [synthetic query, correct answer]. Focus on content from the docs, not general LLM knowledge.

- Kick off a few parallel evaluations of different RAG configurations to see what works best for your use case: BM25, Vector, Hybrid. You can do a second pass to tune parameters: embedding model, top k, re-ranking, etc.

I built a free system that does all this (synthetic data from docs, evals, testing various RAG configs without coding each version): https://docs.kiln.tech/docs/evaluations/evaluate-rag-accurac...
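
If you'd rather hand-roll the comparison, the core of the eval loop is tiny. A sketch with hypothetical eval pairs, assuming each retrieval config is exposed as a `search_fn(query, k)` that returns doc ids:

    def recall_at_k(search_fn, eval_pairs, k=5):
        # search_fn(query, k) -> list of doc ids (hypothetical interface).
        # Returns the fraction of queries whose known-correct doc appears in the top k.
        hits = sum(expected in search_fn(query, k) for query, expected in eval_pairs)
        return hits / len(eval_pairs)

    # Hypothetical eval data: [synthetic query, id of the doc that answers it],
    # generated by iterating over your own docs with an LLM.
    eval_pairs = [
        ("how do I rotate an API key", "docs/security.md"),
        ("refund policy for annual plans", "docs/billing.md"),
        # ...hundreds more
    ]

    # Plug in whichever configs you want to compare, e.g.:
    # for name, fn in {"bm25": bm25_search, "vector": vector_search, "hybrid": hybrid_search}.items():
    #     print(name, recall_at_k(fn, eval_pairs))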

simonw · 16 days ago
That's excellent advice, the only downside being that collecting that eval data remains difficult and time-consuming.

But if you want to build truly great search that's the approach to take.

sbene970 · 16 days ago
At this point you could also optimize your agentic flow directly in DSPy, using a ColBERT model / RAGatouille for retrieval.
victorbuilds · 16 days ago
This matches what I found building an AI app for kids. Started with embeddings because everyone said to, then ripped it out and went with simple keyword matching. The extra complexity wasn't worth it for my use case. Most of the magic comes from the LLM anyway, not the retrieval layer.

dmezzetti · 16 days ago
Are multiple LLM queries faster than vector search? Even with the "dog OR canine" example, that leads to two LLM inference calls vs. one. LLM inference is also more expensive than vector search.

In general, though, RAG != vector search. If a SQL query, grep, full-text search, or something else does the job, then by all means use it. But for relevance-based search, vector search shines.

7734128 · 15 days ago
No reason to try to avoid semantic search. Dead easy to implement, works across languages to some extent, and the fuzziness is worth quite a lot.

You're realistically going to need chunks of some kind anyway to feed the LLM, and once you have those it's just a few lines of code to get a basic persistent ChromaDB going.
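
Something along these lines, as a sketch (assumes the chromadb Python package, which embeds with a small built-in model unless you supply your own vectors; the path and documents are made up):

    import chromadb

    # Local on-disk store; the path is just an example.
    client = chromadb.PersistentClient(path="./chroma_db")
    collection = client.get_or_create_collection("notes")

    # Add your chunks; Chroma embeds them with its default model when you
    # don't pass precomputed embeddings.
    collection.add(
        ids=["chunk-1", "chunk-2"],
        documents=[
            "Our dog-friendly office policy was updated in March.",
            "Quarterly revenue grew 12% year over year.",
        ],
        metadatas=[{"source": "handbook"}, {"source": "finance"}],
    )

    # Fuzzy, embedding-based retrieval: a query about canines should still
    # surface the dog-policy chunk.
    results = collection.query(query_texts=["can I bring my canine to work?"], n_results=2)
    print(results["documents"])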

tra3 · 17 days ago
I built a simple Emacs package based on this idea [0]. It works surprisingly well, but I don't know how far it scales. It's likely not as frugal from a token-usage perspective.

0: https://github.com/dmitrym0/dm-gptel-simple-org-memory

drittich · 16 days ago
Do you have a standard prompt you use for this? I have definitely seen agentic tools doing this for me, e.g., when searching the local file system, but I'm not sure if it's native behaviour for tool-using LLMs or if it is coerced via prompts.
simonw · 16 days ago
No, I've not got a good one for this yet. I've found the modern models (or the Claude Code etc. harness) know how to do this already by default: you can ask them a question and give them a search tool, and they'll start running and iterating on searches by themselves.
enraged_camel · 17 days ago
Yes, exactly. We have our AI feature configured to use our pre-existing TypeSense integration and it's stunningly competent at figuring out exactly what search queries to use across which collections in order to find relevant results.
busssard · 16 days ago
If this is coupled with powerful search engines beyond Elastic, then we are getting somewhere. Other non-monotonic engines that can find structural information are out there.
petercooper · 15 days ago
So kinda GAR - Generation-Augmented Retrieval :-)
pstuart · 16 days ago
Perhaps SQLite with FTS5? Or even better, getting DuckDB into the party, as its ecosystem seems ripe for this type of work.
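
For the SQLite route, a rough sketch using the stdlib sqlite3 module (assumes your SQLite build has FTS5 compiled in, which most do; the sample rows are made up):

    import sqlite3

    conn = sqlite3.connect("notes.db")
    conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(title, body)")
    conn.executemany(
        "INSERT INTO docs (title, body) VALUES (?, ?)",
        [
            ("Pets at work", "Dogs are welcome in the office on Fridays."),
            ("Expense policy", "Submit receipts within 30 days."),
        ],
    )
    conn.commit()

    # FTS5 gives BM25-style ranking via the built-in `rank` column and supports
    # boolean queries like the "dog OR canine" example upthread.
    rows = conn.execute(
        "SELECT title, snippet(docs, 1, '[', ']', '...', 8) "
        "FROM docs WHERE docs MATCH ? ORDER BY rank LIMIT 5",
        ("dog OR canine",),
    ).fetchall()
    print(rows)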
paulyy_y · 16 days ago
Burying the lede here: your solution for avoiding vector search is either 1) offloading to the user, expecting them to remember the right terms, or 2) using an LLM to craft the search query? And having it iterate multiple times? Holy mother of inefficiency; this agentic focus is making us all brain dead.

Vector DBs and embeddings are dead simple to figure out, implement, and maintain, especially for a local RAG, which is the primary context here. If I want to find my latest tabular notes on some obscure game dealing with medical concepts, I should be able to just literally type that. It shouldn't require me to remember the medical terms, or to have some local (or God forbid, remote) LLM iterate through a dozen combos.

FWIW, I also think this is a matter of how well one structures their personal KB. If you follow strict metadata/structure and write coherently and logically, you'll have a better chance of getting results with text matching. For someone optimizing for vector-space search and minimizing the need for upfront logical structuring, text matching will not work out as well.

simonw · 16 days ago
My opinion on this really isn't very extreme.

Claude Code is widely regarded as the best coding agent tool right now, and it uses search, not embeddings.

I use it to answer questions about files on my computer all the time.

mips_avatar · 17 days ago
One thing I didn’t see here that might be hurting your performance is a lack of semantic chunking. It sounds like you’re embedding entire docs, which kind of breaks down if the docs contain multiple concepts. A better approach for recall is using some kind of chunking program to get semantic chunks (I like spaCy, though you have to configure it a bit). Then, once you have your chunks, you need to append context about how each chunk relates to the rest of your doc before you do your embedding. I have found Anthropic’s approach to contextual retrieval to be very performant in my RAG systems (https://www.anthropic.com/engineering/contextual-retrieval); you can just use gpt-oss-20b as the model for generating the context.

Unless I’ve misunderstood your post and you are doing some form of this in your pipeline, you should see a dramatic improvement in performance once you implement this.
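
For what it’s worth, the contextual step itself is small: for each chunk, ask a model to write a sentence or two situating it within the full document, prepend that to the chunk, then embed. A rough sketch (the `generate_context` argument is a stand-in for your LLM call, e.g. gpt-oss-20b behind whatever serving setup you use; the prompt wording is mine, not Anthropic's):

    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    CONTEXT_PROMPT = (
        "Here is a document:\n{document}\n\n"
        "Here is a chunk from that document:\n{chunk}\n\n"
        "Write one or two sentences situating this chunk within the overall "
        "document, to improve retrieval of the chunk. Reply with only that context."
    )

    def contextualize_and_embed(document, chunks, generate_context):
        # generate_context(prompt) -> str is a stand-in for your LLM call
        # (e.g. gpt-oss-20b served locally). Returns (contextualized chunks, embeddings).
        contextualized = []
        for chunk in chunks:
            context = generate_context(CONTEXT_PROMPT.format(document=document, chunk=chunk))
            contextualized.append(context + "\n\n" + chunk)  # prepend the situating context
        return contextualized, embedder.encode(contextualized)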

yakkomajuri · 17 days ago
Hey, author (not OP) here. We do do semantic chunking! I think maybe I gave the impression that we don't because of the mention of aggregating context, but I tested this with questions that would require aggregating context from 15+ documents (meaning 2x that in chunks), hence the comment in the post!
NebulaStorm456 · 16 days ago
Is there a way to convert documents into a hierarchical, connected graph data structure in which they reference each other, similar to how we use personal knowledge tools like Obsidian, with the ability to traverse that graph? Is the GraphRAG technique trying to do exactly this?
mips_avatar · 16 days ago
Ah so you’re generating context from multiple docs for your chunks? How do you decide which docs get aggregated?
abhashanand1501 · 16 days ago
My advice: use the same rigor for a RAG application as for other software development. Have a test suite (of, say, 100 cases) that specifies the correct response for each question. Use an LLM judge to score each output of the RAG system, and iterate until you get a score of 85 or so. Every change of prompts or strategy triggers this check, ensuring that a score of 85 is always maintained.
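
A skeletal version of that check, with a hypothetical judge prompt; `rag_answer` and `judge` are stand-ins for your RAG pipeline and your grading model:

    JUDGE_PROMPT = (
        "Question: {question}\n"
        "Reference answer: {reference}\n"
        "Candidate answer: {candidate}\n"
        "Score the candidate from 0 to 100 for correctness against the reference. "
        "Reply with only the number."
    )

    def score_suite(rag_answer, judge, test_cases, threshold=85):
        # rag_answer(question) -> str and judge(prompt) -> str are stand-ins for
        # your RAG pipeline and your grading model. Run this on every prompt or
        # strategy change; it fails if the average score drops below the threshold.
        scores = []
        for question, reference in test_cases:
            candidate = rag_answer(question)
            raw = judge(JUDGE_PROMPT.format(
                question=question, reference=reference, candidate=candidate))
            scores.append(float(raw.strip()))
        average = sum(scores) / len(scores)
        assert average >= threshold, f"Regression: average score {average:.1f} < {threshold}"
        return average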
nilirl · 17 days ago
Why is it implicit that semantic search will outperform lexical search?

Back in 2023 when I compared semantic search to lexical search (tantivy; BM25), I found the search results to be marginally different.

Even if semantic search has slightly more recall, does the problem of context warrant this multi-component, homebrew search engine approach?

By what important measure does it outperform a lexical search engine? Is the engineering time worth it?

kgeist · 16 days ago
It depends on how you test it. I recently found that the way devs test it differs radically from how users actually use it. When we first built our RAG, it showed promising results (around 90% recall on large knowledge bases). However, when the first actual users tried it, it could barely answer anything (closer to 30%). It turned out we relied on exact keywords too much when testing it: we knew the test knowledge base, so we formulated our questions in a way that helped the RAG find what we expected it to find. Real users don't know the exact terminology used in the articles. We had to rethink the whole thing. Lexical search is certainly not enough. Sure, you can run an agent on top of it, but that blows up latency - users aren't happy when they have to wait more than a couple of seconds.
victorbuilds · 16 days ago
This is the gap that kills most AI features. Devs test with queries they already know the answer to. Users come in with vague questions using completely different words. I learned to test by asking my kids to use my app - they phrase things in ways I would never predict.
babelfish · 16 days ago
How did you end up changing it? Creating new evals to measure the actual user experience seems easy enough, how did that inform your stack?
scosman · 16 days ago
Totally depends on use case.

It solves some types of issues lexical search never will. For example if a user searches "Close account", but the article is named "Deleting Your Profile".

But lexical solves issues semantic never will. Searching an invoice DB for "Initech" with semantic search is near useless.

Pick a system that can do both, including a hybrid mode, then evaluate if the complexity is worth it for you.

mips_avatar · 16 days ago
Depends on how important keyword matching vs. something more ambiguous is to your app. In Wanderfugl there are a bunch of queries where semantic search can find an important chunk that lacks a high bm25 score. The good news is you can get all the benefits of bm25 and semantic with a hybrid ranking. The answer isn’t one or the other.
andoando · 16 days ago
The benefit I see is you can have queries like "conversations between two scientists".

It's very dependent on use case imo

autogn0me · 16 days ago
What we use: https://github.com/ggozad/haiku.rag

Why?

- developer oriented (easy to read Python and uses pydantic-ai)

- benchmarks available

- docling with advanced citations (on branch)

- supports deep research agent

- real open source, by a long-term committed developer, not fly-by-night

mingodad · 15 days ago
I did an experiment while learning about LLMs and llama.cpp: I tried to create a Lua extension for the llama.cpp API to enhance LLMs with agent/RAG functionality written in Lua, with simple code to learn the basics. After more than 5 hours chatting with https://aistudio.google.com/prompts/new_chat?model=gemini-3-... (see the scraped output of the whole session attached), I got a long way in terms of learning how to use an LLM to help develop/debug/learn about a topic (in this case, agent/RAG with the llama.cpp API using Lua).

I'm posting it here just in case it helps others to see it and comment on/improve it (it was using around 100K tokens at the end and started getting noticeably slow, but was still very helpful).

You can see the scraped text for the whole session here: https://github.com/ggml-org/llama.cpp/discussions/17600

nh2 · 16 days ago
I'd like to have a local, fully offline, open-source piece of software into which I can dump all our emails, Slack, GDrive contents, code, and wiki, and then query it with free-form questions such as "with which customers did we discuss feature X?", producing references to the original sources.

What are my options?

I want to avoid building my own or customising a lot. Ideally it would also recommend which models work well and have good defaults for those.

cbcoutinho · 16 days ago
This is why I built the Nextcloud MCP server, so that you can talk with your own data. Obviously this is Nextcloud-specific, but if you're using it already then this is possible now.

https://github.com/cbcoutinho/nextcloud-mcp-server

The default MCP server deployment supports simple CRUD operations on your data, but if you enable vector search the MCP server will begin embedding docs/notes/etc. Currently Ollama and OpenAI are supported as embeddings providers.

The MCP server then exposes tools you can use to search your docs based on semantic search and/or bm25 (via qdrant fusion) as well as generate responses using MCP sampling.

Importantly, rather than generating responses itself, the server relies on MCP sampling so that you can use any LLM/MCP client. This MCP sampling/RAG pattern is extremely powerful and it wouldn't surprise me if there was something open source that generalizes this across other data sources.

russdill · 16 days ago
Would love to see someone build an example using the offline wikipedia text.
fragmede · 16 days ago
Given the full text of Wikipedia is undoubtedly part of the training data, what would having it in a RAG add?
davedx · 16 days ago
> we use Sentence Transformers (all-MiniLM-L6-v2) as our default (solid all-around performer for speed and retrieval, English-only).

Huh, interesting. I might be building a German-language RAG at some point in my future and I never even considered that some models might not support German at all. Does anyone have any experience here? Do many models underperform or not support non-English languages?

navar · 16 days ago
You can refer to https://huggingface.co/spaces/mteb/leaderboard and use that to guide your selection.

Check under the "Retrieval" section, either RTEB Multilingual or RTEB German (under language specific).

You may also want to filter for model sizes (under "Advanced Model Filters"). For instance if you are self-hosting and running on a CPU it may make sense to limit to something like <=100M parameters models.

davedx · 14 days ago
Thanks, that's really useful, I had no idea this table existed.
yakkomajuri · 16 days ago
> Do many models underperform or not support non-English languages?

Yes they do. However:

1. German is one of the more common languages to train on, so more models will support it than, say, Bahasa

2. There should still be a reasonable number of multilingual models available, particularly if you're OK with using proprietary models via API. AFAIK all the frontier embedding and reranking models (non-open-source) are multilingual

architectonic · 16 days ago
Yes, I can confirm that; we resorted to a multilingual embedding model back in the day. https://link.springer.com/chapter/10.1007/978-3-031-77918-3_...