pamelafox · a year ago
If you're looking for an example of RRF + Hybrid Search with PostgreSQL, I've put together a FastAPI app here that uses RAG with those options:

https://github.com/Azure-Samples/rag-postgres-openai-python/

Here's the RRF+Hybrid part: https://github.com/Azure-Samples/rag-postgres-openai-python/...

That's largely based on a sample from the pgvector repo, with a few tweaks.
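The core of it is one SQL query that ranks each retriever's results separately and then fuses them. Roughly like this (table and column names here are illustrative, not the repo's exact schema):

    # Rough sketch of RRF over pgvector + Postgres full-text search,
    # in the spirit of the pgvector hybrid-search sample.
    import psycopg2

    K = 60  # standard RRF smoothing constant

    HYBRID_SQL = """
    WITH semantic AS (
        SELECT id, RANK() OVER (ORDER BY embedding <=> %(emb)s::vector) AS rank
        FROM documents
        ORDER BY embedding <=> %(emb)s::vector
        LIMIT 20
    ),
    keyword AS (
        SELECT id, RANK() OVER (
                   ORDER BY ts_rank_cd(to_tsvector('english', content),
                                       plainto_tsquery('english', %(q)s)) DESC) AS rank
        FROM documents
        WHERE to_tsvector('english', content) @@ plainto_tsquery('english', %(q)s)
        ORDER BY rank
        LIMIT 20
    )
    SELECT COALESCE(semantic.id, keyword.id) AS id,
           COALESCE(1.0 / (%(k)s + semantic.rank), 0.0) +
           COALESCE(1.0 / (%(k)s + keyword.rank), 0.0) AS score
    FROM semantic
    FULL OUTER JOIN keyword ON semantic.id = keyword.id
    ORDER BY score DESC
    LIMIT 10;
    """

    def hybrid_search(conn, query_embedding: list[float], query: str):
        with conn.cursor() as cur:
            cur.execute(HYBRID_SQL, {"emb": str(query_embedding), "q": query, "k": K})
            return cur.fetchall()

The FULL OUTER JOIN is what lets a document score even when only one of the two retrievers found it, while RRF rewards documents both retrievers agree on.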

Agreed that hybrid is the way to go; it's what the Azure AI Search team also recommends, based on their research:

https://techcommunity.microsoft.com/t5/ai-azure-ai-services-...

cpursley · a year ago
This is awesome, thank you.
cpursley · a year ago
First take at hybrid search with Postgres pg_vector based on this: https://gist.github.com/cpursley/dae0a0be442f27e6af79d6bfc2b...
thefourthchime · a year ago
I also found that pure RAG with vector search didn't work. I was creating a bot that could find answers to questions by looking at Slack discussions.

At first, I downloaded entire channels, loaded them into a vector DB, and did RAG. The results sucked. Vector search doesn't understand this material very well, and in this world specific keywords and error messages are what's actually searchable.

Instead, I take the user's query, ask an LLM (Claude via Bedrock) to extract keywords, search Slack through the API, use an LLM to filter the results down to relevant discussions, and then summarize them all into a response.

This is slow, of course, so it's heavily multi-threaded. A typical response comes back within 30 seconds.
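The flow looks roughly like this (llm() is a stand-in for whatever completion call you use; slack_sdk's search_messages is real but requires a user token with search scope):

    from concurrent.futures import ThreadPoolExecutor
    from slack_sdk import WebClient

    slack = WebClient(token="xoxp-...")  # search needs a user token, not a bot token

    def llm(prompt: str) -> str:
        raise NotImplementedError  # swap in your Claude/Bedrock call

    def answer(question: str) -> str:
        # 1. Ask the LLM for searchable keywords (error strings, identifiers).
        keywords = llm("Extract Slack search keywords, one per line: " + question).splitlines()

        # 2. Search Slack for each keyword in parallel -- the slow part.
        with ThreadPoolExecutor() as pool:
            batches = pool.map(
                lambda kw: slack.search_messages(query=kw, count=20)["messages"]["matches"],
                keywords,
            )
        matches = [m for batch in batches for m in batch]

        # 3. LLM filters for relevance, then summarizes the survivors.
        relevant = [m["text"] for m in matches
                    if llm(f"Relevant to {question!r}? yes/no: " + m["text"])
                       .strip().lower().startswith("yes")]
        return llm(f"Answer {question!r} by summarizing these discussions:\n"
                   + "\n".join(relevant))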

s-macke · a year ago
I find these discussions funny.

For decades we had search engines based on query terms (keywords). Then there were lots of discussions, and some implementations, of putting a semantic search on top of the keyword search to improve it: a hybrid search. Google Search was already doing exactly that in 2015 [1].

Now we start from pure semantic search, put keyword search on top to improve it, and call that hybrid search.

In both approaches, the overall search performance ends up exactly identical, down to the last digit.

I am glad that, so far, no one has called this an innovation. But you could certainly write a lot of blog articles about it.

[1] https://searchengineland.com/semantic-search-entity-based-se...

spencerchubb · a year ago
Except now the semantic capabilities are so much stronger. The transformer lets the model draw meaning from words that are far apart from each other.
siquick · a year ago
When you're creating your embeddings, you can use an LLM to store keywords from the content in the metadata of each chunk, which improves the relevance of the results returned by retrieval.

LlamaIndex does this out of the box.
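Something like this in recent versions (treat it as a sketch; these import paths have moved around between llama-index releases):

    from llama_index.core import SimpleDirectoryReader
    from llama_index.core.extractors import KeywordExtractor
    from llama_index.core.ingestion import IngestionPipeline
    from llama_index.core.node_parser import SentenceSplitter

    pipeline = IngestionPipeline(transformations=[
        SentenceSplitter(chunk_size=512),
        # An LLM writes an "excerpt_keywords" entry into each node's metadata.
        KeywordExtractor(keywords=5),
    ])
    nodes = pipeline.run(documents=SimpleDirectoryReader("data").load_data())
    print(nodes[0].metadata["excerpt_keywords"])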

thefourthchime · a year ago
That's interesting! I didn't know that.
qeternity · a year ago
Are you doing this for a product or for internal usage?
benreesman · a year ago
Zero-shot key phrase extraction is a reasonably well-studied field. I don't know what the current SOTA is, but the one that was pretty hot shit last time I needed one was kbir-inspec, which is on HuggingFace, and you can test it right on the page.

Might be worth a shot if performance is a tricky spot in your setup.
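If you want to poke at it locally, something like this should work with transformers (the model id is from memory -- ML6's KBIR checkpoint fine-tuned on Inspec -- so verify it on the Hub):

    from transformers import pipeline

    extractor = pipeline(
        "token-classification",
        model="ml6team/keyphrase-extraction-kbir-inspec",
        aggregation_strategy="simple",  # merge B-/I- tags into whole phrases
    )
    text = "Hybrid search combines BM25 keyword retrieval with dense embeddings."
    print({e["word"].strip() for e in extractor(text)})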

edude03 · a year ago
Thanks for sharing, I like the approach and it makes a lot of sense for the problem space. Especially using existing products vs building/hosting your own.

I was, however, tripped up by this sentence near the beginning:

> we encountered a significant challenge with RAG: relying solely on vector search (even using both dense and sparse vectors) doesn’t always deliver satisfactory results for certain queries.

Not to be overly pedantic, but that's a problem with vector similarity, not RAG as a concept.

Although the author is clearly aware of that, I have had numerous conversations in the past few months alone with people essentially saying "RAG doesn't work because I use pg_vector (or whatever) and it never finds what I'm looking for", not realizing that 1) it's not the only way to do RAG, and 2) there is often a fair difference between the embeddings and the vectorized query, and once you understand why that is, you can figure out how to fix it.

https://medium.com/@cdg2718/why-your-rag-doesnt-work-9755726... basically says everything I often say to people with RAG/vector search problems, but again, it seems like the assembled team has it handled :)

johnjwang · a year ago
Author here: you're for sure right -- it's not a problem with RAG the theoretical concept. In fact, I think RAG implementations should likely be specific to their use cases (e.g. our hybrid search approach works well for customer support, but I'm not sure if it would work as well in other contexts, say for legal bots).

I've seen the whole gamut of RAG implementations as well, and the implementation, specifically the prompting and the document search, has a lot to do with the end quality.

verdverm · a year ago
re: legal, I saw a post on this idea where the RAG system was designed to return the actual text from the document rather than an LLM response or summary. The LLM played a role in turning the query into the search params, but the insight was that for certain kinds of documents you want the actual source, because of the existing human-written summary or the detailed nuances therein.
visarga · a year ago
> Not to be overly pedantic, but that's a problem with vector similarity, not RAG as a concept.

Vector similarity has a surprising failure mode: it only indexes explicit information and misses the implicit. For example, "The second word of this phrase, decremented by one" is "first"; do you think those two strings will embed the same? Computed results don't retrieve well, and neither do deductions in general.

How about "I agree with what John said, but I'd rather apply Victor's solution"? It won't embed anything like the answer you seek. Multi-hop information-seeking questions don't retrieve well.

The obvious fix is to pre-ingest all the RAG text into an LLM and compute these deductions before embedding.
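Sketched with hypothetical llm()/embed() helpers standing in for your completion and embedding calls:

    def llm(prompt: str) -> str: ...          # your completion API
    def embed(text: str) -> list[float]: ...  # your embedding API

    def expand_and_embed(chunks: list[str]) -> list[tuple[str, list[float]]]:
        indexed = []
        for chunk in chunks:
            # Make implicit facts explicit before embedding, so a query that
            # targets a deduction can still match on its surface form.
            implications = llm(
                "List facts implied by this text but not stated verbatim:\n" + chunk
            )
            indexed.append((chunk, embed(chunk + "\n\nImplied: " + implications)))
        return indexed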

eskibars · a year ago
Having spent the past year building out a RAG SaaS platform, and the 10 years before that on the vendor side of several keyword-based search systems, I can say it's absolutely necessary to have some kind of hybrid search for most use cases I've seen.

The problem is that most people don't have experience optimizing even one of the retrieval systems (vector or keyword), so a lot of users who try to DIY it end up having an awful time getting to prod. People talk about things like RRF (which is needed), but then miss other big-picture things, like the mistakes everyone makes when building out keyword search (not getting the right language rules in place) and getting the vector side right (the right embedding models, chunking strategies, etc.).

I recognize I have a bit of a conflict of interest since I'm at a RAG vendor, so I'll abstain from the name/self-promotion and just say: I've seen so many cases where people get this wrong that if you're considering RAG, you really should hire a consultant or look at a complete platform from people who have done it before. Or be prepared to spend a lot of cycles learning and iterating.

softwaredoug · a year ago
People dramatically underestimate the complexity of even reasonably relevant search systems.

One reason is that, unlike other data products, search is an active, conscious action by users. If ads or recommendations are wrong, nobody gets mad. But screw up search and it's like a shop salesperson taking you to the wrong aisle: it's actively frustrating.

So basically every useful search system is disliked to some degree because it will get some things wrong some of the time.

matthew_mg · a year ago
As someone who has spent way too long building a RAG system for internal use, I'd be interested to know what your platform is.

Don't think it's overly self-promotional if first asked :)

If you still don't wanna say, feel free to email, email in profile

charliejuggler · a year ago
We've been building some systems for clients recently, including Moody's, using Lucene-based engines for the R part; the G part tends to be OpenAI or some such service, but there's also appetite for internally hosted LLMs. The trick is good measurement, as I explained in this talk at State of Open Con: https://www.youtube.com/watch?v=Ghbd1RkNgpM
eskibars · a year ago
Vectara
pmc00 · a year ago
For another set of measurements that support RRF + Hybrid > vectors, we (Azure AI Search team) did a bunch of evaluations a few months ago: https://techcommunity.microsoft.com/t5/ai-azure-ai-services-...

We also included supporting data in that write-up showing that you can improve significantly on top of hybrid/RRF using a reranking stage (assuming you have a good reranker model), so we shipped one as an optional step in our search engine.
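The general shape of that stage, shown here with a public cross-encoder rather than our ranker (sentence-transformers' CrossEncoder is the real API; the model id is a common open baseline):

    from sentence_transformers import CrossEncoder

    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

    def rerank(query: str, fused_docs: list[str], top_k: int = 5) -> list[str]:
        # Score each (query, doc) pair jointly, then keep the best few.
        scores = reranker.predict([(query, doc) for doc in fused_docs])
        ranked = sorted(zip(fused_docs, scores), key=lambda p: p[1], reverse=True)
        return [doc for doc, _ in ranked[:top_k]]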

cheesyFish · a year ago
RRF is alright, but I've had better results with relative score fusion or distribution-based scoring.

LlamaIndex has a module for exactly this

https://docs.llamaindex.ai/en/stable/examples/retrievers/rel...
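Roughly like this (mode names and import path are from recent llama-index versions, so check the linked docs for yours):

    from llama_index.core.retrievers import QueryFusionRetriever

    def fuse(retrievers, query: str):
        fusion = QueryFusionRetriever(
            retrievers,              # e.g. [vector_retriever, bm25_retriever]
            mode="relative_score",   # min-max normalize scores, then sum;
                                     # "dist_based_score" uses mean/stddev instead
            similarity_top_k=10,
            num_queries=1,           # skip LLM query rewriting, just fuse
        )
        return fusion.retrieve(query)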

yingfeng · a year ago
RRF is a simple and effective way to fuse rankings from multiple retrieval paths. In our open-source RAG product RAGFlow (https://github.com/infiniflow/ragflow), we currently use Elasticsearch rather than a general-purpose vector database because it already provides hybrid search. In the default configuration an embedding-based reranker is not required; RRF alone is enough. But even when a reranker is used, keyword-based retrieval still must be hybridized with embedding-based retrieval, which is exactly what RAGFlow's latest 0.7 release provides.

On the other hand, let me introduce another database we developed, Infinity (https://github.com/infiniflow/infinity), which also provides hybrid search. You can see its performance here (https://github.com/infiniflow/infinity/blob/main/docs/refere...): both vector search and full-text search perform much faster than in other open-source alternatives.

Starting with the next version (a few weeks out), Infinity will also provide more comprehensive hybrid search capabilities: the three-way recall you mentioned (dense vector, sparse vector, keyword search) within a single request.

testfoo444 · a year ago
Elasticsearch is publishing a lot of interesting posts on this topic, although with a bit of marketing, e.g. https://www.elastic.co/search-labs/blog/semantic-reranking-w...
retakeming · a year ago
pg_search (a full-text search Postgres extension) can be used with pgvector for hybrid search over Postgres tables. It comes with a helpful hybrid search function that uses relative score fusion. Whereas rank fusion considers only the order of the results, relative score fusion uses the actual scores output by the text and vector searches.
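For intuition, generic relative score fusion looks like this in a few lines of Python (a sketch of the idea, not pg_search's SQL interface):

    # Min-max normalize each system's scores so they're comparable, then sum.
    # RRF, by contrast, throws the scores away and keeps only the ranks.
    def relative_score_fusion(result_sets: list[dict[str, float]],
                              weights: list[float] | None = None) -> dict[str, float]:
        weights = weights or [1.0] * len(result_sets)
        fused: dict[str, float] = {}
        for results, weight in zip(result_sets, weights):
            lo, hi = min(results.values()), max(results.values())
            for doc_id, score in results.items():
                norm = (score - lo) / (hi - lo) if hi > lo else 1.0
                fused[doc_id] = fused.get(doc_id, 0.0) + weight * norm
        return fused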