0101111101 commented on Semantic search engine for ArXiv, biorxiv and medrxiv   arxivxplorer.com/... · Posted by u/0101111101
dhacks · 7 months ago
There is also engrXiv, which has an OAI endpoint. https://engrxiv.org/oai?verb=ListRecords&metadataPrefix=oai_...
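For context, OAI-PMH is a standard XML harvesting protocol, so records from an endpoint like engrXiv's can be parsed with the Python standard library. A minimal sketch, using a hand-made sample response rather than real engrXiv output, and assuming the common `oai_dc` (Dublin Core) metadata prefix:

```python
# Sketch of parsing an OAI-PMH ListRecords response (the protocol the
# engrXiv endpoint speaks). The XML below is a minimal hand-made sample,
# not real engrXiv output; oai_dc is the standard Dublin Core prefix.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"

sample = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Example preprint</dc:title>
          <dc:creator>A. Author</dc:creator>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def titles(xml_text):
    """Collect all dc:title texts from a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.iter(DC + "title")]

print(titles(sample))  # ['Example preprint']
```

In a real harvester you would fetch the XML over HTTP and follow `resumptionToken`s to page through the full record set.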
0101111101 · 7 months ago
Amazing!
cluckindan · 7 months ago
That method can break when author names and subject matter collide.
0101111101 · 7 months ago
True, but if your embeddings are any good they'll also capture interesting associations between authors, topics, and your search query. If you find any interesting author-overlap results I'd be very interested!
heisenburgzero · 7 months ago
So did you just combine title + abstract + authors into a single chunk and embed that, or embed them individually?
0101111101 · 7 months ago
One chunk, embedded together.
forrestp · 7 months ago
My understanding is that your levers are roughly better/more diverse embeddings, or computing more embeddings (embedding chunks, groups, etc.) and aggregating more cosine similarities/scores. More FLOPs = better search, with steep diminishing returns.

ColBERT is a good googleable application of utilizing more embeddings.

Search often ends up being a funnel of techniques: cheap and high-recall for phase 1, then ratchet up the FLOPs and precision in subsequent passes over the previous result set.
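ColBERT's "late interaction" scoring, mentioned above, can be sketched in a few lines: one embedding per token, and a document's score is the sum, over query tokens, of the best cosine match against any document token (MaxSim). The vectors here are tiny made-up examples, not real model output:

```python
# Toy sketch of ColBERT-style MaxSim scoring over per-token embeddings.
# Vectors are hand-made 2-D examples purely for illustration.
import numpy as np

def normalize(m):
    return m / np.linalg.norm(m, axis=-1, keepdims=True)

def maxsim(query_tokens, doc_tokens):
    q = normalize(query_tokens)        # (n_query_tokens, dim)
    d = normalize(doc_tokens)          # (n_doc_tokens, dim)
    sims = q @ d.T                     # all pairwise cosine similarities
    return sims.max(axis=1).sum()      # best doc token per query token

q = np.array([[1.0, 0.0], [0.0, 1.0]])
doc_a = np.array([[1.0, 0.0], [0.7, 0.7]])  # covers both query tokens
doc_b = np.array([[1.0, 0.0], [1.0, 0.1]])  # redundant: one direction only
print(maxsim(q, doc_a) > maxsim(q, doc_b))  # True
```

Note how the document whose tokens cover more of the query's directions wins, which is the point of spending FLOPs on many embeddings instead of one.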

0101111101 · 7 months ago
Exactly! A neat property of matryoshka embeddings is that you can compute a low-dimensional embedding similarity really fast and then refine afterwards.
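That coarse-to-fine trick can be sketched as: rank with only the first k dimensions (cheap), keep a shortlist, then re-rank the shortlist with the full vectors. The data here is random for illustration; real matryoshka models are trained so that vector prefixes are themselves good embeddings, which random vectors are not:

```python
# Coarse-to-fine search sketch: phase 1 uses truncated vectors, phase 2
# re-ranks the shortlist at full dimension. Random data, planted match.
import numpy as np

rng = np.random.default_rng(0)
D, K, SHORTLIST = 256, 32, 10

db = rng.normal(size=(1000, D))
db /= np.linalg.norm(db, axis=1, keepdims=True)
query = db[42].copy()                  # plant an exact match at index 42

# Phase 1: cosine similarity on the first K dims only.
coarse = (db[:, :K] @ query[:K]) / (
    np.linalg.norm(db[:, :K], axis=1) * np.linalg.norm(query[:K]))
shortlist = np.argsort(coarse)[-SHORTLIST:]

# Phase 2: full-dimension similarity, but only over the shortlist.
fine = db[shortlist] @ query
best = shortlist[np.argmax(fine)]
print(best)  # 42
```

Phase 1 touches K/D = 1/8 of the data here; phase 2 only touches 10 of 1000 vectors, which is where the speedup comes from.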
sitkack · 7 months ago
That is neat, I like that.

It would be cool if the "More Like This" had a + button that would append the arxiv id to the search query.

0101111101 · 7 months ago
That's a nice idea! Might take a look this weekend!
sitkack · 7 months ago
embedding search via https://searchthearxiv.com/ takes either a word vector, or an abs or pdf link to an arxiv paper.

https://news.ycombinator.com/item?id=42519487

I just did a spot check; I think searchthearxiv's results are superior.

0101111101 · 7 months ago
Looks cool! You can input either a search query or a paper URL on arxiv xplorer. You can even combine paper URLs to search for combinations of ideas by putting + or - before the URL, like `+ 2501.12948 + 1712.01815`
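One plausible reading of the `+`/`-` paper syntax is ordinary vector arithmetic over the paper embeddings: add or subtract the unit-normalized vectors, renormalize, and run a normal nearest-neighbor search on the result. The site's actual combination rule isn't documented here, so this is a hedged sketch with toy 2-D vectors (the arXiv IDs in the comments are just labels):

```python
# Sketch of "+ paper - paper" query building as signed vector arithmetic.
# Hypothetical reading of the feature; toy 2-D embeddings for illustration.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def combine(signed_vectors):
    """signed_vectors: list of (+1 or -1, embedding) pairs."""
    total = sum(sign * unit(v) for sign, v in signed_vectors)
    return unit(total)

paper_a = np.array([1.0, 0.2])   # stand-in for one paper's embedding
paper_b = np.array([0.2, 1.0])   # stand-in for another paper's embedding
q = combine([(+1, paper_a), (+1, paper_b)])
# q points between the two topics, so papers close to both rank highest.
```

A `-` term would steer the query away from that paper's region of the embedding space instead.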
bbor · 7 months ago
Oh god, there's a medrxiv?? TIL...

Don't forget chemrXiv!

0101111101 · 7 months ago
Sadly I couldn't find a public API for chemrxiv, but would be happy to be proven wrong!
madars · 7 months ago
Looks great! Could you add eprint.iacr.org (Cryptology ePrint Archive)?
0101111101 · 7 months ago
Do they have a public API/dataset?
elliotec · 7 months ago
This is really cool, and very relevant to something I'm working on. Would you be willing to do a quick explanation of the build?
0101111101 · 7 months ago
Sure! I first used OpenAI embeddings on all the paper titles, abstracts, and authors. When a user submits a search query, I embed the query, find the closest-matching papers, and return those results. Nothing too fancy involved!

I'm also maintaining a dataset of all the embeddings on kaggle if you want to use them yourself: https://www.kaggle.com/datasets/tomtum/openai-arxiv-embeddin...
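The pipeline described above can be sketched end to end: embed "title. abstract. authors" as one chunk, embed the query the same way, and rank by cosine similarity. Here `embed()` is a stand-in character-count featurizer so the sketch runs offline, NOT the real OpenAI embedding call:

```python
# End-to-end sketch of the described pipeline with a toy embedder.
import numpy as np

def embed(text, dim=64):
    """Stand-in for a real embedding model: deterministic character bag."""
    v = np.zeros(dim)
    for i, ch in enumerate(text.lower()):
        v[(ord(ch) + 7 * i) % dim] += 1.0   # toy featurizer, NOT a model
    return v / (np.linalg.norm(v) or 1.0)

papers = [
    {"title": "Attention Is All You Need",
     "abstract": "transformers...", "authors": "Vaswani et al."},
    {"title": "Mastering Chess and Shogi",
     "abstract": "self-play RL...", "authors": "Silver et al."},
]

# Index: one embedding per paper over the concatenated chunk.
index = np.stack([
    embed(f"{p['title']}. {p['abstract']}. {p['authors']}") for p in papers
])

def search(query, k=1):
    scores = index @ embed(query)            # cosine (vectors are unit norm)
    return [papers[i]["title"] for i in np.argsort(scores)[::-1][:k]]
```

The real system swaps `embed()` for an OpenAI embedding call and a proper nearest-neighbor index; the control flow is otherwise the same.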
