Readit News
kevincox · 6 years ago
I think more natural language and context is the next huge step in search. The README has a good example where you find a section comparing two things. An example I run into often is trying to find emails or texts about an event. I know the date the event occurred, but I might have said "tomorrow", "tuesday", "the 25th", "2020-08-25", or "yesterday". These all refer to the same date and could be indexed as such. As it stands, I need to search for all of these phrasings with different date restrictions to find the hits and not show the misses.
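The indexing idea above can be sketched as a small normalizer. This is a hypothetical illustration (not part of any project mentioned here) that resolves relative date phrases against the date a message was sent, so "tomorrow", "tuesday", and "2020-08-25" all map to the same index key:

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical sketch: resolve relative date phrases at indexing time so a
# literal date search also matches messages that said "tomorrow" etc.
RELATIVE_OFFSETS = {"today": 0, "tomorrow": 1, "yesterday": -1}
WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def normalize_phrase(phrase: str, sent_on: date) -> Optional[date]:
    """Resolve a date phrase against the message's send date."""
    phrase = phrase.lower().strip()
    if phrase in RELATIVE_OFFSETS:
        return sent_on + timedelta(days=RELATIVE_OFFSETS[phrase])
    if phrase in WEEKDAYS:
        # A bare weekday name is taken to mean its next occurrence.
        delta = (WEEKDAYS.index(phrase) - sent_on.weekday()) % 7 or 7
        return sent_on + timedelta(days=delta)
    try:
        return date.fromisoformat(phrase)  # literal dates like "2020-08-25"
    except ValueError:
        return None  # phrase is not a recognizable date
```

For a message sent on Monday 2020-08-24, "tomorrow", "tuesday", and "2020-08-25" all normalize to the same date, so one date-restricted query finds every phrasing.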
yoavz · 6 years ago
I love that fuzzy date search use-case, will try testing that out.
Der_Einzige · 6 years ago
I've been waiting for someone to do a proper semantic search plugin in a browser for a while. There was one a while back called... Fuzbal ... which used word2vec and was good, but it has not been updated. You've implemented a more question-answer based approach. This is awesome!

I think that the real innovation will be when users are given exposure to lots of different models, and have the pros and cons of these models properly explained to them. Maybe I want to use this on specialized bio-medical literature and would be better off with a model fine-tuned in that domain instead of on SQuAD.

Also, shameless self-plug: I wrote a system that does extractive summarization/highlighting of documents which is in principle very similar to what is going on here (https://github.com/Hellisotherpeople/CX_DB8). For a while, I had a hosted, web-accessible version of this system available to make it easy to show off to interviewers. It could highlight the important parts of a web page based on a user query at either the word, sentence, n-gram, or paragraph level. I figured that the next step was to make it a browser extension, but I simply wasn't proficient enough in JS, and at the time I was working on this, quantized/pruned models were slightly less good. I firmly believe that making high-quality semantic search work everywhere will be an extreme (and obvious) step forward for most people's daily tasks. What a brave new world we are entering!
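The query-based highlighting described above can be sketched in miniature. This is not CX_DB8's actual implementation: as a stand-in for real sentence embeddings, it scores sentences against the query with cosine similarity over toy bag-of-words vectors, then returns the top matches in document order:

```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Toy bag-of-words vector; a real system would use sentence embeddings."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def highlight(document: str, query: str, top_k: int = 2) -> list:
    """Return the top_k sentences most similar to the query, in document order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    q = bow(query)
    best = sorted(sentences, key=lambda s: cosine(bow(s), q), reverse=True)[:top_k]
    return [s for s in sentences if s in best]
```

Swapping `bow` for a neural sentence encoder gives the sentence-level variant of the approach; the word, n-gram, and paragraph levels just change the units being scored.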

yoavz · 6 years ago
Pretty cool. The embedding similarity approach makes a lot of sense. I actually started this project by experimenting with computing cosine similarities of sentence embeddings [1]. But I wasn't very impressed with the out-of-the-box results, and I found it difficult to set a similarity threshold for a match. QA was the second try, and the pretrained models worked better out of the box. I'm wondering if I should revisit the embedding approach now...

[1] https://github.com/UKPLab/sentence-transformers

lbj · 6 years ago
Wow. Now that’s an innovative and brilliant way to improve one of our oldest tools. It could certainly be relevant in a general sense for much more than browsing.
yoavz · 6 years ago
Thanks, really nice to hear :)
adrianmonk · 6 years ago
Interesting idea for sure. I wasn't able to understand much from the demo image, though. The animation is fast, and all I can see about the result is that the word "lower" is highlighted/matched. I was hoping to get an idea of what results it finds and how relevant they are to the search.
throwaway744678 · 6 years ago
They describe the sample search two paragraphs below.
ReD_CoDE · 6 years ago
Wow, great!

I'm looking for an open source solution to find algorithm names inside the academic articles (normally PDF), and perhaps on the web too

Is there any suggestion?

codemonkey-zeta · 6 years ago
I've thought about doing this before as well. One challenge you might face occurs when one algorithm goes by different names in different circles. I can't think of good examples off-hand, but some statistical methods have one name among physicists, another among biologists, another among statisticians, etc.

Could be interesting to compare the similarities of the semantics of the algorithms as understood by an NLP model, e.g. depth-first search vs. Monte Carlo, or Dijkstra's vs. Kruskal's. Both pairs are used in similar contexts, so you could group algorithms into families. I'd love to see more NLP-driven meta-analysis of scientific literature.

ReD_CoDE · 6 years ago
I couldn't agree more, and I think we can make a graph that shows parents and children: the main algorithm and its descendants.
paraschopra · 6 years ago
Isn’t this exactly like what Google released as open source a couple of months ago? https://github.com/tensorflow/tfjs-models/tree/master/qna
compressedgas · 6 years ago
This is glue around that model that makes it a browser extension.
roland-s · 6 years ago
OpenAI API has a similar demo, the Wikipedia one at https://openai.com/blog/openai-api/
de6u99er · 6 years ago
Does the use of TensorFlow.js mean that search is being performed locally?
yoavz · 6 years ago
Yep -- no calls to any API backend.