Posted by u/samlhuillier 2 years ago
Show HN: Reor – An AI note-taking app that runs models locally (github.com/reorproject/re...)
Reor is an open-source AI note-taking app that runs models locally.

The four main things to know are:

1. Notes are connected automatically with vector search: you get semantic search, and related notes are linked for you.

2. You can do RAG Q&A on your notes using the local LLM of your choice.

3. The embedding model, LLM, vector DB and your files all run or are stored locally.

4. Point it to a directory of markdown files (like an Obsidian vault) and it works seamlessly alongside Obsidian.

Under the hood, Reor uses llama.cpp (via a node-llama-cpp integration), Transformers.js and LanceDB to power the local AI features.
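Reor itself is TypeScript/Electron, but the core loop behind features 1 and 2 is language-agnostic: embed each note, then find nearest neighbors by cosine similarity. A minimal sketch, with made-up toy vectors standing in for real embedding-model output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def related_notes(query_vec, index, k=2):
    # Return the k nearest notes as (similarity, name) pairs, best first.
    scored = [(cosine(query_vec, vec), name) for name, vec in index.items()]
    return sorted(scored, reverse=True)[:k]

# Toy "embeddings" for illustration only; a real index would store
# vectors produced by the embedding model (Reor uses Transformers.js).
index = {
    "gardening.md": [0.9, 0.1, 0.0],
    "compost.md":   [0.8, 0.2, 0.1],
    "tax-2023.md":  [0.0, 0.1, 0.9],
}

print(related_notes([0.85, 0.15, 0.05], index))
```

A vector DB like LanceDB does essentially this lookup, just at scale and persisted to disk.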

Reor was built right from the start to support local models. The future of knowledge management involves using lots of AI to organize pieces of knowledge - but crucially, that AI should run as much as possible privately & locally.

It's available for Mac, Windows & Linux on the project's GitHub: https://github.com/reorproject/reor

kepano · 2 years ago
This is a good reminder of why storing Obsidian notes as individual Markdown files is much more useful than stuffing those notes in a database and having Markdown as an export format. The direct manipulation of files allows multiple apps to coexist and do useful things on top of the same files.
Ringz · 2 years ago
That was the reason I gave up on Joplin very quickly. The last Joplin thread here on Hacker News also showed once again that some still don't understand why "But Joplin can export Markdown from the database!" is not the same as simple, flat Markdown files.
lobocinza · 2 years ago
Last time I used Joplin (many years ago) it stored notes as flat Markdown files with YAML headers. I stopped using it because it gave me lots of headaches, and at the end of the day your favorite file browser plus your favorite text editor is a far superior solution to a jack of all trades that excels at none. My notes stack is neovim + fzf + git.
traverseda · 2 years ago
Yeah, that's also why I dropped it. Got too complicated when I wanted to start linking my notes into my work timesheets.
erybodyknows · 2 years ago
May I ask what you switched to? Running into the same issue.
michaelmior · 2 years ago
It's very possible to have multiple apps coexisting using a database. Although I'll certainly concede that it's probably a lot easier with just a bunch of Markdown files.
kepano · 2 years ago
Sure, it's possible, but whichever app owns the database ultimately controls the data, the schema, etc. The file system provides a neutral database that all apps can cooperate within.
toddmorey · 2 years ago
Yes it was one of the best product decisions y'all made. Been so useful to have direct access to the files and options on how my data is processed and backed up.
pipnonsense · 2 years ago
The OP and your comment just made me cancel my Milanote subscription, export all my notes to markdown and start using Obsidian (to later experiment with this Reor).

As a side effect, I just noticed that I prefer a long markdown file with proper headings (and an outline on the side) over Milanote's board view, which initially felt like a freer form better suited to the unorganized thoughts and ideas I had for writing (I use it for my fiction writing).

I can still have documents as a list of loose thoughts, but once I am ready to organize my ideas, I just add well-written, organized headers, edit the content, and end up with a really useful view of my idea.

snthpy · 2 years ago
Is a filesystem not a database with a varchar unique primary key, a blob data attribute and a few more metadata fields?
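The analogy holds up surprisingly literally. A sketch treating a directory as a one-table database, where the path is the primary key, the file bytes are the blob column, and stat() supplies the metadata fields:

```python
import tempfile
from pathlib import Path

def put(root, key, blob):
    # "INSERT": the relative path is the unique primary key.
    (Path(root) / key).write_bytes(blob)

def get(root, key):
    # "SELECT": blob column plus a few metadata fields from stat().
    path = Path(root) / key
    st = path.stat()
    return {
        "key": key,
        "blob": path.read_bytes(),
        "size": st.st_size,
        "mtime": st.st_mtime,
    }

root = tempfile.mkdtemp()
put(root, "note.md", b"# Hello")
row = get(root, "note.md")
print(row["key"], row["size"])
```

What the filesystem lacks is transactions, joins and a schema, which is roughly where the "owning app" argument above comes back in.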
pokstad · 2 years ago
Files seem less useful for small bits of information. I feel the urge to fill a file with a minimum threshold. A database makes more sense for that.
lannisterstark · 2 years ago
>I feel the urge to fill a file with a minimum threshold.

Honestly that's more you subjectively than database v files.

xenodium · 2 years ago
I've got an iOS journaling app in beta. It's offline: no sign-in, no lock-in, nothing social, etc. Saves to plain text. Syncs to your desktop if needed.

https://xenodium.com/an-ios-journaling-app-powered-by-org-pl...

samlhuillier · 2 years ago
Absolutely! Really respect the work you folks are doing.
toddmorey · 2 years ago
"crucially, that AI should run as much as possible privately & locally"

Just wanted to say thank you so much for this perspective and fighting the good fight.

samlhuillier · 2 years ago
Thank you!
humbleferret · 2 years ago
Great job!

I played around with this on a couple of small knowledge bases using an open Hermes model I had downloaded. The “related notes” feature didn't provide much value in my experience, often the link was so weak it was nonsensical. The Q&A mode was surprisingly helpful for querying notes and providing overviews, but asking anything specific typically just resulted in less than helpful or false answers. I'm sure this could be improved with a better model etc.

As a concept, I strongly support the development of private, locally-run knowledge management tools. Ideally, these solutions should prioritise user data privacy and interoperability, allowing users to easily export and migrate their notes if a new service better fits their needs. Or better yet, be completely local, but have functionality for 'plugins' so a user can import their own models or combine plugins. A bit like how Obsidian[1] allows for user created plugins to enable similar functionality to Reor, such as the Obsidian-LLM[2] plugin.

[1] https://obsidian.md/ [2] https://github.com/zatevakhin/obsidian-local-llm

dax_ · 2 years ago
Yeah, this is exciting - I'd much rather have it as a plugin for Obsidian though! I have my workflow with that, all the features I need. Having some separate AI notes app isn't what I would like to use.
samlhuillier · 2 years ago
Thank you for your feedback!

Working hard on improving the chunking to make the related notes section better. RAG is fairly naive right now, with lots of improvements coming in the next few weeks.

monkmartinez · 2 years ago
I left an issue to explain this in more detail, but I don't think the problem is chunking. The issue is the prompt. The local LLM space does itself no favors by treating prompts as an afterthought.

IME, the prompt should be front and center in terms of importance, and it's the key to unlocking the model's potential. It's one of the main reasons why Textgen-Webui is sooooo good. You can really dial in the prompt, from the template itself to working with the system message, then begin futzing with the myriad of other parameters to achieve fantastic results.
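To make the "template itself" point concrete: a sketch of assembling a RAG prompt in the ChatML format used by e.g. OpenHermes models (the system message and instruction wording here are illustrative, not anyone's actual prompt):

```python
def build_prompt(system_msg, context_chunks, question):
    # ChatML template: get the special tokens and system message right,
    # or the model sees a malformed conversation no matter how good
    # the retrieval was.
    context = "\n\n".join(context_chunks)
    user_msg = (
        "Answer using only the notes below.\n\n"
        f"Notes:\n{context}\n\nQuestion: {question}"
    )
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt(
    "You are a careful assistant for a personal note vault.",
    ["Compost needs a carbon/nitrogen balance.", "Turn the pile weekly."],
    "How often should I turn my compost?",
)
print(prompt)
```

Using the wrong template for a model (or burying retrieved context without instructions) is a common source of the vague or false answers described upthread.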

ilaksh · 2 years ago
Which model exactly did you use and how large? I feel like even the best 7b models are just a bit too dumb for most things that I have tried. A 70b model or Mixtral or sometimes 34b seem to be adequate for some things. But those are several times larger and don't run on my oldish hardware.
humbleferret · 2 years ago
Openhermes 2.5 Mistral 7b
mcbetz · 2 years ago
Interesting project, wishing you all the best!

If you are using Obsidian, Smart Connections v2 (1) also supports local embeddings and shows related notes based on semantic similarity.

It's not super great on bi/multi-lingual vaults (DE + EN in my case), but it's improving rapidly and might soon support embedding models that cater for these cases as well.

(1) https://github.com/brianpetro/obsidian-smart-connections

fastball · 2 years ago
Does the future of knowledge management involve using lots of AI to organize pieces of knowledge?

I think "here be dragons", and that over-relying on AI to do all your organization for you will very possibly (probably?) cause you to become worse at thinking.

No data to back this up because it is still early days in the proliferation of such tools, but historically making learning and thinking and "knowledge management" more passive does not improve outcomes.

bhpm · 2 years ago
> I think "here be dragons", and that over-relying on AI to do all your organization for you will very possibly (probably?) cause you to become worse at thinking.

Socrates said exactly this.

But when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”

fastball · 2 years ago
Fair, but the difference is that "remembering from the inside" and "writing stuff down" are both still activities that you are doing. And in spite of this quote, writing does make the process of remembering/synthesizing information more active – you are engaging more parts of the brain in order to think about and write down the material. We have seen this on fMRIs, and there is a decent amount of evidence that handwriting works even better for this than typing, due to the higher level of spatial awareness involved (that's the theory).

An AI doing the work for you is the opposite of that.

OJFord · 2 years ago
> > I think "here be dragons", and that over-relying on AI [...]

> Socrates said exactly this.

I roughly recalled where you were going to go with that afterwards, but I couldn't help but 'spit take' at that given some of the quotes he does get credited with!

davidy123 · 2 years ago
So if you only converse with LLMs (and never write or read anything), is the problem solved?
GTP · 2 years ago
I don't think the problem would be becoming worse at thinking, but I see a possible related problem. Each of us has their own way of organizing things, one that looks logical to us but not necessarily to others: think about how you organize things inside your home versus where other people put their stuff. A similar issue could arise with AI tools, which will classify and organize documents based on their own logic, which doesn't necessarily align with ours.
samlhuillier · 2 years ago
I agree with this.

In some cases, hard thinking and searching for things manually can really enhance understanding and build your knowledge.

In other cases, particularly when ideating for example, you want to be given "inspiration" from other related ideas to build upon other ideas you've had previously.

I think it's a mix of both - reaching for AI as and when you need it - but avoiding it intentionally at times as well.

hgomersall · 2 years ago
Honest discussion point: do you think organisational stuff is important thinking? IME it's precisely this sort of stuff that distracts me from thinking about hard stuff - the urgent displacing the important.
turnsout · 2 years ago
You’ve discovered the dirty secret of PKM… it’s most useful for shuffling stuff around and feeling productive to avoid doing real work
ParetoOptimal · 2 years ago
I think you want to organize your own knowledge graph and then use the LLM to find novel connections or answer questions based upon it.
fastball · 2 years ago
But if you are the one finding connections in your knowledge graph, then the neurons are not only connected on your machine but in your brain as well.

Probably a moot point once we have brain-machine interfaces, but we're not quite there yet.

CrypticShift · 2 years ago
Some suggestions :

- Create multiple independent "vaults" (like obsidian).

- Append links to related notes, so you can use (Obsidian's) graph view to map the AI connections.

- "Minimize" the UI to just the chat window.

- Read other formats (mainly pdfs).

- Integrate with browser history/bookmarks (maybe just a script to manually import them as markdown?)

Thanks for Reor !
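The "append links to related notes" suggestion could be as simple as writing Obsidian-style [[wikilinks]] into each note, which the graph view picks up automatically. A sketch (the "Related" section name and list format are illustrative, not anything Reor actually writes):

```python
import tempfile
from pathlib import Path

def append_related_links(note_path, related_names):
    # Append a "Related" section of [[wikilinks]] so AI-discovered
    # connections become visible edges in Obsidian's graph view.
    note = Path(note_path)
    links = "\n".join(f"- [[{name}]]" for name in related_names)
    note.write_text(note.read_text() + f"\n\n## Related\n{links}\n")

vault = Path(tempfile.mkdtemp())
(vault / "compost.md").write_text("# Compost\nTurn weekly.")
append_related_links(vault / "compost.md", ["Gardening", "Soil health"])
print((vault / "compost.md").read_text())
```

Because the links live in the plain markdown files, any other app pointed at the same vault sees them too, which is the whole point of the flat-file argument at the top of the thread.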

samlhuillier · 2 years ago
Thanks for your feedback!

- Multiple vaults is in fact in a PR right now: https://github.com/reorproject/reor/pull/28

- Manual linking is coming.

- Minimizing the UI to chat is interesting. Right now I guess you can drag chat to cover anything - but yes perhaps a toggle between two modes could be interesting.

- Reading other formats is also in the pipeline. I just need to sort out the editor itself to support something like this. Perhaps PDFs would just be embedded into the vector DB but not accessible to the editor.

- Integrating with browser history and bookmarks is a big feature. Things like web clipping and bringing in context from different places are interesting...

zarathustreal · 2 years ago
The problem with PDFs is that text isn’t necessarily text. Most RAG implementations that support them don’t do any sort of OCR or use local offline OCR implementations that have really low accuracy.
haswell · 2 years ago
Literally yesterday I spun up a project with the intent to build something exactly like this for Obsidian.

Excited to see something already far more realized, and I’m looking forward to trying this out.

I’ve been working on a larger than small writing project using Obsidian, and my ultimate goal is to have conversations with the corpus of what I’ve written, and to use this to hone ideas and experiment with new ways of exploring the content.

Not sure if local LLMs are powerful enough yet to enable meaningful/reliable outcomes, but this is the kind of stuff that really excites me about the future of this tech.

sofixa · 2 years ago
There are these plugins:

https://github.com/zatevakhin/obsidian-local-llm

https://github.com/hinterdupfinger/obsidian-ollama

Which already exist and if nothing else are decent starting points.

> Not sure if local LLMs are powerful enough yet to enable meaningful/reliable outcomes

I've dabbled, briefly, with Ollama running Mistral locally on an M1 MacBook Pro with 32GB of unified memory, and throwing a couple of hundred markdown documents at it via RAG resulted in quite decent output to prompts asking questions about abstract contents/summaries based on those docs.

So I'd say we're already at a point where you can have meaningful outcomes; reliability is a whole other issue though.

haswell · 2 years ago
Thanks for sharing these; I’ll definitely check these out. I somehow missed these during my initial search for similar projects.

I recently got my hands on an RTX 3090 for my Linux workstation and I’m planning to try getting some kind of remote setup going for my MacBook Air.

Great to hear about decent output. Reliability is negotiable as long as there’s some value and hopefully a path to future improvements.

wbogusz · 2 years ago
Great to see something like this actualized. I’m a huge fan of Obsidian and its graph based connections for note taking.

Always see parallels drawn between Obsidian note structures and the whole "2nd brain" idea for personal knowledge management; it had seemed like a natural next step would be to implement note retrieval for intelligent references. Will have to check this out.