The four main things to know are:
1. Notes are connected automatically via vector search: you get semantic search, and related notes are surfaced automatically.
2. You can do RAG Q&A on your notes using the local LLM of your choice.
3. The embedding model, LLM, vector DB, and files all run or are stored locally.
4. Point it to a directory of markdown files (like an Obsidian vault) and it works seamlessly alongside Obsidian.
Under the hood, Reor uses Llama.cpp (via node-llama-cpp), Transformers.js, and LanceDB to power the local AI features.
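The "related notes" idea above boils down to nearest-neighbor search over note embeddings. As a minimal sketch (not Reor's actual code, and with toy hand-written vectors standing in for real embedding-model output), cosine similarity over stored vectors is enough to rank related notes:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def related_notes(query_vec, notes, top_k=2):
    """notes: list of (title, embedding) pairs; returns top_k most similar titles."""
    scored = sorted(notes, key=lambda n: cosine(query_vec, n[1]), reverse=True)
    return [title for title, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" for illustration only
notes = [
    ("gardening.md", [0.9, 0.1, 0.0]),
    ("compost.md",   [0.8, 0.2, 0.1]),
    ("taxes.md",     [0.0, 0.1, 0.9]),
]
print(related_notes([0.85, 0.15, 0.05], notes))
```

In a real setup the vectors would come from the embedding model and live in the vector DB, which also handles the nearest-neighbor search at scale instead of a full sort.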
Reor was built right from the start to support local models. The future of knowledge management involves using lots of AI to organize pieces of knowledge - but crucially, that AI should run as much as possible privately & locally.
It's available for Mac, Windows & Linux on the project Github: https://github.com/reorproject/reor
As a side effect, I just noticed that I prefer a long markdown file with proper headings (and an outline on the side) to Milanote's board view, which initially felt more free-form and better suited to the unorganized thoughts and ideas for writing that I had (I use it for my fiction writing).
I can still have documents as a list of loose thoughts, but once I'm ready to organize my ideas, I just add well-written, organized headers, edit the content, and now I have a really useful view of my idea.
Honestly, that's more your subjective preference than database vs. files.
https://xenodium.com/an-ios-journaling-app-powered-by-org-pl...
Just wanted to say thank you so much for this perspective and fighting the good fight.
I played around with this on a couple of small knowledge bases using an open Hermes model I had downloaded. The "related notes" feature didn't provide much value in my experience; often the link was so weak it was nonsensical. The Q&A mode was surprisingly helpful for querying notes and providing overviews, but asking anything specific typically just resulted in less-than-helpful or false answers. I'm sure this could be improved with a better model etc.
As a concept, I strongly support the development of private, locally-run knowledge management tools. Ideally, these solutions should prioritise user data privacy and interoperability, allowing users to easily export and migrate their notes if a new service better fits their needs. Or better yet, be completely local, but have functionality for 'plugins' so a user can import their own models or combine plugins. A bit like how Obsidian[1] allows for user-created plugins to enable similar functionality to Reor, such as the Obsidian-LLM[2] plugin.
[1] https://obsidian.md/ [2] https://github.com/zatevakhin/obsidian-local-llm
Working hard on improving the chunking to improve the related-notes section. RAG is fairly naive right now, with lots of improvements coming in the next few weeks.
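For readers unfamiliar with what "chunking" means here: before embedding, documents get split into smaller pieces so retrieval can match a question against a focused passage rather than a whole file. A hypothetical markdown-aware chunker (not Reor's actual implementation) might start a new chunk at each heading and whenever a size budget is exceeded:

```python
def chunk_markdown(text, max_chars=200):
    """Split markdown into chunks, starting a new chunk at each heading
    and whenever the running chunk would exceed max_chars."""
    chunks, current, size = [], [], 0
    for line in text.splitlines():
        if (line.startswith("#") or size + len(line) > max_chars) and current:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1
    if current:
        chunks.append("\n".join(current))
    return chunks

doc = "# Ideas\nFirst thought.\n# Plans\nSecond thought."
for c in chunk_markdown(doc):
    print("---\n" + c)
```

Naive fixed-size splitting cuts sentences and separates headings from their bodies, which is one reason retrieved "related notes" can look nonsensical; respecting document structure is a common first improvement.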
IME, the prompt should be front and center in terms of importance, and the key to unlocking the model's potential. It's one of the main reasons why Textgen-WebUI is sooooo good. You can really dial in the prompt, from the template itself to working with the system message, then begin futzing with the myriad other parameters to achieve fantastic results.
If you are using Obsidian, Smart Connections v2 (1) also supports local embeddings and shows related notes based on semantic similarity.
It's not super great on bi/multi-lingual vaults (DE + EN in my case), but it's improving rapidly and might soon support embedding models that cater for these cases as well.
(1) https://github.com/brianpetro/obsidian-smart-connections
I think "here be dragons", and that over-relying on AI to do all your organization for you will very possibly (probably?) cause you to become worse at thinking.
No data to back this up because it is still early days in the proliferation of such tools, but historically making learning and thinking and "knowledge management" more passive does not improve outcomes.
Socrates said exactly this.
But when they came to writing, Theuth said: “O King, here is something that, once learned, will make the Egyptians wiser and will improve their memory; I have discovered a potion for memory and for wisdom.” Thamus, however, replied: “O most expert Theuth, one man can give birth to the elements of an art, but only another can judge how they can benefit or harm those who will use them. And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”
An AI doing the work for you is the opposite of that.
> Socrates said exactly this.
I roughly guessed where you were going with that, but I couldn't help but do a spit take at it, given some of the quotes he does get credited with!
In some cases, hard thinking and searching for things manually can really enhance understanding and build your knowledge.
In other cases, particularly when ideating for example, you want to be given "inspiration" from other related ideas to build upon other ideas you've had previously.
I think it's a mix of both - reaching for AI as and when you need it - but also intentionally avoiding it at times.
Probably a moot point once we have brain-machine interfaces, but we're not quite there yet.
- Create multiple independent "vaults" (like obsidian).
- Append links to related notes, so you can use (Obsidian's) graph view to map the AI connections.
- "Minimize" the UI to just the chat window.
- Read other formats (mainly pdfs).
- Integrate with browser history/bookmarks (maybe just a script to manually import them as markdown?)
Thanks for Reor !
- Multiple vaults is in fact in a PR right now: https://github.com/reorproject/reor/pull/28
- Manual linking is coming.
- Minimizing the UI to chat is interesting. Right now I guess you can drag chat to cover anything - but yes perhaps a toggle between two modes could be interesting.
- Reading other formats is also in the pipeline. Just need to sort out the editor itself to support something like this. Perhaps PDFs would just be embedded into the vector DB but not accessible from the editor.
- Integrating with browser history and bookmarks is a big feature. Things like web clipping and bringing in context from different places are interesting...
Excited to see something already far more realized, and I’m looking forward to trying this out.
I’ve been working on a larger-than-small writing project using Obsidian, and my ultimate goal is to have conversations with the corpus of what I’ve written, and to use this to hone ideas and experiment with new ways of exploring the content.
Not sure if local LLMs are powerful enough yet to enable meaningful/reliable outcomes, but this is the kind of stuff that really excites me about the future of this tech.
https://github.com/zatevakhin/obsidian-local-llm
https://github.com/hinterdupfinger/obsidian-ollama
These already exist and, if nothing else, are decent starting points.
> Not sure if local LLMs are powerful enough yet to enable meaningful/reliable outcomes
I've dabbled, briefly, with Ollama running Mistral locally on an M1 MacBook Pro with 32GB of unified memory, and throwing a couple of hundred markdown documents at it via RAG resulted in quite decent output to prompts asking questions about abstract contents/summaries based on those docs.
So I'd say we're already at a point where you can have meaningful outcomes; reliability is a whole other issue though.
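The retrieve-then-ask workflow described above has a simple shape regardless of which local model you run. A minimal sketch (the ranking here is naive keyword overlap purely so the example is self-contained; a real setup would rank by embedding similarity, and the assembled prompt would be sent to a local model such as Mistral via Ollama):

```python
def retrieve(question, docs, top_k=2):
    """Rank docs by naive keyword overlap; a real setup would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question, docs):
    """Assemble retrieved context plus the question into one prompt."""
    context = "\n\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Mistral is a 7B open-weight model.",
    "The vault holds markdown notes.",
]
prompt = build_prompt("What kind of model is Mistral?", docs)
# `prompt` would then be sent to the local LLM for generation
```

Grounding the answer in retrieved notes is what makes a couple hundred markdown files queryable even by a modest local model; reliability then hinges mostly on retrieval quality and the model's willingness to stick to the context.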
I recently got my hands on an RTX 3090 for my Linux workstation and I’m planning to try getting some kind of remote setup going for my MacBook Air.
Great to hear about decent output. Reliability is negotiable as long as there’s some value and hopefully a path to future improvements.
I always see parallels drawn between Obsidian note structures and the whole "second brain" idea for personal knowledge management; it had seemed like a natural next step would be to implement note retrieval for intelligent references. Will have to check this out.