yichuan commented on First lightweight local semantic search MCP for Claude Code   github.com/yichuan-w/LEAN... · Posted by u/yichuan
yichuan · 7 months ago
@Berkeley SkyLab, we’re the first to bring semantic search to Claude Code with a fully local index in a novel, lightweight structure. Check it out at LEANN (https://github.com/yichuan-w/LEANN). Unlike Claude-context, which uploads all data to the cloud, or Serena, which is heavy and limited to keyword search, our solution installs in about a minute and immediately enhances Claude Code’s capabilities.
yichuan commented on How I code with AI on a budget/free   wuu73.org/blog/aiguide1.h... · Posted by u/indigodaddy
yichuan · 7 months ago
I think there’s huge potential for a fully local “Cursor-like” stack — no cloud, no API keys, just everything running on your machine.

The setup could be:
• Cursor CLI for agentic/dev work (example: https://x.com/cursor_ai/status/1953559384531050724)
• A local memory layer compatible with the CLI, something like LEANN (97% smaller index, zero cloud cost, full privacy, https://github.com/yichuan-w/LEANN) or Milvus (though Milvus often ends up cloud/token-based)
• Your inference engine, e.g. Ollama, which is great for running open-source GPT-style models locally

With this, you’d have an offline, private, and blazing-fast personal dev+AI environment. LEANN in particular is built exactly for this kind of setup: tiny footprint, semantic search over your entire local world, and Claude Code/Cursor compatibility out of the box, with Ollama handling generation. This solution is not only free but also needs no API keys.
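To illustrate the retrieval layer in such a stack, here is a toy, network-free sketch: it uses term-frequency vectors as a stand-in for a real local embedding model (in practice you would call a local embedder such as one served by Ollama), and brute-force cosine search in place of LEANN's actual index structure.

```python
# Toy local semantic search: embed chunks, rank by cosine similarity.
# The embed() function is a stand-in for a real local embedding model.
from collections import Counter
import math

def embed(text):
    # Term-frequency "embedding": a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "ollama runs open source models locally",
    "cursor cli drives agentic coding sessions",
    "leann keeps a compact semantic index on disk",
]
index = [(d, embed(d)) for d in docs]

def search(query, k=1):
    # Rank all indexed chunks against the query vector.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

print(search("which tool runs models locally"))
# → ['ollama runs open source models locally']
```

The retrieved chunks would then be pasted into the local model's prompt for generation, which is the basic shape of the whole offline pipeline.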

I do agree this takes some effort to set up, but maybe someone can make it easy and fully open-source.

yichuan commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
yichuan · 7 months ago
That's my vision; hope it can help. I think that if we combine all our personal data and organize it effectively, we can be 10 times more efficient. With long-term AI memory, everything you say and see would be quietly loaded into your own personal AI, and that could solve many difficulties, I think. https://x.com/YichuanM/status/1953886817906045211
yichuan commented on I want everything local – Building my offline AI workspace   instavm.io/blog/building-... · Posted by u/mkagenius
oblio · 7 months ago
It feels weird that the search index is bigger than the underlying data, weren't search indexes supposed to be efficient formats giving fast access to the underlying data?
yichuan · 7 months ago
I guess for semantic search (rather than keyword search), the index is larger than the text because we need to embed chunks into a high-dimensional semantic space, which makes sense to me.
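To make the size argument concrete, here is a minimal sketch; the 768 dimensions and float32 storage are typical sentence-embedding assumptions, not LEANN's specific parameters. Each chunk maps to a fixed-size dense vector regardless of how short the chunk is, so the vectors alone can dwarf the corpus.

```python
# A semantic index stores one dense vector per chunk; the vector's
# byte size is set by the embedding dimension, not the chunk length.

text_chunk = "def add(a, b): return a + b"  # a short source-code chunk
embedding_dim = 768                          # typical embedding width (assumed)
bytes_per_float = 4                          # float32 storage

text_bytes = len(text_chunk.encode("utf-8"))     # size of the raw text
vector_bytes = embedding_dim * bytes_per_float   # size of its embedding

print(f"text: {text_bytes} B, vector: {vector_bytes} B")
# The 3072-byte vector is over 100x the chunk it represents, which is
# why a naive semantic index easily outgrows the underlying data.
```

Shrinking that overhead (e.g. by recomputing embeddings on the fly instead of storing them all) is exactly the kind of trade-off a lightweight index has to make.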
