Readit News
MarkMarine · 3 months ago
I saw this comment a little bit back and I don’t think the OP expanded on it, but this looks like a fantastic idea to me:

sam0x17 20 days ago:

Didn't want to bury the lede, but I've done a bunch of work with this myself. It goes fine as long as you give it both the textual representation and the ability to walk along the AST. You give it the raw source code, and then also give it the ability to ask a language server to move a cursor that walks along the AST, and then every time it makes a change you update the cursor location accordingly. You basically have a cursor in the text and a cursor in the AST, and you keep them in sync so the LLM can't mess it up. If I ever have time I'll release something, but right now I'm just experimenting locally with it for my Rust stuff.

On the topic of LLMs understanding ASTs, they are also quite good at this. I've done a bunch of applications where you tell an LLM a novel grammar it's never seen before _in the system prompt_, and that plus a few translation examples is usually all it takes for it to learn fairly complex grammars. Combine that with a feedback loop between the LLM and a compiler for the grammar, where you don't let it produce invalid sentences and when it does you just feed it back the compiler error, and you get a pretty robust system that can translate user input into valid sentences in an arbitrary grammar.
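The dual-cursor bookkeeping described above can be sketched in a few lines. This is a toy illustration, not the commenter's actual tooling: `Node`, `shift`, and `node_at` are hypothetical names, and a real implementation would get its spans from a language server or parser rather than building them by hand.

```python
from dataclasses import dataclass, field
from typing import List

# Toy AST node holding byte spans into the source text.
@dataclass
class Node:
    kind: str
    start: int
    end: int
    children: List["Node"] = field(default_factory=list)

def shift(node: Node, at: int, delta: int) -> None:
    # After an edit of `delta` bytes at offset `at`, shift every span at or
    # after the edit point so the AST cursor stays in sync with the text.
    if node.start >= at:
        node.start += delta
    if node.end > at:
        node.end += delta
    for child in node.children:
        shift(child, at, delta)

def node_at(node: Node, offset: int) -> Node:
    # Walk down to the innermost node covering the text cursor.
    for child in node.children:
        if child.start <= offset < child.end:
            return node_at(child, offset)
    return node

# e.g. for the source "let x = 1;": a statement with an identifier and a literal
root = Node("stmt", 0, 10, [Node("ident", 4, 5), Node("lit", 8, 9)])
shift(root, 8, 2)  # inserting two bytes at offset 8 pushes the literal right
```

After the edit, `node_at` still resolves a text offset to the right AST node, which is the invariant that keeps the LLM from desynchronizing the two cursors.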

https://news.ycombinator.com/item?id=44941999
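The compiler-feedback loop from the quoted comment can be sketched with a stub: a tiny regex "grammar", a `validate` function standing in for the compiler, and any callable standing in for the LLM. The grammar, function names, and retry logic are all illustrative assumptions.

```python
import re

# Toy grammar: a valid sentence is "move N" or "turn N".
# validate() plays the compiler: None on success, an error message otherwise.
def validate(sentence: str):
    if re.fullmatch(r"(move|turn) \d+", sentence):
        return None
    return f"parse error: {sentence!r} does not match (move|turn) NUMBER"

def translate(user_input: str, llm, max_attempts: int = 3) -> str:
    # llm is any callable mapping a prompt to a candidate sentence.
    prompt = f"Translate into the grammar: {user_input}"
    for _ in range(max_attempts):
        candidate = llm(prompt)
        error = validate(candidate)
        if error is None:
            return candidate
        # Feed the compiler error straight back, as the comment describes.
        prompt = f"{error}\nTranslate again: {user_input}"
    raise ValueError("no valid sentence produced")
```

The point is the shape of the loop: the model never gets to emit an invalid sentence to the caller, because every failure is routed back in as context.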

rictic · 3 months ago
One thing to take care with in cases like this: it probably needs to handle code with syntax errors. It's not uncommon for developers to work with code that doesn't parse (e.g. while typing, or while resolving merge conflicts).

In general, a drum I beat regularly is that during development the code spends most of its time incorrect in one way or another. Syntax errors, doesn't type check, missing function implementations, still working out the types and their relationships, etc. Any developer tooling that only works on valid code immediately loses a lot of its value.

digdugdirk · 3 months ago
Isn't that the benefit of tree-sitter? I was under the impression that it's more tolerant of these kinds of errors, at least to the degree that you can still get enough info to fix them.
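Tree-sitter's error recovery keeps parsing past broken input by wrapping the unparseable span in an ERROR node. A toy parser in that spirit (the line-based grammar and function name here are invented for illustration, not tree-sitter's API):

```python
# Error-tolerant toy parser: statements are `name = number`, one per line.
# Anything that doesn't parse becomes an ("ERROR", ...) node instead of
# aborting the whole parse, so later statements still produce real nodes.
def parse(src: str):
    nodes = []
    for line in src.splitlines():
        lhs, sep, rhs = line.partition("=")
        if sep and lhs.strip().isidentifier() and rhs.strip().isdigit():
            nodes.append(("assign", lhs.strip(), int(rhs.strip())))
        elif line.strip():
            nodes.append(("ERROR", line.strip()))
    return nodes
```

With input like `"x = 1\noops!!\ny = 2"`, the bad middle line becomes an ERROR node while both assignments still parse, which is the property that makes this style of parser usable on in-progress code.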
dorian-graph · 3 months ago
There's also https://github.com/bartolli/codanna, that's similarly new. I'll have to try that again, and this one.
CuriouslyC · 3 months ago
I've benchmarked the code search MCPs extensively, and agents with LSP-aware MCPs outperform agents using raw indexed stores quite handily. Serena, as janky as it is, is a better enabler than Codanna.
athrowaway3z · 3 months ago
> thread 'main' (17953) panicked at ck-cli/src/main.rs:305:41: byte index 100 is not a char boundary

I seem to have gotten 'lucky' and it split an emoji just right.
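The same failure mode can be reproduced outside Rust: slicing the UTF-8 bytes of a string at an arbitrary index can land inside a multi-byte character such as an emoji. A Python sketch of the bug and one common fix (the helper name is borrowed from Rust's `floor_char_boundary`; this is an illustration, not ck's actual patch):

```python
def floor_char_boundary(raw: bytes, i: int) -> int:
    # Back up while the byte at i is a UTF-8 continuation byte (0b10xxxxxx),
    # so the returned index always sits on a character boundary.
    if i >= len(raw):
        return len(raw)
    while i > 0 and (raw[i] & 0xC0) == 0x80:
        i -= 1
    return i

text = "search 🔍 results"
raw = text.encode("utf-8")
# Byte 9 falls inside the 4-byte emoji, so the naive slice raw[:9] cannot
# be decoded; backing up to the boundary at byte 7 makes truncation safe.
safe = floor_char_boundary(raw, 9)
```

Rust makes the same mistake a hard panic (`byte index ... is not a char boundary`) rather than a decode error, which is exactly the crash quoted above.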

---

For anyone curious: this is great for large, disjointed, and/or poorly documented code bases. If you keep yours tight, with files smaller than ~600 lines, it's almost always better to nudge LLMs into reading whole files.

Runonthespot · 3 months ago
Nice catch; should be fixed in the latest.
ozten · 3 months ago
This generalizes to a whole new category of tools: UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use them, but LLMs will put in the work.
abeyer · 3 months ago
> UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use

Really? My thinking is more that human devs are way too likely to sink time into powerful but complex tools that may end up being a yak shave with minimal/no benefit in the end. "too lazy to use" doesn't seem like a common problem from what I've seen.

Not that the speed of an agent being able to experiment with this kind of thing isn't a benefit... but not how I would have thought to pose it.

rane · 3 months ago
Cool. Some AI fluff can be detected in the README.

For example, under the "Why CK?" section, "For teams" has no substance compared to "For developers".

CuriouslyC · 3 months ago
I actually have a WIP library for this. The indexing server isn't where I want it just yet, but I have an entire agent toolkit that does this stuff, and the indexing server is quite advanced, with self-tuning, RAPTOR/LSP integration, solving for the optimal result set using knapsack, etc.

https://github.com/sibyllinesoft/grimoire

threecheese · 3 months ago
I have to know, what is the Lens SPI? The link in your readme is broken, and Kagi results for this cannot possibly be right.
CuriouslyC · 3 months ago
Lens is basically a Rust, local-first, mmapped file-based search store. It combines RAPTOR with LSP, semantic vectors, and a dual dense/sparse encoding, and can learn a function over those to tune the weights of the relevance sources adaptively per query using your data. It also uses linear programming to select an "efficient" set of results that minimizes mutual information between result atoms; regular RAG/rerank pipelines just dump the top K, but those often have a significant amount of overlap, so you bloat context for no benefit.
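The redundancy-aware selection idea can be sketched with a greedy loop: score each remaining result, penalize overlap with what's already chosen, and stop when the token budget runs out. This is a cheap stand-in for a real knapsack/LP solve; the scores, token costs, Jaccard overlap proxy, and function names are illustrative assumptions, not Lens internals.

```python
# results are (relevance_score, token_cost, token_set) tuples.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def select(results, budget: int, redundancy_weight: float = 0.5):
    chosen, spent = [], 0
    remaining = list(results)
    while remaining:
        def gain(r):
            score, _cost, toks = r
            # Penalize overlap with anything already selected.
            overlap = max((jaccard(toks, c[2]) for c in chosen), default=0.0)
            return score - redundancy_weight * overlap
        affordable = [r for r in remaining if spent + r[1] <= budget]
        if not affordable:
            break
        best = max(affordable, key=gain)
        if gain(best) <= 0:
            break  # everything left is redundant or worthless
        remaining.remove(best)
        chosen.append(best)
        spent += best[1]
    return chosen
```

Given two near-duplicate high-scoring results and one distinct lower-scoring one, this picks the duplicate once and then prefers the distinct result, which is the behavior the top-K-dump pipelines miss.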

rane · 3 months ago
I tried in my relatively small project.

    ~/c/l/web % ck --sem 'error handling'
    ℹ Semantic search: top 10 results, threshold ≥0.6
    ⠹ Searching with semantic mode...
All I got was a spinning M2 Mac fan after a minute, so I gave up.

Runonthespot · 3 months ago
Interesting. Can I ask you to try a ck --index . ?
postalcoder · 3 months ago
It'd be nice if it respected gitignore. It's turning my M4 MBP into a space heater too.