Readit News
nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
csomar · a month ago
I'm working on a similar tool (https://codeinput.com/products/merge-conflicts/online-diff), specifically focusing on how to use the diff results. For semantic parsing, I think the best option available right now is Tree-sitter (https://tree-sitter.github.io/tree-sitter), which has decent WASM support. If this interests you, feel free to shoot me an email. I'm always looking to connect with other devs who want to discuss this stuff.
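
If it helps, pulling structure out of source with Tree-sitter looks roughly like this. It's only a sketch using the Python bindings (py-tree-sitter plus the tree-sitter-python grammar package); the WASM/web-tree-sitter API is analogous, and the exact constructor details have shifted between versions:

```python
# Sketch: parse a snippet with Tree-sitter and collect function names.
# Assumes py-tree-sitter (recent versions) and the tree-sitter-python grammar.
import tree_sitter_python as tspython
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tspython.language())
parser = Parser(PY_LANGUAGE)  # older versions use parser.set_language(...)

source = b"def greet(name):\n    return f'hello {name}'\n"
tree = parser.parse(source)

def function_names(node):
    """Recursively collect the name of every function_definition node."""
    names = []
    for child in node.children:
        if child.type == "function_definition":
            names.append(child.child_by_field_name("name").text.decode())
        names.extend(function_names(child))
    return names

print(function_names(tree.root_node))  # ['greet']
```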
nuky · a month ago
Oh yeah, tree-sitter is a great foundation for semantic structure.

What I'm exploring is more about what we do with that structure once someone (or something) starts generating thousands of changed lines: how to compress the change into signals we can actually reason about.

Thank you for sharing. I'm actually trying your tool right now - it looks really interesting. Happy to exchange thoughts.

nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
veunes · a month ago
I totally get the fear regarding probabilistic changes being reviewed by probabilistic tools. It's a trap. If we trust AI to write the code and then another AI to review it, we end up with perfectly functioning software that does precisely the wrong thing.

Diffs are still necessary, but they should act as a filter. If a diff is too complex for a human to parse in 5 minutes, it’s bad code, even if it runs. We need to force AI to write "atomically" and clearly; otherwise we're building legacy code that's unmaintainable without that same AI.

nuky · a month ago
Agreed - that trap is very real. The open question for me is what we do when atomic, 5-minute-readable diffs are the right goal but not always realistically achievable. My gut says we need better deterministic signals to reduce noise before human review. Not to replace it.
nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
veunes · a month ago
Reading works when you generate 50 lines a day. When AI generates 5,000 lines of refactoring in 30 seconds, linear reading becomes a bottleneck. Human attention doesn't scale like GPUs. Trying to "just read" machine-generated code is a sure path to burnout and missed vulnerabilities. We need change summarization tools, not just syntax highlighting.
nuky · a month ago
This is exactly the gap I'm worried about. Human review still matters, but linear reading breaks down once the diff is mostly machine-generated noise. Summarizing what actually changed before reading feels like the only way to keep reviews sustainable.
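
As a rough illustration of "summarize before reading", here's a minimal sketch that condenses `git diff` output into per-file added/removed counts using only the standard library (it ignores renames, binary files, and mode changes):

```python
# Sketch: condense `git diff` output into a per-file summary so a reviewer
# can decide where to look first. Simplified unified-diff parsing only.
import subprocess
from collections import defaultdict

def summarize_diff(ref: str = "HEAD~1") -> dict[str, dict[str, int]]:
    diff = subprocess.run(
        ["git", "diff", "--unified=0", ref],
        capture_output=True, text=True, check=True,
    ).stdout
    summary: dict[str, dict[str, int]] = defaultdict(lambda: {"added": 0, "removed": 0})
    current = None
    for line in diff.splitlines():
        if line.startswith("+++ b/"):
            current = line[6:]                       # path of the new file version
        elif current and line.startswith("+") and not line.startswith("+++"):
            summary[current]["added"] += 1
        elif current and line.startswith("-") and not line.startswith("---"):
            summary[current]["removed"] += 1
    return dict(summary)

if __name__ == "__main__":
    # Biggest churn first, so attention goes where the change is largest.
    for path, c in sorted(summarize_diff().items(),
                          key=lambda kv: -(kv[1]["added"] + kv[1]["removed"])):
        print(f"{path}: +{c['added']} / -{c['removed']}")
```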
nuky commented on A tool to see what changed in your code (beyond the diff)   gitlab.com/hilly.bopper/n... · Posted by u/nuky
nuky · a month ago
This came out of reviewing a lot of large refactors and AI-assisted changes.

The goal isn't to replace code review (ofc) or relax standards, but to make the consequences of changes visible early — so reviewers can decide where to focus or when to push back.

nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
DiabloD3 · a month ago
You know there are other kinds of diffs, right?

It's common to change git's diff to things like difftastic, so formatting slop doesn't trigger false diff lines.

You're probably better off, FWIW, just avoiding LLMs. LLMs cannot produce working code, and they're the wrong tool for this. They're just predicting tokens around other tokens, they do not ascribe meaning to them, just statistical likelihood.

LLM weights themselves would be far more useful if we used them to indicate statistical likelihood (i.e., perplexity) of the code that has been written; i.e., strange-looking code is likely to be buggy, but nobody has written this tool yet.
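
A rough sketch of what that could look like; the model name is just a placeholder, scoring lines in isolation throws away file context, and whether high surprisal actually correlates with bugs is an open hypothesis:

```python
# Sketch of "perplexity as a code smell": score each line of a file by the
# mean negative log-likelihood a small causal LM assigns to its tokens.
# The model is a placeholder; a code-specific model would make more sense.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder model name
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def line_surprisal(source: str) -> list[tuple[float, str]]:
    """Score each non-empty line; higher means more 'surprising' to the model."""
    scores = []
    for line in source.splitlines():
        if not line.strip():
            continue
        ids = tok(line, return_tensors="pt").input_ids
        if ids.shape[1] < 2:  # need at least one next-token prediction
            continue
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean NLL per predicted token
        scores.append((loss.item(), line))
    return sorted(scores, reverse=True)  # most surprising lines first

example = "def add(a, b):\n    return a - b  # oops\n"
for score, line in line_surprisal(example):
    print(f"{score:6.2f}  {line}")
```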

nuky · a month ago
It's precisely because this has gone too far that I wanted to make the consequences of active LLM tool adoption visible. I'm not saying LLMs are completely bad; after all, not all tools, even non-LLM ones, are 100% deterministic. At the same time, reckless and uncontrolled use of LLMs is increasingly gaining ground, not only in coding but even in code analysis/review.
nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
uhfraid · a month ago
> How do you review large AI-assisted refactors today?

just like any other patch, by reading it

nuky · a month ago
fair — that’s what I do as well)
nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
DiabloD3 · a month ago
You know there are other kinds of diffs, right?

It's common to change git's diff to things like difftastic, so formatting slop doesn't trigger false diff lines.

You're probably better off, FWIW, just avoiding LLMs. LLMs cannot produce working code, and they're the wrong tool for this. They're just predicting tokens around other tokens, they do not ascribe meaning to them, just statistical likelihood.

LLM weights themselves would be far more useful if we used them to indicate statistical likelihood (i.e., perplexity) of the code that has been written; i.e., strange-looking code is likely to be buggy, but nobody has written this tool yet.

nuky · a month ago
Yeah, difftastic and similar tools really do help a lot with formatting noise.

My question is slightly orthogonal though: even with a cleaner diff, I still find it hard to quickly tell whether public API or behavior changed, or whether logic just moved around.

Not really about LLMs as reviewers — more about whether there are useful deterministic signals above line-level diff.
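
To make that concrete, here's one sketch of such a deterministic signal: did any public function signature change between two versions of a file? Python-only, standard library, and "public" here just means not underscore-prefixed:

```python
# Sketch: compare public function signatures between two versions of a file.
# Requires Python 3.9+ for ast.unparse; "public" = not underscore-prefixed.
import ast

def public_signatures(source: str) -> dict[str, str]:
    sigs = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)) and not node.name.startswith("_"):
            sigs[node.name] = ast.unparse(node.args)  # argument list as text
    return sigs

def api_delta(old_src: str, new_src: str) -> dict[str, list[str]]:
    old, new = public_signatures(old_src), public_signatures(new_src)
    return {
        "removed": sorted(set(old) - set(new)),
        "added": sorted(set(new) - set(old)),
        "changed": sorted(n for n in set(old) & set(new) if old[n] != new[n]),
    }

old = "def load(path, cache=True): ...\ndef _helper(): ..."
new = "def load(path, cache=True, retries=0): ...\ndef _helper(x): ..."
print(api_delta(old, new))
# {'removed': [], 'added': [], 'changed': ['load']}
```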

nuky commented on Ask HN: Are diffs still useful for AI-assisted code changes?    · Posted by u/nuky
ccoreilly · a month ago
There are many approaches being discussed, and it will depend on the size of the task. You could just review a plan and assume the output is correct, but you need at least behavioural tests to confirm that what was built fulfilled the requirements. You can split the plan further and further until the changes are small enough to be reviewable. Where I don't see the benefit is in asking an agent to generate tests, as it tends to generate many useless unit tests that make reviewing more cumbersome. Writing the tests yourself (or defining them and letting an agent write the code) and not letting implementation agents change the tests is also something worth trying.

The truth is we’re all still experimenting and shovels of all sizes and forms are being built.

nuky · a month ago
That matches my experience too - tests and plans are still the backbone.

What I keep running into is the step before reading tests or code: when a change is large or mechanical, I’m mostly trying to answer "did behavior or API actually change, or is this mostly reshaping?" so I know how deep to go etc.

Agree we’re all still experimenting here.

nuky commented on I built a geocoder for AI agents because I couldn't afford Google Maps   jonready.com/blog/posts/g... · Posted by u/mips_avatar
mips_avatar · a month ago
So I don't support queries with a radius larger than 50km (if an AI agent doesn't know where it's looking within 50km, there's usually a context issue upstream), but I have a larger H3 index and a tighter H3 index. Then I have a router that tries to find the correct H3 indexes for each query. For some queries I'll need up to 3 searches, but most map to a single search. (Sorry, I probably won't be able to reply below here since the max HN comment depth is 4.)
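
Roughly, the routing could be sketched like this, assuming the h3-py v4 bindings; the resolutions, thresholds, and edge lengths below are made-up illustrative numbers, not the real index configuration:

```python
# Sketch: route a bounded-radius query to a coarse or a tight H3 index.
# Assumes h3-py v4; resolutions, thresholds, and edge lengths are illustrative.
import h3

MAX_RADIUS_KM = 50
COARSE_RES, COARSE_EDGE_KM = 5, 9.85   # approximate hex edge length at res 5
FINE_RES, FINE_EDGE_KM = 8, 0.53       # approximate hex edge length at res 8

def cells_for_query(lat: float, lng: float, radius_km: float) -> list[str]:
    if radius_km > MAX_RADIUS_KM:
        raise ValueError("radius > 50km usually means a context issue upstream")
    # Route small queries to the tight index, everything else to the coarse one.
    res, edge_km = (FINE_RES, FINE_EDGE_KM) if radius_km <= 5 else (COARSE_RES, COARSE_EDGE_KM)
    center = h3.latlng_to_cell(lat, lng, res)
    k = max(1, round(radius_km / edge_km))  # rings needed to cover the radius
    return h3.grid_disk(center, k)

print(len(cells_for_query(47.6062, -122.3321, 10.0)))
```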
nuky · a month ago
Makes sense. What about latency for typical and hard queries?

u/nuky

Karma: 4 · Cake day: January 9, 2026
About
geek