Readit News
Posted by u/somesortofthing a month ago
Show HN: GitHub "Lines Viewed" extension to keep you sane reviewing long AI PRs (chromewebstore.google.com...)
I was frustrated with how bad a signal of progress through a big PR "Files viewed" was, so I made a "Lines viewed" indicator to complement it.

Designed to look like a stock GitHub UI element - it even respects light/dark theme. Runs fully locally, no API calls.

Splits insertions and deletions by default, but you can also merge them into a single "lines" figure in the settings.

crote · a month ago
Sure, it looks neat, but why would you ever want this? What happened to closing PRs like these with a short and simple "This is unreadable. Split it into smaller self-contained commits, and write proper commit messages explaining what they do and why" comment?

Massive walls of code have always been rejected simply for being unreviewable. Why would you suddenly allow this for AI PRs - where you should be even more strict with your reviews?

rokkamokka · a month ago
I'm on the fence about this. Sometimes a new feature needs maybe 2k lines of code split over 10-20 or so files. Sure, you could split it up, but you can't necessarily run the parts in isolation, and if the parts go to different reviewers, no single reviewer may get the whole context.

So, I'm kind of okay with large PRs as long as they're one logical unit. Or, maybe rather, if it would be less painful to review as one PR rather than several.

captainbland · 24 days ago
I think the particular problem is AI producing large volumes of unnecessary code because the LLM is simply not able to create a more concise solution. If that's the case, these LLM-generated solutions are likely creating tech debt faster than anyone is ever likely to be able to resolve it. Maybe people are banking on LLMs themselves one day being sophisticated enough to clean it up - though that would also be the perfect time to price gouge them.
strken · 24 days ago
I'm okay with long PRs, but please split them up into shorter commits. One big logical unit can nearly always be split into a few smaller units - tests, helpers, preliminary refactoring, handling the happy vs error path, etc. can be separated out from the rest of the code.
sublinear · 24 days ago
> Sometimes a new feature needs maybe 2k lines of code split over 10-20 or so files

I still disagree. Why was the feature not split up into more low-level details? I don't trust that kind of project management to really know what it's doing either.

I am not promoting micromanagement, but any large code review means the dev has had to make a lot of independent decisions. These may be the right decisions, but there's still a lack of communication happening.

Hands-off management can be good for creativity and team trust, but ultimately it's still bad for the outcome. I'm speaking from my own experience here; I would never go back to working somewhere that isn't very collaborative.

cik · 24 days ago
This is very much my take. As long as the general rule is a lack of long PRs, I think we get into a good place. Blueskying, scaffolding, all sorts of things reasonably end up in long PRs.

But, it becomes incumbent on the author to write a guide for reviewing the request, to call the reviewer's attention to areas of interest, perhaps even to outline decisions made.

dboreham · 24 days ago
That's fine, but such a PR doesn't need to (and actually can't) be reviewed. Or at least it can only be reviewed broadly: does it change files that shouldn't change, does it have appropriate tests, etc.
fastasucan · 24 days ago
Maybe AI should not author big and complex features that can't be split up into parts and thus easily reviewed.
somesortofthing · a month ago
I'm reviewing PRs I wrote myself. Valid concern in a real org though.
caseyohara · 24 days ago
I don’t understand. Are they AI PRs (as in the title), or did you write them yourself?
funkattack · 24 days ago
This is fundamentally a scaling problem, not a tooling problem. When AI generates PRs that no single person can fully grasp, the question isn't "how do we make reviewing 5,000 lines more comfortable" - it's "who is actually vouching for this code?"

The answer is already deeply embedded in Git's tooling: every commit carries both an author and a committer field. The author wrote the code; the committer is the person who put it into the codebase. With git blame you always know who is to blame - in both senses. In the age of AI-generated code, this distinction matters more than ever: the author might be an LLM, but the committer is the human who vouches for it.

Disclosure: non-native English speaker, used AI to help articulate these thoughts - the ideas are my own.
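The author/committer split can be seen directly in git; a minimal sketch in a throwaway repo (all names and emails here are hypothetical):

```shell
# Git stores the author and the committer separately on every commit.
# Throwaway repo; names/emails below are made up for illustration.
git init -q -b main vouch-demo && cd vouch-demo
git config user.name "Reviewer"                  # committer: who vouches
git config user.email "reviewer@example.com"
echo 'fn main() {}' > main.rs
git add main.rs
# Override only the author, standing in for an LLM-generated change:
GIT_AUTHOR_NAME="LLM Agent" GIT_AUTHOR_EMAIL="agent@example.com" \
  git commit -q -m "Add main"
git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'
```

`git blame --line-porcelain` exposes both fields per line, so the vouching committer stays recoverable long after merge.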
moritzwarhier · 24 days ago
So who authored your comment?

What would you put into the commit message fields if it were a git commit?

funkattack · 24 days ago
Currently you'd read quite a lot of: "Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>"
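That line is an ordinary commit-message trailer, so git can add and parse it mechanically; a small sketch (the commit message text is made up):

```shell
# Append a Co-Authored-By trailer to a draft commit message.
printf 'Fix null check in parser\n' |
  git interpret-trailers \
    --trailer 'Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>'
# Later, trailers can be pulled back out of history with, e.g.:
#   git log --format='%(trailers:key=Co-Authored-By,valueonly)'
```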
alan-stark · 24 days ago
But why would you want to review long AI PRs in the first place? Why don't we apply the same standards we apply to humans? It doesn't matter if it was AI-generated, outsourced to Upwork freelancers, or handcrafted in Notepad. Either submit well-structured, modular, readable, well-tested code or the PR gets rejected.
fotcorn · a month ago
Related to this, how do you get your comments that you add in the review back into your agent (Claude Code, Cursor, Codex etc.)? Everybody talks about AI doing the code review, but I want a solution for the inverse - I review AI code and it should then go away and fix all the comments, and then update the PR.
voidUpdate · 24 days ago
What you do is actually read the comments, think about how you can improve the code, and then improve it - whether by telling the agent to do that or by doing it yourself.
ctmnt · a month ago
There’s a bunch of versions of this out there. This one’s mine, but it’s based on other ones. It works really well. It assesses the validity and importance of each comment, then handles it appropriately, creating issues, fixing the code, adding comments, updating the GH Copilot instructions file, etc.

https://github.com/cboone/cboone-cc-plugins/blob/main/plugin...

alehlopeh · a month ago
I tell Claude Code "review the comments on this PR" and give it the URL, and that's enough. It then uses the gh CLI tool to fetch the PR and the individual comments.
somesortofthing · a month ago
I suspect you don't need anything special for this. The GH API has support for reading comments from PRs. Maybe have it maintain a small local store to remember the IDs of the comments it's already read so it doesn't try to re-implement already-implemented fixes. Another similar thing you can do is a hook that reminds it to start a subagent to monitor the CI/autofix errors after it creates/updates a PR.
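A sketch of the "remember seen comment IDs" idea. The endpoint names in the comments are the real GitHub REST paths, but the fetch itself is stubbed with placeholder IDs here (a live run would page through them with `gh api`):

```shell
# Real endpoints you would page through with `gh api` (not called here):
#   repos/{owner}/{repo}/issues/{n}/comments  - top-level PR conversation
#   repos/{owner}/{repo}/pulls/{n}/comments   - inline review comments
#   repos/{owner}/{repo}/pulls/{n}/reviews    - review summaries
# Placeholder for this run's fetched IDs, e.g. `gh api ... --jq '.[].id'`:
printf '101\n102\n103\n' | sort > fetched_ids.txt
printf '101\n' | sort > seen_ids.txt        # IDs handled on a previous run
# comm -23: lines only in the first (sorted) file, i.e. the unseen comments.
comm -23 fetched_ids.txt seen_ids.txt > new_ids.txt
cat new_ids.txt                             # feed only these to the agent
# Fold this run's IDs into the local store for next time.
sort -u fetched_ids.txt seen_ids.txt > seen_ids.next && mv seen_ids.next seen_ids.txt
```

The local store is what keeps the agent from re-implementing fixes for comments it already handled.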
danappelxx · a month ago
The GitHub API is actually quite tricky here because there is a difference between "comment", "review", and "review comment" (paraphrasing, I don't remember the details). So it's not as simple as one API call that grabs the markdown. Of course, you can write a creative one-liner to extract what you need.

epolanski · a month ago
I don't use it, but you can tag @copilot on GitHub comments and it will do so.

I don't do it because the chances of me reviewing vomited code are close to 0.

kloud · 24 days ago
Reviewing a large volume of code is a problem. In the pre-LLM era, as a workaround when I occasionally had to review a large PR, I used to check out the PR, reset the commits, and stage code as I reviewed it. In the first pass I would stage the trivial changes, leaving the "meat" of the PR that needed deeper thinking for later passes.

With agentic coding, what was once occasional is now a daily occurrence. I would like to see new kinds of code review that deal with larger volumes of code. The current GitHub review interface does not scale well, and I am not sure Microsoft has the organizational capacity to come up with creative UI/UX solutions to this problem.
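The checkout-reset-stage pass described above looks like this in practice; a sketch in a throwaway repo standing in for a real PR branch (file and branch names are made up):

```shell
# Throwaway repo simulating a PR branch `feature` off `main`.
git init -q -b main review-demo && cd review-demo
git config user.name R; git config user.email r@example.com
printf 'core\n' > app.txt; git add app.txt; git commit -q -m base
git switch -qc feature
printf 'core\nfeature work\n' > app.txt     # the "meat" of the PR
printf 'docs\n' > README.md                 # a trivial change
git add -A; git commit -q -m "big PR"
# The review pass: keep the PR's tree, drop its commits, stage as you review.
git reset -q --mixed main
git add README.md        # first pass: stage the trivial changes (or git add -p)
git status --porcelain   # what's left unstaged is still to be reviewed
```

Everything still unstaged after each pass is exactly the code you haven't signed off on yet.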

scottbez1 · 24 days ago
GitHub has seemed entirely uninterested in improving the code review experience (except maybe the stacked PRs thing, if that ends up shipping) for well over a decade now.

Things that I’d consider table stakes that Phabricator had in 2016 - code movement/copying gutter indicators and code coverage gutters - are still missing, and their UI (even the brand new review UI that also renders suggestion comment diffs incorrectly) still hides the most important large file changes by default.

And the gutter "moved" indicators would be more useful than ever: I used to be able to trust that a hand-written PR that moves a bunch of code around generally didn't change it, but LLM refactors will sometimes "rewrite from memory" instead of actually moving, changing the implementation or comments along the way.
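Plain git can already approximate those moved indicators locally; a sketch in a throwaway repo (on a real PR you would diff the checked-out branch against its base instead):

```shell
# Throwaway repo: commit a file, then move a block verbatim into a new file.
git init -q -b main moved-demo && cd moved-demo
git config user.name R; git config user.email r@example.com
printf 'fn helper() {}\nfn parse() {\n    read_header();\n    read_body();\n    read_footer();\n}\n' > a.rs
git add a.rs; git commit -q -m base
printf 'fn helper() {}\n' > a.rs   # the "refactor": parse() leaves a.rs...
printf 'fn parse() {\n    read_header();\n    read_body();\n    read_footer();\n}\n' > b.rs  # ...and lands in b.rs unchanged
git add -A
# Moved-but-identical lines render in a distinct dimmed color; a "moved"
# line shown as a plain add/delete was rewritten along the way.
git diff --cached --color=always --color-moved=dimmed-zebra
```

That makes a memory-rewritten block stand out from a genuine move, which is exactly the trust check hand-written refactors used to get for free.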

oefrha · 24 days ago
You review long PRs by checking out the branch, git reset, then stage hunks/files as you review them. Reviewing long PRs in GitHub UI is never sane.
sgarland · 24 days ago
Or you just view each commit separately, assuming the author made reasonable commits.
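Commit-by-commit review is scriptable too; a sketch in a throwaway repo (branch name and messages are made up):

```shell
# Throwaway repo with a two-commit "PR" branched off main.
git init -q -b main commits-demo && cd commits-demo
git config user.name R; git config user.email r@example.com
echo base > f.txt; git add f.txt; git commit -q -m base
git switch -qc feature
echo tests >> f.txt; git commit -aq -m "Add tests"
echo impl >> f.txt; git commit -aq -m "Implement feature"
# The PR's commits, oldest first - review them one at a time:
git log --oneline --reverse main..feature
git show "$(git rev-list main..feature | tail -n 1)"  # first commit's patch
```

This only helps if the author kept the commits as reasonable logical units, which is the assumption in the comment above.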
tkzed49 · 24 days ago
Can I get an AI that automatically nitpicks AI PRs with the goal of rejecting them?
jbonatakis · 24 days ago
I built (using AI) a small CLI that provides a breakdown of the changes in a PR between docs, source, tests, etc.

https://github.com/jbonatakis/differ

It helps when there's a massive AI PR and it's intimidating - seeing that it's 70% tests, docs, and generated files can make it a bit more approachable. I've been integrating it into my CI pipelines so I get that breakdown as a comment on the PR.