Hilarious because onesociety2022 seems so earnest. Someone who is shocked at the idea that job search isn’t a pure meritocracy.
Horrifying because kuang_eleven points out just how easy it is to pass on a qualified candidate if you want to.
The truth is somewhere in the middle…
I’m guessing you’re suggesting it’s OK to lose time if you’re away from your computer enjoying life, and I agree. I also don’t see the issue in finding ways to save time with work.
If you mean something different, please elaborate.
Maintainers won’t have to deal with an endless stream of PRs. Now people will just clone your library the second it has traction and make it perfect for their specific use case.
They’ll cherry-pick the best features and build something tailored to themselves. They’ll be able to do things your product can’t, and individual users will probably find a better fit in these spinoffs than in the original app.
Thanks
> When it doesn’t work though, things get ugly too.
wat dis den?
I’m just predicting what will happen. I think it’s a really good thing.
Effort asymmetry is inherent to AI's raison d'être. (One could argue that's true for most consumer-facing technology.)
The problem is AI.
I think AI is going to create a whole new class of people who take a tiny input and turn it into an outsized output.
When this works, it is really nice. Think Cursor, Lovable, or OpenClaw.
When it doesn’t work though, things get ugly too. The same power that allows a small team to build a billion-dollar company also allows rogue agents to industrialize their efforts.
Combine this with the rise of headless browsers and you have a dangerous cocktail.
I wouldn’t be surprised if we see regulation or licensing around frontier AI APIs in the near future.
Apparently this is in support of their 2.0 release: https://www.qodo.ai/blog/introducing-qodo-2-0-agentic-code-r...
> We believe that code review is not a narrow task; it encompasses many distinct responsibilities that happen at once. [...]
> Qodo 2.0 addresses this with a multi-agent expert review architecture. Instead of treating code review as a single, broad task, Qodo breaks it into focused responsibilities handled by specialized agents. Each agent is optimized for a specific type of analysis and operates with its own dedicated context, rather than competing for attention in a single pass. This allows Qodo to go deeper in each area without slowing reviews down.
> To keep feedback focused, Qodo includes a judge agent that evaluates findings across agents. The judge agent resolves conflicts, removes duplicates, and filters out low-signal results. Only issues that meet a high confidence and relevance threshold make it into the final review.
> Qodo’s agentic PR review extends context beyond the codebase by incorporating pull request history as a first-class signal.
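The quoted architecture (specialist agents plus a judge that dedupes and filters by confidence) can be sketched in a few lines. This is a hypothetical illustration of the pattern only; all names, signatures, and thresholds below are my own invention, not Qodo's actual API.

```python
# Hypothetical sketch of a multi-agent review with a judge agent.
# Each specialist produces findings for its own concern; the judge
# deduplicates them and drops low-confidence results.
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    agent: str         # which specialist produced this
    issue: str         # short description of the problem
    confidence: float  # 0.0 - 1.0

def security_agent(diff: str) -> list[Finding]:
    # Toy heuristic standing in for a real security-focused agent.
    if "execute(" in diff:
        return [Finding("security", "possible SQL injection", 0.9)]
    return []

def style_agent(diff: str) -> list[Finding]:
    # Toy heuristic standing in for a real style-focused agent.
    longest = max(map(len, diff.splitlines()), default=0)
    if longest > 120:
        return [Finding("style", "line over 120 chars", 0.4)]
    return []

def judge(findings: list[Finding], threshold: float = 0.7) -> list[Finding]:
    # Keep the highest-confidence finding per issue, then filter
    # out anything below the confidence threshold.
    best: dict[str, Finding] = {}
    for f in findings:
        if f.issue not in best or f.confidence > best[f.issue].confidence:
            best[f.issue] = f
    return [f for f in best.values() if f.confidence >= threshold]

diff = 'cursor.execute("SELECT * FROM users WHERE id=" + user_id)'
report = judge(security_agent(diff) + style_agent(diff))
```

Here the low-confidence style finding never fires and the security finding clears the threshold, so only one issue reaches the final review.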
A lot of this stuff is really new, and we will need to find ways to standardize it, but that will take time and consensus.
It took four years after the release of the automobile to coin the term mileage to refer to miles driven per unit of gasoline. In due time we will create similar metrics for AI.