It’s very easy to filter out the weeds: read the classics and any book that breaks through the noise with consistently high reviews. We don’t need to waste time on low-quality literature or AI-generated slop.
https://web.archive.org/web/20250625181250/https://www.terms...
it takes a good 5-10 minutes to boot up
We’re talking about less than a 10% performance gain, for a shitload of data, time, and money investment.
This is not to say that traps don’t make your house more livable. Once, I lived in a house connected to a forest in Brazil—no real neighbors, and a shitload of mosquitoes.
I did buy some fancy traps with UV lights and fans, and oh boy, I killed a shitload of them. Not to say I fully solved the mosquito problem, but I significantly reduced the bites. My wife is allergic to them, so she’s a great sensor—if there’s even one mosquito in the room, she knows.
What is also interesting is that one of the biggest search companies is using it to steer traffic away from its former 'clients': the very websites Google talked into slathering its advertisements all over themselves, by giving them money and traffic. That worked because Google got a pretty good cut of it. But now only Google gets the 'above the fold' cut.
That has two long-term effects. One, the places where they harvest the data will go away. Two, their long-term revenue will decrease as traffic falls and fewer ads are shown (unless Google goes full plaster-it-everywhere like some sites do).
AI is going to eat the very companies making it, even if the answers are kind of 'meh'. People will be fine with 'close enough' for the majority of things.
Short term, they will see their metric of 'main site retention' going up. It will, however, come at the cost of the websites that fed the machine.
Looking ahead, Search will become a de facto LLM chatbot, if it isn't already.
Spanish is totally systematic in this sense and once you can read it, you can pronounce it.
English is a bit messy in this regard, for whatever reason.
You’ve never seen the word before, but when reading it for the first time, you’ll probably pronounce it correctly.
English is awful, but French takes the crown on this one—though more because it has the same pronunciation for many different words and written forms.
In English, on the other hand, the alphabet doesn’t map well to the sounds. Mood and flood both have “oo”, yet each is pronounced differently. You need to know the word beforehand to know exactly how it’s pronounced.
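This “you just have to know” property is literally how pronunciation data gets stored: per word, not per spelling rule. A minimal sketch, assuming Python with NLTK and its CMU Pronouncing Dictionary corpus (both of which are my assumptions, not something from the thread):

```python
# Sketch: look up "mood" vs "flood" in the CMU Pronouncing Dictionary.
# Assumes: pip install nltk, plus the one-time corpus download below.
import nltk

nltk.download("cmudict", quiet=True)  # fetch the corpus on first run
from nltk.corpus import cmudict

pronunciations = cmudict.dict()  # word -> list of ARPAbet phoneme sequences

for word in ("mood", "flood"):
    print(word, pronunciations[word][0])

# Prints:
#   mood ['M', 'UW1', 'D']       ("oo" as the vowel in "food")
#   flood ['F', 'L', 'AH1', 'D'] ("oo" as the vowel in "cup")
# Same spelling, different vowel: the mapping lives in a per-word table,
# not in any rule you could derive from the letters.
```

Spanish, by contrast, can be covered with a small set of letter-to-sound rules, which is exactly the "once you can read it, you can pronounce it" property.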
PRs in general shouldn't require elaborate summaries. That's what commit messages are for. If the PR includes many commits where a summary might help, then that might be a sign that there should be multiple PRs.
Granted, it is not only summaries that go into the description—how to test, if there is any pre-deploy or post-deploy setup, any concerns, external documentation, etc.
Less is more. A summary serves to clarify, not to endlessly add useless information.
______________________
2. About the usefulness of summaries.
Summaries always provide better information—straight to the point—than commits (which are historical records). This applies to any type of information.
When you’re reporting a problem by going through historical facts, it can lead to multiple narratives, added complexity, and convoluted information.
Summaries that quickly deliver the key points clearly and focus only on what’s important offer a better way to communicate.
If the listener asks for details, they already have a clear idea of what to expect. A good summary is a good introduction to what you are going to see in the commit messages and in the code changes.
______________________
3. About multiple PRs.
A summary helps to clarify what is scope creep (be it a refactor or code unrelated to the ticket);
it makes it easier for the reviewer to demand a split into multiple PRs.
Example: a PR/MR without a summary might lead to questions like "Why is this code here?"
or "They touched a class here; were they fixing something the tests missed, or is it just a refactor?"
_______________
As a reviewer you can get that information by yourself, although a summary helps you get it much quicker.
Not only did they produce in a day about the same amount of code that they used to produce in a week (or two), but several other things also made my work harder than before:
- During review, they hadn't thought as deeply about their code so my comments seemed to often go over their heads. Instead of a discussion I'd get something like "good catch, I'll fix that" (also reminiscent of an LLM).
- The time spent on trivial issues went down a lot, to almost zero, but the remaining issues were much more subtle and time-consuming to find and describe.
- Many bugs were of a new kind (to me), the code would look like it does the right thing but actually not work at all, or just be much more broken than code with that level of "polish" would normally be. This breakdown of pattern-matching compared to "organic" code made the overhead much higher. Spending decades reviewing code and answering Stack Overflow questions often makes it possible to pinpoint not just a bug but how the author got there in the first place and how to help them avoid similar things in the future.
- A simple but bad (inefficient, wrong, illegal, ugly, ...) solution is a nice thing to discuss, but the LLM-assisted junior dev often cooks up something much more complex, which can be bad in many ways at once. The culture of slowly growing a PR from a little bit broken, thinking about design and other considerations, until it's high quality and ready for a final review doesn't work the same way.
- Instead of fixing the things in the original PR, I'd often get a completely different approach as the response to my first review. Again, often broken in new and subtle ways.
This led to a kind of effort inversion, where senior devs spent much more time on these PRs than the junior authors themselves. The junior dev would feel (I assume) much more productive and competent, but the response to their work would eventually lack most of the usual enthusiasm or encouragement from senior devs.
How do people work with these issues? One thing that worked well for me initially was to always require a lot of (passing) tests, but eventually these tests would suffer from many of the same problems.
One thing I do that helps clean things up before I send a PR is writing a summary. You might consider encouraging your peers to do the same.
## What Changed?
Functional Changes:
- New service for importing data
- New async job for dealing with z.
Non-functional Changes:
- Refactoring of Class X
- Removal of outdated code
It might not seem like much, but writing this summary forces you to read through all the changes and reflect. You often catch outdated comments, dead functions left after extractions, or other things that can be improved, before asking a colleague to review it. It also makes the reviewer’s life easier, because even before they look at the code, they already know what to expect.
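If writing the skeleton by hand every time feels like friction, you can script it. A minimal sketch (a hypothetical helper, assuming Python 3, git on PATH, and `main` as the base branch; sorting the commits into the sections is still done by hand, which is the part that matters):

```python
#!/usr/bin/env python3
# Sketch: print a PR-summary skeleton plus the branch's commit subjects.
# Hypothetical helper: assumes git is on PATH and "main" is the base branch.
import subprocess


def branch_commits(base: str = "main") -> list[str]:
    """Return the one-line subjects of commits on HEAD but not on base."""
    result = subprocess.run(
        ["git", "log", f"{base}..HEAD", "--pretty=format:%s"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]


def main() -> None:
    print("## What Changed?")
    print("Functional Changes:")
    print("- TODO")
    print("Non-functional Changes:")
    print("- TODO")
    print()
    print("Commits on this branch (raw material for the lists above):")
    for subject in branch_commits():
        print(f"- {subject}")


if __name__ == "__main__":
    main()
```

The TODOs are deliberate: the value comes from reading every change and sorting it yourself.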
Nothing suggests it would have been free — in fact, if I owned a ford (a shallow crossing point) running through my property, you can bet I would charge for it.