Readit News
willcipriano · 3 years ago
The more experienced I get the more I want to work on teams with a higher degree of trust than ones like this. Don't get me wrong, I think code review can be valuable, but on high-performing teams it works best on an as-needed basis.

The way this has worked successfully for me in the past is as a new joiner to a team you submit for code review every time. My first couple of reviews tend to have a good bit of feedback as I settle in to the style of the team and often a new language or framework. After a while though I become a domain expert for the projects I am working on, and code reviews end up becoming a rubber stamp. At that point it's just something slowing me down. I'd request review when I need it, or I'm making a big change that will effect everyone, otherwise there are many cases where there is really only one way to skin this cat and it's totally perfunctory.

This sort of arrangement also requires a bit of ownership: if you see reports floating around about an endpoint you just worked on having issues, you have to be able to be trusted to jump on it. I feel like trust and ownership are touchy-feely people problems, so tech people invented code reviews and linters to try to make up for not cultivating the sort of culture where they aren't required on every single little thing.

quadrifoliate · 3 years ago
The popularity of this sentiment on HN terrifies me.

Please don't do this. I keep finding rubber-stamp code reviews (or none at all) while diving into codebases to find where bugs were introduced, and 9 times out of 10, it's someone who didn't bother to get proper reviews for their code because "it's just something slowing me down". I can assure you that you have bugs and usability issues in your code, not just "linter issues". Everyone does.

If your code reviews are becoming rubber stamps, that's an indication that you need to slow down and teach other team members about the system, or hire smarter people who can understand your reviews more quickly. Don't get rid of the review process as a knee-jerk reaction to these sorts of organizational constraints.

ryandvm · 3 years ago
I disagree.

If you have a practice that isn't effective (e.g. code reviews that aren't preventing bugs), doubling-down is almost never the correct solution.

Code reviews are like unit tests; they're helpful under the right conditions. Mostly on the tricky bits or when a dev is working in an unfamiliar area.

It's a common fallacy for folks to believe that if a little of something is good, then a lot of it must be better. I've been at places with 100% code test coverage or requiring EVERY SINGLE PULL REQUEST to be reviewed. It's obnoxious, and it doesn't do shit to prevent bugs. Instead it causes folks to burn out on the practice and write crap unit tests or stamp "LGTM" on every PR they're forced to evaluate. People have a limited capacity for cognitive engagement and wasting that on piddly crap means they can't spend it on the important bits.

librish · 3 years ago
Velocity vs correctness is a trade-off. Stringent review, testing, and ownership requirements can easily slow down developers by an order of magnitude. Which is often the right call, but not always.
Too · 3 years ago
Been in both camps of this argument. Now I tend to favor that responsible seniors should be allowed a fast lane. It depends very much on your team dynamics. Seniors can be very cowboy, so some of them actually need even more review than others.

Have been in some teams with a very high junior/senior ratio where a junior would need a whole day to review a small thing a senior implemented in an hour, often because understanding the context leads to a hundred rabbit holes of new information, not because of the small diff itself. Add the one week that PR was waiting for someone to even notice it needed review and you realize the frustration. Vice versa, a junior would take a week to implement something a senior can review in 15 minutes.

Do the math and you realize the senior will quickly get blocked.

Slowing down and teaching others in that situation is, as you say, the right thing to do. But that also saturates after a while. You can only pair program x number of hours per day before getting exhausted, and especially for a beginner it can feel like a lot of pressure to ingest so much knowledge at that pace. They need to slow down and get in their own zone as well. In the meantime, the backlog is on fire with tasks that needed to be released yesterday.

Hire smarter people? Yes please, would love to do that, where do I call? “Smarter” doesn't always help either; the reality is that you often end up with domain experts within some areas of the product, even if you try to keep everyone on board.

hsn915 · 3 years ago
I have some sad news for you. Code reviews don't eliminate bugs.

Sometimes they actually hinder the process of fixing bugs.

Cerium · 3 years ago
I work on a team that runs following a high trust and domain ownership model. When I joined our process was "write some code, test it, be confident in the code, commit it to trunk". A number of years ago we worked our way through "Clean Code" in a reading group and decided to try out code reviews. Each week one of us would present our recent work and the team would tear it apart line by line, and in the process we developed a better understanding of our group philosophy around style and practices.

When we switched to Git we decided that we liked code reviews and wanted to make them part of merging to develop. Depending on the content the reviews are deeper or more superficial. Many times even when I review the code of somebody making changes in their own subject matter domain I have found bugs that simply needed another set of eyes to catch.

I like code reviews and in the context I use them find that they are a complement to trust, not a substitute.

Kranar · 3 years ago
I feel I work on a high performance team and code reviews are a joy. When I submit my code for review it's an opportunity to share my work with my peers, potentially explain my thought process and my work process and we get to learn from one another, and when I get code to review it's the opposite, I get to see how my coworkers think and tackle problems.

I have never once felt like a code review did anything to erode trust and I don't see how a code review could slow down a work process. Using git, as soon as I submit code for review I begin work on the next task and if I need anything from the branch I submitted for review, I just merge that into the new branch. If changes are requested I make the changes and then merge again.

Code reviews are among the best ways of ensuring that code remains consistent and that everyone is learning from everyone else. That's why in my team all pull requests are reviewed by two people including one junior developer.

bmj · 3 years ago
On my team, the level of scrutiny a merge request receives depends on the author and the functionality. MRs from a principal or lead engineer tend to get rubber-stamped at a higher rate, though, to your point, many of us will explicitly call out code that we would like to have reviewed in more detail. If a more junior level developer submits a merge request, it is highly likely their code will get a close review.

I'm not sure I agree with your premise that code reviews are the product of a culture with a lack of trust and ownership. I think, in fact, submitting to a review process is a sign of ownership -- it's saying "hey, the quality of the code we produce as a team is important to me, and I know I make mistakes."

larusso · 3 years ago
I‘m called a principal engineer in my organization (I know that can mean everything or nothing). I like to use code review requests, and sometimes gates, to put a fence in front of my ego. It's also a great way of sharing knowledge. I've had great sessions where questions about X, and the answers given verbally or in text, suddenly put things into a different perspective. Sometimes I realize that my solution might be too complicated, or that a more generic, performant, (paste fancy adjective to describe your code here) one is easy to achieve.
the_jeremy · 3 years ago
I trust the people whose code I review to be competent developers. But everyone occasionally makes typos, misreads requirements, has blind spots, etc. Even if the creator is the expert and the reviewer is there to catch only obvious mistakes, that still has value.
bavila · 3 years ago
> But everyone occasionally makes typos, misreads requirements, has blind spots, etc.

Case in point: OP used the word "effect" when he should have used "affect".

Just like pudding, there's always room for a code review.

kkirsche · 3 years ago
This is scary to read. We use code reviews for changes and we’ve found that our more junior team members can bring a point of view our more senior team members can’t and vice versa. It’s one of the best parts of working in code for me.

With that said, code reviews only work, in my experience, if team members are strong communicators. Understanding each other, empathizing with issues, and being collaborative rather than tearing people down is critical.

Code reviews are a “we” activity not a “your” and “my” activity.

digisign · 3 years ago
Yes on trust. However… I'm pretty darn experienced in what I do these days, and every day I still write a bit of crappy code, usually as I come to an understanding over time of the current problem. WTFs come from code written six months ago as well as yesterday. If someone can point out an issue early, it saves everyone time. So, I welcome it.
hsn915 · 3 years ago
This used to be common sense in the few companies I worked at around 2012.

People just pushed code. We occasionally held some review sessions to go over code and exchange comments / ideas about what might be wrong with this or that code and how to prevent this kind of mistake in the future.

But starting from around 2016, I could never find a company that does not have mandatory code reviews for every single commit, and this bothers me tremendously.

It's one of the reasons I don't want to work at any programming company anymore.

The really good programmers I know don't look positively at code reviews.

If you need code reviews, ask for them. If not having mandatory reviews results in a huge stream of bugs, maybe you should hire better programmers instead of cheap ones.

nradov · 3 years ago
Where do we find all of those better programmers?
hinkley · 3 years ago
There's a continuum from when the feedback is super, super helpful (consensus building, as you say), to a moderate inconvenience, to screwing up your estimates because you seem to always post a PR right after the reviewers have started some new thing.

Then farther down that continuum someone is busting your chops for some stupid reason that, after you get back from lunch or coffee, you realize is completely valid (or even gives you an idea for an enhancement) and okay you change how a conditional works or add some more tests.

It makes me wonder if in a corporate setting PRs would be better treated as pseudorandom audits, based on a random factor and a weight for automated linting (so people don't learn to game the linter, which can be some of the worst code you ever see.)
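A rough sketch of what I mean, in Python (the function, weights, and base rate are all made up for illustration):

```python
import random

def should_review(lint_score, base_rate=0.2, lint_weight=0.6, rng=random.random):
    """Pseudorandom PR audit: every PR keeps a base chance of human
    review, and a worse automated-lint score raises that chance.
    lint_score runs from 0.0 (clean) to 1.0 (lots of findings).
    """
    probability = min(1.0, base_rate + lint_weight * lint_score)
    return rng() < probability
```

The base rate keeps review unpredictable, so nobody learns that a clean lint run guarantees a free pass.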

A related issue is that most tools make it pretty difficult to look at and interact with old PRs. Feedback loops have some slop, and while it's better that you get the feedback as you are writing the code, fast follow often works as well. I think we'd be better as a group if you could request revisions on a recently landed PR after the fact: "This is mostly okay, but you really need to fix your variable names and conditional block, and you missed a boundary condition over here that might not break today, but there are already stories in the backlog that will break it for sure."

edgyquant · 3 years ago
On my team we have one other engineer, no matter your or their official position, just do a quick glance for obvious issues. This does not prevent real bugs from getting merged in; the test cases are supposedly for that, but even those have some slippage.
smokey_circles · 3 years ago
I see 2 clear problems with this ideology:

1) There is too much knowledge to fit inside one person's head. Cooperation is the keystone of success.

2) Your team will grow. You will need to share this knowledge somehow before the trust can be given.

But in defense of this view point: code reviews shouldn't be the opposite of what you're talking about.

I trust my teammates to catch any mistakes I've made. I'm human and therefore fallible. It's going to happen.

I trust mistakes they've made are for the same reasons.

What you're trusting is that your teammates are top-tier, 1%, every single time, bordering on superhuman.

I'm trusting that my team has my back and we're in it together.

owlbite · 3 years ago
Completely different to the order I use, which basically amounts to "how hard is it to fix this later".

So the most important questions are: (1) The API must be as correct as possible, especially if it's exposed outside of the immediate team. Changing APIs is a royal pain. (2) Documentation of what's going on, especially for non-obvious stuff. If we come back to this in 3 years' time and the author has moved on, do we have any hope of fixing it? (3) Testing. What degree of confidence do we have that this is doing the right thing? Will it protect me when I change it in a few years' time?

Any minor bugs or code style issues can be fixed later as/when they are found and cause problems.

greymalik · 3 years ago
I think you might be misreading the pyramid? The most important things are at the bottom. It aligns exactly with what you’re saying.
gunnarmorling · 3 years ago
Exactly that. "how hard is it to fix this later" is what drives the ordering in the pyramid.
mannykannot · 3 years ago
While I recognize the problem as real and significant, and I think a hierarchy of concerns is a valid way to mitigate it, I feel there are a few problems in this particular pyramid.

Of all the issues that might be raised in a review, which are the most important to fix? I find it difficult to imagine a ranking in which incorrect functioning (including security vulnerabilities and truly inadequate performance) are not at the top (answering the ranking question with "all of them" would just be a way to avoid the issue.)

In this pyramid, issues of this nature are to be found at all levels except the top one, mixed in with more-or-less subjective ones, such as "Is a new API generally useful and not overly specific?" - pointless flame wars have erupted over issues such as this (also, orthogonally, code review is late in the game to be asking this one.)

For a review to be successful, its participants need to be able to restrain themselves from going down rabbit holes that could not lead to necessary rework - or it needs a strong moderator, if the individual participants cannot act with adequate self-restraint.

th3iedkid · 3 years ago
Another aspect: the pyramid makes code review look like the only thing around the actual value outcome. There are a lot of other practices, like pair programming, that eliminate most things in the pyramid, taking them out of scope for a review process.
ryathal · 3 years ago
I think you are getting too into the weeds with these complaints. This hierarchy is pretty good. Performance and security are important, but they aren't always a full stop, whereas a change that breaks the API should always be a full stop. Each higher level is less likely to be a hard stop, and the border between levels is necessarily somewhat grey.
mannykannot · 3 years ago
I'm not so concerned with the broad levels, but the specific items within them. So, for example, "is a new API generally useful and not overly specific?" is way too broad a question to elicit only breaking errors in the API, and is an invitation to exactly the sort of loss of focus that the author rightly deprecates.

WRT performance, I specifically wrote "truly inadequate" in an attempt to distinguish between breaking and merely undesirably bad performance, but that is a somewhat vague line - unavoidably, I think.

Off the top of my head, the only sort of security 'problem' that can safely be put aside would be one that cannot lead to serious or irreversible harm, or requires the organization to have already been terminally compromised.

That I have to make these distinctions is, in fact, the basis of my concerns. They follow from two premises: firstly, the purpose of an inspection is to find problems that need to be fixed, and secondly, whether a problem falls into that category is orthogonal to the questions of what functional, structural or operational categories it falls in.

pcmoney · 3 years ago
If your team does not share a subjective understanding of what makes their product/API useful, none of this matters. Focusing on objective technicalities is the least useful aspect of code review. If you have someone who is constantly like "Well, that's subjective" and derails conversation that way, you need to coach them or move them off the team.
mannykannot · 3 years ago
Are you saying that if you enter a code review without having a shared subjective understanding of what makes your product/API useful then the review is not likely to be successful, or that code review is the place to form such a shared subjective understanding? I can agree to the first interpretation, but about the latter I would say that code review is quite a bit later than optimal, and that there are other, better ways of achieving it.

If your organization is such that you cannot even discuss these issues until you have working code, then you have other, bigger, problems.

AngeloR · 3 years ago
I've been thinking about code reviews a lot, and I think a big problem with existing code reviews is that for most orgs, code reviews were a way to move planning to the end of the development process instead of keeping it up-front.

I think the foundation of the pyramid (API and implementation semantics) is often discussed in code reviews when it should have almost no place there. By the time you get to the code review, these things should already have been ironed out and shared with the whole team. What you should be looking at is simply adherence to the plan, and calling out deviations.

I wrote a small rant about this: https://xangelo.ca/posts/code-reviews-are-failure/

gunnarmorling · 3 years ago
> is often discussed in code reviews when it should have almost no place. By the time you get to the code review not only should these have been ironed out and shared with the whole team.

That's a fair point. It's my background in open source, and I suppose the specific ways we work in the projects I'm involved with, that made me add these things at the bottom. Oftentimes, contributors come up with PRs for new features with no prior discussion about API design etc. (although we highly recommend having such a discussion before starting work on a large PR).

I.e., oftentimes a PR review will be the first time I see how something is implemented, what the API looks like, etc. It may be different in more controlled settings, with in-house development teams adhering to more formally defined processes.

AngeloR · 3 years ago
This is 100% a side-effect of GitHub. Open Source used to have the additional barrier of mailing lists to detract from "drive-by PRs". The side effect, of course, is that it made it much harder to get involved. GitHub really brought a lot of new eyes on open source software and lowered the barrier to contribute exacerbating (an argument could be made for creating) the drive-by PR problem.
mpalczewski · 3 years ago
First thing I review is readability.

Code should be readable. Once code is readable it makes reviewing the rest much easier. Readable code is more maintainable and problems jump out at you.

Stuff other than readability is important, but if you focus on readability it makes the rest of the review go very smoothly.

convolvatron · 3 years ago
unfortunately, 'readability' is fairly subjective. I was part of a group recently that spent a good couple of hours every week arguing whether 'ctxt' or 'context' was more readable.

when confronted, they explained to me, as one would to a child, that they cared deeply about code quality

_nhynes · 3 years ago
That's ridiculous.

Obviously `ctx` is the most readable.


mpalczewski · 3 years ago
> when confronted, they explained to me as one would a child, that they cared deeply about code quality

these people sound useless.

Readability as it applies to code is how easy it is to read the code and understand it in relation to how complicated the problem domain is.

Bjartr · 3 years ago
The best choice is whichever is more consistent with the surrounding codebase. For an outsider to the codebase, you could argue that 'context' is better, but to someone familiar with the codebase, if contexts are always in a variable 'ctxt', then 'ctxt' will instantly parse correctly without a hitch, since it'll be interpreted as a single symbol.
TheCoelacanth · 3 years ago
There's a simple test for readability. Someone familiar with the codebase reads the code and tries to understand what it does (a.k.a. a code review).

If it takes a low effort, then it's high readability. If it takes a high effort, then it's low readability.


thanhhaimai · 3 years ago
+1, I also focus first on readability while reviewing code.

Software engineering is a collaborative process. I don't write code for the machine; I write it for my colleagues and my future self to read. Code we wrote 6 months ago looks like someone else's code. Readability is important unless you're convinced that it's prototype/throwaway code.

After readability, I'd look for tests. It's a theme: good tests are also meant to be read as documentation. Besides ensuring correctness, they're also the examples my colleagues or future self can refer to for the module's usage.
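A toy sketch of what "tests as documentation" can look like (hypothetical function, not from any real codebase):

```python
def slugify(title):
    """Turn a post title into a URL-safe slug."""
    return "-".join(title.lower().split())

# The test doubles as usage documentation: a reader sees at a glance
# what inputs the function accepts and what it returns.
def test_slugify_examples():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Spaces   Collapse  ") == "spaces-collapse"
```

A new hire can read the test and know how to call the module without opening the implementation.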

seadan83 · 3 years ago
This is where I have found review to be really tricky. I have reviewed so much unreadable code that I lose faith sometimes. More to the point, readability issues are often perceived as nitpicks. Eventually, after a few rounds of readability issues being addressed, I can tell what the actual code is doing. At that point the major suggestions and some rework come out, but now the author feels burned and nitpicked, and instead of being done, their "perfectly functional" code needs more work and the whole thing is not done. Do this a few times and relationships begin to break down.

Hence, I now try really hard to ignore readability until the end.

I don't know if this is partly just hostile developers, or different understandings of code. I.e., is code for humans to read, or for machines to execute? Or is it dealing with feature-factory developers that just want to keep adding despite how brittle the code is becoming? I presume the problem is somewhere in between, but to another extent it's the normalcy of the absurd, where it is normal to spend 5 minutes per line of code to understand it, normal to hack in features and do manual regression testing, etc.

nine_zeros · 3 years ago
If the code is doing the wrong thing in the first place, no amount of nitpicking on readability will save time.

First pass: Functional correctness

Second pass: Better ways to do the same thing (optional)

Third pass: your favorite nitpicks.

sgeisenh · 3 years ago
I think you're missing the point: it's incredibly hard to determine functional correctness of unreadable code.

So the first priority is getting the change to a readable state. I'm not talking about nits, I'm talking about minimizing the cognitive overhead to truly understand what the code is doing.

That being said, it is also easy to identify nits during this phase. In codebases that have collections of best practices, this often helps to ensure that the change is expressed in terms of a common vocabulary. Once this pass is done, the reviewer can usually better understand the intent of the changes.

thanhhaimai · 3 years ago
Your post assumes that readability review is nitpicking. It's actually about making the code idiomatic, consistent, predictable, quickly understandable, and low mental stress to read. The "nitpicking" parts you referred to are mostly about ensuring code consistency and predictability. But there is more to readability than that:

1) Did the code use a third-party library where a standard lib is sufficient? Is there a way to use native language features (e.g. list comprehensions) to make the code more idiomatic to people familiar with the language?

2) Do the names make sense in the business context? Can a new hire read just the public interface and be able to guess the usage?

3) Do people have to jump around and full-text-search the code base to understand what's going on? Is automatic dependency injection being overused? How many things do people need to keep in their head to be able to follow this code?

It's all readability there, and I'd argue it's not only about style or formatting.

pc86 · 3 years ago
This structure (especially the optionality of the second pass) is how you end up with perfectly readable code iterating over List<T> when a hash map would suffice and be orders of magnitude more performant.

Nobody can do it perfectly, but I think trying to keep personal nitpicks out of reviews as much as possible - ideally by codifying team nitpicks in automated formatting/.prettierrc/whatever - lets people focus on things that matter: what the code does and how it does it.
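A minimal Python version of that List-vs-hash-map point (toy data, hypothetical names): both functions are perfectly readable, but the first scans every element per lookup, while the second pays a one-time build cost and then looks up in constant time on average.

```python
# Toy data: (name, id) pairs.
users = [("alice", 1), ("bob", 2), ("carol", 3)]

def find_in_list(name):
    # O(n) per lookup: scans the pairs until one matches.
    for key, value in users:
        if key == name:
            return value
    return None

# One-time O(n) build, then O(1) average per lookup.
users_by_name = dict(users)

def find_in_dict(name):
    return users_by_name.get(name)
```

At a million lookups the difference is orders of magnitude, which is why the second pass shouldn't be optional.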

GaveUp · 3 years ago
I don't know if I agree that the second pass is optional. I've found that pass is the one I've seen have the most benefit, particularly with newer developers. It serves two purposes when done well.

First, it gives you an idea of how well thought out the implementation was (i.e., was this a quick hack to just finish an asked-for requirement, or was the best design targeted?). It also helps newer developers develop a voice. Often, at the start, newer devs will take what a more senior dev says as gospel, but by striking up a conversation and, in some ways, making them defend their choices, it can help build the confidence and voice to speak up when something doesn't seem right.

Second, I've found it a good way to introduce people to new approaches to accomplishing tasks. Not everyone spends their off hours studying patterns and practices and rarely is there time during a work day to do this properly so code reviews are a natural place to bring these things up as there's concrete comparisons and examples to work with. That helps spark a dev's interest to look in to the topics further.

mdtusz · 3 years ago
As a counter argument, it's much easier for a developer to write code that does the right thing when that code is readable.

Too often I see code that is _maybe_ correct, but the reviewer can't actually be sure of it without manually testing it, and the chances that a future reader in 4 months will have any idea what it does or why are extremely low. Good variable and function names get you 90% of the way there, yet for some reason cryptic code is still all too common in the world.

To me, reviewing for readability isn't nitpicking - it's equivalent to a mechanical engineering design review pointing out that design for maintenance or design for assembly has been entirely ignored.

mpalczewski · 3 years ago
yes, the strawman of nitpicking has been killed.
simonbarker87 · 3 years ago
I like the attempt at making this process smoother. I guess where to place the importance of different aspects relative to each other is always down to opinion and interpretation.

I wrote this piece a while ago focussing on using the Must/Should/Could principle to speed up the review process and put a less opinionated framework in place to help keep code review moving.

https://careerswitchtocoding.com/blog/moscow-the-best-code-r...

It’s not an original thought from me but many people hadn’t heard about it and off the back of this post I’ve had a few people get in touch to say they’ve implemented it in their team and it’s helped improve code review speed.

sz4kerto · 3 years ago
Ours is different.

Tests (end-to-end and integration) are at the bottom. If there are no tests that prove that the code works, we won't really look further, because we don't even know if the code implements the right thing. Then come interfaces, then implementation, then style.

sublimefire · 3 years ago
What's more, the tests will highlight the API and the expectations (see API semantics in the pyramid). By nature, the process of writing tests will usually catch some bugs. I'm not arguing for the tests to cover every possible scenario, but they must exist. If, on the other hand, it is hard to add the tests, then it probably means you have bigger problems.
makecheck · 3 years ago
An important one often overlooked (due to our tendency to focus on just the “diff”): what else should be changing or could be affected?

Very easy to forget to change code that isn’t highlighted in the review.