Readit News
nojvek · 3 years ago
The core issue here is reviewer saying “if you change this, you also have to fix other pending issues in codebase”.

That should be the point of push back: “Great feedback, and appreciate the direction to improve code quality; however, the implications of changing Y mean we need X, Y, Z approvals and it will take many days. I am making a tech debt task with what you described and it will be done in a follow-up PR based on priority and bandwidth.

Let’s focus on what it would take to get this surgical PR out.”

The biggest lesson I learned is to make focused PRs, and to push back when a reviewer asks for scope creep. Mostly other engineers have been pragmatic.

It’s unrelated to the number of lines: you can reformat the whole codebase without changing anything logically, or change some feature flags which could have a huge impact. But ONE FOCUSED CHANGE AT A TIME.

Paul-Craft · 3 years ago
> The core issue here is reviewer saying “if you change this, you also have to fix other pending issues in codebase”.

I don't agree. IMO, the worst issue on display here is that, of the "6 days to change 1 line of code," almost half elapsed before an engineer ever looked at the issue. If this is such a high priority issue that the company would "need" (scare quotes intentional) to do a layoff if it's not implemented soon, those 2-3 days before anyone looked at it should never have happened. And that is apparently what passes for "the fast track" in this development process.

And then there's the 2 days at the end of this 6 day period where it looks like nothing happened, because the test plan was deemed insufficient.

"If you change this, you also have to fix other pending issues" only took up 2 hours here. There are at least 2 or 3 other things I could fill in here that I'd flag as core issues with this process before I'd even think about dealing with this part of the process.

nojvek · 3 years ago
You are correct. Orthogonal to the PR process is the incompetence of management in identifying key stakeholders and communicating the urgency and importance.

It should have been: some VP realizes how important this is, gets the security folks, testing folks, senior devs, and team manager in a room, and tasks them with figuring out how to solve this very specific problem before it triggers a layoff.

I didn’t really understand the difference between good and bad management until I had multiple different managers and VPs over a decade at different companies.

The difference between good and great is night and day once you witness an exemplary communicator who can get the right people together and give them focus.

figassis · 3 years ago
Usually I avoid making any improvement that is not directly related to the tasks at hand. Even adding a missing semicolon risks bringing this to the attention of an overzealous reviewer that will then make me follow a rabbit hole of legacy fixes.

I’d rather create issues silently so I don’t forget instead of even adding a FIXME or TODO. I think this part of reviews is broken. Resolving tech debt should not be a requirement for completing a task, it should be planned.

tremon · 3 years ago
> Resolving tech debt should not be a requirement for completing a task, it should be planned.

Such tasks will never be planned because there's always higher-priority tasks to be done. I actively encourage my developers to clean up the code they'll be working on, otherwise you're just piling tech debt upon existing tech debt.

The only exception is repository-wide refactorings: if those are necessary to implement a feature, we block the feature and create a prerequisite task outlining the refactoring needed. But this is mostly to aid the review process, in my mind these two tasks are still one (and if I work on it myself, I usually already have the feature half-implemented so I can double-check that the refactoring does indeed make the feature easier to implement).

Tainnor · 3 years ago
I fundamentally disagree. If every minor fix needs to be planned, it never gets done. The boyscout rule wins, and if your reviewers are not pragmatic, then that's an issue with their attitude.
hinkley · 3 years ago
Scope creepers have no idea of the architectural damage they cause. When you get overinvested in one block of code, people will try to route around it. Then you get layers of these and pretty soon your code is the moral equivalent of Atlanta, GA, which is notorious for the number of ring roads it has.
mabbo · 3 years ago
An even better solution, imho, is that rules should be automated.

New rules are added, and the automation adds rule override comments to every existing violation - which also lets you track them. When you need to rush code out that needs to violate a rule, you add an override comment as well, with your name assigned to fix it.

Then over time, you build a culture that wants to fix these rule violations separate from feature development.
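As a sketch of what those automated override comments might look like (flake8's in-line `# noqa` code syntax is real; the owner/tracking-tag convention and the function here are invented for illustration):

```python
# Hypothetical example of the automated-override pattern: when a new rule is
# enabled (here flake8's E501 line-length check), each pre-existing violation
# gets a suppression comment plus a tracking tag, so it can be fixed in a
# follow-up instead of blocking the current change.
def legacy_report(rows):
    header = "id,customer_name,order_total,shipping_address,billing_address,created_at"  # noqa: E501  TODO(cleanup): wrap this line
    return "\n".join([header] + [",".join(str(col) for col in row) for row in rows])
```

Grepping for the tag (e.g. `git grep "TODO(cleanup"`) then gives you a free inventory of the outstanding violations to burn down.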

foogazi · 3 years ago
> The biggest lesson I learned is to make focused PRs, and to push back when a reviewer asks for scope creep.

Whenever that happens just add a TODO ticket. Unblock production and the system is not worse for it

rad_gruchalski · 3 years ago
“I am very sorry to hear but my job depends on making sure we follow our procedures. Please follow outlined procedures before your change is promoted to production. Thank you for your cooperation. Sincerely, your IT department.”
guluarte · 3 years ago
it is not about code quality, it is about power lmao
crop_rotation · 3 years ago
That is absolutely true. Most companies have a code review process that is totally full of nitpicking and trivial comments. I once suggested replacing that with static analysis tools to remove such comments and make feedback faster in our org, and was told that such code review is necessary for everyone. It helps people get promoted, makes people feel they prevented code issues, and keeps the code review metrics looking good for the top-level overlords who sit and monitor the number of reviewer comments.
arp242 · 3 years ago
> replacing that with static analysis tools to remove such comments

I dislike excessive usage of these kinds of tools because not infrequently you end up making your code worse just to satisfy some dumb tool.

The real solution is just accepting that not all code needs to be like you would have written it yourself, and asking yourself "does this comment address anything that's objectively wrong with the code?" A lot of times the answer to that will be "no".

ddejohn · 3 years ago
I suppose it depends on what kind of tooling you specifically are talking about, but I strongly disagree in the case of Python. When you have such an "anything goes" language, it really helps to have a tool that enforces consistent style across a large group of developers.

Unpopular opinion: I think individualism showing in code is a bad thing. Code should be textbook boring with as little personality showing as possible. This, of course, is due to my experience working in a department with ~40 developers. I work so much faster when I don't have to decipher "clever" code or somebody's personal style for how to line break their over-complicated comprehensions. It slows everybody down when there are 40 different dialects being "spoken". Homogenizing our code with linters and formatters means that any part of the codebase I jump into will look completely familiar and all I have to do is focus on the logic.

a_cardboard_box · 3 years ago
Human reviewers can also demand your code be made worse. Sometimes, the choice is making your code worse to satisfy a dumb tool (software), or making your code worse to satisfy a dumb tool (person).
jacquesm · 3 years ago
This is exactly it. Most code reviewers make other people jump through hoops to force them to code 'their' way, whereas more often than not there are multiple ways to solve a problem neither of which is substantially better than the others. In the rare cases where there is a much better way of doing something there won't be anything to argue about, it will be obvious.
renegade-otter · 3 years ago
It doesn't have to be excessive - it has to be reasonable. But I agree, outside of the strictly-enforced automation, just let it go and stop going "nitpick!"

The less communication between devs during code reviews - the better. It's exhausting to be diplomatic to your coworkers asking them to change something. It's a minefield full of feelings and egos.

RHSeeger · 3 years ago
Where I work, we have static analysis tools.. plus the ability to "override" (tell the tool to ignore this case) when we feel it makes sense ("I know what I'm doing here"). And then peer reviews to help make sure those overrides make sense.
Pannoniae · 3 years ago
The main thing is that static analysis tools, strict code style and whatnot put both a floor and a ceiling on code quality. It will be a bit harder to write complete garbage, but at the same time, it has a negative impact on code quality because many of those rules are completely nonsensical and decrease code readability. Also, it puts a severe limitation on expressivity, which is a negative thing if you want your programmers to actually enjoy their job (which increases productivity...)

Of course, if all you care about is the fungibility of your programmers, then having those overly strict rules completely makes sense, but if you want to produce a quality product, not so much.

flippinburgers · 3 years ago
Working on codebases - like rails without rubocop - is not something I would like to consider. There is already too much variability in how people code.
dharmab · 3 years ago
I use these tools but make sure to turn off any subjective checks in their configurations.
Macha · 3 years ago
Sometimes there's a real prisoner dilemma thing going on here too. As a senior reviewing a junior's PR there are things that could be better but sometimes it's just not important. Are their variable names a bit too wordy? Did they have uneven amounts of whitespace between methods? In an ideal world these are more a "feedback for next time if it becomes a pattern" type thing. But as a reviewer, someone is looking at your comments per PR as an indicator for how much guidance you're providing, or going "oh who let that get merged" so you make the comment anyway. And then as the reviewee, they're concerned that leaving comments unaddressed looks bad for their level of responsiveness to feedback, or they think the reviewer might give them bad feedback for pushing back, so they'll then go and make the changes. But now they need someone to approve the updated version and the latency cycle starts again.
delusional · 3 years ago
I always make it very clear in my comments if something is a blocker (this can't be merged before this is fixed) or a note (this would better fit the style of the existing code), and then i make it very clear in person that I do not take it as a personal affront if they don't fix my note comments. I find that this works pretty well. It has the additional positive effect of opening me up to criticism. If they understand that not all my comments are always vital or "right" I imagine it's easier to dispute anything else I say too.
GrinningFool · 3 years ago
In some environments, that is true. But the process of review can also help build shared knowledge and understanding of the changes and the code base.
MrJohz · 3 years ago
I'm increasingly of the opinion that this should be the primary value of code review. The goal is to make sure that multiple people working on a codebase have at least some idea of what's going on in every corner of it. Obviously that's never going to be perfect, and you won't understand code as well by reading it as by working with it, but at least having some sort of familiarity is invaluable.

The other important benefit is catching stupid mistakes - formatting, missing code, > vs >=, etc - dumb mistakes that everyone makes. But ideally most of those mistakes are caught by types, tests, and linting, so the code review just makes sure that a different pair of eyes has sanity checked things. If code reviews become mainly about this sanity check, then it's often a sign that the types, tests, or linting are inadequate (or that someone is being too picky).
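For instance, the `>` vs `>=` class of mistake is exactly what a boundary test pins down before a reviewer has to; a tiny hypothetical (the function and threshold are made up):

```python
# Hypothetical boundary bug: the spec says 100 units or more gets the bulk
# discount, but a careless `>` would silently exclude exactly 100.
def qualifies_for_bulk_discount(units: int) -> bool:
    return units >= 100  # the buggy draft read `units > 100`

assert qualifies_for_bulk_discount(100)      # the boundary itself
assert not qualifies_for_bulk_discount(99)
```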

The other problem is when reviews too often focus on the big problems - architectural discussions about how to implement a feature - in which case there's usually a need to have more architectural discussion before implementation starts.

Philip-J-Fry · 3 years ago
Nitpicking is definitely a real thing. Perhaps it's the feeling that they need to find something wrong with the code.

But there are also very real issues that one person thinks are nitpicks but really aren't. Perhaps because they can't see the issue with their own eyes, they don't understand it, or they lack the ability to leave feelings aside and rethink what they've written.

We've all been attached to code we've written. We probably think it's the most elegant code ever written. But sometimes you've just gotta admit you were wrong, it's not readable or it's flawed and it will be worse off for the codebase.

I've also been called a nitpicker by someone more senior than me for pointing out some very real and potentially problematic race conditions in their code. To me, a race condition is a fundamental issue with what has been written and should be fixed. But to them it was acceptable because they hadn't seen it break naturally yet.

nickjj · 3 years ago
> Nitpicking is definitely a real thing. Perhaps it's the feeling that they need to find something wrong with the code.

You can use code formatting and linting tools to automate the low-hanging fruit, but there are still opportunities to offer feedback that could be considered nitpicking but IMO isn't.

For example variable names, breaking up long functions, not enough test coverage which hints maybe the author didn't consider these cases or changing this type of code:

    if not something:
        ...  # Unhappy case
    else:
        ...  # Happy case

To:

    if something:
        ...  # Happy case
    else:
        ...  # Unhappy case
If you have 10 devs working on a project you're going to get 10 different ways of thinking. If you're working on a 10 year old code base with a million lines of code and you don't address things that could sometimes feel like nitpicking then you wind up with a lot of technical debt and it makes it harder to work on the project.

I like when people comment on my code and find ways to improve it in any way shape or form. I don't care if they have 5, 10 or 15 years less experience. I treat it like a gift because I'm getting to learn something new or think about something from a perspective I haven't thought of.

RHSeeger · 3 years ago
Static analysis tools and peer reviews can catch different types of issues, though. Just like a statically compiled language can catch some bugs that a dynamic language would not; but not everything. I'm a big fan of peer reviews, and I tend to focus my comments on

- This won't do what you appear to be expecting it to

- This will prevent <some other feature we're likely to need> from being implemented; at least without a much higher cost

- This will work, but is very hard to understand; it will negatively impact maintenance. Consider doing it this other way, or adding an explanation about what's going on

- This code is ok, but might read/work better <this other way>. I'm not failing the review because of it, but it's worth keeping in mind for future code.

Waterluvian · 3 years ago
This is what I implemented prior to my team growing beyond 1 person. If linting passes, it’s valid style. If you think the style is problematic, that’s an issue with the linting and formatting and not the PR. In practice this hasn’t been a problem. We’ve made 2 or 3 linting PRs over the years.
mtsr · 3 years ago
It’s why opinionated tools like rustfmt are so good. They set and enforce a standard which is good enough that even perfectionists don’t feel like arguing them ad nauseam.
hereonout2 · 3 years ago
Introducing linting in the test stage (flake8 with python) solves so many problems.

By completely removing any personal preferences and disagreements on style, the whole team just gets to hate on and berate an inanimate tool instead.

We all just seem to equally dislike flake8 and move on with our lives.

Tainnor · 3 years ago
Many code bases that I've worked on suffered from too much, and badly engineered, code - not from not having enough code.

Yes, you can go to extremes, and some people are annoyingly nitpicky when it doesn't matter. But generally, code is often a liability, stuff that is hard to understand (or even potentially subtly wrong) will cause issues down the line, and so on. Code review helps mitigate these issues, something which I think is one of the few empirical results we actually have. Plus, it also spreads knowledge about the code base.

The submission is in any case not pointing out that code review itself is bad (or even that having to adapt to a new code style is the problem), but rather the whole bureaucracy around it. Code reviews are good when the turnaround is swift.

ReaLNero · 3 years ago
> Many code bases that I've worked on suffered from too much, and badly engineered, code - not from not having enough code.

Because the projects that weren't built fast enough were eventually thrown away, meaning they never require maintenance, which is the source of the sampling bias you are observing. I've seen it happen in some cases (engineer with the wrong personality leading an R&D project).

ryukoposting · 3 years ago
I've worked in spaces where static analysis tools were run automatically on every new PR. Trust me, it's not as good as it sounds. Static analyzers are fully incapable of detecting the more nuanced mistakes, so a human touch will still be necessary. Nearly all of the "bugs" found by the static analyzer won't actually be bugs, but you'll have to "fix" them anyway because the reviewer (again, you'll still need one) will demand that all the warnings be cleared before approving your code.

Build a culture that prefers succinct, non-nitpicky code reviews. Static analysis tools only give reviewers more crap to nitpick.

NavinF · 3 years ago
> Nearly all of the "bugs" found by the static analyzer won't actually be bugs

Which static analyzer is this? Every tool I've used only finds bugs that are provable, so the false positive rate is essentially zero.

javajosh · 3 years ago
It depends on your team. If you have good juniors, they will relish the feedback. Of course it needs to be substantive. Often junior code will be far more complex and difficult to read than senior code, and code reviews are a chance to get that feedback across.

That said, often teams don't give credit for code reviews, or take into account the (sometimes considerable) time required to write them. To some extent, tools are also to blame - I think all teams would do well to pick a system that integrates with the dominant IDE. Doing code reviews in the same space you write code just makes sense; requiring that you use a friggin' webapp is HORRIBLE.

crop_rotation · 3 years ago
The thing is, almost all the trivial comments can be easily automated. That would both give the junior dev quicker feedback before even raising the review, and save time for everyone.
ghqst · 3 years ago
As an intern doing work in an unfamiliar framework, it was really valuable to get code review from seniors about how to better go about solving a problem structurally. Not something static analysis would've caught.
Forricide · 3 years ago
I _kind of_ understand in the sense that - it can feel bad/like wasted time to do code review when you don't leave a single comment.

But of course, it's very very important to ignore that feeling and do the sensible thing, which is be happy that the code review is smooth and the code can be merged. I can't imagine the existential dread of realizing that half your job is pointing out the same common errors every single time. Tools like clang-tidy, clang-format, etc are so vital for staying sane.

lrem · 3 years ago
I’ve reached the point where my reviews tend to be either instant LGTM, or “please formalise the idea behind this into a design doc and submit for review to the TL forum” (I.e. a polite “I think you don’t know what you’re doing”). The usual “how about ${BETTER_NAME}” can be left optional, there’s very rarely anything actually bad by the time people submit for review. Is this a rare culture?
jpollock · 3 years ago
This week during code reviews, I caught:

1) O(N^2) in the hottest code we have.

2) O(N^3) from someone else in the same area.

3) Filling a cache on first access - this causes the system to page (high latency!).

4) Hunting for data themselves and creating null pointer errors.

5) Hunting for data themselves and missing all the fun edge cases.

6) Duplicating a feature that already existed.

I'm guessing (4) could have been caught by static analysis?
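For what it's worth, (1)-style accidental quadratics are often as innocuous as a list membership test inside a loop; a generic Python sketch (not the actual code from that review):

```python
# Accidental O(N^2): `x not in seen` scans the whole list once per element.
def dedupe_slow(items):
    seen, out = [], []
    for x in items:
        if x not in seen:   # O(N) scan per item -> O(N^2) overall
            seen.append(x)
            out.append(x)
    return out

# Same behavior in O(N): a set gives O(1) average-case membership checks.
def dedupe_fast(items):
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out
```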

regularfry · 3 years ago
Possibly (6), depending on the nature of the duplication.
swader999 · 3 years ago
If you can't change the Company, change your company.
BSEdlMMldESB · 3 years ago
If it is "your" company but you cannot change it, I'd say it is not really your company; rather, you are just its employee, and not the other way around.
AdrianB1 · 3 years ago
I am doing code reviews for a large manufacturing company with dozens of plants; that code is supposed to be deployed everywhere, so the entire company is at stake.

I insist on doing code review in areas where static analysis tools are completely useless, like in SQL code: without context, any SELECT can be good enough to pass static analysis, but crash in production due to performance problems.

Nobody was ever promoted in my company for doing code reviews. None of the people that were ever promoted know how to do any sort or code review. I can say that the correlation between promotion and code reviews is negative.

dkarl · 3 years ago
I agree that formatters and static analysis tools are the best way to reduce trivial PR comments. However, I would say that the kinds of things addressed by formatters and static analysis tools, and similar issues that tools don't catch such as misspellings, are worth addressing in PRs. "Nitpicking" is in the eye of the beholder, and your company might have a particular culture problem that I haven't seen, but many of the things I've seen called "nitpicking" are legitimate readability issues.

For example, a field named "customers" that holds a single customer can be a major readability hurdle for someone who is looking at code for the first time. Even if you discount readers' annoyance and discomfort, which I don't think you should, a single field named "cusotmer" can become a drag on readability if the misspelling gets amplified in the codebase or if some programmers on the project aren't native English speakers. (It can be much harder to mentally gloss over language errors that aren't in your native language.)

Something I learned in school is that it's extremely hard to read something you've written as another person would read it. I had an English teacher that would split the class into pairs every time we had a paper to turn in, and in each pair, you would read the other person's paper out loud to them. The results were comical, and to most students entirely unexpected. Their whole school careers, they had thought teachers were being mean and nit-picky, until they got to see their friends struggle to read what they had written. Then they started to understand that a wrong word, a misspelling, a wrong verb tense, an ambiguous pronoun, could stop a reader in their tracks or send them down the wrong path trying to understand what the writer meant. They also learned that they could not see these issues in their own writing. When they read somebody else's writing, they would stumble over errors, but in their own writing, they had to make a special effort to see them, and even then, readers would effortlessly "discover" errors that the author had missed.

For many programmers, PRs are their first and only opportunity to develop this empathy for readers, and if they reject the feedback, they're going to remain forever in the dark about how other people experience their code.

qudat · 3 years ago
Agreed. We don’t need human linters: https://bower.sh/human-linting
PartiallyTyped · 3 years ago
I have been told that we need 2 people to do code review, in a team of 3 people, and we can just “ask a member from another team who works on the same product”. We were told that outsiders can act as external bar raisers who will comment on “overall flow and code quality”.

Sounds like glorified and highly opinionated linters who will just gatekeep progress because they “don’t get it”.

avidiax · 3 years ago
> keeps the code review metrics looking good for the top level overlords who sit and monitor number of reviewer comments

Wouldn't the top level overlords want better code review metrics? Padding your stats with comments that can be handled by static analysis makes such noisy metrics even more noisy.

crop_rotation · 3 years ago
Running SQL queries on the data warehouse and generating reports saying very few comments on reviews is much easier than bothering to look into the comments. I ran into a high level engineer making very high amounts of money who would just do such vague things all day and claim the org needed more comments.

pjmlp · 3 years ago
I have given up on review process.

Just implement whatever clean code guidelines everyone is keen on having, with their dependency injection guidelines of an interface for every single class, and move on with the actual work instead of burning discussion cycles in pull request reviews.

politelemon · 3 years ago
> Julie: Then contact Joe in IT Security. He'll get you permissions.

> 2 hours later.

This is completely unrealistic, security teams would never respond so quickly.

switch007 · 3 years ago
Unless it’s them getting a P1 security alert because you ran “npm install”
fargle · 3 years ago
our security team actually responds even faster. they simply deny all requests automatically, but immediately.
542458 · 3 years ago
Hah, I have a very different experience where I work. At my org every time I submit a ticket requesting access for somebody to such-and-such a system it typically gets done within a couple minutes regardless of what priority I give. I’ve sometimes wondered if help desk staff snap them up as soon as they come in because they’re fast tickets to close and lets them improve their personal metrics.
aezart · 3 years ago
It takes multiple weeks to get someone added to the AD group required for wiki edit permissions.
seanthemon · 3 years ago
2 programmer hours, obviously
bsuvc · 3 years ago
It sucks when you say it like the title: "6 days to change 1 line of code",

But... The system improved in a few other ways:

1. The setting became configurable via the parameters table instead of being hard coded.

2. Auditing was introduced to track changes to that setting.

I'm not trying to defend bureaucracy, because I truly hate that aspect of large organizations, but just point out that additional value was created during those 6 days beyond what was originally set out to be achieved.

Incidentally, this is why you have to bake in a set amount of overhead into your estimates, and need to take this red tape into consideration if you do story pointing.

yuliyp · 3 years ago
The only reason the parameters table would be useful is because making changes to the code had so many things blocking it. Similarly, auditing for this sounds unnecessary because it used to be in code which means that you had source control as your audit trail.

So your two wins were basically a "win" of not dealing with extra ceremony around code changes, followed by a "win" of recovering the functionality loss of the first "win" because future changes to this wouldn't go in the code.

axpy · 3 years ago
Yes, but it also did something that is potentially way more dangerous than the original ask. I would argue that promoting a hard-coded value to a parameter during an immediate outage or real production concern is stupid. There are way more potential pitfalls there. IMO, this should have been handled with a:

OP: Please accept the 1 char PR, this is urgent. I just created a ticket to track the requested enhancements. Let's address the production concern first, then I will get to the rest after.

Reviewer: LGTM!

This is actually where seniority pays, if most of your engineering base can't navigate between rules and guidelines, the organization is crazy.

mesarvagya · 3 years ago
This. If it is a production issue that needs to be fixed, fix it fast with priority and create another ticket to take care of reviewers comments.
ebiester · 3 years ago
Step 1: determine true priority. How long can this take before it affects jobs? Everyone needs that in mind.

If a week won’t affect anyone’s jobs, follow the process or minimally alter it. If people are on furlough because of IT, you have everyone needed in the same room (physical or virtual) until the problem is solved.

We don’t have that context here.

But if Ed and the entire chain don't have that context, that's a system failure. The senior would likely just insist on a second ticket to fix it right after if he knew someone’s rent check was on the line. (And if not? That’s also a management issue to solve.)

bsuvc · 3 years ago
That's a good point.

The urgency of this problem was not communicated broadly enough and the impact of the simple fix was not communicated back up the organization so that it could be prioritized correctly.

It seems like Ed is just being "helpless" and blaming the organization when maybe he could have raised the issue back up to the president saying "my 1 line of code fix to your problem is being blocked by these people. Can you help?"

This was a communication problem as much as anything.

pif · 3 years ago
> It sucks when you say it like the title: "6 days to change 1 line of code"

Which just states the fact.

> The system improved in a few other ways

Which were not required.

bsuvc · 3 years ago
I think I agree with you, but to play devil's advocate:

> Which were not required

Sure they were not required by Ed, but they were required by other stakeholders who are responsible for testing and maintaining this system.

I agree though, it would be frustrating to get into this sort of "yak shaving" situation where you don't even care about all these other requirements that are being added.

Like I said in a sibling comment, this is really more of a communication problem on Ed's part, and maybe he could have neutralized all these problems with better communication.

omoikane · 3 years ago
Auditing requirements could have been covered by version history for the file containing that hardcoded value, I think? They would have other problems if they didn't have version control.
cjalmeida · 3 years ago
And it just cost 6 x 10% x $MM worth of daily products! /s
hmaarrfk · 3 years ago
This is a story where a one line change of a hardcoded value actually went well.

I could imagine a scenario where somebody stored the number of months of backlog as a 2-bit value: 0, 1, 2 or 3, you know, to be smart and clever. This may not appear as a problem during testing because it may be hidden many layers down, in some downstream service that is untested. Maybe in some low-code automation service....

Changing it to 4 would mean the backlog is 0. Who knows what the consequences might be. Would that service go and cancel all jobs in the production queue? Would it email all customers mentioning their stuff is cancelled?
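To make that hypothetical concrete (the field layout here is invented): packing the value into 2 bits means 4 silently wraps to 0.

```python
# Toy version of the imagined bug: backlog months stored in a 2-bit field.
FIELD_BITS = 2
FIELD_MASK = (1 << FIELD_BITS) - 1  # 0b11: only 0..3 are representable

def pack_backlog_months(months: int) -> int:
    # Truncates silently, like an undersized bitfield or column would.
    return months & FIELD_MASK

assert pack_backlog_months(3) == 3
assert pack_backlog_months(4) == 0  # "4 months" reads back as no backlog at all
```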

I get that this is a seemingly easy change, but if a change of policy is expressed to the software team as an urgent problem, it seems like the management team needs better planning, rather than randomly trying to reprioritize issues.

woooooo · 3 years ago
None of the requested changes involved more testing or risk-reduction.

They actually increased risk by insisting on refactoring a bunch of nearby things as the "cost" of the change.

morelisp · 3 years ago
The audit trail likely represents actual risk reduction of someone undoing or misunderstanding the change later, since the change has no meaning outside the context of the request.

"Fixing preexisting errors that violate new company policy" also arguably involves real risk reduction; you gotta do that work sometime, and if everyone in the company agrees the time is now, the best time is now.

Using Marge instead of Homer is not "risk reduction" but presumably testing accounting close is also critical.

Tony's request is also reasonable, unless you want to leave the next dev in the same shithole you were in re. the wiki state.

cratermoon · 3 years ago
On the flip side, if nearby things are never updated to match changing understanding of the system, then very shortly the code will be cluttered with possibly dozens of different styles: multiple naming conventions, constants in some places, hard-coded values in others, and values read from a parameter file in others, and other kinds of variations. The result will be a chaotic scramble that has no clear structure, and requires programmers to know and understand multiple different ways of expressing the same business concept.

Now that is truly increased risk.

twic · 3 years ago
There's all sorts of ways it could go wrong. Perhaps the real question is where blame will fall if it does. If the big boss says "I decided to take the risk and push this through, I accept this was a consequence of that", great. If the programmers get beatings, not so great.
hmaarrfk · 3 years ago
The other thing I notice from the story was that something considered mission critical was not given an update within 24 hours.

IT should have volunteered the info regarding how far back in the backlog this was classified as soon as that prioritization was made. "Behind 14" and with many people on the testing side occupied is obviously not going to help with "layoff level priority".

To me, the classification of "enhancement" just doesn't seem to capture the urgency.

stasmo · 3 years ago
I think the correct people and processes were followed, but they could have saved a great deal of time aligning on the importance and priority of the task by putting together a meeting with the leads.

For a time-sensitive and critical update to core functionality, the director of operations should have been aware of the mean time to deployment for the software and put together a team to fast track it, instead of entering it into the normal development pipeline with a high priority.

qxmat · 3 years ago
Knight Capital!
hmaarrfk · 3 years ago
https://www.bugsnag.com/blog/bug-day-460m-loss/

It made me laugh! And cry inside

JoeAltmaier · 3 years ago
Code review starts out with good intentions. But some gatekeeper inevitably sets up shop and starts rejecting everything for trivial reasons. Says they're interested in keeping up 'code quality'. But nothing is worse than keeping buggy code around long after the fix is ready, or delaying features so nobody can try them.

I'd recommend a process where comments are allowed but reviewers cannot block commits. Each developer is trusted to be careful, make such changes as are pertinent to the task. Use CI as well and depending on the team it can all go very well.

willseth · 3 years ago
Then an engineering leader needs to stop that person. Dysfunction can manifest in many ways, and overzealous reviewing is one of them. Changing the process so people can ignore pathological reviewers is a half measure at best.

I’m torn about blocking. I do get the sense that it’s frustrating to get that big red block, so in many cases people ask for changes without blocking - basically a “soft block”. But I’ve dealt with plenty of cases where a PR is so far off the rails (usually a jr. developer), I think it’s appropriate to send a message.

Tainnor · 3 years ago
For me, it really depends on the other person. There are some devs that I trust to carefully consider my comments (even if they end up disagreeing), so unless something is catastrophically wrong, I will still approve.

But there are some other developers who seem content to just routinely (not just sometimes) ignore anything that is not phrased as a command (such as "I think we should test this better"), and I've adopted the habit of not approving such PRs immediately.

black_puppydog · 3 years ago
this works well when coverage/test quality is high. which is in itself not something that magically happens if you just let devs move as quickly as whatever manager thinks is necessary right now.
qudat · 3 years ago
I abhor the “every code change needs a reviewer” rule. Such reviews are a massive distraction and don’t necessarily lead to better code.
Tainnor · 3 years ago
There are empirical results about code reviews and they contradict your intuition: https://en.m.wikipedia.org/wiki/Code_review
maccard · 3 years ago
The rule isn't just for code quality, it's to increase knowledge sharing.
hermannj314 · 3 years ago
My comment is more a meta commentary on factory workers vs software developers.

This company leader is willing to lay off factory workers because of 10% underutilization; he can tweak some variables to increase productivity, but ultimately the choice is either full utilization or unemployment. He can do this largely because, I assume, these workers are replaceable and can be rehired during busy season, and because the profit generated per employee doesn't allow for any inefficiency.

I work as a software developer. Our underutilization has to get well over 90% before anyone is even considering getting rid of us. Many of us are putting in four-hour work weeks. No one manages our minutes, our bathroom breaks, etc.

We are in a period of major capitalization of software. It won't last forever. Eventually the major infrastructure of the IT world will be built and the industry will shift to a maintenance mode. Most of us won't be needed, we will become replaceable and the profit we generate in maintenance mode will be insignificant to what we see today.

Factory workers are usually fired within minutes or hours of detection of low productivity in an individual worker. In our lifetime, I believe this will start happening with software developers.

starcraft2wol · 3 years ago
> These workers are replaceable and can be rehired during busy season

That's exactly the difference. The factory is a system of processes designed to remove decision making and variation from each person.

You have to evaluate the degree to which that could be done with your skillset.

> in a period of major capitalization of software. It won't last forever.

I agree with your basic point-- not every company needs engineers developing novel software all the time. It's more of a boom and a bust creative venture, like producing a movie. If you choose to work in development over IT, you should accept that risk. But I don't see why this time has to be the peak.

zerr · 3 years ago
Anecdotal experience: after some years in a team with formal code reviews, I moved to a team/company without code reviews. Everyone is free to commit/merge to any branch. When joining the company, I had somewhat mixed feelings about this. As it turned out, it felt so refreshing and empowering that I became productive within a few days.
anthomtb · 3 years ago
A while back, I also worked for a team that did "Catholic Code Reviews" (push and pray).

No code review worked very well given the objectives of the team. It was an R&D group whose primary objective was "demo the nifty new thing to the executives". That meant a lot of short-notice requests but also a lot of throwaway code. We'd demo the thing, the exec would be like "looks great, no business case", and the repo would never be touched again. Of course, every now and then our thing WOULD get productized, and the downstream team would be responsible for turning chicken-scratch code into something production worthy. Those guys hated us with a burning passion.

swader999 · 3 years ago
I've seen this work really well on small teams with a high degree of trust and ~80% test coverage. It was a no-PR process: just merge with master when tests pass, the UX (if applicable) has been demoed successfully to stakeholders, and you are satisfied with it. New team members were assigned a mentor who sat beside them and frequently paired and reviewed their code for the first 2-3 months.

Was a 2.5-year project, went live at 20 months in, was on time and met budget with more capability than originally scoped.

Many days had 2-3 hours of discussion at the whiteboard. It was informal, didn't always involve everyone.

Oddly, we had three different PMs during this project. We had a hard rule of no email or contact with them outside of stand-up. Two of the three PMs were able to 'work' with this setup. The head of airport IT eventually realized after two years of this that we didn't need a PM.

We had a rule that if you are doing anything novel in the code base you had to have a conversation about it with at least another dev.

We all sat within a few feet of each other in a large private office with a huge whiteboard. We used index cards taped to another dedicated whiteboard for stories, and if you couldn't describe the essentials on that, you had to break the story into smaller pieces. We were allowed to build our own machines and use as many monitors as we wanted.

It was an airport billing and tariff system for a large international airport; the accounting department head, director, and other users were a couple of office doors away. They rarely missed a stand-up, and they had an open policy for any real-time questions. Standups were informal discussions, demos, and questions and answers, not typically for status. The status update just took one glance at the cards on the whiteboard.

The final system improved revenue by 8% in its first and every following month. The director of Accounting was brought before the Airport Authority Board to explain. Billing disputes and adjustments with airlines went from nine days a month to one. The monthly billing workload went from 18 days to 5. Instead of a senior accountant being the primary user, they were able to offload this to one junior accountant with three years experience.

Bug rate was six production bugs, zero incorrect invoices in the first year live. I don't have any data after that.

The previous rewrite attempt failed after a three year attempt.

jefozabuss · 3 years ago
sounds like a security and audit nightmare to be honest but I guess it’s an agency or similar with smaller projects