500 - the number of employees before your CEO becomes a political figure
1000 - the size of company when you’re at risk of losing accountability
From my own experience and other sources (https://news.ycombinator.com/item?id=35206141), those numbers seem too high. I've seen this happen twice right in front of my eyes, while both companies were in the 100-200 range.
One of them managed to go to the next level and adapt to the new reality; the other is still struggling.
I remember reading a CEO's categorisation of companies: he said, when I go to the office restroom and another guy is at the urinal next to me, there are three different results based on the size of the company:
1. I know who he is, and he knows who I am.
2. I don't know who he is, but he knows me.
3. Neither of us has any idea who the other one is.
These numbers are probably the least valuable numbers you will see on Hacker News today. Almost every one is an exception to an anecdote, and an anecdote with exceptions.
Things like: "90 - the number of days a role should stay open. 90 days is the industry standard time-to-hire metric."
So is 90 days the average in this industry? Do you want to be average? Is it really your goal to do things the same way all of your competitors do? A role should stay open EXACTLY as long as it needs to. Putting artificial numbers in front of it and acting like there is some magic 'best practice' is ridiculous. There may be some jobs that take longer, and some that take much, much less. The last thing you need to do is sit around answering questions like "Why is this taking longer than the industry standard?".
If you really need to manage by following a list of 'numbers', you are a fantastic illustration of why there can be so much differentiation among companies. Great companies can build great successes on the back of their competitors' thoughtless process.
On a tangential note, I've also asked here (https://news.ycombinator.com/item?id=35245329) what performance indicators to take into consideration when delivering software. That '50 PRs in 6 months' is interesting. You might argue that "it's important to deliver business value" and that "as long as you're hitting your market/business capability targets" everyone is happy. But my question is still: "How do you know you are being efficient? Could you still deliver with half the number of people?"
A favorite “tactic” of some developers I've seen is to deliver a half-baked solution quickly, then a continuous stream of fixes to that solution. An easy recipe for merging a lot of PRs.
In general (and not just in software), anything that becomes a known productivity metric will rapidly converge towards the mean.
The worst example I have seen is incentivized story points. Bad news: you just ruined your estimation process.
Managers end up trying to put in counter measures and the whole thing becomes a convoluted mess.
Personally, I think PRs are a decent metric, but you need an engineering manager to review (and understand) the work that the team is doing (and the team dynamics: who's doing what).
Basically, the only way to really understand productivity is to be close to the team. This takes time, and it requires hiring the right manager, so a lot of companies opt for off the shelf solutions (Pluralsight, Jellyfish, etc).
Not saying this is good or bad, it’s just what I have observed.
I totally agree. This is why it's bad to "measure". When you say "measure", everything will look right in the end. But "monitor", at least to me, feels more like: "I'm proactively looking at my team to see how it's going. I have a way to see when things start to go wrong. I can also see how productive people are by correlating data. If I just take PRs as an isolated number, it's not relevant. But if I take that number and see that 80% of them are actually bug fixes, that raises an issue in itself."
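To make the "correlating data" point concrete, here is a minimal sketch of what that kind of monitoring could look like. The PR records and label names are hypothetical (in practice you'd pull them from your code host's API); it only shows the bug-fix share calculation, not any particular tool.

    # Minimal sketch: look past the raw PR count by correlating it with labels.
    # The data shape and label names here are made up; real data would come
    # from whatever code host or ticketing system you use.
    from collections import Counter

    merged_prs = [
        {"author": "alice", "labels": ["feature"]},
        {"author": "bob", "labels": ["bugfix"]},
        {"author": "bob", "labels": ["bugfix"]},
        {"author": "carol", "labels": ["feature"]},
        {"author": "bob", "labels": ["bugfix"]},
    ]

    total = len(merged_prs)
    by_label = Counter(label for pr in merged_prs for label in pr["labels"])
    bugfix_share = by_label["bugfix"] / total

    print(f"{total} merged PRs, {bugfix_share:.0%} of them bug fixes")
    # A high bug-fix share turns "lots of PRs" from a success signal into a question.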
So around the 150-200 mark.
You can hope or inspire or ask for more. But don't expect it. And don't take it for granted.
We also have a flat structure.
I’m seriously considering leaving for an IC role at this point, though the market is terrible and I’ve never been good with leetcode style interviews.