Always good to remember "what gets measured gets managed". I.e., every single one of these metrics can easily be gamed in ways that would be hard for a non-technical manager or "bean counter" to detect.
Ex: oh, you're measuring number of commits, well, I'll stop squashing and start breaking commits down to single lines. Boom, I'm a 100x software engineer.
Does anyone have a decent list of individual contributor KPIs that are actually useful or insightful? So far (~20 years of experience), I've never run into individual KPIs that couldn't be gamed and were actually meaningful.
In this sense I do consider many KPIs to be good metrics for finding out whether the company is on the right path, or as an early warning system for problems the company might be confronted with in the future.
But as soon as you use KPIs to assess managers, they will be gamed, and thus lose a lot of their value.
I think that KPIs often make sense at the organizational level of a business, such as "We want to boost revenue from this product line by X%".
But when you try to translate that to the individual level, it breaks down, because it's hard (often impossible) to properly attribute value to individual actions.
...and attitudes like these are why most Software Engineering theory is BS. We refuse to measure things in a reproducible way, and would rather stick with subjective, pre-scientific concepts such as "clean code" and hit-or-miss rules of thumb for project estimation. You need an objective way of measuring and testing whether your hypothesis (be it that clean code improves maintainability, or that strategy X is reducing dev churn) is true, and good KPIs are that.
We use the number of closed tasks/user stories, but came across people gaming those too: lower the scope and cut corners (shitty design, superficial, meaningless unit tests, ...). The point, as I understood it, is to stay away from people gaming things and to have tech-savvy management too. I worked at a company where the CTO was tech-illiterate and everyone gamed everything; I worked somewhere else where the CTO was the smartest and most tech-savvy person I have ever met, gaming anything there became obvious, and people were shown the door seemingly out of nowhere (yet for good reason).
Measuring commits, loc, tickets, and story points individually might be gameable, but seeing all of those consistently higher or lower seems like an okay indicator.
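Purely as an illustration of that "all of them together" idea (nothing from the article, and the field names, metric list, and threshold are made up), here is a sketch of flagging people who are consistent outliers across several counts rather than on any single one:

```python
# Rough sketch: z-score a few per-engineer counts and only flag someone when
# every metric points the same way. Assumes a team of at least two people;
# all field names and the 1.5 threshold are illustrative, not a real schema.
from statistics import mean, stdev

def consistent_outliers(team, metrics=("commits", "loc", "tickets", "story_points"), z=1.5):
    zscores = {name: [] for name in team}
    for m in metrics:
        values = [person[m] for person in team.values()]
        mu = mean(values)
        sigma = stdev(values) or 1.0  # avoid division by zero when everyone is identical
        for name, person in team.items():
            zscores[name].append((person[m] - mu) / sigma)
    # Flag only those who are above (or below) the threshold on *every* metric.
    return {
        name: ("consistently high" if min(zs) > z else "consistently low")
        for name, zs in zscores.items()
        if min(zs) > z or max(zs) < -z
    }
```

Even then it's only a prompt for a closer look, since the gaming tactics described elsewhere in this thread tend to move several of these counts at once.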
Aren't most of these actually anti-patterns? The best example is lines of code written. If we measure individual performance on that, it's trivial for people to game the system. Also, number of lines of code doesn't correlate to business outcomes. I much prefer to solve the same business problem with half the lines of code if possible!
If I had to choose KPIs to measure the performance of teams (yes, teams), I would choose the DORA metrics, or something of the sort.
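For anyone who hasn't come across them, the four DORA metrics are deployment frequency, lead time for changes, change failure rate, and time to restore service. A minimal sketch of how they might be computed, assuming each deployment is logged as a record with hypothetical fields (deployed_at, commit_at, failed, restored_at):

```python
# Minimal sketch of the four DORA metrics. The deployment-record fields
# ("deployed_at", "commit_at", "failed", "restored_at") are assumptions
# for illustration, not a real schema.
from datetime import timedelta

def dora_metrics(deploys, window_days=30):
    n = len(deploys)
    # Deployment frequency: deployments per day over the window.
    frequency = n / window_days
    # Lead time for changes: average time from commit to deployment.
    lead_time = sum((d["deployed_at"] - d["commit_at"] for d in deploys), timedelta()) / n
    # Change failure rate: share of deployments that caused a failure in production.
    failures = [d for d in deploys if d["failed"]]
    failure_rate = len(failures) / n
    # Time to restore service: average time from a failed deployment to recovery.
    mttr = (
        sum((d["restored_at"] - d["deployed_at"] for d in failures), timedelta()) / len(failures)
        if failures else timedelta(0)
    )
    return {"frequency_per_day": frequency, "lead_time": lead_time,
            "failure_rate": failure_rate, "time_to_restore": mttr}
```

Note that these are measured against the team and its pipeline, not against individuals, which is exactly the point.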
I think the author wanted to improve on the more flawed idea of counting the number of hours worked.
DORA metrics seem to be an even better improvement on the author's proposals. Thanks for sharing; I did not know about them and it is interesting.
If I can share my experience: I know that, at the root, any measure is flawed and incomplete. However, it is always better to measure than not to measure if something is important to you. I know time spent is not a good measure, but I also know I will make more progress on a task if I allocate and track time spent on it. And engineering output is notoriously hard to measure.
Same. Most technical debt is caused by folks with "impressive velocity" in my experience. If any prospective employer tells you that they are measuring your lines of code written, run. They're too lazy and cost-driven to understand the work.
Oh god no. None of these are good metrics. All are gameable. If my workplace enacted this then I would quit.
And KPIs are the truncheon that managers use to smash others' heads in. There should be no metrics, for anyone, only RESULTS. "What gets measured gets managed"... why don't they teach this in management class?
They also teach you about leading and lagging KPIs. Results are lagging KPIs: nice to know if you are on track to achieving your goals, but useless if you want to know how to go faster/do better/etc.
Counting lines of code from a developer is like doing a word count on a poet's work.
Every single KPI listed above can be manipulated; it means nothing and shows nothing, because many will try to game the system by:
- Needlessly adding lines of code.
- Committing excessively.
- Closing too many PRs too early or opening too many PRs too soon.
- Meetings attended: people will needlessly attend meetings or create meetings.
- Prod deployments: people will needlessly push to prod.
And it's like that for everything listed.
your manager likely has some rubric for how they'll measure your success.
it may not even be explicitly formulated in their mind, but they have some sort of subjective/qualitative set of expectations. instead of going through the work of precisely defining those expectations and communicating them to the employee, they "empower" the employee to "set their own definition of success".
however, this turns into a game of the IC guessing what the manager wants to hear, which results in two problems:
- if you guess correctly, the manager is happy because they didn't have to go through the work of defining and communicating their expectations and they get to pretend that they are generous and letting employees "drive your own development"
- if you guess incorrectly, you better hope that you accidentally meet the secret rubric your manager has in their head because you know that that's what is used to evaluate performance and determine raises/promotions regardless of what's written down in your quarterly KPIs
"When a measure becomes a target, it ceases to be a good measure" https://en.wikipedia.org/wiki/Goodhart%27s_law
> We want to avoid subjective judgements of how much of task X engineer 1 did vs engineer 2.
None of these metrics remove subjective judgments.
> meetings attended
Beginning to think this is some sort of joke.
If you've upvoted and see this comment, can you please reply?
FWIW, I agree with the rest of the comments here, I think the points on the list can easily be gamed and should not be used/trusted.