Every time I've seen it tried (over and over again in the span of a 30-year career), the "goals" are based on whatever the priority/flavor-of-the-month happens to be when goal-setting is announced. I've never seen those priorities last an entire year, but I have been called to account for why I didn't personally meet the goals that I was pressured into setting "for myself" even though the priorities shifted over the course of the year.
I worked at Google and have been on teams where OKRs were used very ineffectively. Since then, I've read one thing that really changed the way I think about "goals" and could alleviate the problem you mention here. It's called the product strategy stack:
If you have time, listen to the podcast. It's really the most comprehensive treatment I have seen of OKRs and goal setting. Basically, goals are the last thing you consider. What is often missing is a clear articulation of the higher levels of the stack, such that the goals make sense and measure progress toward a strategic outcome:
“‘Our strategy is to increase revenue by 5%’ or ‘increase retention by 10%.’ That’s not a strategy, that’s a goal. It’s great if you can achieve that goal, but only if it’s actually part of a larger strategy that the company is trying to advance,” Mehta says.
“I often see teams get into a mode where they’re just doing anything and everything to move the goal, without actually realizing they’re headed in the wrong direction from a strategic standpoint to create long-term value.”
One of the downfalls of OKRs is that in many cases the true objective for people and/or teams is not something that can be mentioned without being unspeakably impolite. For example, the political goal "I as CTO want my department to grow by N FTEs so that I will gain in prestige compared to my peers in the C-suite" is definitely not something you will ever see on an OKR sheet but is definitely something that happens. On the other end of the influence spectrum, "I want to learn technology X because it will look good on my resume when I job-hop in a year from now" is also something you can't really use as a reason in a corporate setting but still definitely something that happens all the time.
If you cannot start from the real Objectives and have to make up fake ones, determining effective Key Results to go with them is bound to lead to confusion.
If you want to become fitter, don't think "I want to lose 10kg of weight", think about finding a sport which you like and healthy food which you enjoy eating.
Your belly might not disappear 100%, but you will feel better, and you're in it for the long haul.
Thanks for this. I recently left an otherwise awesome job mostly because of OKRs. I am still trying to articulate why.
I'm going to correct you, because it's relevant. Mehta says, "but only if it is actually accretive to the strategy." I think "accretive" is important here, because one of my observations about how we were doing it wrong was that there were no stated goals for security for the company as a whole. This made it seem like each team's OKRs were a chaotic free-for-all. Intuitively, the goals for security as a whole should be based on the overall needs of the company, divided up across the appropriate teams. Those teams will have the tribal knowledge to write the best roadmap, determine who will own the workflow that the project generates on completion, and generally know how to scope each task.
Going to listen to the podcast now. Maybe I will have more to say after. Cheers.
What you are describing is the opposite of agile, and also flies in the face of devops, so it's really hard to sell it in today's management culture. However, I've never seen agile or devops development processes work. I only work on big systems with year-plus timespans.
I can imagine those processes working well for lean, piecemeal teams (like refreshing frontend site cosmetics once a year, or pumping out contract work every few weeks), or for technology-lean consumer startups building an MVP they plan to throw away once they have market fit.
When I've seen project management work well, it's always been of the form (all these bullet points are mandatory, in my experience):
- The goal for the product for the next 3-24 months is clearly articulated.
- Each sub-team's 1-6 month goals are clearly articulated.
- ICs produce a list of projects that should take one IC about a month, and that, if completed, will meet the sub-team's goal (deadline requirements are ignored during this part of planning).
- Is "Number of projects / number of ICs" significantly less than the number of months to the deadline?
- If No, this sub-team won't be able to ship, so adjust the scope, the resources, or the deadline.
- Compensation is based on the performance of the product group, not the sub-teams. Somewhere between 1-10% of new hires end up being let go within 6 months. Hiring is never perfect, and firing people is less bad for morale than having people sitting around being dead weight.
Number of months varies depending on project maturity, business needs, and scope of the changes. Anything past 24 months is best done in a graduate program, not a company.
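The feasibility check in the bullets above is simple arithmetic. As a rough sketch (the 0.7 slack factor quantifying "significantly less" is my own assumption, not from the comment):

```python
def can_ship(num_projects, num_ics, months_to_deadline, slack=0.7):
    """Rough capacity check: each project is ~1 IC-month of work.

    The sub-team can ship only if the months of work per IC is
    significantly less than the months remaining; `slack` is an
    assumed fudge factor, tune to taste.
    """
    months_of_work_per_ic = num_projects / num_ics
    return months_of_work_per_ic <= slack * months_to_deadline

# 12 projects across 4 ICs with 6 months left: 3 IC-months each -- fine.
print(can_ship(12, 4, 6))   # True
# 30 projects across 4 ICs with 6 months left: 7.5 IC-months each --
# adjust scope, resources, or deadline.
print(can_ship(30, 4, 6))   # False
```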
Honestly, what I'm hearing in this podcast is that front line teams end up with OKRs that don't anchor into any strategic product vision. I agree, OKRs should be able to trace down. Since OKRs are supposed to cascade down, this failure falls squarely on upper management, for either failing to provide and communicate a strategy, or for failing to demand that recursively itemized OKRs actually trace up to the holistic vision.
And as always, there is no process substitute for adequate management.
> “I often see teams get into a mode where they’re just doing anything and everything to move the goal, without actually realizing they’re headed in the wrong direction from a strategic standpoint to create long-term value.”
I see this all the time. Everyone is so focused on the "KR" that they know nothing about the "O". I want my teams to be bought into the overall Objective well before we start thinking about objective measures of progress and/or success.
Goal setting is like other techniques in management.
The quality movement went terribly wrong when the motivation became "we want to have an ISO 9001 sign in front of the factory" as opposed to "we want to crush the competition". See
Probably the best phrase of that novel is "There is only one goal".
I worked at an AI startup which was struggling to balance the long term needs of developing a technologically advanced product and the short term needs of delivering projects to major corporations.
I saw the adoption of OKRs as the beginning of the end of my time there, because instead of carefully teasing apart which goals were necessary to realize the strategy, everybody was told they needed to list 20 or 30 goals, simply to list 20 or 30 goals.
Not too long after I left, the CEO announced that he was proud the company had been acquired by a major athletic footwear manufacturer. My hot take was "that's incredible!" because "incredible" was the CEO's favorite adjective, but really I had told people all along that one of our engagements, if it fully realized its potential, would generate enough value for a customer that they'd see it as a bargain to buy the company.
So of course this just makes people even more scatterbrained than they were before. Worse yet, it's rocket fuel for the psychopaths and narcissists in your organization, because they are geniuses at playing that kind of game, and they will use it to make themselves look good while making hardworking people who are more interested in doing the work and realizing the strategy look bad.
I worked at a smallish (~20 employees, at the time) agency that had pretty good management overall, but one day decided to do OKRs. I assume one of the owners read a book or attended a talk or something.
But we had no historical metrics on... anything, really, related to software development or design. Nothing that'd be useful for quarterly goals, anyway. "Close X tickets" or "make X commits" are famously shit measurements. We did client work, so we couldn't just try to decrease load times on one of our own products, or something like that.
Having nothing meaningful to use for OKRs related to our actual jobs, developers & designers just latched on to sales & marketing projects and set our OKRs for those. Video views are relatively easy to measure. Sales funnel stuff's easy to measure. "Net promoter score". All that shit. The goal-setting for OKRs strongly favored sales & marketing, for whom measuring stuff was already a lot of what they did.
I'm not sure that's what they were aiming for, but it's what they got. I hope it was at least kinda useful for the company.
NPS, specifically, is a hard metric to set an objective against. You can never really separate the stuff you did to your software from all the rest of the stuff that happened to your company.
And yeah, using tickets as a success metric is gonna produce some incredibly awful behavior as people learn to game the system.
Good OKRs are not easy. It’s very hard to come up with good key results that incentivize the right behavior while actually measuring progress towards the goal. It takes quite a lot of skill to do that properly.
To me the most important part of OKRs isn’t really the metrics… it’s the understanding that the team has been tasked with solving some problem without being told how to solve it. The team itself gets to own the solutions that drive the metric. Done correctly, it lets everybody on the team take ownership of the product they are working on. It is far better than just being a feature factory that cranks out whatever sales or upper management thinks is a good solution.
> called to account for why I didn't personally meet the goals that I was pressured into setting "for myself"
This is one of the most hostile things an employer can do to a person. I don't want them involved in my personal growth -- fuck allllll of that. I will grow when and how I want to, if I even want to.
IME this never-ending push for continual improvement discourages me when I inevitably fail to meet goals they want me to want.
I want to say "just leave me alone I'm speedrunning to early retirement" but that'll just get me in more trouble.
I've been so busy at work I haven't had time to "define" my Q1 goals...it's Q2. I grow my skillset by doing my job well and attending trainings, I appreciate the idea of goals but they seem to set themselves.
Annually, we're supposed to cut-n-paste several goals from a long list of mgmt approved blurbs. They are either hard-to-measure or metrics that I cannot really control. Whateva.
Copy and paste is the opposite of how OKRs are intended to work. But traditional annual goal-setting models do cascade.
Unfortunately, Doerr presented a football team example in a pathetic attempt to explain how OKRs cascade directly.
Because of this, OKRs have been done poorly at so many orgs. Sadly, OKR software companies seized on the simple set-theoretic computation of making KRs children of higher-level objectives, so they could draw cool visual maps illustrating how OKRs connect.
I think a lot of people's introduction to OKRs is John Doerr's book "Measure What Matters". That's where I learned about them.
The book explains how Andy Grove introduced the practice at Intel and it was very effective. The book seems to attribute the success to the practice itself and seems to say "if you adopt OKRs, you will succeed like Intel did".
I suspect that this success is misattributed. I suspect that Andy Grove was probably an excellent manager and I think he could have succeeded with something other than OKRs. I think he understood that what was really important was to get everybody across the organization to focus on essentially one big goal. He needed to make sure that everybody was pulling in the same direction and together, and OKRs provided a tool to do that.
When my organization decided to implement OKRs, my question to my peers was "who is our Andy Grove?"
If the people implementing OKRs focus too much on the practice and not enough on the motivation, I think you just end up with cargo-culting. The setting and tracking of KRs becomes the objective. So people treat it like busywork, because OKRs don't really seem to matter; they just get in the way of the "important" stuff.
As one of my coworkers says, the title of the book is "Measure What Matters", but it's too easy to slide into "What Is Measured Is What Matters".
> I think he understood that what was really important was to get everybody across the organization to focus on essentially one big goal.
I think “focus” is the keyword and am very glad to see this pointed out.
The company I’m at has gone into OKRs thinking that it’ll be some magical, productivity unlocking tool. However, when multiple people asked what the company strategy was and what we were planning to focus on, the response was that they (C-suite leaders) wanted to continue tackling every opportunity. That they think having OKRs will lead to the same workforce being able to tackle more things. It was really weird to try and write OKRs where literally any measurable outcome could be considered success. It wasn’t a surprise when I read the OKRs for other teams and orgs and saw everyone rowing in different directions ¯\_(ツ)_/¯
You touch on a really good point. IME it's easy to set decent KRs, but Os are usually pulled out of someone's rear end: no process whatsoever, just the whim of upper management. The result is that they quickly realize the Os weren't good, the flip-flopping on goals starts, and the whole thing quickly unravels.
I used OKRs on several teams to varying degrees at Google. Not a fan. Here are my complaints:
1. It makes organizations slow and inflexible. I used to joke that as soon as another team was involved in something you needed done, it would probably take a quarter. Why? Well, what you wanted probably wasn't on this quarter's OKRs, so it would be an uphill battle to get them to do it. You'd have to argue about getting it into next quarter's OKRs;
2. OKRs can be structured in such a way that you can grade quite well while having achieved absolutely nothing;
3. Teams can be held to different standards. Some get easy OKRs. Some get harder OKRs. So it's still subject to the political-perception problems inherent in such organizations;
4. It is largely for show for upper management. I've been in 2-hour planning meetings where a bunch of teams speak for 2 minutes each about what they're working on. This might be useful for directors+ but is really a waste of the time of the 50-100 other people. This is a problem with status meetings too;
5. Even grading OKRs can be subjective and political. I recall one famous example where someone (cough Vic cough) said they had a goal of 100M users. They actually only got to 10M. Grade? 0.7.
There was a running meme at Google about feedback that went something like this: This project would've failed without this person. It failed anyway but it definitely would've without this person.
I like this meme because it illustrates how the same set of facts can be used to argue that someone did a good job or a bad job, and the difference between the two is whether or not the org likes them. The same set of facts can be summarized as "this project failed to ship" or as "we failed fast, learned a lot, and will take those learnings into future projects".
> what you wanted probably wasn't on this quarter's OKRs so it would be an uphill battle to get them to do it
Everywhere I've ever worked - OKRs or DPMs or Goals or whatever you want to call them or not - I've had a set of work that's assigned to me via JIRA tickets or Bugzilla tickets or some kind of ticket tracking system, and I've also had a constant deluge of co-workers asking for my help with something or other. The longer I work there, the greater the deluge of both requests for help and volume of work assigned. Every day I've ever come into work in my life (besides the one-to-two-month "honeymoon period" when I start a new job), I have to decide between ignoring my co-workers' pleas for help and slipping the deadlines on the individual tasks assigned to me.
Over the course of the past three decades, I've tried both ways - first, focusing on my assigned tasks and second, dropping everything every time I get a request for help. I've found that, overall, the second way works best. I have to constantly apologize for constantly missing deadlines, but if I keep turning away coworkers, not being enough of a "team player" shows up as a nastier mark in my performance reviews than always being behind on tasks.
One way I deal with this is having co-workers also create tickets if they need help with something. That way I can still be a team player but there is also documentation about how much I helped which is otherwise hard to quantify.
> 1. It makes organizations slow and inflexible. I used to joke that as soon as another team was involved in something you needed to do it would probably take a quarter. why? Well what you wanted probably wasn't on this quarter's OKRs so it would be an uphill battle to get them to do it. You'd have to argue about getting it into next quarter's OKRs;
This. It's not a joke either. At a previous job, we wanted to use the G Suite API to access calendar data from GCP, but that required help from the Info Sec and Cloud teams (we didn't have the right permissions to do it ourselves). We got push back for asking for help because it wasn't in those teams' OKRs. So we effectively gave up for an entire quarter and then some.
Even so, how does one write an OKR for a scenario like this? Objective: be able to access GSuite API from tool Foobar so it knows about everyone's availability. Key result: Umm, can complete a feature everyone wants?
But this does reveal an organizational problem, right? Either the security teams have it right and their current work is more important, in which case maybe they are under-staffed and need more slack. Or maybe they failed to add a KR for “perform timely security reviews” if that is a service they provide to the org. If you’re blocked, you should be able to raise this up and learn something as an org.
Or maybe the KRs are correct as written, but the security teams have it wrong and their KRs should be deprioritized in favor of delivering yours. When KRs clash, you need to have a conversation and resolve the conflict in the way that maximizes value for the business.
Ultimately prioritization of work always involves trade-offs, and there needs to be some mechanism for handling these. No system can handle 100% of cases without adding human judgement.
To me it seems the pathology in your example is that everyone was just sticking to the KRs, instead of coming to the best business outcome (and if your request was not prioritized, you should be able to understand the reasoning used to decide that). I can see how OKRs could be used as a shield to hide from those difficult conversations, but it’s not clear to me that they necessarily make things worse; if you didn’t have OKRs maybe your teams would be finding other ways to avoid working on things that they don’t want to help with.
Putting this in a separate reply as it's a separate train of thought.
> Even so, how does one write an OKR for a scenario like this? Objective: be able to access GSuite API from tool Foobar so it knows about everyone's availability. Key result: Umm, can complete a feature everyone wants?
The key thing about an objective is that it is a high-level goal. You've written the implementation into the O, which is a bit of an anti-pattern.
It's a bit hard to extrapolate exactly what the context was for your feature, but let's say your team owns an internal tool that orchestrates a business ops system that manages a task queue for "agents"; say "review these potential fraud cases that the system flagged".
As this is an existing system that you're trying to improve, not a new product, your team's objective for the Q might be attacking the main stakeholder-facing pain point with "Improve response time for manual review tasks". A KR that measures this might be "P50 latency for handling fraud reviews is reduced from 5d to 1d". (This formulation gives you a progress bar -- "0% complete" is >=5d, "100% complete" is <= 1d. You can easily build a dashboard for this, which is the holy grail for KRs.)
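That progress bar is just linear interpolation between the baseline and the target. A minimal sketch, using the hypothetical 5-day/1-day P50 numbers from above:

```python
def kr_progress(baseline, target, current):
    """Fraction complete for a 'reduce this metric' KR.

    baseline: value at quarter start (0% complete), e.g. P50 = 5 days
    target:   value the KR commits to (100% complete), e.g. 1 day
    """
    frac = (baseline - current) / (baseline - target)
    return max(0.0, min(1.0, frac))  # clamp: can't be <0% or >100% done

print(kr_progress(5, 1, 5))  # 0.0 -- quarter start
print(kr_progress(5, 1, 3))  # 0.5 -- halfway there
print(kr_progress(5, 1, 1))  # 1.0 -- KR met
```

A function this trivial is exactly what makes the dashboard easy to build: pipe the live P50 into it and you have the "progress bar" described above.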
Now, based on the observation that most of your significant delays for this process are due to tasks being scheduled on agents that are on vacation, an initiative that you propose to progress the KR is to add calendar-awareness to your service's scheduling process. But, there are a bunch of possible initiatives that could move the needle here, and so as a team you get together and figure out which are going to have the best bang-for-buck.
Formulating the OKRs like this gives the team freedom to figure out exactly how they are going to solve the problem, while also communicating to the rest of the org what you're working on (and giving the org a chance to say "isn't <other objective> way more important?"). And turning the implementation into a precise KR gives you the opportunity to discuss with stakeholders whether P50 is really the metric they care about, or whether the business would be better served by P99 or some other way of measuring things. These metric discussions can be annoying, but at their best they can uncover subtle differences in expectations.
Note -- this is quite process-heavy; you wouldn't go into this much detail with a two-pizza-team startup! But if your team is dedicated to this internal tool in a company of tens of teams, then this level of detail might make sense for you.
What's worse is when that team gets around to it next quarter, and then in their research process they realize another team needs to be involved, etc. Eventually you have a two-week project that doesn't get delivered for a year.
For sure #1 is a big problem. But are OKRs causing this, or are the big companies that are using OKRs just fundamentally slow and inflexible? (In other words, are OKRs a symptom rather than a cause?) Thinking about how to coordinate 100k engineers, the trade-offs are pretty brutal. Startup-style “move fast, break stuff” simply doesn’t scale well.
I’d like to see some case studies here. Anyone got examples they like? I hear patio11 talking about Stripe a lot, and he says stuff like “could we deliver something faster?” which I think gets at an urgency for results, interested how they coordinate the layers and orgs.
I think it’s plausible that OKRs forestall agility and innovation as-used. But (at risk of invoking the “no true Scotsman” argument), maybe these companies are just doing OKRs wrong? OKRs are supposed to be abstract enough that you can change your plan if something better comes along. So if your KR is tightly coupled to a particular implementation then you can’t pivot in the face of new information. A quarter is a long planning horizon for small/medium sized companies! But if your KR is loosely-coupled, then it leaves room for new approaches. This balance is hard! The maximally-loose O/KR for many companies is just “succeed/increase share price by X% QoQ”. This doesn’t give you any direction.
I think at their best OKRs provide bi-directional information flow; both giving a way to make output from the lower levels more legible to the upper levels, while also making the objectives, desires, priorities of the upper levels more legible to the lower levels. I think any replacement has to achieve both of these things too.
#1 is a real issue, but it's actually a problem with management and the company as a whole. Teams want to be siloed and not bothered by anyone else, so they set their goals and just do them; engineering managers care only about their own teams, or don't even communicate with each other to check for cross-team projects; and upper management never sets actual objectives and a roadmap that could have been mapped into cross-team projects and goals. Goals become a religious quarter-to-quarter thing that allows no deviation, and then everything is slow.
Two failure modes I've seen:

* Reverse-engineering the todo list to produce OKRs
* OKRs being the wrong fit for the team and/or the company's lifecycle.
The second one is more damaging, because it's subtle, whereas the first is obvious to everyone involved.
Fundamentally, OKRs are a tool that should allow teams to make decisions about what to focus on, with the knowledge that they're aligned with the business objectives. If a team already has an immutable quarter-plus roadmap, they're not making any decisions, they're just working; OKRs aren't a good fit for this kind of team. OKRs done well _should_ result in teams feeling empowered, because they can see the link between their actions/decisions and overall success. OKRs done poorly have the exact opposite effect: not just benign, but harmful.
> Reverse-engineering the todo list to produce OKRs
I recently saw that. In a training. By a professional, paid, external trainer.
We are supposed to take our backlog, comb it for stuff that always falls between the cracks, collect that, these are our key results. [1]
Then we are supposed to invent a headline for all that assorted stuff. That's our objective.
The whole procedure is supposed to be called "bottom-up OKR". [2]
You don't have to tell me, I have actually read Doerr's book and know better, but everyone in that training went from knowing nothing about OKRs to knowing falsehoods about OKRs.
[1] yes, he really confused tasks with measurable outcomes. And yes, that procedure ensures that the not-so-important stuff gets highest priority :-)
[2] in reality, "bottom-up" refers to the company hierarchy. Bottom-up OKRs are set by lower-tier employees.
It's not totally insane to want bottom-up OKRs. One of the common failure modes of OKRs is that the top-down view of the management doesn't connect well to the reality of day-to-day operations. So you often end up with people engaged in Kabuki theater (making up a nice story about how the things they were going to do anyway really are the KRs for any particular stated objective).
Of course, having teams produce bottom-up OKRs has failure modes too -- it's not really strategic planning if the leaves of the org tree are setting the goals. So you need both.
A cycle of planning seems to work best -- a high-level goal is transmitted down, then each team engages in a local goal-setting process, then these goals go up, then there is coordination and refinement. But this takes forever and requires incredible discipline, so it's hard to do well.
I actually don’t think it is too bad to reverse engineer OKRs from the backlog.
Well, maybe not completely reverse engineer.. just strongly influence whatever the OKRs are. I guess… provided whatever is in the backlog is solving the highest priority problems.
I've been through several jobs that used OKRs, and even had to read a book on it at one point, and to be completely honest I still have no idea what they are. Maybe I'm just dumb or can't grasp it, but while I know what all the words mean, I still don't understand what it is or why it's useful. I'd write some "OKRs" based on what my boss told me to write, who was told by their boss, and then I'd enter them into some HR system and never see or hear of them again. It was all very cargo cult-y.
I never really understood how an OKR is at all applicable to an individual. Maybe our OKRs were bad but they were always related to increasing some metric or measure. I can't personally, directly affect any of that. I can do my damnedest to do good work that I think will help with that, but ultimately I have no control over whether our conversion rate goes down because some other department did something that hurt it. Yeah we did a great job on some landing page but then marketing pushed the wrong audience to it so it tanks. How does an OKR help with that? Now it looks like I haven't met any of my goals?
Maybe the problem was we always had these executive-level, vague "objectives" set by the C-suite, but then nobody knew what to actually do with them. Objective: be an industry leader in innovation. What does that even mean?
It's another corporate flavor-of-the-month, trying to solve a real problem that corporations created themselves, in Corporatese.
In theory the idea of OKRs is fine. They are "supposed" to give individuals a form of autonomous growth which can be related to the company and has a way to be measured and traced back. It gives them the "personal responsibility", "growth" and "autonomy" which tickles a particular type of people.
In practice, there are too many conflicting viewpoints, agendas, power hierarchies, goals and more, to make them work out. That's assuming everyone works in good faith with one another, too. As others point out, nothing's keeping a manager from using OKRs against you.
Stay tuned when in 10-20 years it will silently die off to be replaced by another flavor, and reinvent the square wheel once more.
> In theory the idea of OKRs is fine. They are "supposed" to give individuals a form of autonomous growth which can be related to the company and has a way to be measured and traced back. It gives them the "personal responsibility", "growth" and "autonomy" which tickles a particular type of people.
I still don’t even understand how that’s supposed to work unless you’re an executive with wide latitude to do what you think is best. I worked on what my manager told me to work on. All I can control is the quality of my own work. Maybe it wasn’t meant for the rank and file.
They're meaningless. I don't understand how these things can continue to exist and evolve in ever more ridiculous incarnations of the same stuff. If, next year, they all disappeared and managers just "cut the pie" for their own sub-org-- nothing bad would happen and the vast majority of folks would see it as a relief.
The weird thing is that almost everyone, even high up in the organization, thinks these performance eval mechanisms are a baloney waste of time. But somewhere, at some level, somebody is really pushing hard for these and values them.
Is it a suit thing? At what point in one's career does someone all of a sudden decide "yeah, having everyone write paragraphs of BS justifying their work performance is a good use of time and THAT will REALLY solve problems"?
> At what point in one's career does someone all of a sudden decide "yeah, having everyone write paragraphs of BS justifying their work performance is a good use of time and THAT will REALLY solve problems"?
When somebody starts making a lot of noise that people should do it, the people under you seem to unanimously support it, and the people above you seem to unanimously support it. That is, except for the very few you can get an honest answer from; those are the only ones who think it's bullshit, but they put up with it because everybody else is pushing it.
A better question is how the hell somebody who gets paid to train your people and organize the process gains enough credibility to mess with all the communication channels. Maybe an even better question is whether there is any way to make people feel safer at work than in a crazy absolutist king's court, so that they have less need to lie.
Our corporate environments and capitalist structure in the US consolidate power in individuals. Once someone becomes an exec who can fire people on a whim or hand out raises and promotions, they end up surrounded by yes-men, and then you start seeing the crazy ideas flow down.
The worst corporate jobs I’ve had have been at privately held large firms: the places are big enough that you never meet the executives, but you still end up spending days or weeks doing icebreakers and personality assessments because the CEO’s life guru convinced him it would align the astrological feng shui.
It doesn’t do anything for the company but waste time and money, but until it wastes enough for the leaders to lose all power, it’ll continue flowing down to us peons.
I think of "OKRs" as just a fancy name for two things:
(1) What are you going to do in the next quarter/year/etc? i.e. what are you & your team's "objectives"
(2) How are you going to achieve that? i.e. what are the "key results" you're going to hit to achieve your objective.
I hope most people will agree that, in an organization, having an understanding - and agreement - of what each team plans to work on, along with the trust that they have reasonably well-thought-out steps for achieving that objective, is incredibly helpful.
The problem is, as with "Agile", "TDD", etc, people will run with this and lose track of what the actual end-goal is, i.e. to build things that matter while ensuring coordination between teams. So, you will see things that don't make much sense, like OKRs for individual contributors, OKRs that change monthly, objectives that aren't necessarily right for the company, key results that don't really tie into the objective, etc.
I'm not sure what the solution is. At a previous gig, I tried asking management plainly to state publicly what they're going to do and how they're going to do it, and no one did it. Then I tried, let's do OKRs "just like Google" and everyone jumped on, even though IMO it's the same thing...
I was head of Engineering at the exec level in different startups that grew past series B and started implementing OKRs... they never made sense to me for the tech side. I could see how sales, marketing and customer success OKRs were aligned with their day to day tasks.
But for Engineering? I had all kinds of nice-sounding OKRs: reduce MTTR, improve MTTF, reduce bugs, reduce feature implementation time, etc. But in reality the day-to-day work of Engineering was "build the shit that the product team defines", we had little influence in stories that went on each sprint, and invariably every time we pushed for some technical matter, business always trumped us and the stories went to the backlog.
> But in reality the day-to-day work of Engineering was "build the shit that the product team defines", we had little influence in stories that went on each sprint, and invariably every time we pushed for some technical matter, business always trumped us and the stories went to the backlog.
in the end i think this is basically the unstated goal... no one can say it out loud because it's a bad look, so they use an intermediary (some framework like okrs, or scrum etc) to attain that "business trumps engineering" culture...
> then I'd enter them into some HR system and never saw or heard from them again
My favorite version of this is when it's personal goals attached to some kind of performance review framework, but the company keeps on changing its performance review system every 6 months so you literally never can see them again.
> the company keeps on changing its performance review system every 6 months so you literally never can see them again.
Yes! In my past two jobs I finally learned to never pay attention to official performance reviews and career goal plans and personal goals tracking systems: every 6 months to 1 year, like clockwork, someone important in HR gets replaced, a new tool to track goals gets bought, and everything is redesigned from scratch.
The only way to get a promotion or a pay rise is to push for it on a completely different band, nothing to do with these systems.
There are so many ways to do OKRs poorly, and I've seen many. I love the book in theory, but every company where I've seen OKRs in practice implemented them poorly.
> Let's not have objectives just yet, we are not ready to communicate our goals yet with you, let's come up with key results... More like tasks, really just Jira tickets, our team already had to commit with management for the next year, we have a release plan and everything, so there is very little wiggle room there.
> And please, don't ask about any of those goals, you are just a code monkey, and you weren't there at all the meetings, but trust us, it's the most important and impactful thing we can do...
> Anyway, What's in the next 6 sprints? Let's just put them somehow in this OKR spreadsheet. Let's hope we don't need to change anything in the next sprint...
> Hmm, alright, it looks like we have some space there, let's come up with 2-3 extra projects that realistically would need a team effort and a month focused work to complete, but you will do it on your own... Just remember that you should only work on items in the sprint, and the next 12 sprints are already planned, and we can't add your tasks to any of the sprints. You also need to convince the team that it's important, but please don't bother the team with your tasks, they will lose focus on sprint items.
> I remember the book said something about something-something measurable. Unfortunately, no matter how many analysts we hire, they all quit around 3-4 months after they join, I wonder why that is. Anyway, we don't really have "numbers", so we can't come up with metrics, and we can't measure our success in any way, and we don't know whether gigantic projects bring any improvement at all.
> Oh, and I'm pretty sure I will not be your boss, whoops, sorry, "competency lead" in three months because I'm actively interviewing to get out of here. Cheerio!
Quite recently I quit a company just as they were bringing in OKRs. The reason they brought in OKRs was that they felt the engineering organisation was failing to meet the needs of the company. So they put together a list of Objectives and Key Results. They basically said "This is what we have to do to be successful". There were a few small hitches. Half the Objectives were outside the scope of the engineering team. We could execute perfectly, but if our internal customer fucked up, our objective would be blown. They explained this was because it doesn't matter if we meet some technical objective if it turns out that wasn't necessary for the company to make money. The second small hitch was that the objectives were diametrically opposed. We were going to increase our release cadence 10x. We were also going to reduce production issues by 100%. That's right, our objective was 0 production issues whilst massively increasing our release cadence with 0 extra resources. The third issue is that they were unachievable - we were supposed to deliver, with 5 people, a 3 year project that would take 10 people 1 year.
It missed that the reason the engineering org failed to deliver was that the internal customer would change the requirements of 6 month projects roughly once a week.
It was basically an exercise in trying to pin blame on engineers for the failure of the company. It didn't bother me too much because I was quitting anyway.
Having measures of success that are opposites of each other actually makes sense (as also explained in High Output Management). That way you ensure you won't push release cadence so hard that you introduce too many bugs, and vice versa.
However -- note that the process of implementing OKRs has caused the leadership team to write down their expectations, which previously might have been unspoken. This is a first step towards resolving the problem. The next step is for the CTO to push back, hard, on the bits of these that aren't actually realistic, and hopefully get the leadership team aligned on what can actually be done. And so, the process of OKRs potentially has value here in flushing out unrealistic or mismatched expectations between different parts of the org.
This is one of those rare "my way or the highway" moments in leadership; as CTO at this company it's your job either to get the leadership team to realize that they are asking for more than you can be expected to deliver, or to quit. You can't stick around and put your name on a plan that you know is impossible; otherwise you're going to be the one who failed for every quarter to come. And even worse, you can't sign your team up for this BS. It's your job to shield them from this kind of shit.
> Half the Objectives were outside the scope of the engineering team.
Shared OKRs are difficult, but sometimes they are unavoidable. The most difficult business problems usually are cross-functional. At their best, OKRs can help to make these cross-functional dependencies more explicit, and foster communication and collaboration around them.
If they are trying to have engineering take full ownership of a shared OKR, that's a big problem. But if you clearly call out the shared ownership, and consider both parties responsible for implementation, then I think that's OK.
> We were going to increase our release cadence 10x. We were also going to reduce production issues by 100%. That's right, our objective was 0 production issues whilst massively increasing our release cadence with 0 extra resources.
As a tangential point, increasing release cadence can definitely decrease your long-term rate of issues (see "Continuous Delivery" by Humble[1]) -- this forces you to automate manual processes, and manual processes are one of the main places that errors creep in. Though I think "count of issues" is a very poor metric, you're better off with an uptime metric. And "100%" is the only strictly-incorrect number to pick for uptime, because it is literally impossible; my (super-unscientific, don't hold me to this) rule of thumb for business people is "every extra 9 costs you 10x". So do you need 99.9% uptime or 99.99%?
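The "every extra 9 costs you 10x" intuition falls straight out of the arithmetic: each additional 9 shrinks the yearly downtime budget by a factor of 10. A quick sketch (the function name and loop range are mine, not the commenter's):

```python
# Allowed downtime per year for each "nines" uptime target.
# Illustrates why each extra 9 is roughly a 10x tightening of the error budget.
MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_budget_minutes(nines: int) -> float:
    """Minutes of downtime per year permitted at an uptime of `nines` nines."""
    uptime = 1 - 10 ** -nines  # e.g. 3 nines -> 0.999
    return MINUTES_PER_YEAR * (1 - uptime)

for n in range(2, 6):
    print(f"{n} nines ({1 - 10**-n:.5f}): {downtime_budget_minutes(n):.1f} min/year")
```

Three nines gives you roughly a workday of outages per year; four nines gives you under an hour, which is why the question "do you need 99.9% or 99.99%?" is really a budget question.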
"We're introducing OKRs, they're for your benefit so you know where to focus and can track how you're improving and the team success, they wont be used to judge your salary"
6 months later
"So yeah that's the most we're gonna offer you for this raise because your OKR score was only..."
My experiences have tended more toward "everybody stresses out about OKRs for a couple weeks every quarter and then you might as well have deleted the file because nobody will ever, ever, ever talk about them again".
https://review.firstround.com/set-non-goals-and-build-a-prod...
If you have time, listen to the podcast. This is really the most comprehensive treatment I have seen related to OKRs and goal setting. Basically, goals are the last thing you consider. What is often missing is the high levels of the stack clearly articulated such that the goals make sense and measure progress towards a strategic outcome:
"Our strategy is to increase revenue by 5%’ or ‘Increase retention by 10%.’ That’s not a strategy, that’s a goal. It’s great if you can achieve that goal, but only if it’s actually part of a larger strategy that the company is trying to advance,” Mehta says.
“I often see teams get into a mode where they’re just doing anything and everything to move the goal, without actually realizing they’re headed in the wrong direction from a strategic standpoint to create long-term value.” "
If you cannot start from the real Objectives and have to make up fake ones, determining effective Key Results to go with them is bound to lead to confusion.
If you want to become fitter, don't think "I want to lose 10kg of weight", think about finding a sport which you like and healthy food which you enjoy eating.
Your belly might not disappear 100%, but you will feel better, and you are in for the long run.
I'm going to correct you, because it's relevant. Mehta says, "but only if it is actually accretive to the strategy." I think "accretive" is important here, because one of my observations on how we were doing it wrong was that there were no stated goals for security across the company as a whole. This made it seem like each team's OKRs were a chaotic free-for-all. Intuitively, the goals for security as a whole should be based on the overall needs of the company, divided up across the appropriate teams. These teams will have the tribal knowledge to write the best roadmap, determine who will own the workflow that the project generates on completion, and generally know how to scope each task.
Going to listen to the podcast now. Maybe I will have more to say after. Cheers.
I can imagine those processes working well for lean, piecemeal teams (like refreshing frontend site cosmetics once a year, or pumping out contract work every few weeks), or for technology-lean consumer startups building an MVP they plan to throw away once they have market fit.
When I've seen project management work well, it's always been of the form (all these bullet points are mandatory, in my experience):
- The goal for the product for the next 3-24 months is clearly articulated.
- Each sub-team's 1-6 month goals are clearly articulated.
- ICs produce a list of projects that should take one IC about a month, and that, if completed, will meet the sub-team's goal (deadline requirements are ignored during this part of planning).
- Is "Number of projects / number of ICs" significantly less than the number of months to the deadline?
- If No, this sub-team won't be able to ship, so adjust the scope, the resources, or the deadline.
- Compensation is based on the performance of the product group, not the sub-teams. Somewhere between 1-10% of new hires end up being let go within 6 months. Hiring is never perfect, and firing people is less bad for morale than having people sitting around being dead weight.
Number of months varies depending on project maturity, business needs, and scope of the changes. Anything past 24 months is best done in a graduate program, not a company.
And as always, there is no process substitute for adequate management.
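The capacity check in the bullets above is a few lines of arithmetic. A minimal sketch, purely illustrative; the function name, `slack` factor, and example numbers are hypothetical, not from the comment:

```python
# A minimal sketch of the staffing feasibility check described above.
# Each project is scoped to roughly one IC-month, per the planning bullets.

def can_ship(num_projects: int, num_ics: int, months_to_deadline: int,
             slack: float = 0.8) -> bool:
    """Require the projected load (projects / ICs, in months) to fit
    comfortably under the deadline; `slack` leaves room for surprises."""
    months_needed = num_projects / num_ics
    return months_needed <= months_to_deadline * slack

# 18 one-IC-month projects, 4 ICs, 6 months: 4.5 months of work -> fits.
print(can_ship(18, 4, 6))   # True
# 30 projects, 4 ICs, 6 months: 7.5 months of work -> adjust scope,
# resources, or the deadline, as the bullet says.
print(can_ship(30, 4, 6))   # False
```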
I see this all the time. Everyone is so focused on the "KR" that they know nothing about the "O". I want my teams to be bought into the overall Objective well before we start thinking about objective measures of progress and/or success.
The quality movement went terribly wrong when the motivation became "We want to have an ISO 9001 sign in front of the factory" as opposed to "we want to crush the competition". See
https://en.wikipedia.org/wiki/The_Goal_%28novel%29
Probably the best phrase of that novel is "There is only one goal".
I worked at an AI startup which was struggling to balance the long term needs of developing a technologically advanced product and the short term needs of delivering projects to major corporations.
I saw the adoption of OKRs to be the beginning of the end of my time there because instead of carefully teasing apart what goals were necessary to realize their strategy everybody was told that they needed to list 20 or 30 goals, simply to list 20 or 30 goals.
Not too long after I left, the CEO announced that he was proud that the company had been acquired by a major athletic footwear manufacturer. My hot take was "that's incredible!" because "incredible" was the CEO's favorite adjective, but really I had told people all along that one of our engagements, if it fully realized its potential, would generate enough value for a customer that they'd see it as a bargain to buy the company.
So of course this just makes people even more scatterbrained than they were before and worse yet it's rocket fuel for the psychopaths and narcissists in your organization because they are geniuses at playing that kind of game and they will use it to make themselves look good while making hard working people who are more interested in doing work and realizing the strategy look bad.
But we had no historical metrics on... anything, really, related to software development or design. Nothing that'd be useful for quarterly goals, anyway. "Close X tickets" or "make X commits" are famously shit measurements. We did client work, so we couldn't just try to decrease load times on one of our own products, or something like that.
Having nothing meaningful to use for OKRs related to our actual jobs, developers & designers just latched on to sales & marketing projects and set our OKRs for those. Video views are relatively easy to measure. Sales funnel stuff's easy to measure. "Net promoter score". All that shit. The goal-setting for OKRs strongly favored sales & marketing, for whom measuring stuff was already a lot of what they did.
I'm not sure that's what they were aiming for, but it's what they got. I hope it was at least kinda useful for the company.
And yeah, using tickets as a success metric is gonna produce some incredibly awful behavior as people learn to game the system.
Good OKRs are not easy. It’s very hard to come up with good key results that incentivize the right behavior while actually measuring progress towards the goal. It takes quite a lot of skill to do that properly.
To me the most important part of OKR’s isn’t really the metrics… it’s the understanding that the team has been tasked with solving some problem without being told how to solve it. The team itself gets to own the solutions that drive the metric. Done correctly it lets everybody on the team take ownership of the product they are working on. It is far better than just being a feature factory that cranks out whatever sales or upper management thinks is a good solution.
This is one of the most hostile things an employer can do to a person. I don't want them involved in my personal growth -- fuck allllll of that. I will grow when and how I want to, if I even want to.
IME this never-ending push for continual improvement discourages me when I inevitably fail to meet goals they want me to want.
I want to say "just leave me alone I'm speedrunning to early retirement" but that'll just get me in more trouble.
Unfortunately, Doerr presented a football team example in a pathetic attempt to explain how OKRs cascade directly.
Because of this, OKRs have been done poorly at so many orgs. Sadly, OKR software companies seized on the simple parent-child structure, making KRs children of higher-level objectives, so they could make cool visual maps illustrating how OKRs connect.
So, yeah, I feel you, “Whateva”
The book explains how Andy Grove introduced the practice at Intel and it was very effective. The book seems to attribute the success to the practice itself and seems to say "if you adopt OKRs, you will succeed like Intel did".
I suspect that this success is misattributed. I suspect that Andy Grove was probably an excellent manager and I think he could have succeeded with something other than OKRs. I think he understood that what was really important was to get everybody across the organization to focus on essentially one big goal. He needed to make sure that everybody was pulling in the same direction and together, and OKRs provided a tool to do that.
When my organization decided to implement OKRs, my question to my peers was "who is our Andy Grove?"
If the people implementing OKRs focus too much on the practice and not enough on the motivation, I think you just end up with cargo-culting. The setting and tracking of KRs becomes the objective. So people treat it like busywork because OKRs don't really seem to matter - they just get in the way of the "important" stuff.
As one of my coworkers says, the title of the book is "Measure What Matters", but it's too easy to slide into "What Is Measured Is What Matters".
I think “focus” is the keyword and am very glad to see this pointed out.
The company I’m at has gone into OKRs thinking that it’ll be some magical, productivity unlocking tool. However, when multiple people asked what the company strategy was and what we were planning to focus on, the response was that they (C-suite leaders) wanted to continue tackling every opportunity. That they think having OKRs will lead to the same workforce being able to tackle more things. It was really weird to try and write OKRs where literally any measurable outcome could be considered success. It wasn’t a surprise when I read the OKRs for other teams and orgs and saw everyone rowing in different directions ¯\_(ツ)_/¯
Goals != Strategy and that took a while to realize.
I wish I read Good Strategy Bad Strategy and The 7 Powers first.
It offers no value to anyone trying to actually deploy okrs in a given organization or team.
But hey, it does fire up execs!
1. It makes organizations slow and inflexible. I used to joke that as soon as another team was involved in something you needed done, it would probably take a quarter. Why? Well, what you wanted probably wasn't on this quarter's OKRs, so it would be an uphill battle to get them to do it. You'd have to argue about getting it into next quarter's OKRs;
2. OKRs can be structured in such a way that you can grade quite well while having achieved absolutely nothing;
3. Teams can be held to different standards. Some get easy OKRs. Some get harder OKRs. So it's still subject to the political-perception problems inherent in such organizations;
5. It is largely for show for upper management. I've been in 2 hour planning meetings where a bunch of teams speak for 2 minutes about what they're working on. This might be useful for directors+ but is really a waste of the time of 50-100 other people. This is a problem with status meetings too;
6. Even grading OKRs can be subjective and political. I recall one famous example where someone (cough Vic cough) said they had a goal of 100M users. They actually only got to 10M. Grade? 0.7.
There was a running meme at Google about feedback that went something like this: This project would've failed without this person. It failed anyway but it definitely would've without this person.
I like this meme because it illustrates how the same set of facts can be used to argue that someone did a good job or a bad job, and the difference between the two is whether or not the org likes them. The same set of facts can be summarized as "this project failed to ship" and "we failed fast, learned a lot and will take those learnings into future projects".
OKRs suffer from exactly those same problems.
Everywhere I've ever worked - OKRs or DPMs or Goals or whatever you want to call them or not - I've had a set of work that's assigned to me via JIRA tickets or Bugzilla tickets or some kind of ticket tracking system, and I've also had a constant deluge of co-workers asking for my help with something or other. The longer I work there, the greater the deluge of both requests for help and volume of work assigned. Every day I've ever come into work in my life (besides the one-two month "honeymoon period" when I start a new job), I have to decide between ignoring my coworkers pleas for help or slipping the deadlines on the individual tasks assigned to me.
Over the course of the past three decades, I've tried both ways - first, focusing on my assigned tasks, and second, dropping everything every time I get a request for help. I've found that, overall, the second way works best. I have to constantly apologize for missing deadlines, but if I keep turning away coworkers, not being enough of a "team player" shows up as a nastier mark in my performance reviews than always being behind on tasks.
This. It's not a joke either. At a previous job, we wanted to use the G Suite API to access calendar data from GCP, but that required help from the Info Sec and Cloud teams (we didn't have the right permissions to do it ourselves). We got push back for asking for help because it wasn't in those teams' OKRs. So we effectively gave up for an entire quarter and then some.
Even so, how does one write an OKR for a scenario like this? Objective: be able to access GSuite API from tool Foobar so it knows about everyone's availability. Key result: Umm, can complete a feature everyone wants?
Or maybe the KRs are correct as written, but the security teams have it wrong and their KRs should be deprioritized in favor of delivering yours. When KRs clash you need to have a conversation and resolve the conflict in the way that maximizes value for the business.
Ultimately prioritization of work always involves trade-offs, and there needs to be some mechanism for handling these. No system can handle 100% of cases without adding human judgement.
To me it seems the pathology in your example is that everyone was just sticking to the KRs, instead of coming to the best business outcome (and if your request was not prioritized, you should be able to understand the reasoning used to decide that). I can see how OKRs could be used as a shield to hide from those difficult conversations, but it’s not clear to me that they necessarily make things worse; if you didn’t have OKRs maybe your teams would be finding other ways to avoid working on things that they don’t want to help with.
> Even so, how does one write an OKR for a scenario like this? Objective: be able to access GSuite API from tool Foobar so it knows about everyone's availability. Key result: Umm, can complete a feature everyone wants?
The key thing about an objective is that it is a high-level goal. You've written the implementation into the O, which is a bit of an anti-pattern.
It's a bit hard to extrapolate exactly what the context was for your feature, but let's say your team owns an internal tool that orchestrates a business ops system that manages a task queue for "agents"; say "review these potential fraud cases that the system flagged".
As this is an existing system that you're trying to improve, not a new product, your team's objective for the Q might be attacking the main stakeholder-facing pain point with "Improve response time for manual review tasks". A KR that measures this might be "P50 latency for handling fraud reviews is reduced from 5d to 1d". (This formulation gives you a progress bar -- "0% complete" is >=5d, "100% complete" is <= 1d. You can easily build a dashboard for this, which is the holy grail for KRs.)
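That progress-bar mapping can be sketched in a few lines. The baseline and target mirror the hypothetical 5-day-to-1-day example above; the function name is mine:

```python
# A sketch of the KR "progress bar" described above: map the current metric
# value onto 0-100%, where the baseline (5 days) is 0% and the target (1 day)
# is 100%. The baseline/target values and the metric itself are hypothetical.

def kr_progress(current: float, baseline: float = 5.0, target: float = 1.0) -> float:
    """Fraction of the way from baseline to target, clamped to [0, 1]."""
    frac = (baseline - current) / (baseline - target)
    return max(0.0, min(1.0, frac))

print(f"{kr_progress(5.0):.0%}")  # 0%  -- still at the 5-day baseline
print(f"{kr_progress(3.0):.0%}")  # 50% -- halfway to the 1-day target
print(f"{kr_progress(1.0):.0%}")  # 100%
```

The clamping matters: a regression past the baseline reads as 0% rather than a negative bar, and overshooting the target caps at 100%, which keeps the dashboard honest.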
Now, based on the observation that most of your significant delays for this process are due to tasks being scheduled on agents that are on vacation, an initiative that you propose to progress the KR is to add calendar-awareness to your service's scheduling process. But, there are a bunch of possible initiatives that could move the needle here, and so as a team you get together and figure out which are going to have the best bang-for-buck.
Formulating the OKRs like this gives the team freedom to figure out exactly how they are going to solve the problem, while also communicating to the rest of the org what you're working on (and giving the org a chance to say "isn't <other objective> way more important?"). And turning the implementation into a precise KR gives you the opportunity to discuss with stakeholders whether P50 is really the metric they care about, or if the business would be better served with P99 or some other way of measuring things. These metric discussions can be annoying, but at their best they can uncover subtle differences in expectations.
Note -- this is quite process-heavy; you wouldn't go into this much detail with a two-pizza-team startup! But if your team is dedicated to this internal tool in a company of tens of teams, then this level of detail might make sense for you.
I’d like to see some case studies here. Anyone got examples they like? I hear patio11 talking about Stripe a lot, and he says stuff like “could we deliver something faster?” which I think gets at an urgency for results, interested how they coordinate the layers and orgs.
I think it’s plausible that OKRs forestall agility and innovation as-used. But (at risk of invoking the “no true Scotsman” argument), maybe these companies are just doing OKRs wrong? OKRs are supposed to be abstract enough that you can change your plan if something better comes along. So if your KR is tightly coupled to a particular implementation then you can’t pivot in the face of new information. A quarter is a long planning horizon for small/medium sized companies! But if your KR is loosely-coupled, then it leaves room for new approaches. This balance is hard! The maximally-loose O/KR for many companies is just “succeed/increase share price by X% QoQ”. This doesn’t give you any direction.
I think at their best OKRs provide bi-directional information flow; both giving a way to make output from the lower levels more legible to the upper levels, while also making the objectives, desires, priorities of the upper levels more legible to the lower levels. I think any replacement has to achieve both of these things too.
A status meeting serves to reinforce the social status of the one who called the meeting. No other status is relevant.
* Reverse-engineering the todo list to produce OKRs
* OKRs being the wrong fit for the team and/or the company's lifecycle.
The second one is more damaging, because it's subtle, whereas the first is obvious to everyone involved.
Fundamentally, OKRs are a tool that should allow teams to make decisions about what to focus on, with the knowledge that they're aligned with the business objectives. If a team already has an immutable quarter+ roadmap, they're not making any decisions, they're just working; OKRs aren't a good fit for this kind of team. OKRs done well _should_ result in teams feeling empowered, because they can see the link between their actions/decisions and overall success. OKRs done poorly have the exact opposite effect; not just benign, but harmful.
I recently saw that. In a training. By a professional, paid, external trainer.
We are supposed to take our backlog, comb it for stuff that always falls between the cracks, collect that, these are our key results. [1]
Then we are supposed to invent a headline for all that assorted stuff. That's our objective.
The whole procedure is supposed to be called "bottom-up OKR". [2]
You don't have to tell me, I have actually read Doerr's book and know better, but everyone in that training went from knowing nothing about OKRs to knowing falsehoods about OKRs.
[1] yes, he really confused tasks with measurable outcomes. And yes, that procedure ensures that the not-so-important stuff gets highest priority :-)
[2] in reality, "bottom-up" refers to the company hierarchy. Bottom-up OKRs are set by lower-tier employees.
Of course, having teams produce bottom-up OKRs has failure modes too -- it's not really strategic planning if the leaves of the org tree are setting the goals. So you need both.
A cycle of planning seems to work best -- a high-level goal is transmitted down, then each team engages in a local goal-setting process, then these goals go up, then there is coordination and refinement. But this takes forever and requires incredible discipline, so it's hard to do well.
Well, maybe not completely reverse engineer.. just strongly influence whatever the OKRs are. I guess… provided whatever is in the backlog is solving the highest priority problems.
I never really understood how an OKR is at all applicable to an individual. Maybe our OKRs were bad but they were always related to increasing some metric or measure. I can't personally, directly affect any of that. I can do my damnedest to do good work that I think will help with that, but ultimately I have no control over whether our conversion rate goes down because some other department did something that hurt it. Yeah we did a great job on some landing page but then marketing pushed the wrong audience to it so it tanks. How does an OKR help with that? Now it looks like I haven't met any of my goals?
Maybe the problem was we always had these executive-level, vague "objectives" set by the C-suite, but then nobody knew what to actually do with them. Objective: be an industry leader in innovation. What does that even mean?
In theory the idea of OKRs is fine. They are "supposed" to give individuals a form of autonomous growth which can be related back to the company and has a way to be measured and traced. It gives them the "personal responsibility", "growth" and "autonomy" that appeals to a particular type of person.
In practice, there are too many conflicting viewpoints, agendas, power hierarchies, goals and more, to make them work out. That's assuming everyone works in good faith with one another, too. As others point out, nothing's keeping a manager from using OKRs against you.
Stay tuned: in 10-20 years it will silently die off, be replaced by another flavor, and the square wheel will be reinvented once more.
I still don’t even understand how that’s supposed to work unless you’re an executive with wide latitude to do what you think is best. I worked on what my manager told me to work on. All I can control is the quality of my own work. Maybe it wasn’t meant for the rank and file.
The weird thing is that almost everyone, even high up in the organization, thinks these performance eval mechanisms are a baloney waste of time. But somewhere, at some level, somebody is really pushing hard for these and values them.
Is it a suit thing? At what point in one's career does someone all of a sudden decide "yeah, having everyone write paragraphs of BS justifying their work performance is a good use of time and THAT will REALLY solve problems"?
When somebody starts making a lot of noise that people should do it, the people under you seem to unanimously support it, and the people above you seem to unanimously support it. That is, except for the very few you can get an honest answer from -- those are the only ones who think it's bullshit, but they put up with it because everybody else is pushing it.
A better question is how the hell somebody getting paid to teach your people and organize the process gets enough credibility to mess with all the communication channels. Maybe an even better question is whether there is any way to make people feel safer at work than in a crazy absolutist king's court, so that they have less need to lie.
The worst corporate jobs I've had have been at privately held large firms: the places are big enough that you never know the executives, but you still end up spending days or weeks doing icebreakers and personality assessments because the CEO's life guru convinced him it would align the astrological feng shui.
It doesn't do anything for the company but waste time and money, but until it wastes enough to cause the leaders to lose all power, it'll continue flowing down to us peons.
I hope most people will agree that, in an organization, having an understanding - and agreement - of what each team plans to work on, along with the trust that they have reasonably well-thought-out steps toward that objective, is incredibly helpful.
The problem is, as with "Agile", "TDD", etc, people will run with this and lose track of what the actual end-goal is, i.e. to build things that matter while ensuring coordination between teams. So, you will see things that don't make much sense, like OKRs for individual contributors, OKRs that change monthly, objectives that aren't necessarily right for the company, key results that don't really tie into the objective, etc.
I'm not sure what the solution is. At a previous gig, I tried asking management plainly to state publicly what they're going to do and how they're going to do it, and no one did it. Then I tried, let's do OKRs "just like Google" and everyone jumped on, even though IMO it's the same thing...
But for Engineering? I had all kinds of nice-reading OKRs: reduce MTTR, increase MTTF, reduce bugs, reduce feature implementation time, etc. But in reality the day-to-day work of Engineering was "build the shit that the product team defines"; we had little influence over which stories went into each sprint, and invariably, every time we pushed for some technical matter, business trumped us and the stories went to the backlog.
My favorite version of this is when personal goals are attached to some kind of performance review framework, but the company keeps changing its performance review system every 6 months, so you literally never see those goals again.
Yes! In my past two jobs I finally learned never to pay attention to official performance reviews, career goal plans, and personal goal tracking systems: every 6 months to a year, like clockwork, someone important in HR gets replaced, a new goal-tracking tool gets bought, and everything is redesigned from scratch.
The only way to get a promotion or a pay rise is to push for it through a completely different channel, one that has nothing to do with these systems.
> Let's not have objectives just yet, we're not ready to communicate our goals to you; let's come up with key results... More like tasks, really, just Jira tickets. Our team already had to commit to management for the next year, we have a release plan and everything, so there is very little wiggle room there.
> And please, don't ask about any of those goals, you are just a code monkey, and you weren't there at all the meetings, but trust us, it's the most important and impactful thing we can do...
> Anyway, what's in the next 6 sprints? Let's just put them somehow into this OKR spreadsheet. Let's hope we don't need to change anything in the next sprint...
> Hmm, alright, it looks like we have some space there, let's come up with 2-3 extra projects that would realistically need a team effort and a month of focused work to complete, but you will do it on your own... Just remember that you should only work on items in the sprint, and the next 12 sprints are already planned, so we can't add your tasks to any of them. You also need to convince the team that it's important, but please don't bother the team with your tasks -- they will lose focus on sprint items.
> I remember the book said something about something-something measurable. Unfortunately, no matter how many analysts we hire, they all quit around 3-4 months after they join, I wonder why that is. Anyway, we don't really have "numbers", so we can't come up with metrics, and we can't measure our success in any way, and we don't know whether gigantic projects bring any improvement at all.
> Oh, and I'm pretty sure I will not be your boss, whoops, sorry, "competency lead" in three months because I'm actively interviewing to get out of here. Cheerio!
It missed that the reason the engineering org failed to deliver was that the internal customer would change the requirements of 6 month projects roughly once a week.
It was basically an exercise in trying to pin blame on engineers for the failure of the company. It didn't bother me too much because I was quitting anyway.
OKRs don't fix dysfunctional organisations.
Very true.
However -- note that the process of implementing OKRs has caused the leadership team to write down their expectations, which previously might have been unspoken. This is a first step towards resolving the problem. The next step is for the CTO to push back, hard, on the bits of these that aren't actually realistic, and hopefully get the leadership team aligned on what can actually be done. And so, the process of OKRs potentially has value here in flushing out unrealistic or mismatched expectations between different parts of the org.
This is one of those rare "my way or the highway" moments in leadership: as CTO at this company, it's your job either to get the leadership team to realize they are asking you for more than you can be expected to deliver, or to quit. You can't stick around and put your name on a plan that you know is impossible; otherwise you're going to be the one that failed for every quarter to come. And even worse, you can't sign your team up for this BS. It's your job to shield them from this kind of shit.
> Half the Objectives were outside the scope of the engineering team.
Shared OKRs are difficult, but sometimes they are unavoidable. The most difficult business problems usually are cross-functional. At their best, OKRs can help to make these cross-functional dependencies more explicit, and foster communication and collaboration around them.
If they are trying to have engineering take full ownership of a shared OKR, that's a big problem. But if you clearly call out the shared ownership, and consider both parties responsible for implementation, then I think that's OK.
> We were going to increase our release cadence 10x. We were also going to reduce production issues by 100%. That's right: our objective was 0 production issues while massively increasing our release cadence with 0 extra resources.
As a tangential point, increasing release cadence can definitely decrease your long-term rate of issues (see "Continuous Delivery" by Humble [1]) -- it forces you to automate manual processes, and manual processes are one of the main places errors creep in. That said, I think "count of issues" is a poor metric; you're better off with an uptime metric. And "100%" is the only strictly incorrect number to pick for uptime, because it is literally impossible; my (super-unscientific, don't hold me to it) rule of thumb for business people is "every extra 9 costs you 10x". So do you need 99.9% uptime or 99.99%?
[1]: https://www.amazon.com/Continuous-Delivery-Deployment-Automa...
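To make the "nines" trade-off concrete, here's a quick sketch (my own illustrative arithmetic, not from the book) of the downtime budget each level of uptime allows per year -- the function name and the 365-day year are assumptions for the example:

```python
# Downtime budget per year for each additional "nine" of uptime.
# Illustrative arithmetic only; real SLA definitions vary by contract.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes_per_year(uptime_pct: float) -> float:
    """Minutes of allowed downtime per year at a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    minutes = downtime_minutes_per_year(pct)
    print(f"{pct}% uptime -> {minutes:9.2f} min/year (~{minutes / 60:.1f} h)")
```

Each extra 9 shrinks the yearly downtime budget by 10x (roughly 8.8 hours at 99.9%, under an hour at 99.99%), which is why the cost of meeting it tends to scale the same way.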
6 months later
"So yeah that's the most we're gonna offer you for this raise because your OKR score was only..."
Every time.