Being a hedgehog is useful on the journey to domain mastery because sticking to frameworks saves you time and headache compared to not having any frameworks at all.
The 3 stages of domain mastery:
Stage 1 - No knowledge or structure to approach a domain (everything is hard, pitfalls are everywhere)
Stage 2 - Frameworks that are useful to approach the domain (Map to avoid pitfall areas)
Stage 3 - Detailed understanding of the domain (in which you can move through pitfall areas freely and see where frameworks fall short)
Hedgehogs are at stage 2. You move from stage 1 to stage 2 by adopting frameworks; hence, hedgehogs are seen as "thought leaders" because they teach the frameworks that lead MOST people to more mastery. Except when you're at stage 3, in which case frameworks introduce inefficiencies compared to your own understanding.
All good decisions must be made by stage 3 persons, but ironically, training is most efficiently done by stage 2 persons. Hedgehogs get more limelight because 90% of the population is at stage 1 and values the knowledge of stage 2 (and can't grasp the complexities and nuances of stage 3).
Many hedgehogs struggle to touch stage 3, and instead see stage 2 as mastery. This is compounded by the positive feedback loops of success - the frameworks save time, they build reputation, they let them save stage 1 persons from their ignorance, and they're the foundation of their current level and achievements. Frameworks are also convenient and broadly applicable to many problems; detailed domain mastery, in contrast, is difficult, time-consuming, and highly contextualized.
All of this makes it hard to move beyond stage 2 into stage 3.
To be a good X you must follow the rules. To be a great X you must have followed the rules so well as to learn why they’re there and when they should be broken.
Works for almost any X - writer, programmer, driver, etc.
> To be a good X you must follow the rules. To be a great X you must have followed the rules so well as to learn why they’re there and when they should be broken.
In my experience slavishly following the "rules" or "best practices" can actually be worse than never following them. Not understanding when it's good to deviate usually stems from a lack of understanding of why the "rules" or "best practices" exist in the first place. So much attention is spent following the letter of the rule rather than addressing the problems it was meant to solve.
Look no further than modern day "Agile" vs the actual Agile Manifesto
Most people agree with this. The actual challenge is to discern real rules from superficial bullshit. That is a level of criticality many people do not possess and some find hostile or disgusting.
Reminds me of a quote from the Mustard Seed Garden Manual of Painting:
"Some consider it noble to have a method;
others consider it noble not to have a method.
Not to have a method is bad;
to stop entirely at method is worse still.
One should at first observe rules severely,
then change them in an intelligent way.
The aim of possessing method
is to seem finally as if one had no method."
Applies really nicely to music. First you learn the rules so you know when you can break them. But if you start breaking the rules from the start, people will think you have never learned them in the first place.
A lot of good ways of looking at this, and your phrasing, @bombcar, is really nicely put.
In my opinion, once you play a game for a while, it's easy to know exactly when and how to break ANY rule - as long as you understand what the rules are there for.
Think of any agile framework; a ton of them are practiced like cults by most. However, if you recognize that those rules are there for 2 simple purposes:
1. you want to ensure nobody in the team is ever idle
2. you want to ensure everyone is working on the most relevant available task that hasn't been picked up from others
Then it's easy to see which rules are useful to the goals and which are hurting you, thus you just gotta break them!
This is just an easy example, this is really about anything in life though!
Also the social benefit of staying in stage 2 is strong. Stage 3 folks don’t mingle together easily, not just because there are fewer of them, but also because they have in some ways become themselves. It’s much easier for people to stick together when they believe in something together, whether it’s imperfect or not.
And then some stage 3 folks make a new stage 2 thing and the cycle continues. But I think people don’t want stage 3 attainment at the cost of giving up the social buffer (which is very reasonable, as being a social animal is in most cases a better life path than being a lonely scholar/innovator).
EDIT: I think what I am talking about applies better to spiritual/philosophical/psychological attainment, rather than technological. The effect is probably still there for less spiritual things like tech or writing, but probably less so.
> Hedgehogs are at stage 2. You move from stage 1 to stage 2 by adopting frameworks; hence, hedgehogs are seen as "thought leaders" because they teach the frameworks that lead MOST people to more mastery. Except when you're at stage 3, in which case frameworks lead you to more inefficiencies compared to your own understanding.
While interviewing recently, I've found a similar anti-correlation between general competency and people who focus on teaching frameworks and libraries.
The more competent candidates basically don't do any teaching based on frameworks/libraries (but they might have experience mentoring individuals); whereas the candidates who focused on teaching frameworks specifically (often to groups) were the least competent - the more they focused on teaching, the less competent they seemed to be! I found this kinda surprising and worrying, though my sample size is fairly small. To clarify, I know only so much can be evaluated in an interview; I mean basic competency here.
There’s an old pithy quote about this that I won’t repeat here. I’ve seen the same results, but I also know that I may be totally underestimating their competency in an interview: I am not always competent to judge someone’s competency. Keep an open mind: teaching can become a skill trap like any other, and it may take time for people to readjust if that’s no longer their primary responsibility.
The article talks about contingent advice being better than universal advice only in stage 3. If you're not at stage 3, then universal advice is helpful. I think that holds true for most people and most subjects, including myself.
Originally, I thought the article did a great job describing a common scenario in decision making, and I wanted to describe my intuition on why I think it comes about. It's not really a universal theory; more my own digestion/explanation of the hedgehogs-vs-foxes dynamic and my own interpretation of the issues that the article describes.
A universal theory is not the same thing as universal advice. What would one universally do just because one knows the universal theory?
...Now that you bring it up, the OP is offering a piece of universal advice. The irony seems stronger there. Not sure if that invalidates his advice or not. Probably just invalidates taking it as a hard and fast rule.
Also: often good knowledge allows you to see similarities in other domains - feels like mastery/intuition/domain intelligence :) - it allows you to instantly grasp new concepts in new domains. Works like isomorphism in math.
This is a useful way to frame it. I would add here that stage 2 is likely the most comfortable and that the migration from stage 2 to 3 has an embedded disincentive structure.
While this progression may exist, it is not what I took from the article, nor the original idea behind the hedgehog classification.
It is more about some thought-leader being keen on blockchains, machine learning, supply-side economics, or what have you, and looking at every problem/situation through the lens of wanting to apply this technology/policy to solve it, possibly ignoring the downsides/details/side-effects.
The article gives the fictional example of a project “just needing a relational database” but the “domain expert” trying to push them to use SpringySearch because that can also work as a relational database (and because this hedgehog is sold on SpringySearch).
This also accounts for why you see such a noisy contingent of people clamoring for difficult things (ex: software architecture) to be reduced to a discrete process: it suits the way they are currently learning, avoids sunk costs in said learning, sidesteps the hard question of, "do I really know this?" and maintains the current social status quo.
At stage 3, things aren't necessarily easy, but you have the skills to navigate much larger amounts of uncertainty than stage 1 or 2.
So frequently do I see the hottest new ECMAScript feature presented by one of these folks on Twitter, and I'm thinking how, out of the box, older browsers won't support it, and most junior developers don't have a great grasp on polyfills or transpilers. So the first thing I'm seeing is limitations in how I might use it in a production environment (or the small but non-zero work to update my tooling to properly support ES5), yet most of these folks don't seem to develop in environments requiring older browser support. In either case, the nuance always seems missed. It's a caricature of the depth that we sometimes need to dig into.
I think this progression could make sense as a personal learning progression. The real world is probably more complex than this though.
See for example https://news.ycombinator.com/item?id=27468360 where the person mentions pressure to simplify advice, and Tetlock's own work shows that the hedgehogs were the more famous and successful people. So some people may migrate backwards by simplifying a message for maximum impact.
> All of this makes it hard to move beyond stage 2 into stage 3.
Often, moving to stage 3 is a waste of time and resources. There are a few cases where your business's main secret sauce is stage 3 expertise, but for most other things - commoditize and focus.
I've seen so many teams and engineers trying to master stage 3 with no real business need or ROI. Engineers love mastering things, but good leaders guide them toward the right things to master and away from getting addicted to useless stage 3 expertise.
You say that as if there were any agreement about what that means. But the point of the article is that this is not true; you'll always find someone that insists on using technology (or technique) X for your project, only technology X will be different for each person. Moreover, X might be actually making the project harder to understand, longer to develop or have other significant downsides.
The point of reaching step 3 is not necessarily to be able to develop your own custom solution (although that can sometimes be valuable), but to be able to pick the right technologies for a given project given the myriad options available.
There is some value in it, you can become the guy people call in when their needs go beyond the standard framework. In the right situation this sort of work can pay handsomely.
It's cathartic to read other people who have to go through this.
I'm fighting red tape for my team as we build out a dashboard.
Outlook is packed with 1–2 hour meetings for the next 3 months where so far I'm:
* being asked to load test our system to make sure it can handle the load (of 3 people?)
* being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)
* told to have this run in k8s since everything runs in k8s
* other pedantic tasks by Sys Ops who think everything is a nail and love to argue their points ad nauseam (or worse, argue why their fav stack is the golden child)
I understand the need for standards and making sure they're followed, but there really needs to be a human element of "is this truly needed for what I'm trying to do?". So many engineering departments are all about automation, but they don't truly think through how much automation is actually needed; it's a one-size-fits-all approach.
I appreciate that this article comes to the conclusion that the more correct an answer will be, the more complicated it tends to be. I wish more people in decision making positions would understand this.
The minor conclusion of this article was the more interesting (and perhaps more practical) of the two:
Hide concessions to various leaders in the project roadmap.
This isn’t just a “bureaucratic trick” as the OP suggested, it’s actually a way to convert unconditional advice into contingent advice, by encoding a priority.
> to convert unconditional advice into contingent advice, by encoding a priority
This is one of the most important things I've learned as a developer, and one that I thought I invented myself, before I knew about agile, by keeping a whiteboard near my desk with yellow sticky notes ordered by priority:
"Yes, I get that it's a must-have feature, but where do you place it in relation to these other features?"
The concept of prioritization of features, and of saying "if I stopped dead at some arbitrary point in this list, would you have been happy with your order?" seemed so eye-opening to people at the time.
Yeah, that's not so much a "nifty bureaucracy hack" as a core skill to completing any project. It doesn't even have to be 20 unrelated people's feedback... it's my own priorities quite often that I mercilessly stuff on the backlog. YAGNI isn't just at the micro code level, it's a core project design skill. In fact I probably YAGNI my roadmap much harder than my code since I often have a good idea that I will in fact Need It at the microlevel after decades of experience and can save some time at that level, but at the project roadmap level anything you can trim is getting the product out generating value sooner.
(Obviously one can go too far, blah blah blah. But just as with code, we have a much larger problem in practice grabbing too much from the project feature buffet than too little.)
This is it. I do a lot of consulting work around this problem, and the roadmap is where the business and technology meet. It’s where you convert sprints into calendar boxes. It’s also the part most companies do poorly because nobody likes to spend money on good project/program managers (hint: hire product managers instead even though they’re ~25% more expensive because everything in 2021 is a product in some way).
When you do it this way, you can decide well ahead of time if you need to bring in a contractor to build a must-have feature your team won’t have bandwidth for. It flips the narrative and puts the responsibility on the business side (which usually controls the budget anyway).
This works especially well when you set and own those priorities, or if your management supports those priorities. Everyone who wants their feature will need to justify to you that their feature deserves a better placement on your roadmap.
It does not work if you can not defend your priorities.
> Create an extended product roadmap and put those items at least a year off into the future “and as long as they don’t seem relevant, you can just keep pushing them into the future.”
That actually seems to me like the root cause of all the calamity in the article, a culture of lying.
- Ceremonial unit tests for every little thing. The whole system is buggy as hell and we don’t have any confidence that the unit tests are truly covering critical parts of the app. But alas, test coverage, the god damn Pope that can never be bemoaned.
- I’m not making this one up: A/B testing for an internal enterprise app.
Test coverage is almost the perfect illustration of Goodhart’s law. Good programming practices do result in high test coverage, coverage is very easy to measure, but very easy to fake with useless “tests”. So, when the coverage is measured, the coverage goes up, but stops being meaningful.
I have seen bad unit tests being introduced when engineering management starts enforcing a threshold (80% coverage). Often developers will scramble to test trivial methods, such as getters and setters, but will not write any suitable tests that actually cover the business logic. It is even worse when management only enforces an 80% coverage for new changes. In those scenarios developers go out of their way to encapsulate changes in a separate class to avoid having to test the original codebase in a meaningful way.
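To make that Goodhart failure mode concrete, here is a minimal Python sketch (the class and the discount rule are hypothetical). Both tests below raise line coverage by the same amount, but only the second can ever catch a regression in the business logic:

```python
class Order:
    """Toy model of an order with one piece of business logic."""

    def __init__(self, subtotal):
        self.subtotal = subtotal

    def get_subtotal(self):
        # Trivial accessor: testing this adds coverage, not confidence.
        return self.subtotal

    def total_with_discount(self):
        # Business rule: 10% off orders of 100 or more.
        if self.subtotal >= 100:
            return round(self.subtotal * 0.9, 2)
        return self.subtotal


def test_getter():
    # "Coverage padding": can only fail if Python itself is broken.
    assert Order(5).get_subtotal() == 5


def test_discount_boundary():
    # Exercises the actual rule, including the >= 100 boundary.
    assert Order(99).total_with_discount() == 99
    assert Order(100).total_with_discount() == 90.0


test_getter()
test_discount_boundary()
```

A coverage gate scores both tests identically; only reviewing what the assertions actually check can tell them apart.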
Back when I was struggling to develop features in overengineered hell, I commented to my friends what a breath of fresh air updating a personal site with scp was.
They all gave sighs and shudders of disgust, but then again, they had normal programming jobs, so I suppose it seemed quite backwards to them.
Oh, but scp won’t update it atomically, so you should switch to a scheme that will. Then all you need to do is set cache policies correctly, coordinate with your CDN, and maybe do a staged rollout, just in case.
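In fairness, the atomic part is the cheap bit of that checklist. A minimal sketch in Python with hypothetical paths (the same trick works from shell): stage the new build beside the old one, then swap a symlink via rename(2), which is atomic on POSIX filesystems.

```python
import os

# Stand-in for uploading the new build (e.g. via scp) to a fresh directory.
os.makedirs("releases/v2", exist_ok=True)

# Build the new symlink off to the side, then rename it over the live one.
# os.replace() maps to rename(2): readers see the old site or the new site,
# never a half-updated tree.
if os.path.lexists("current.tmp"):
    os.remove("current.tmp")
os.symlink("releases/v2", "current.tmp")
os.replace("current.tmp", "current")
```

The cache policies, CDN coordination, and staged rollout are where the real cost lives, not the atomic swap itself.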
The other side of the coin you are not telling is: let's ship this small project to production without all those useless bells and whistles, and then fast-forward 12 months, suddenly everybody is using it and it starts failing spectacularly, and now all those teams that complained in the beginning have a fire to extinguish.
I've been too many times on this other side of the coin.
> being asked to load test our system to make sure it can handle the load (of 3 people?)
The problematic load in a dashboard isn't users; it's querying the data sources to get up to date information. For example, if you're running a query to aggregate a bunch of things with lots of joins and that query takes 1.5s to run but your dashboard tries to run it every second so it can be 'real time' then you're in for a bad time even with just 1 user. You absolutely need to load test a dashboard application that's running against production data.
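The arithmetic behind that can be made explicit with Little's law (the 1s/1.5s numbers come from the example above; the function itself is just an illustration): the number of queries in flight is arrival rate times query duration, regardless of how many humans are looking at the screen.

```python
def concurrent_queries(open_dashboards, refresh_interval_s, query_duration_s):
    """Little's law: in-flight work = arrival rate * time in system."""
    arrival_rate = open_dashboards / refresh_interval_s  # queries per second
    return arrival_rate * query_duration_s

# One open dashboard, 1s refresh, 1.5s query: the database never catches up.
print(concurrent_queries(1, 1.0, 1.5))  # 1.5 queries always in flight
print(concurrent_queries(3, 1.0, 1.5))  # 4.5 with three viewers
```

More than one query perpetually in flight means the backlog grows until something times out, which is exactly why even a 3-person dashboard deserves a load test against production-shaped data.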
> being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)
It might not be vital right now, but if you make a dashboard for it then it'll quickly become vital. Putting metrics in front of people focuses them on those metrics...
Cathartic is certainly the word. The title in particular really hits the mark for me.
There are a lot of people talking about computer programs, and telling us we should do things this way or that way. Even telling us that their way is certainly the best or only correct way.
A great many of these people - perhaps the majority - are plain wrong. Some of them talk such nonsense that I suspect they don't have any actual ability to program at all!
> * told to have this run in k8s since everything runs in k8s
I've seen a production system handling one request (which takes a handful of ms) every 2 seconds (work hours only, mind) in k8s running 8 pods. It is quite breathtaking.
I enjoyed that read. I suspect that it probably pissed off a few folks.
I'm a grizzled, scarred old codger that spent most of his career, saying "Are you sure that's a good idea?", only to be ignored, and then put in charge of mopping up the blood.
I have learned that "I told you so." is absolutely, 1000% worthless. It doesn't even feel good, saying it.
What I have learned, is that, when I see someone dancing along a cliff edge, I quietly start figuring out where the mops are kept. If that person has any authority at all, I'll never be able to stop them from their calisthenics.
One of my favorite quotes is one that pretty much describes "hedgehogs":
"There's always an easy solution to every human problem; Neat, plausible and wrong."
There's another one, by the same chap (H. L. Mencken):
"The fact that I have no remedy for all the sorrows of the world is no reason for my accepting yours. It simply supports the strong probability that yours is a fake."
Of course, the issue is that for every 10,000 appalling, messy, featured-on-rotten-dot-com failures, there's one spectacular success. Since humans are biased to think of successful outcomes as more likely than they actually are, the ingredients for that success become a "recipe," and are slavishly reproduced, without any critical thought, or flexibility.
It's like a witch doctor's formula for headache cure is bat urine, dandruff from the shrunken head of a fallen warrior chief, eye of newt, boiled alligator snot, and ground willow bark. The willow bark is what did it, but the dandruff thing is the most eye-catching ingredient, so it gets the credit, and every time the chief gets a hangover, they start a war.
Somewhere down the road, a copycat substitutes hemlock for the willow bark, and headaches become a death sentence.
I’ve found that management and decision making is much more of a social thing than anything else. Which is probably why your “I told you so”’s feel so worthless. I don’t think quietly making people crash into a wall is the best way to handle it either, but having worked in the same political organisation for a decade I can certainly see why it’s easier to end up in that category.
I prefer to drive into the wall with people instead, working at it together, when that’s what is going to happen despite any concerns I have. Usually when you end up being right, people will listen to you more the next time if you’ve stood there with them.
It also helps a lot when your prediction turns out to be wrong. When RPA became a big thing in the Danish public sector a few years back I was one of the stronger voices against it in most of our national ERFA networks. When we got the clear message from the top that we were going to do this, however, I jumped right in and helped us choose and build what is now the leading RPA setup in any Danish municipality aside from Copenhagen. I still think RPA is really terrible from a technical perspective, but I can also see the merit in how it’s currently saved us around 90 years' worth of manual case-work at the price of a few months of developer- and support-time in total. Because I was quick to jump aboard what I still thought was going to be a sinking ship when it was going to sail no matter what I did or thought, people don’t hold how wrong I was against me but instead lovingly tease me or sometimes cheer me up with other times where I’ve been right.
You have to want to do this of course. If your workplace doesn’t have the sort of people you’ll want to drive into a wall with, then your way is probably better than mine.
> When we got the clear message from the top that we were going to do this, however, I jumped right in and helped us chose and build what is now the leading RPA setup in any Danish municipality aside from Copenhagen.
That sounds almost exactly like traditional Japanese consensus.
Everyone argues for their opinion during the planning meeting, but once The Big Boss does the "chopping motion" with his (it's always a "he") hand, then everyone is expected to fall in line, and commit to the team effort.
They actually despise "I told you so." It's not smart to do that, in a Japanese corporation.
I see myself in the same role in my organization, except that I think in terms of a different bodily (semi) fluid.
I even have had crises of confidence, thinking that "I told you syndrome" is a psychological issue with me. I do tend to be overcautious, and tend to underachieve because of it.
But I get a grim satisfaction in knowing that when the next time the bodily fluid hits the air circulator, my pail and mop will make it liveable again.
There was some dedication which I thought came from a John Le Carre novel, "For those who served and stayed silent". I can't find the source now, but that's my spirit.
Different thing. "I told you so" is a smug, nasty statement. It wins no friends, and closes the ears of those most in need of it.
A postmortem is a clinical, reasoned, and scientific review. Everyone is on board, and agrees to abide by the results.
I worked for a Japanese company, for a long time. I made some colossal mistakes, during my tenure, and was told "That was, indeed, a mistake. We expect you to mitigate it, and not repeat it." Often, I would actually get more trust and responsibility, afterwards.
My leadership is constantly pushing for “hedgehog” style advice to be depersonalized, encoded in policies, and handed over to bureaucrats or automation to enforce.
Trying to empathize with their position, I think they think failure happens because the right hedgehogs didn’t show up to the design review that day, or forgot to harp on whatever point that time. They are never satisfied with the “it depends, there’s no hard and fast rule, you have to let the experts think about it in context” responses I give them when pressed for policy. This is limiting my advancement. But worse, someday someone will join the team and will write those hedgehog policies, and then I’ll have to live under them too.
Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is.
Leaders rarely want the nuance. 99% of the time the answer is "it depends" or you have to ask follow-up scoping questions. They just want to be handed a decision without the color or limitations. Sometimes that leads to future uncomfortable conversations where they assumed they would get Capability X but you only gave a "yes" or "no" because that's the preferred level of detail, leaving the rest to assumption.
Author here, thanks for sharing this. Let me know what you think.
I'm trying to connect the dots on research on expert advice and our field's 'thought-leaders'.
The connection is a bit tenuous but I think contingent advice can be shown to be better than non-contingent advice. I also think people are too confident in their opinions.
One thing I've found (as a person who advises engineering managers and startups) is that recipients of advice seem to value non-contingent advice more. They just want simple answers that don't make them think.
When someone asks me a question like "how should I interview candidates?", my default answer is "it depends". Tell me about the role. The company. The culture. The product. Remote or in-person? What's the team like. Then I can give a framework that gives you the answer. But people want answers like "use take-homes" or "do 2 behavioral interviews and 1 coding interview".
Same for technical decisions. They don't want to hear "it depends". They want to hear "use Rails and MySQL hosted on Heroku".
So I naturally find myself being pushed to give non-contingent advice.
I personally find that I want opinionated advice in two scenarios: either all the options look indistinguishably similar or they are very different sets of trade-offs that leave no clear winner. At that point, the advice is more a tool for breaking decision paralysis than a choice between options with noticeably different outcomes.
Yeah, I think that could be true in general. Tetlock found the hedgehogs were more famous and more wrong, and I got the idea from him that that was because simple advice is stickier and works better in sound bites, but it could also work the opposite way - i.e., the more people ask you for advice, the more you learn to tell them what they want, which might be oversimplified. So you get trained to be more of a hedgehog over time. Interesting idea.
So consider the position the people asking the questions are in. They're facing a problem; they need it solved.
All of us, when we're in that position, desire a solution. I'm not sure what differentiates those who want to fully understand the whole solution space, and all the context that dictates -why- a particular solution may be the 'best' (given a specific set of tradeoffs), but certainly, whether we are like that or not, we all desire the right solution ASAP.
I'd be super interested in how you respond to those who ask such questions; do they seem interested in explaining their problem in detail? If you, rather than say "it depends", instead immediately launch into questions, are they engaged in answering them? Can you then finish with a "given what you describe, because X, Y, and Z, I think (solution) would be the best fit for you. It has the downsides of A, B, and C, but those don't apply to you", or whatever. I.e., basically change the tone to always be focusing on solving their problem, while also allowing you to inform them, rather than "it depends" which could imply "there isn't a clear-cut solution to your problem".
This is pure gold for fresher developers, and something of which more experienced devs could use a reminder.
Every fad and every champion of every technique or framework has something to teach you, and they are often very happy to teach it to you at the wrong time. Trying to please everyone at the start of the project is tantamount to design by committee, and is a sure way to kill a project.
Thanks for reading it. It was one of those ideas bouncing around in the back of my head for a while but hard to put into words; then I read something about Tetlock and the dots sort of connected for me.
Software advice isn't totally a prediction, but it sort of is.
There is a very similar problem in advice-giving for technical questions, the problem of "Why do you want to do that? You should do this instead." I've seen others recommend trying to ask binary yes/no questions ("I think it's like this. Yes/no?") or to turn an open-ended question, when asked, into a set of binaries rather than guess at the intent.
The property that seems to be common in addressing both is benchmark-setting. The advice of "kick the can down the road" for less productive advice is premised on knowing that it doesn't fit your success benchmarks, but not wanting the confrontation (since a hedgehog benchmark is going to boil down to a single-issue attachment). Likewise, a battery of narrow binary questions that have a definite pass/fail characteristic constructs a form of fox knowledge - it's pragmatic in how it describes the "potential shape" of the outcome, so it makes for a better holistic benchmark than asking "what's the best way to do this?"
> the problem of "Why do you want to do that? You should do this instead."
IIRC, there's a word or idiom that describes this kind of solution; I can't think of it and now it's going to bother me until I do. It's a stackoverflow issue: someone asks "How do I do X?" Someone will counter, "Why do you want to do X?" and upon receiving additional information, answer, "You don't want to do X, or this other thing you're doing before doing X. You want to start this way and go down this path and that way you don't have to do X." Maddening!
Technically, I suppose, it was Archilochus: "The fox knows many things; the hedgehog one great thing."
Tetlock seems to have a slightly different interpretation to Berlin - (paraphrased from [1]) "hedgehogs have one grand theory; foxes are skeptical about grand theories".
One thing to note is that Berlin does not consider the fox as all around better than the hedgehog. Usually you'll see that people have a preference for the fox but Berlin considers some great people as hedgehogs such as Plato, Nietzsche, and Dostoevsky. He also stresses the fact that it's merely a metaphor and shouldn't be applied strictly.
The book is absolutely worth a read if you're into the subject.
You captured this well. I genuinely thought this was just something that happened in my company, which has been around for a bit. ;-) I accidentally stumbled upon your technique independently!
A) I’d love to have a coffee with you. Virtual or otherwise!
B) What do you think about alignment of priorities -within- a team? I’ve seen some interesting behaviors and misbehaviors in a team, where initiatives that are both trivial and non trivial die a death of a thousand cuts because of various and sundry plausible reasons. If I peel back the onion on it, it seems like those situations are ones that arise because of a fundamental lack of trust. Would you challenge or support that premise? If supported would you consider external stakeholders’ objections to stem from the same root lack of trust? It seems like we get more “hedgehog” like behavior when we don’t trust each other, and more “fox-like” behavior when there’s better trust and communication.
Great article. I really appreciate how you were able to incorporate Tetlock's findings.
I've been surprised by how resistant sales and marketing people are to Brier Scores for their own forecasting, given their interest in delivery estimates from engineering.
Having been on both sides of these types of discussions, I have a few thoughts:
Advice isn't always as unconditional as it sounds. An infra person saying that something should probably be done in some overly specific, preachy "best practice" way is sometimes thinking of things that a product person may not. For example, maybe the data guy told you to use WebScaleDB because scaaale, and you chose to use a simple YourSQL thing instead. But it turns out that in the next semester, a metal team you had never heard of is working on chaos testing, and they're making sure WebScaleDB handles datacenter failovers properly (but they don't know about your snowflake YourSQL instance silently chugging along in a forgotten corner of one DC). This sort of thing can be very tricky to anticipate, especially in large companies with siloed teams. I've found it useful to fully embrace the idea of leveraging technical debt: yes, maybe YourSQL won't scaaale, and maybe it'll die horribly and without explanation when failovers start happening, but if it can carry us to the next point in the evolution cycle, then we can reevaluate our options then, instead of being trapped in analysis paralysis and getting nothing done the entire time.
As a person giving advice, I feel that I fall in the contingent camp (looking at specifics before giving suggestions), but over the years, I've started to be mindful of cognitive overload: saying "it depends because X, Y, Z" often goes over people's heads, especially when they're already trying to soak up advice from a million different directions. Sometimes, it's better to just take a stance and spit out the TL;DR. If the stance happens to align with "best practices", you can just point at them and people are usually satisfied; if it doesn't align, you can often sway people to understand that there is nuance with a clever enough soundbite: "no, actually you don't want to enforce 100% coverage; full coverage tells you nothing about test quality, while uncovered code tells you what you're lacking" (or "you don't need WebScaleDB; a billion db rows can be binary-searched in about 30 comparisons"). Even if your dumbed-down advice now lacks nuance, there's always the opportunity to course-correct as the team builds more experience on top of that advice.
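As a quick arithmetic check on that kind of soundbite: worst-case binary search cost is ceil(log2(n)), so a sorted billion-row table takes about 30 comparisons. A small sketch (the helper name is just for illustration):

```python
import math

def binary_search_cost(n: int) -> int:
    """Worst-case comparisons for binary search over n sorted items."""
    return math.ceil(math.log2(n))

print(binary_search_cost(10**9))  # 30 comparisons for a billion rows
print(binary_search_cost(1024))   # 10
```

Which is the point of the soundbite: the cost grows so slowly with n that "scaaale" alone rarely justifies a heavier database.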
Sometimes, you have to be the thought leader and drive the change you want. At my company, for the longest time, every team was suffering the pains of Jenkins. You can't do X because otherwise Jenkins will not be able to handle it, they'd say. We've invested a lot in Jenkins, they'd say. A scaling solution is coming soon, they'd say. My team couldn't wait anymore and we took the initiative to bring in an off-the-shelf 3rd party solution that had all of the pain points figured out (and then some). This turned out to be a really good call, because just a week after we deployed the new solution, our Jenkins cluster - shadowing at this point - completely gave out due to scale limits. This third-party solution is now what other teams in the company are adopting - including teams that were investing in Jenkins integrations before.
If you are in a meeting where "thought leaders" are debating idea vs idea – STOP. Don't engage. You will be pulled in, some words or false statements will be very tempting to prove or disprove.
Sometimes you may even be asked to take a decision or commitment on the spot to just an "idea". STOP right there and don't fall into their trap. They just want their "idea" to win, and then they'll disappear in the execution, leaving you holding the bag. Worse still, in case the idea was flawed, they'll refuse to admit it. They'll come back and reinforce the idea, not allowing you to pivot or learn from mistakes. That's the nature of thought leadership – the "thought" matters more than everything else.
All ideas are open and welcome, but you don't take commitments based on just ideas. Ask them to show a spec or concrete doc, and start discussing spec vs spec, detail vs detail, plan vs plan, data vs data or anything concrete. You'll find many of these thought leaders silently disappear into the background then.
They will come back and try to abstract-ify the discussion again before decisions are taken. That's why you set ground rules before the meeting begins, and not when it's happening.
Thought leaders are all nice and fancy, until the rubber hits the road. 100% agree with the title alone: Don't feed them.
Found this to be a straightforward and interesting article. Not sure I've ever seen someone connect Tetlock's research to engineering planning before; I certainly hadn't made the connection myself. I also appreciated the tip that instead of saying no to something, you can just add it to a "future work" section of lower-priority but definitely-still-very-important-i-promise-really-i-mean-it tasks and everyone will end up happy.
From How to Talk So Kids Will Listen & Listen So Kids Will Talk (1980):
> My husband and I took Jason and his older sister, Leslie, to the Museum of Natural History. We really enjoyed it, and the kids were just great. Only on the way out we had to pass a gift shop. Jason, our four-year-old, went wild over the souvenirs. Most of the stuff was overpriced, but we finally bought him a little set of rocks. Then he started whining for a model dinosaur. I tried to explain that we had already spent more than we should have. His father told him to quit his complaining and that he should be happy for what we did buy him. Jason began to cry. My husband told him to cut it out, and that he was acting like a baby. Jason threw himself on the floor and cried louder.
> Everyone was looking at us. I was so embarrassed that I wanted the floor to open up. Then—I don’t know how the idea came to me—I pulled a pencil and paper out of my bag and started writing. Jason asked what I was doing. I said, “I’m writing that Jason wishes he had a dinosaur.” He stared at me and said, “And a prism, too.” I wrote, “A prism, too.”
> Then he did something that bowled me over. He ran over to his sister, who was watching the whole scene, and said, “Leslie, tell Mommy what you want. She’ll write it down for you, too.” And would you believe it, that ended it. He went home very peacefully.
> I’ve used the idea many times since. Whenever I’m in a toy store with Jason and he runs around pointing to everything he wants, I take out a pencil and a scrap of paper and write it all down on his “wish list.” That seems to satisfy him. And it doesn’t mean I have to buy any of the things for him—unless maybe it’s a special occasion. I guess what Jason likes about his “wish list” is that it shows that I not only know what he wants but that I care enough to put it in writing.
I do this to myself. Every time I want to buy something I stick it on my Amazon wish list. Then I forget about it. I almost never actually buy things on the wish list.
That's a great idea. I've unwittingly used similar techniques in the past. Now that I know it's a "real thing," I may have to start using it more. Thanks for sharing this!
I've done this trick fairly often. Like many "tricks" in people management, it works until people figure it out, so you have to be a little careful with it. Nobody likes to feel manipulated.
However, I've found that being really open and collaborative with people helps mitigate the manipulation factor by a significant margin. In other words, you get them to agree that the project is not the highest priority or the highest ROI thing to be working on. You ask: "Given the list of W, X, Y, and Z, and keeping in mind that we only have enough resources to tackle two of these at a time, do you think X is the most important?" and they say "Well, X would be cool but yeah, W and Z would give us the most ROI, so let's hold off on X and Y until we have more time and resources."
The key is to be (or appear) really genuine with this. If it's obvious that you're kicking the can down the road because you don't want to do it, you won't win any friends or influence people. But if you can approach it with "I'd love to do X but the realities of our situation mean that we can't" in an authentic way, then you stand a much greater chance of having both sides walk away with a sense of accomplishment. They feel heard and valued, and you don't have to waste resources on something you don't think is a good idea.
If you can't be authentic about that, then I would just go the truthful route of "This isn't going to happen" and try and just be honest about the realities of the situation. They might feel hurt and rejected, but it's better than them feeling manipulated, IMO.
I take it one further. "Okay, let's sit down together and write out a story that fully defines what you're asking me to do".
Half the time they won't bother. -Your- effort is free, but -their- effort has a cost.
The other half of the time they will, because they care about it, and so it goes into the backlog, and they get to see what stuff takes precedence (and it's a legitimately good faith effort on my part to see it ranked appropriately, and that they feel informed as to what is coming ahead of it and why).
In a sane situation, you usually have a "product manager" or someone who owns the feature set and the priorities. (If you don't, if the engineers have to answer to multiple people with conflicting desires and demands, that's the first problem.)
But if you have a product manager (and they're doing their job), then all you have to do is tell them the truth. Let them figure out which features are priority, or will lead to the most revenue, or whatever. That's their job.
I was in an Extreme Programming estimating session one time. A particular story came up for our consideration, and several people groaned. Nobody wanted us to do the story, because it was going to be a bear to implement. I said "Just tell them the truth. They'll figure out why this is a bad idea." We estimated six months, and they decided that they didn't want the feature at that price.
> Like many "tricks" in people management, it works until people figure it out [...]
"Good strategy works even when you know it's coming" - something like that, from "Sanctuary for all" :) One example of that was mentioned here a few times: features need money. And resources and time.
But sometimes features can be crammed into a project without a bigger investment - just talk to the devs, and often they will find a way. Sometimes it works perfectly, when the overall architecture is good or extendable. And often it makes a total mess of the codebase. But it costs nothing! ;)
Yeah, the "future work" trick is kind of interesting because it solves the immediate problem and it allows people to feel like you heard their concerns and valued their advice. If that is what they are looking for, then it's a great solution. If they have spotted legit problems, then you need to actually reassess things.
I guess like everything it is very contingent on the environment. It worked in this specific context.
I simply call hedgehogs ideologues. These people have a lens they see the whole world through and that tints everything a single color. It is a mental short-cut.
Some people like to call these mental models or lenses, and say that you should add as many as possible—switch out the green lens for a red lens and see if that makes things look better. And I agree, but I think if you have to consciously make “mental models” you are probably going to struggle to think critically about what the problems are anyway.
The truth is we probably all are a hedgehog at various times without realizing it. The only solution is to be as widely read as possible so that you do not short-cut to a few ideas that may or may not fit the challenge you are trying to solve.
In my experience, slavishly following the "rules" or "best practices" can actually be worse than never following them. Not understanding when it's good to deviate usually means a lack of understanding of why the "rules" or "best practices" exist in the first place. So much attention is spent following the letter of the rule rather than the problems it was meant to solve.
Look no further than modern-day "Agile" vs the actual Agile Manifesto.
"Some consider it noble to have a method; others consider it noble not to have a method. Not to have a method is bad; to stop entirely at method is worse still. One should at first observe rules severely, then change them in an intelligent way. The aim of possessing method is to seem finally as if one had no method."
“Learn the rules like a pro, so you can break them like an artist.” - Pablo Picasso
In my opinion, once you play a game for a while, it becomes easy to know exactly when and how to break ANY rule - provided you understand what the rules are there for.
Think of any agile framework: a ton of them are practiced like cults by most. However, if you remember that they exist for a couple of simple purposes, then it's easy to see which rules are useful to the goals and which are hurting you, and thus which ones you've just gotta break! This is just one easy example; really this applies to anything in life!
And then some stage 3 folks make a new stage 2 thing and the cycle continues. But I think people don’t want to trade the social buffer for stage 3 attainment (which is very reasonable, as being a social animal is in most cases a better life path than being a lonely scholar/innovator).
EDIT: I think what I am talking about applies better to spiritual/philosophical/psychological attainment, rather than technological. The effect is probably still there for less spiritual things like tech or writing, but probably less so.
While interviewing recently, I've found a similar anti-correlation between general competency and people who focus on teaching frameworks and libraries.
The more competent candidates basically don't do any teaching based on frameworks/libraries (though they might have experience mentoring individuals), whereas the candidates who focused on teaching frameworks specifically (often to groups) were the least competent - the more they focused on teaching, the less competent they seemed to be! I found this kinda surprising and worrying, though my sample size is fairly small. To clarify, I know only so much can be evaluated in an interview; this was basic competency.
The article talks about contingent advice being better than universal advice only at stage 3. If you're not at stage 3, universal advice is helpful. I think that holds true for most people and most subjects, including myself.
Originally, I thought the article did a great job describing a common scenario that usually occurs in decision making, and I wanted to describe my intuition about why I think it comes about. It's not really a universal theory; more my own digestion/explanation of the interrelation between hedgehogs and foxes, and my own interpretation of the issues that the article describes.
...Now that you bring it up, the OP is offering a piece of universal advice. The irony seems stronger there. Not sure if that invalidates his advice or not. Probably just invalidates taking it as a hard and fast rule.
In its defense, it's also extremely intellectually gratifying.
It is more about some thought-leader being keen on blockchains, machine learning, supply-side economics, or what have you, and looking at every problem/situation through the lens of wanting to apply this technology/policy to solve it, possibly ignoring the downsides/details/side-effects.
The article gives the fictional example of a project “just needing a relational database” but the “domain expert” trying to push them to use SpringySearch because that can also work as a relational database (and because this hedgehog is sold on SpringySearch).
At stage 3, things aren't necessarily easy, but you have the skills to navigate much larger amounts of uncertainty than stage 1 or 2.
See for example https://news.ycombinator.com/item?id=27468360 where the person mentions pressure to simplify advice, and Tetlock's own work shows that the hedgehogs were the more famous and successful people. So some people may migrate backwards by simplifying a message for maximum impact.
Definitely at stage 3 - which could also be the reason not as many people are using it.
If one stays open to new ideas while continuing to ask why a given idea might be good or bad, one can escape the stage 2 trap.
Often, moving to stage 3 is a waste of time and resources. There are a few cases where your business's main secret sauce is stage 3, but for most other things - commoditize and focus.
I've seen so many teams and engineers trying to master stage 3 with no real business need or ROI. Engineers love mastering things, but good leaders guide them toward the right things to master and keep them from getting addicted to useless stage 3 expertise.
You say that as if there were any agreement about what that means. But the point of the article is that this is not true; you'll always find someone that insists on using technology (or technique) X for your project, only technology X will be different for each person. Moreover, X might be actually making the project harder to understand, longer to develop or have other significant downsides.
The point of reaching stage 3 is not necessarily to be able to develop your own custom solution (although that can sometimes be valuable), but to be able to pick the right technologies for a given project from the myriad options available.
I'm fighting red tape for my team as we build out a dashboard.
Outlook is packed with 1–2 hour meetings for the next 3 months where so far I'm:
* being asked to load test our system to make sure it can handle the load (of 3 people?)
* being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)
* told to have this run in k8s since everything runs in k8s
* other pedantic tasks by Sys Ops who think everything is a nail and love to argue their points ad exhaustium (or worse, argue why their fav stack is the golden child)
I understand the need for standards and making sure they're followed, but there really needs to be a human element of "is this truly needed for what I'm trying to do?". So many engineering departments are all about automation, but don't truly think through how much automation is needed, defaulting instead to a one-size-fits-all approach.
I appreciate that this article comes to the conclusion that the more correct an answer will be, the more complicated it tends to be. I wish more people in decision making positions would understand this.
Hide concessions to various leaders in the project roadmap.
This isn’t just a “bureaucratic trick” as the OP suggested, it’s actually a way to convert unconditional advice into contingent advice, by encoding a priority.
This is one of the most important things I've learned as a developer, and one that I thought I invented myself, before I knew about agile, by keeping a whiteboard near my desk with yellow sticky notes ordered by priority:
"Yes, I get that it's a must-have feature, but where do you place it in relation to these other features?"
The concept of prioritization of features, and of saying "if I stopped dead at some arbitrary point in this list, would you have been happy with your order?" seemed so eye-opening to people at the time.
(Obviously one can go too far, blah blah blah. But just as with code, we have a much larger problem in practice grabbing too much from the project feature buffet than too little.)
When you do it this way, you can decide well ahead of time if you need to bring in a contractor to build a must-have feature your team won’t have bandwidth for. It flips the narrative and puts the responsibility on the business side (which usually controls the budget anyway).
It does not work if you cannot defend your priorities.
That actually seems to me like the root cause of all the calamity in the article, a culture of lying.
- Ceremonial unit tests for every little thing. The whole system is buggy as hell and we don’t have any confidence that the unit tests are truly covering critical parts of the app. But alas, test coverage, the god damn Pope that can never be bemoaned.
- I’m not making this one up: A/B testing for an internal enterprise app.
They all gave sighs and shudders of disgust, but then again, they had normal programming jobs, so I suppose it seemed quite backwards to them.
/s
Seriously though, rsync is your friend. :-)
The problematic load in a dashboard isn't users; it's querying the data sources to get up-to-date information. For example, if you're running a query to aggregate a bunch of things with lots of joins, and that query takes 1.5s to run but your dashboard tries to run it every second so it can be 'real time', then you're in for a bad time even with just 1 user. You absolutely need to load test a dashboard application that's running against production data.
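One common mitigation (a minimal sketch of my own, not something the comment prescribes): decouple the dashboard's refresh rate from the query rate by caching the expensive aggregate for a short TTL, so a 1-second poll never runs the 1.5s query more than once per interval. The class name, `run_query` callable, and TTL value here are all hypothetical:

```python
import time

class CachedQuery:
    """Serve a cached result for a fixed TTL instead of re-running
    an expensive aggregate on every dashboard refresh."""

    def __init__(self, run_query, ttl_seconds=30.0):
        self.run_query = run_query           # the expensive DB aggregate
        self.ttl = ttl_seconds
        self._result = None
        self._fetched_at = float("-inf")     # forces a fetch on first use

    def get(self, now=None):
        # `now` is injectable for testing; real callers use the monotonic clock
        now = time.monotonic() if now is None else now
        if now - self._fetched_at >= self.ttl:
            self._result = self.run_query()  # hits the DB at most once per TTL
            self._fetched_at = now
        return self._result

# Ten 1-second "real time" refreshes trigger only one real query.
calls = []
dashboard = CachedQuery(lambda: calls.append(1) or {"active_users": 3},
                        ttl_seconds=30.0)
for t in range(10):
    dashboard.get(now=float(t))
print(len(calls))  # 1
```

It doesn't remove the need to load test the query itself, but it caps how often user traffic can trigger it.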
> being asked to integrate with various analytics platforms so we can alert some poor schmuck at 3 AM in case the API goes down then (it's not a vital part of any platform)
It might not be vital right now, but if you make a dashboard for it then it'll quickly become vital. Putting metrics in front of people focuses them on those metrics...
It's just as likely that OP did already know that what you are insisting on is not relevant to their use case. That might be why they stated it.
There are a lot of people talking about computer programs, and telling us we should do things this way or that way. Even telling us that their way is certainly the best or only correct way.
A great many of these people - perhaps the majority - are plain wrong. Some of them talk such nonsense that I suspect they don't have any actual ability to program at all!
How can they be so sure of themselves?
I've seen a production system handling one request (which takes a handful of ms) every 2 seconds (work hours only, mind) in k8s running 8 pods. It is quite breathtaking.
I'm a grizzled, scarred old codger that spent most of his career, saying "Are you sure that's a good idea?", only to be ignored, and then put in charge of mopping up the blood.
I have learned that "I told you so." is absolutely, 1000% worthless. It doesn't even feel good, saying it.
What I have learned, is that, when I see someone dancing along a cliff edge, I quietly start figuring out where the mops are kept. If that person has any authority at all, I'll never be able to stop them from their calisthenics.
One of my favorite quotes is one that pretty much describes "hedgehogs":
There's another one, by the same chap (H. L. Mencken). Of course, the issue is that for every 10,000 appalling, messy, featured-on-rotten-dot-com failures, there's one spectacular success. Since humans are biased to think of successful outcomes as more likely than they actually are, the ingredients for that success become a "recipe," and are slavishly reproduced, without any critical thought or flexibility.
It's like a witch doctor's formula for headache cure: bat urine, dandruff from the shrunken head of a fallen warrior chief, eye of newt, boiled alligator snot, and ground willow bark. The willow bark is what did it, but the dandruff thing is the most eye-catching ingredient, so it gets the credit, and every time the chief gets a hangover, they start a war.
Somewhere down the road, a copycat substitutes hemlock for the willow bark, and headaches become a death sentence.
I prefer to drive into the wall with people instead, working at it together, when that’s what is going to happen despite any concerns I have. Usually when you end up being right, people will listen to you more the next time if you’ve stood there with them.
It also helps a lot when your prediction turns out to be wrong. When RPA became a big thing in the Danish public sector a few years back, I was one of the stronger voices against it in most of our national ERFA networks. When we got the clear message from the top that we were going to do this, however, I jumped right in and helped us choose and build what is now the leading RPA setup in any Danish municipality aside from Copenhagen. I still think RPA is really terrible from a technical perspective, but I can also see the merit in how it’s currently saved us around 90 years’ worth of manual case-work at the price of a few months of developer- and support-time in total. Because I was quick to jump aboard what I still thought was a sinking ship once it was going to sail no matter what I did or thought, people don’t hold how wrong I was against me but instead lovingly tease me, or sometimes cheer me up with the other times when I’ve been right.
You have to want to do this, of course. If your workplace doesn’t have the sort of people you’ll want to drive into a wall with, then your way is probably better than mine.
That sounds almost exactly like traditional Japanese consensus.
Everyone argues for their opinion during the planning meeting, but once The Big Boss does the "chopping motion" with his (it's always a "he") hand, then everyone is expected to fall in line, and commit to the team effort.
They actually despise "I told you so." It's not smart to do that, in a Japanese corporation.
I have even had crises of confidence, thinking that the "I told you so" syndrome is a psychological issue with me. I do tend to be overcautious, and tend to underachieve because of it.
But I get a grim satisfaction in knowing that when the next time the bodily fluid hits the air circulator, my pail and mop will make it liveable again.
There was some dedication which I thought came from a John Le Carre novel, "For those who served and stayed silent". I can't find the source now, but that's my spirit.
(I am not in the IT area, I am in academia.)
https://archive.uie.com/brainsparks/2011/07/08/beans-and-nos...
This is proof of bad company culture.
Doing post-mortems is important to learn what went wrong in the decision process and how to prevent it.
A postmortem is a clinical, reasoned, and scientific review. Everyone is on board, and agrees to abide by the results.
I worked for a Japanese company, for a long time. I made some colossal mistakes, during my tenure, and was told "That was, indeed, a mistake. We expect you to mitigate it, and not repeat it." Often, I would actually get more trust and responsibility, afterwards.
Trying to empathize with their position, I think they think failure happens because the right hedgehogs didn’t show up to the design review that day, or forgot to harp on whatever point that time. They are never satisfied with the “it depends, there’s no hard and fast rule, you have to let the experts think about it in context” responses I give them when pressed for policy. This is limiting my advancement. But worse, someday someone will join the team and will write those hedgehog policies, and then I’ll have to live under them too.
Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is.
"Software engineering is a thinking person’s game. I get that management wishes it weren’t, but it is."
The connection is a bit tenuous but I think contingent advice can be shown to be better than non-contingent advice. I also think people are too confident in their opinions.
Also another submission here: https://news.ycombinator.com/item?id=27462255
One thing I've found (as a person who advises engineering managers and startups) is that recipients of advice seem to value non-contingent advice more. They just want simple answers that don't make them think.
When someone asks me a question like "how should I interview candidates?", my default answer is "it depends". Tell me about the role. The company. The culture. The product. Remote or in-person? What's the team like? Then I can give a framework that gives you the answer. But people want answers like "use take-homes" or "do 2 behavioral interviews and 1 coding interview".
Same for technical decisions. They don't want to hear "it depends". They want to hear "use Rails and MySQL hosted on Heroku".
So I naturally find myself being pushed to give non-contingent advice.
All of us, when we're in that position, desire a solution. I'm not sure what differentiates those who want to fully understand the whole solution space, and all the context that dictates -why- a particular solution may be the 'best' (given a specific set of tradeoffs), but certainly, whether we are like that or not, we all desire the right solution ASAP.
I'd be super interested in how you respond to those who ask such questions; do they seem interested in explaining their problem in detail? If you, rather than say "it depends", instead immediately launch into questions, are they engaged in answering them? Can you then finish with a "given what you describe, because X, Y, and Z, I think (solution) would be the best fit for you. It has the downsides of A, B, and C, but those don't apply to you", or whatever. I.e., basically change the tone to always be focusing on solving their problem, while also allowing you to inform them, rather than "it depends" which could imply "there isn't a clear-cut solution to your problem".
Every fad and every champion of every technique or framework has something to teach you, and they are often very happy to teach it to you at the wrong time. Trying to please everyone at the start of the project is tantamount to design by committee, and is a sure way to kill a project.
To a hammer, everything looks like nails.
Well written.
Software advice isn't totally a prediction, but it sort of is.
The property that seems common to both is benchmark-setting. The "kick the can down the road" move for less productive advice is premised on knowing that it doesn't fit your success benchmarks but not wanting the confrontation (since a hedgehog benchmark is going to boil down to a single-issue attachment). Likewise, a battery of narrow binary questions with a definite pass/fail characteristic constructs a form of fox knowledge - it's pragmatic in how it describes the "potential shape" of the outcome, so it makes for a better holistic benchmark than asking "what's the best way to do this?"
IIRC, there's a word or idiom that describes this kind of solution; I can't think of it and now it's going to bother me until I do. It's a stackoverflow issue: someone asks "How do I do X?" Someone will counter, "Why do you want to do X?" and upon receiving additional information, answer, "You don't want to do X, or this other thing you're doing before doing X. You want to start this way and go down this path and that way you don't have to do X." Maddening!
Tetlock seems to have a slightly different interpretation to Berlin - (paraphrased from [1]) "hedgehogs have one grand theory; foxes are skeptical about grand theories".
[1] https://longnow.org/seminars/02007/jan/26/why-foxes-are-bett...
The book is absolutely worth a read if you're into the subject.
A) I’d love to have a coffee with you. Virtual or otherwise!
B) What do you think about alignment of priorities -within- a team? I’ve seen some interesting behaviors and misbehaviors in a team, where initiatives that are both trivial and non trivial die a death of a thousand cuts because of various and sundry plausible reasons. If I peel back the onion on it, it seems like those situations are ones that arise because of a fundamental lack of trust. Would you challenge or support that premise? If supported would you consider external stakeholders’ objections to stem from the same root lack of trust? It seems like we get more “hedgehog” like behavior when we don’t trust each other, and more “fox-like” behavior when there’s better trust and communication.
Also Ctrl F: "beleive" -> "believe"
For example, another type of useless feedback is so general as to be insulting; "this needs to scale" or "it needs to be high quality".
"too general" vs "non-contingent" are nice distinct buckets.
https://news.ycombinator.com/item?id=27468654
I'd be interested to hear your thoughts on that take since I thought it was very insightful.
I've been surprised by how resistant sales and marketing people are to Brier scores for their own forecasting, given their interest in delivery estimates from engineering.
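For readers who haven't run into it, the Brier score is just the mean squared error between probability forecasts and binary outcomes: lower is better, 0 is a perfect record. A minimal sketch (the forecasts and outcomes here are made up for illustration):

```python
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0 or 1."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said "90% likely" about two deals that closed
# and "20% likely" about one that didn't:
print(brier_score([0.9, 0.9, 0.2], [1, 1, 0]))  # ~0.02 (lower is better)
```

The appeal for engineering estimates is the same as for sales pipelines: it rewards both accuracy and honest calibration, so hedging everything at 50% scores worse than confident, correct calls.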
Having been on both sides of these types of discussions, I have a few thoughts:
Seemingly uncontingent advice isn't always baseless. An infra person saying that something should probably be done in some overly specific, preachy "best practice" way is sometimes thinking of things that a product person may not. For example, maybe the data guy told you to use WebScaleDB because scaaale, and you chose to use a simple YourSQL thing instead. But it turns out that in the next semester, a bare-metal team you had never heard of is working on chaos testing and making sure WebScaleDB handles datacenter failovers properly (but they don't know about your snowflake YourSQL instance silently chugging along in a forgotten corner of one DC). This sort of thing can be very tricky to anticipate, especially in large companies with siloed teams. I've found it useful to fully embrace the idea of leveraging technical debt: yes, maybe YourSQL won't scaaale and maybe it'll die horribly and without explanation when failovers start happening, but if it can carry us to the next point in the evolution cycle, then we can reevaluate our options then, instead of being trapped in analysis paralysis and getting nothing done in the meantime.
As a person giving advice, I feel that I fall in the contingent camp (looking at specifics before giving suggestions), but over the years, I've started to try to be mindful of cognitive overload: saying "it depends because X, Y, Z" often goes over people's heads, especially when they're already trying to soak up advice from a million different directions. Sometimes, it's better to just take a stance and spit out the TL;DR. If the stance happens to align with "best practices", you can just point at them and people are usually satisfied; if it doesn't align, you can often sway people to understand that there is nuance with a clever enough soundbite: "no, actually you don't want to enforce 100% coverage, full coverage tells you nothing about test quality, uncovered code is what tells you what you're lacking" (or "you don't need WebScaleDB; a billion db rows can be binary-searched in about 30 comparisons"). Even if your dumbed-down advice now lacks nuance, there's always the opportunity to course-correct as the team builds more experience on top of that advice.
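The arithmetic behind that last soundbite is worth checking: binary search over n sorted items needs at most ceil(log2(n)) comparisons, which for a billion rows works out to 30. A quick sketch:

```python
import math

# Worst-case number of comparisons for binary search over n sorted items.
def worst_case_comparisons(n):
    return math.ceil(math.log2(n))

print(worst_case_comparisons(10**9))  # 30
```

Thirty pointer-chases through an index is nothing on modern hardware, which is exactly why the soundbite lands: it replaces a vague fear ("a billion rows!") with a concrete, small number.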
Sometimes, you have to be the thought leader and drive the change you want. At my company, for the longest time, every team was suffering the pains of Jenkins. You can't do X because otherwise Jenkins will not be able to handle it, they'd say. We've invested a lot in Jenkins, they'd say. A scaling solution is coming soon, they'd say. My team couldn't wait anymore and we took the initiative to bring in an off-the-shelf 3rd party solution that had all of the pain points figured out (and then some). This turned out to be a really good call, because just a week after we deployed the new solution, our Jenkins cluster - shadowing at this point - completely gave out due to scale limits. This third party solution is now what other teams in the company are adopting - including teams that were investing in Jenkins integrations before.
Sometimes you may even be asked to make a decision or commitment on the spot based on just an "idea". STOP right there and don't fall into their trap. They just want their "idea" to win, and then they'll disappear during execution, leaving you holding the bag. Worse still, if the idea was flawed, they'll refuse to admit it. They'll come back and reinforce the idea, not allowing you to pivot or learn from mistakes. That's the nature of thought leadership – the "thought" matters more than everything else.
All ideas are open and welcome, but you don't take commitments based on just ideas. Ask them to show a spec or concrete doc, and start discussing spec vs spec, detail vs detail, plan vs plan, data vs data or anything concrete. You'll find many of these thought leaders silently disappear into the background then.
They will come back and try to abstract-ify the discussion again before decisions are taken. That's why you set ground rules before the meeting begins, and not when it's happening.
Thought leaders are all nice and fancy, until the rubber hits the road. 100% agree with just this title alone: Don't feed them.
> My husband and I took Jason and his older sister, Leslie, to the Museum of Natural History. We really enjoyed it, and the kids were just great. Only on the way out we had to pass a gift shop. Jason, our four-year-old, went wild over the souvenirs. Most of the stuff was overpriced, but we finally bought him a little set of rocks. Then he started whining for a model dinosaur. I tried to explain that we had already spent more than we should have. His father told him to quit his complaining and that he should be happy for what we did buy him. Jason began to cry. My husband told him to cut it out, and that he was acting like a baby. Jason threw himself on the floor and cried louder.
> Everyone was looking at us. I was so embarrassed that I wanted the floor to open up. Then—I don’t know how the idea came to me—I pulled a pencil and paper out of my bag and started writing. Jason asked what I was doing. I said, “I’m writing that Jason wishes he had a dinosaur.” He stared at me and said, “And a prism, too.” I wrote, “A prism, too.”
> Then he did something that bowled me over. He ran over to his sister, who was watching the whole scene, and said, “Leslie, tell Mommy what you want. She’ll write it down for you, too.” And would you believe it, that ended it. He went home very peacefully.
> I’ve used the idea many times since. Whenever I’m in a toy store with Jason and he runs around pointing to everything he wants, I take out a pencil and a scrap of paper and write it all down on his “wish list.” That seems to satisfy him. And it doesn’t mean I have to buy any of the things for him—unless maybe it’s a special occasion. I guess what Jason likes about his “wish list” is that it shows that I not only know what he wants but that I care enough to put it in writing.
However, I've found that being really open and collaborative with people helps mitigate the manipulation factor by a significant margin. In other words, you get them to agree that the project is not the highest priority or the highest ROI thing to be working on. You ask: "Given the list of W, X, Y, and Z, and keeping in mind that we only have enough resources to tackle two of these at a time, do you think X is the most important?" and they say "Well, X would be cool but yeah, W and Z would give us the most ROI, so let's hold off on X and Y until we have more time and resources."
The key is to be (or appear) really genuine with this. If it's obvious that you're kicking the can down the road because you don't want to do it, you won't win any friends or influence people. But if you can approach it with "I'd love to do X but the realities of our situation mean that we can't" in an authentic way, then you stand a much greater chance of having both sides walk away with a sense of accomplishment. They feel heard and valued, and you don't have to waste resources on something you don't think is a good idea.
If you can't be authentic about that, then I would just go the truthful route of "This isn't going to happen" and try and just be honest about the realities of the situation. They might feel hurt and rejected, but it's better than them feeling manipulated, IMO.
Half the time they won't bother. -Your- effort is free, but -their- effort has a cost.
The other half of the time they will, because they care about it, and so it goes into the backlog, and they get to see what stuff takes precedence (and it's a legitimately good faith effort on my part to see it ranked appropriately, and that they feel informed as to what is coming ahead of it and why).
But if you have a product manager (and they're doing their job), then all you have to do is tell them the truth. Let them figure out which features are priority, or will lead to the most revenue, or whatever. That's their job.
I was in an Extreme Programming estimating session one time. A particular story came up for our consideration, and several people groaned. Nobody wanted us to do the story, because it was going to be a bear to implement. I said "Just tell them the truth. They'll figure out why this is a bad idea." We estimated six months, and they decided that they didn't want the feature at that price.
"Good strategy works even when you know it's coming" - something like that, from "Sanctuary for all" :) One example of that was mentioned here a few times: features need money. And resources and time.
But sometimes features can be crammed into a project without a bigger investment - just talk to the devs, and often they will find a way. Sometimes it works perfectly, when the overall architecture is good or extensible. And often it makes a total mess of the codebase. But it costs nothing! ;)
Yeah, the future trick is kind of interesting because it solves the immediate problem and it allows people to feel like you heard their concerns and valued their advice. If that is what they are looking for, then it's a great solution. If they have spotted legit problems, then you need to actually reassess things.
I guess like everything it is very contingent on the environment. It worked in this specific context.
Some people like to call these mental models or lenses, and say that you should add as many as possible—switch out the green lens for a red lens and see if that makes things look better. And I agree, but I think if you have to consciously make "mental models" you are probably going to struggle to think critically about what the problems are anyway.
The truth is we probably all are a hedgehog at various times without realizing it. The only solution is to be as widely read as possible so that you do not short-cut to a few ideas that may or may not fit the challenge you are trying to solve.