I'd argue that incremental change strikes the balance between tearing something down and being paralysed by potential consequences. Also, you can only learn so much by passive observation. Eventually you have to apply a small stimulus (incremental change) and observe the response. If the response is bad, reverse the increment.
The caveat is that if you reach a local maximum, a more drastic change may be required, but the journey to the local maximum has probably taught you enough about the system that you can safely make the bigger jump.
Chesterton's Fence doesn't tell us _not to change things_ or even prescribe some level of _incrementalism_; it just asks us to make a good-faith effort to _know_ why the thing we want to change is the way it is before we change it.
To follow the metaphor, maybe the reason the fence is there is that the county water supply is behind it, and they put it there to stop cows and other livestock from fouling the water.
If there's one "prescriptive" thing I find useful about the allegory that applies to software, it's thinking about how to write good documentation.
A lot of engineers would write something like "this fence is made of silver oak planks cut to a length of 3 feet and fastened together with aluminum wire", and while that may be useful if you have to fix the fence, for the people in the allegory it would be much more helpful to have a sign that says "this fence is here to keep cows from fouling the water in the lake behind it, because 5 houses nearby use it for drinking water". If either the 5 houses or the cows are no longer there, the sign makes the whole system much less resistant to change.
Often enough there is just no reason at all for the fence, or the reason is completely lost. So knowing why it's there is impossible, and if you condition taking down the fence on that, you are effectively prohibiting taking the fence down.
That's why somebody argues against it every time: at some point you have to abandon the rule, and all the interesting discussion is about when to abandon it, not how to follow it.
Another lesson from the story is that you might not be able to figure out the reason for the fence currently, but in springtime when the creek overflows, it might become obvious.
I'd counter-argue that the need to make quick, consequential executive decisions actually does a lot of damage. I understand that is what we value and how we judge our executives to be high-functioning, strong leaders, but I don't believe it gets to the best long-term outcomes for the organization.
Obviously paralysis is a different matter - but taking the time and consideration to get to the appropriate decision is actually highly productive in the long run - though if you are an executive it might not signal that you are going to be around for long, since the organization may think you aren't up to the task.
N.B. - taking the time to assess a decision is always a trade-off.
>Eventually you have to apply a small stimulus (incremental change) and observe the response.
In any sufficiently complex system this is probably true. Often there's a reason for something being there, and often enough that reason is a workaround for an issue in an external library. Then the library gets updated, but the workaround remains.
I've observed this often enough. I try to add a comment whenever I do this, linking to the issue on the external repo if possible, so that later on you can check if it's still needed.
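As a sketch of the kind of breadcrumb I mean (the library name, version, and issue URL are hypothetical placeholders, not a real project):

    def fetch_all_records(pages):
        """Collect records from paginated results.

        WORKAROUND: somelib <= 2.3 can return the same record on two
        adjacent pages (hypothetical upstream issue:
        https://github.com/example/somelib/issues/1234). Remove this
        dedup once we upgrade past the fixed release; check the linked
        issue before deleting it.
        """
        seen_ids = set()
        records = []
        for page in pages:
            for record in page:
                if record["id"] not in seen_ids:
                    seen_ids.add(record["id"])
                    records.append(record)
        return records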
But this is not foolproof, and not everyone will leave breadcrumbs. The breadcrumbs might also be eaten in the meantime by merges and refactorings.
In the end you're often left with a useless workaround; the person who made it is no longer at the company, and there's nothing left to observe that tells you why it was necessary.
If you don't want your system to eventually collapse under its own weight, sometimes you have to go for a risky refactoring. Doing it incrementally mitigates the risk.
> Eventually you have to apply a small stimulus (incremental change) and observe the response
I wonder whether that observation is transferable, so that the second-order thinking need not be original if there exists a directory of common decisions and outcomes.
I've been trying to validate[1] whether we could create a standard for common decisions and outcomes; then people could share their recipes of decisions and outcomes in a central directory, which could be used to make second-order decisions without actually having to make one ourselves.
[1] https://needgap.com/problems/263-plan-second-order-third-ord...
Agreed; under uncertainty, you should shrink the size of your bet.
For example, I think open borders might be good for the world (maybe outside of pandemics, say), with less than 100% confidence. However, debating completely open borders is sort of pointless. The actual lever we have is how much legal immigration we allow (and possibly how much enforcement we do against illegal immigration). So in practice, the road to open borders would be steady increases in the number of visas we grant (unless things start to go badly), not just tearing down the "fence" all at once.
Even with 100% knowledge¹ that Open Borders is the right policy, implementing it overnight can be a disaster, since institutions, culture, public opinion, etc. need time to adjust to the new reality.
¹ For the sake of argument. Let's not debate immigration policy.
There are also often (though not always) alternatives to incremental change.
These include pilot projects, modelling, stratified or distributed deployments (different regulations in different districts, rolling out changes to subsets of a userbase, etc.).
There are times when an entire system needs to be modified as a whole, but that is not every instance.
Here is a joke that is very popular in Russian tech circles, in my humble translation.
Why do engineers hate to work with other people's code? Let me try to make an analogy.
You are hired to finish building a scientific lab on a distant island. When you arrive, you see a half-finished building, a giant fan (the same size as the building), and a hot air balloon. In the basement you find a room full of floor mops - about a thousand of them.
You clean up the mess, finish all the work, and the lab starts its first experiment. Then, after only five minutes, scientists start running around screaming about a toxic gas leak. You call your predecessor.
- Buddy, what's up, there's a toxic gas leak, how's that possible?
- I don't know, everything should just work, did you change anything?
- Yes, I put all those floor mops away.
- Why did you do that?! They were meant to support the floor above, where the toxic gas reservoir is! It was too heavy, hence the mops!
- Are you crazy, supporting a toxic gas reservoir with mops? Why didn't you at least put a sign on the door? What do I do now?
- Turn on the fan, it will blow the gas away.
- Dude, I disassembled the fan first thing, why didn't you put a box of gas masks instead?
- Where do I find gas masks? And the fan was a spare from the other project!
- This is terrible! We're all gonna die here!
- Why are you still there? Jump to the balloon and fuck off the damn island!
In a perfect world where you are able to find out the reasons why the fence exists, it is fine to evaluate whether the reasons for keeping the fence still hold true. In many real instances, we may not be able to find out the original reasons with any reasonable level of certainty.
Under those circumstances, it is best to figure out options in front of us, make a decision and move forward. Doing nothing is also an option, but one cannot be stuck in endless analysis-paralysis and fail to decide.
Does anybody actually think we shouldn't change anything if we can't figure out the original reason after a good-faith effort? That just seems like a straw man that simply isn't what Chesterton's Fence is about.
It does happen quite often, especially when groups (n > 1) are involved in decision-making and the parties don't all agree on the causes, have divergent views, or have competing vested interests - e.g., UN decision-making, climate-change action planning, etc.
When this occurs, they dig in firmly in their positions and either argue for keeping the fence as-is (maintain status quo) or taking it down (action bias).
I may be reading your point incorrectly, but you mention no reason FOR a change, except change itself. I think that is fine on your own property, but wasteful when it comes to property that is public or common. If the change has an associated cost on public or common property, that cost is usually passed to the public or common owners, often when they don't want it.
Bureaucrats famously build things just to associate themselves with "getting things done".
This is one of the reasons that keeping code in a repository (especially a git repo) with good discipline on tying changes to issue descriptions is so key. It won't guarantee you'll find all relevant consequences of a change, but it really gives you a leg-up on the alternative of parsing out the results of a change by examining the code entrails.
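As a sketch of the kind of archaeology that discipline enables (the file path, line range, and commit hash here are made-up placeholders):

    # Find the commit that last touched the suspicious lines
    git blame -L 120,140 src/fence.py

    # Read that commit's full message, which should reference an issue
    git show --no-patch abc1234

    # Trace the file's history across renames
    git log --follow -- src/fence.py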
Chesterton's Fence is a valid point, but I dislike how the article celebrates it. Doing the archeology required to figure out why things are the way they are is oftentimes much more expensive than building the thing was in the first place. This leads to terrible situations where it's cheaper to simply build an expressway over the fence than to tear it down.
This is the stuff technical debt is made of.
Sometimes it's the right decision to risk second-order regressions in order to make forward progress. This of course depends on the circumstances and the costs of regressions.
True, but also ignoring Chesterton's Fence is what catastrophic rewrites are made of.
If you know why the fence is there and have confidence the reasons no longer apply, you can be bold. If you're not sure because the archeology is expensive, you should take baby steps if possible. (Which I think you get at with the cost of regressions.)
For my team, that's often flipping a feature flag where we don't expect any difference, and watching the output for a while to verify. First sign of surprise, we can quickly flip back. We get surprised more than we would like.
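A minimal sketch of that flip-and-watch pattern, assuming an in-process flag dict and stand-in parsers (all hypothetical, not any real flag library's API):

    import logging

    FLAGS = {"use_new_parser": False}  # flipped at runtime by ops tooling

    def old_parse(raw):
        key, _, value = raw.partition("=")
        return {key.strip(): value.strip()}

    def new_parse(raw):
        parts = raw.split("=", 1)
        value = parts[1].strip() if len(parts) > 1 else ""
        return {parts[0].strip(): value}

    def parse_record(raw):
        if not FLAGS["use_new_parser"]:
            return old_parse(raw)
        result = new_parse(raw)
        legacy = old_parse(raw)  # shadow-run the old path while verifying
        if result != legacy:
            # First sign of surprise: log it so the flag can be flipped back.
            logging.warning("divergence on %r: %r vs %r", raw, result, legacy)
        return result

Flipping the flag back immediately restores the old behavior without a deploy, which is what makes the increment cheap to reverse.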
It's a process, obviously. Sure, it is painful to spelunk the first time, but it gets easier over time. And it definitely gets easier if you are able to remove things over time and eliminate all that unneeded code/architecture noise.
With technical debt you know why something was done a certain way. It was a deliberate shortcut, done for a reason, to be fixed later (paying off the debt).
Just as a general rule in conversation, invoking so-and-so's law or this fence isn't a productive tactic. Instead, ask the question that wisdom suggests you ask. For this bothersome fence in particular, keep in mind that you are weighing an unidentified consequence against a proposed benefit, so at least endeavor to expend some of your own effort on suggesting what the value of the fence might be, or how one might go about finding it. You may find that some of that effort has already been undertaken and that its failure has not been shared.
> invoking so-and-so's law or this fence isn't a productive tactic.
Not true. If we are both familiar with the concept, saying "Consider Chesterton's fence" conveys a lot of information. If we aren't, chasing down the reference will almost always result in a much more articulate version of the concept than whatever you or I would come up with on the fly.
What I tend to notice about this is how easily people seem to fall on both sides of this fence (pun slightly intended). People can be annoyed that a change broke their workflow (why didn't anyone bother to find out if this was in use), while simultaneously pushing to change other things without applying Chesterton's Fence (we should just do ... to solve this problem).
Like so many logical constructs, it really just seems to exist so we can apply it when we are frustrated, not in our day to day decision making.
I think it's cheekier than that. It's a deliberate attempt to watch people miss points, argue, and split hairs over the mildest of mild rules of thumb.
Young developers often propose large-scale rewrites with little sense of the costs or risks involved. Changing old code often has unintended side-effects.
Before destroying/removing/deleting something, make sure you spend sufficient time to understand why it was created in the first place and if you can't, then leave it intact.
No need to spread one piece of butter over a whole bakery's worth of bread.
If I'm reading your revision correctly, you're suggesting leaving the feature in place if you cannot spend the time to understand its justification, rather than if you cannot find a justification. Your wording could be read either way.
The first interpretation is generally more defensible, but still not an absolute. There may be circumstances in which time does not exist and other exigencies prevail. As an example, if you come across a fence in the course of, say, responding to / evacuating from a natural disaster, you might consider briefly if there's some specific danger that the fence guards against (say, a cliff or other hazard), but determine that the greater benefit is in removal for the purpose of effecting rescue or escape.
First-responders don't agonise over why car doors were created when deploying Jaws of Life, earthquake responders don't survey plans of buildings to determine why walls exist before demolishing or removing them to access victims.
In less pressing circumstances, such as making incremental updates or changes to some system, performing some inquiry into purpose, intent, or function is strongly advisable, and Chesterton's Law is a check against naive and uninformed alteration without such considerations.
And this distinction is very important. Organizational scar tissue can and does develop, where things are a certain way and no one for sure knows why, but there are {vibes, superstition, ego, whatever} that develop into maintaining them.
It's really important to challenge these things, but they're tricky precisely because no one owns the decision (the person who made it left, or forgot, or who knows), so a naïve application of "don't change what you can't figure out the reason for" fails when there is no clearly documented reason, beyond a shared hunch that it mattered at some point.
Sure you should still investigate, but if after a decent investigation you can't figure out why, yes, change the thing. Sometimes you'll be wrong, but this is what we have error budgets for.
I'm not sure what Frost's _Mending Wall_ was supposed to add to this. In that poem, in the author's view the wall is entirely pointless, since it doesn't actually block anything. But his neighbor insists on repairing it because "good fences make good neighbors".
Its juxtaposition with Chesterton's essay is kind of weird, because Frost doesn't even know what the wall is for, and apparently neither does the neighbor.
Chesterton’s Fence: A Lesson in Second Order Thinking - https://news.ycombinator.com/item?id=22533484 - March 2020 (85 comments)
The Fallacy of Chesterton’s Fence (2014) - https://news.ycombinator.com/item?id=13063246 - Nov 2016 (26 comments)
The Fallacy of Chesterton’s Fence (2014) - https://news.ycombinator.com/item?id=11743965 - May 2016 (2 comments)
There was also https://news.ycombinator.com/item?id=23196731, which was on the front page for 5 minutes before we buried it.
It's true that it's an internet cliché though—so much so that Chesterton seems in danger of turning into his fence.