A more specific implementation of "Make the change easy, then make the easy change".
Another tool that's useful (if you're doing email-based development at least) is always floating minor fixes and refactoring to the beginning of your series. Then the fixes and refactors can be checked in as they're reviewed, such that the final merge is much smaller.
Does anyone use this method in practice? How well does it work?
I can imagine it not working well in some cases, for example when a failure can only be resolved by some new thing (e.g. when a function was replaced by some other function).
There also seems to be an implicit assumption that the tests are 'sufficient', whereas I'd wager that the whole reason a revert is needed is that they are, in fact, insufficient. The article even says so:
> Obviously, the Mikado Method cannot work if you don’t have a good and highly reliable automated test suite.
But if you have a good test suite, then there should be no problem shipping the big change in one go!
I imagine the mikado method can still work though, because when shit does hit the fan, you can just downgrade to the previous version while keeping most of the preliminary changes? And that should then reduce the cost of the revert.
I use a similar strategy to this… much less formal but I’m excited to try this formal write up.
My strategy was more like:
1. Start making a big change.
2. See what breaks.
3. Move the big change to another branch.
4. Fix one broken thing and PR to main.
5. Rebase the big change back on top of main.
Essentially trying to break off chunks of the larger problem in small commits. I’ve had about a 90% success rate with this strategy and when it works developers really appreciate reviewing the smaller commits rather than one monster change.
I once heard that, “complex systems are built from simple systems,” and started to view my job more as identifying and working on the simple things to let the complex stuff fall out rather than attacking the complex stuff directly.
Edit 1: one other thing I will add is that this strategy has the huge benefit of producing many smaller tasks that more of a team can work on and commit to main rather than having one engineer do the whole refactor or having folks working out of a half functional refactor branch with nonstandard git workflows.
Edit 2: I've used this strategy in codebases with and without automated/unit testing. Where automation didn't exist, I've used dedicated QA staff. QA also appreciate testing small changes that impact part of a product rather than 'retest the entire product'. In environments where productivity is constrained by QA turnaround time, I've prepared changes in a staging branch and had QA test the latest staging branch as soon as the previous one merged.
Yeah, that’s my method as well. Start big and even fix nits along the way. Anything that’s in the way is fair game as long as all of the tests are passing. As fundamental, atomic changes are discovered, peel those off into separate commits and PRs justifying their current and future benefit. Rebase those changes on the big branch and repeat until all the changes in the big branch are merged.
I’ve found that having a lot of small commits in my dev branch also helps make rebasing easier. When there are conflicts, it’s easier to reason about what atomic set of changes they might affect than the whole wad. It also makes peeling off those changes into their own branch easier if things get too hairy.
> started to view my job more as identifying and working on the simple things to let the complex stuff fall out rather than attacking the complex stuff directly.
This corresponds well with what I think software development actually is: it's almost entirely an exercise in complexity management.
I'd never heard of this until today, but it is more or less the strategy that I've been following for a few years now. I can say it works pretty much as advertised here. I've done some major refactors by splitting them into atomic changes like this, and although it takes a lot of time and effort, the end result is worth the work put in.
What I mean is that a big refactor might take 9 months to do atomically whereas it could have been done in 3 months as a massive single PR. However I guarantee you we'd have spent 6+ months cleaning up unexpected issues that show up after merging the giant PR. In the atomic commit approach what you get at the end of the 9 months is a solidly engineered product without those issues.
I will say that on a team, one issue you encounter is that it is very hard to keep track of these work-in-progress refactors that are getting in piece by piece, and they can cause rebase hell for other committers if there is a lot of refactoring involved. Splitting one giant refactor into 3 smaller refactors is better for the product, but it represents 3x the work for other team members, who have to rebase all their changes every time something is merged.
>What I mean is that a big refactor might take 9 months to do atomically whereas it could have been done in 3 months as a massive single PR.
My experience with these 3 month-long PRs is that they have a tendency to not get finished. Then they get stale. Then they get abandoned. Then somebody starts another one.
In one case it was finished, but the CTO was just too afraid to deploy it because it was big and it might break something. So while we were waiting for the right deployment window it got stale, meaning more conflict fixes....
I also think most people overestimate the amount of extra work involved in breaking up a refactoring. It's usually no more than ~20% more coding, and that extra work almost always pays for itself by reducing deployment risk.
> What I mean is that a big refactor might take 9 months to do atomically whereas it could have been done in 3 months as a massive single PR. However I guarantee you we'd have spent 6+ months cleaning up unexpected issues that show up after merging the giant PR.
Yes, one of the lessons of process engineering a la Deming is that the important thing is to get the process under statistical control, which typically means minimizing the variance inherent in a process, rather than the mean. Doing a big refactor might minimize the mean time to completion, but it will typically have a long tail which occasionally bites you and costs you dearly. The approach that minimises the variance might take longer in mean but generally when it's done it's done. Fewer unexpected surprises.
I used a very similar method to upgrade Django from a very low version to a very high version where I would periodically:
1) Upgrade the dependencies.
2) Find something that was broken and fix it so it worked on both versions - usually with an ugly if statement (sketched just after this list).
3) Merge to master and deploy.
4) Repeat 2 days later when I had another spare half hour.
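The "ugly if statement" in step 2 might look something like the sketch below. This is only an illustration, not this commenter's actual code; the import shown is just one real example of an API that moved between Django versions (django.core.urlresolvers was removed in Django 2.0, while its replacement, django.urls, already exists on late 1.x releases):

    # Version shim so the same code runs before and after the upgrade.
    # Delete it once the old Django version is gone.
    import django

    if django.VERSION >= (2, 0):
        from django.urls import reverse
    else:
        from django.core.urlresolvers import reverse  # removed in Django 2.0

    def profile_url(user_id):
        return reverse("profile", args=[user_id])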
Because it was a pressure-cooker environment, the testing was thinner on the ground than I would have liked. Mostly because this was an enormous change, the whole process took about a year. I actually left the company before I finished off the upgrade and I thought this work had been abandoned.
However, my coworker picked it up after I left and finished off the upgrade in the same way.
Before I joined there were several aborted attempts at a big-bang upgrade of the same thing - stale branches left to die by optimistic people who repeatedly thought they could do it all in one go.
>But if you have a good test suite, then there should be no problem shipping the big change in one go!
No, there is. If the change is large enough this is still a horribly bad idea - especially if there are structural changes to data involved. You might be able to make bigger chunks than you would otherwise, but you should still drip-feed large refactorings into production because something can always go wrong. The fewer changes you make at once, the lower chance something will go wrong and the easier it is to isolate and fix the problem if it does.
If you are starting out with a good test suite (something I find almost never exists on a project I'm joining), you should be upgrading everything via small increments and very frequently - negating the need for using this strategy on most upgrades.
In 2021, I ran a two-team, two-codebase detangling project with 20 engineers. Both codebases were writing to the same database, but one was the leader for some fields and the other was the leader for the rest. It was causing serious production issues in a highly regulated field, so we fixed it as fast and accurately as possible with almost no downtime. The very first thing was building in automated error detection processes so we could have product staff catching and fixing errors live.
We split the work into six releases over six months. Separating the two systems required a totally different architecture with a significant amount of code changes in the one system.
We didn't exactly use this method step by step (I hadn't heard of it). We thought logically through all the steps, and then reversed them. There were spike branches that we used to gather all the steps, and then we did delete them. But we only used maybe 4 or 5 spike branches total. Unit tests didn't drive anything - these changes were more system-integration level, so there weren't existing unit tests we could use.
In the end it worked: we made a major overhaul to the one system and updated the second to be the leader on all fields. We built an API on the second system so it could be called by the first when new records needed to be created.
So I'd say, conceptually this method is what we used, but really none of our to-do list was generated from unit tests. Perhaps we could have built out a ton of tests and separated it into a release every week or every day, but we felt comfortable with the monthly releases. That helped because the QA process was pretty long (a lot of features in a regulated industry). Each release they were testing for at least a week. That being said, we built out a lot of unit tests to complement the changes, and I don't think QA found any issues in any of the six releases.
As the author described it, the Mikado Method is recursive. You find a to-do list on the first branch, then possibly more things on the second branch, and more things on the third. We only did one "loop", maybe two, where the author might do many.
I got this by thinking about the Kent Beck quote "first make the change easy, then make the easy change".
I often use an approach like this, although I've never tried to pin it down formally and I'd never heard it called "Mikado". I really like this writeup!
When working on a big gnarly feature, I'll start by hacking it in and seeing what breaks, or what prerequisites are missing. When I find a piece that's standalone and definitely needed, I'll move that to a clean branch and get it in.
I usually do blow away the messy development branch and restart from scratch, often multiple times. I generally find the important and time-consuming part is finding out what's needed rather than the actual code itself. The final code is usually small and doesn't take long to rewrite. That matches the OP's experience pretty closely.
I'm a firm believer in Gall's law (https://en.wikipedia.org/wiki/John_Gall_(author)#Gall's_law):

> A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
Doing things step-by-step is the only way to build something that works. And while you can do that iterative development in a branch, it's almost always better to commit those iterative steps to the trunk as soon as possible, so you never have to do that massive risky merge at the end.
I’ve used it many times with great success. Not in a dogmatic way — I don’t usually delete my “code in progress”. I would rather cherry-pick standalone changes to new PRs, and then rebase. Or keep unsuccessful code in a branch that I can look at, when needed.
This is my first time hearing the term, but I’ve often done something kind of like it when dealing with upcoming changes from systems owned by other teams. Rather than make a big change that requires I’m online and deploy my code at the same time they deploy, I make my code so it supports the new and old system. They have their deployment window and I get to not care about it.
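For instance, something like the sketch below, where the code accepts both the old and the new upstream format so my deploy never has to coincide with theirs (the field names and the "schema" discriminator are made up for illustration):

    # Accept payloads from both the old and the new upstream system, so this
    # service keeps working whichever one happens to be deployed right now.
    def normalize_event(payload: dict) -> dict:
        if payload.get("schema") == "v2":  # new system's format (hypothetical)
            return {
                "user_id": payload["user"]["id"],
                "amount_cents": payload["amount"]["cents"],
            }
        # Old system's format (hypothetical); this branch gets deleted once
        # the old system is retired.
        return {
            "user_id": payload["userId"],
            "amount_cents": int(payload["amountDollars"] * 100),
        }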
I don't think having extensive tests makes it possible to safely change anything and everything.
Much of the work of a programmer is splitting up problems into chunks that can easily fit into our very limited bulbs. This article highlights that there are two aspects to this: one is the widely known one, which is using appropriate abstractions to reduce the complexity of any isolated part of the program; the other is keeping changes at a manageable scale. To me this sounds like a very valid and potentially underappreciated part of working on code with other people involved.
>> Obviously, the Mikado Method cannot work if you don’t have a good and highly reliable automated test suite.
> But if you have a good test suite, then there should be no problem shipping the big change in one go!
The author is referring here to unit test suites. These cannot catch every problem that may arise due to refactoring when integrating the refactored code with the rest of the code base. His method basically means that we should look for incremental integration tests to catch such problems early.
I use this (I don't call it Mikado or consciously use the idea); it's much easier than rebasing your development branches on whatever was last committed. Breaking things down into simple components is possible even for big things: just add them as pieces of an 'alternative path', and when it's ready, switch to the 'alternative path' for testing. The 'old path' code can then be removed later, also piece by piece, to avoid breaking stuff.
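A minimal sketch of that 'alternative path' idea, with a plain boolean switch (the flag and function names are mine, not the commenter's):

    USE_ALTERNATIVE_PATH = False  # flip to True when the new path is ready to test

    def order_total(items):
        if USE_ALTERNATIVE_PATH:
            return _order_total_new(items)  # new code, added piece by piece
        return _order_total_old(items)      # old code, removed later, piece by piece

    def _order_total_old(items):
        # Old path: kept working and untouched while the new path grows.
        total = 0
        for item in items:
            total += item["price"] * item["qty"]
        return total

    def _order_total_new(items):
        # New path: same result, restructured implementation.
        return sum(item["price"] * item["qty"] for item in items)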
Parts of it seem familiar. But rather than the weird cyclical sort of methodology, it's more like we create a framework for being able to migrate changes piecemeal without requiring a big change. The aim of small incremental changes remains the same but without all the silly redundant work.
I use this method and became a convert after a long-lived refactor branch caused merge hell, delayed a project for many months and caused all manner of bugs that we were fixing for a long time afterwards.
My very wise and experienced CTO said to me something along the lines of: "if you don't think it's possible to break a big refactor down into small steps, then you're probably not thinking hard enough". That has stuck with me.
I once had a coworker working on a refactoring branch for one and a half _years_. Just merging with the main branch after they had things working took weeks. Especially as we had switched coding styles and reformatted all the source code in the meantime… (It eventually shipped, but I'm fairly certain they could have saved themselves quite a bit of pain by landing it in smaller chunks.)
I have a colleague who performed a major migration of the linting tools we used, TSLint to ESLint. Of course, they also fixed everything flagged by the new rules. All in all, it took them roughly thirteen months. I'm certain that they would have had less trouble merging their changes in if they had done so in much smaller chunks.
I think smaller commits and fewer long-lived branches are good. Feature flags are good. Breaking stuff down logically is good.
But this example seems inefficient. Keep that original failing branch in a separate clone to refer back to. You can even pull the fix commits back into it to observe the test count improving gradually!
And with dependencies, certainly in .NET and Node and probably Ruby too, there are dragons in the details. Let the compiler and/or tests do that heavy lifting rather than a tree of post-it notes.
For example, you assume you know upgrading gem A will need changes XYZ, but I bet ya there are gotchas related to when you upgrade B and C along with A. Picking the granularity level takes a nose for it, but I wouldn’t waste time planning; just start chipping away at the problem to help get a lay of the land.
It feels like this method is like planning your walk to the shops and creating a survey of dogshit on your path. Just get walking and look where you walk!
I didn't really follow the benefit of deleting your WIP despite the author being emphatic that it was important. In my view, if a change A I'm working on requires me to detour and make some change B, then a mandatory requirement is that change B really does work for change A. And not just I think it will work based on my memory of what I was doing, but I tested it and it really does work. My ability to predict all the details and impacts of a change are fallible, otherwise this unexpected detour wouldn't have happened to begin with.
I guess that does point to a benefit - it decreases the desire to hold off on merging all the incremental changes until the full refactor is done. With the WIP there will always be one loose end to tie up before it is complete and you are really sure all the changes worked together. Without it, all you have is that the individual change didn't break anything so might as well merge even if there might be rework in the future.
I had the impression that this is a crucial aspect of the method. By starting from scratch each time, you avoid creating a chain of changes that may not be the smallest / most efficient, because you didn't have the full context when you made the previous change.
And if it turns out that it was, indeed, necessary and the right thing to do, it’s still fresh in your mind to reimplement.
I think the point is that if e.g. v7 removes a certain method, but the replacement is already available in v6, you can simply keep using v6 while updating your code to use the new method. By doing all this work on top of v6, upgrading to v7 becomes as simple as 'flipping the switch', and reverting it if issues do arise also becomes trivial.
But yes, I can definitely imagine cases where trying to apply this method leads to a chicken-and-egg problem.
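For what it's worth, the "replacement already exists in the old version" case does show up outside Rails. A concrete Python example: the 3.10 release removed the old aliases such as collections.Mapping, but the replacement has been importable since Python 3.3, so call sites can be migrated long before the interpreter upgrade itself:

    # This import works on Python 3.3+ and is required on 3.10+, where the
    # old spelling (from collections import Mapping) no longer exists.
    from collections.abc import Mapping

    def is_mapping(obj) -> bool:
        return isinstance(obj, Mapping)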
Like the person you replied to, the example makes no sense to me.
I don't program in Rails, but in the languages that I do (or have) program in, I don't recall a situation where it was _likely_ that the function or API that replaces something deprecated was already available in the older version.
The only case I can think of, that happens regularly, is that something would be deprecated and marked as such, with a replacement available at the point of deprecation.
That deprecated usage should be replaced before it is removed; and if we're talking about skipping multiple major versions over a long period, the replacement likely didn't exist in the older version, so this method still wouldn't work.
You try to do thing X. You sit down and start doing it. You discover there's a dependency tree, where you need to do A, B, C, and D. The idea is, instead of doing X, now you throw your efforts to do X away and do one of A, B, C, or D instead. Possibly applying this recursively when you find C requires G, H, and I. You throw X away but keep the discovered dependency tree.
As you do this more and more often the dependency trees will become more clear until eventually you can generally skip the "sit down and try to do X" step, or at least come close.
An example is, you had code that worked on a local server but now you want to run it in the cloud, "natively". You sit down to start doing the conversion, but discover that you had some files you wrote to disk, and to be "cloud native" they ought to live in S3. You stop the big-bang "native cloud" conversion. You work on abstracting away what you do with the files into some sort of interface (whatever your local language calls it) which backs to something that implements that interface on a local file system. (In this particular case, there's a good chance someone has already written this abstraction layer and you just need to go pick it up, but it's not out of the question to write it yourself either; often the price of the two options are comparable.) At all points in this process you can ship the results, because even if it's ugly behind the scenes due to a half-done conversion, and you can, say, read files but not write them through the new interface, it's still shippable. And you can ship it when the only implementation of the interface is to the file system. But later on, you can implement something that backs to S3 and slip it in to the interface, and even though it's likely you'll still find a thing here and a thing there that need to be tweaked, it's far less disruptive than if you had stuck with the initial "make a big PR to convert to cloud native all in one go".
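A sketch of that interface shape in Python (the class and method names are assumptions, and the S3 backend assumes boto3 plus AWS credentials; this is not the off-the-shelf abstraction layer the comment mentions):

    from abc import ABC, abstractmethod
    from pathlib import Path

    class FileStore(ABC):
        @abstractmethod
        def read(self, key: str) -> bytes: ...

        @abstractmethod
        def write(self, key: str, data: bytes) -> None: ...

    class LocalFileStore(FileStore):
        # First, shippable implementation: same behavior as before, now behind the interface.
        def __init__(self, root: str):
            self.root = Path(root)

        def read(self, key: str) -> bytes:
            return (self.root / key).read_bytes()

        def write(self, key: str, data: bytes) -> None:
            path = self.root / key
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

    class S3FileStore(FileStore):
        # Later implementation that slips in behind the same interface.
        def __init__(self, bucket: str):
            import boto3  # requires boto3 and AWS credentials
            self.bucket = bucket
            self.s3 = boto3.client("s3")

        def read(self, key: str) -> bytes:
            return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

        def write(self, key: str, data: bytes) -> None:
            self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)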
It is also more work. Although perhaps less "more work" than you might anticipate. A lot of the process of discovering what surprisingly deep dependencies on file system behaviors you had and untangling all of them is really the same in both cases. The saved effort of writing an explicit interface and jumping straight to the new thing can be easily lost in the additional effort of trying to juggle the intermediate phases (which can get quite complex) and feature flags and the bugs you introduce in the process. Even if you don't have a "Mikado" philosophy, you may well still find the process I outline in the previous paragraph is the one you adopt anyhow! In which case, why not do it independently and isolated, without a big bang?
I haven't thought of "Mikado" explicitly in my refactoring plans, but really, the only "big bang" rewrite I'm willing to undertake is when my back is against the wall and I simply have to switch languages. Though there can still be some value in trying to straddle some services across in the meantime, even that is itself often a rather large rewrite on both sides. Otherwise, there is almost always some sort of path to doing your rewrite incrementally, one step at a time, without ever stopping main development.
If you look at the graph of "value obtained over time", it's really a no brainer to adopt this approach, too. Generally these incremental refactorings also have intermediate value they deliver. Shipping that value now means you start benefiting from it before the big bang cutover. It also means you can stop at any time and just pocket the gains you've made, with no risk of having all this work go to nought. The big bang rewrite needs to be a desperate last-ditch effort, not the first thing you reach for.
Honestly, they generally aren't even as fun as developers anticipate; they may be more fun than an incremental refactor in the first month, but the eternal grind of discovering some corner case that the old code base handled and you completely whiffed on and now you need a major rearchitecting gets to be not fun in the third month and every month subsequently. Over the course of a full project I think the incremental refactors are actually more fun.
10 Identify the problem factors as you currently understand them.
20 Solve simple patterns in the problem to expand your understanding.
30 GOTO 10 until complete.
I wrote a blog post on atomic development a while back which references the Mikado method. https://tomsouthall.com/blog/atomic-development.
(I have not tried this method myself yet)
Initially I thought this was a plea for more Mikado chocolate sticks, a sweet snack popular in Europe :-)
Certain kinds of projects benefit from these tools. To say they are "uneconomic" is not only ignorant but condescending.
It comes off like a carpenter calling a welder "uneconomic" because the welding machine takes longer to start and costs more than a hammer and nails.