The core point they're trying to make is that agile (or similar) practices are the incorrect way to approach consolidation of smaller systems into bigger ones when the overall system already works and is very large.
I agree with their assertion that being forced to address difficult problems earlier in the process ultimately produces better outcomes, but I think it ignores the reality that properly planning a re-write of a monumentally sized, already-in-use system is practically impossible.
It takes a long time (years?) to understand and plan all the essential details, but in the interim the system you want to rewrite keeps evolving, and parts of the plan you thought were finished are no longer correct. In essence, the goal posts keep shifting.
In this light, the strangler fig pattern is probably the pragmatic approach for many of these re-writes. It's impossible to understand everything up front, so understand what you reasonably can for now, act on that, deliver something that works and adds value, then rinse and repeat. The problem is that for a sufficiently large system this will take decades, and few software architects stick around at a single company long enough to see it through.
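To make the "rinse and repeat" concrete, here's a minimal sketch of what a strangler-fig facade might look like. All the names here (legacy_system, new_billing_service, MIGRATED_ROUTES) are hypothetical stand-ins, and in a real system the facade is usually an HTTP proxy or API gateway rather than in-process dispatch; the point is just that migrated slices get routed to the new implementation while everything else still falls through to the legacy one.

```python
# Hypothetical sketch of a strangler-fig facade: routes that have been
# migrated go to the new service, everything else falls back to legacy.
from typing import Callable, Dict

Handler = Callable[[str], str]

def legacy_system(path: str) -> str:
    # Stand-in for the existing monolith.
    return f"legacy handled {path}"

def new_billing_service(path: str) -> str:
    # Stand-in for a newly extracted service.
    return f"new billing service handled {path}"

# Each release, another slice of functionality moves into this table.
MIGRATED_ROUTES: Dict[str, Handler] = {
    "/billing": new_billing_service,
}

def facade(path: str) -> str:
    """Dispatch to the new implementation if this slice has been migrated,
    otherwise fall back to the legacy system."""
    for prefix, handler in MIGRATED_ROUTES.items():
        if path.startswith(prefix):
            return handler(path)
    return legacy_system(path)

if __name__ == "__main__":
    print(facade("/billing/invoices/42"))  # served by the new service
    print(facade("/orders/7"))             # still served by the legacy system
```

Each iteration you grow the migrated table and shrink the legacy system behind it, which is why the approach can deliver value continuously but also why it can take so long to finish.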
A final remark I want to make is that, after only a few years of being a full-time software developer, I've found "writing code" to be one of the easiest parts of the job. The hard part is knowing what code needs to be written, and that requires effective communication with various people: other software developers and (probably more importantly) non-technical people who understand how the business processes actually need to work. If you want to be a great software developer, learn how to be good at this.
I wholeheartedly applaud this idea. IMO this is why big upfront design is so risky.
AI assistance would seem to favor the engineering approach, as the friction of teams and personalities is reduced in favor of quick feasibility testing and more complete planning.
Software has zero construction cost, but what it does have is extremely complicated behavior.
Take a bridge for example: the use case is being able to walk or drive or ride a train across it. It essentially provides a surface to travel on. The complications of providing this depend on the terrain, length, etc., and are not to be dismissed, but there's relatively little doubt about what a bridge is expected to do. We don't iterate bridge designs because we don't need to learn much from the users of a bridge (does it fulfill their needs, is it "easy to use", etc.) AND because constructing a bridge is extremely expensive, so iteration would be incredibly costly. We don't, however, build all bridges the same: engineers develop styles over time, repeat them for successive bridges, and iterate that way.
In essence, cycling is about discovering more accurately what is wanted, because so often we don't know precisely at the start. It lets one be far more efficient, because one changes the requirements as one learns.