On Hacker News, banned accounts can still comment, but those comments are immediately dead until vouched.
What it's related to is the narrative you shared in your second paragraph, which I think you wrote mostly for color, but which perhaps seems to you like the only, or the necessary, story of how software gets made.
"And then", "And then", "And then"... "One feature at a time, you just keep stacking more and more on."
There's no foresight. There's no planning. There's no architecture. There's no readiness. There's no preparation. Tomorrow's needs are always beyond the horizon and today's are to be approached strictly on their own terms.
If you've taught software development to anyone, you're used to seeing this behavior. Your student doesn't know what they don't know (because you haven't taught them yet), and they're overwhelmed by it, so they just scramble together whatever they can to make it past the next most immediate hurdle.
Much (not all) of the industry has basically enshrined that as the way for whole teams and organizations to work now, valorizing it as an expression of agility, nimbleness, efficiency, and humility.
But it inevitably results in code and modules and programs and services and systems where nothing fully snaps together and operational efficiency is lost at every interface, and implementations for new features/challenges need to get awkwardly jammed in as new code rather than elegantly fleshed out as prefigured opportunities.
So you're right that most modern projects eventually just become swamped by these inefficiencies, with no ripe ways to resolve them. But it's not because of Rails vs Go or something, it's because pretty much everyone now aspires to build cathedrals without committing to any plans for them ahead of time.
What they gain for that is some comfort against FOMO, because they'll never face yesterday's plan conflicting with tomorrow's opportunity (since there is no plan). But they lose a lot too, as you call out very well here.
The best environment I've ever worked in was, ironically enough, fully invested in Scrum, but it wasn't what's typical in the industry. Notably, we had no bug tracker[0], and for the most part, everyone was expected to work on one thing together[1]. We also spent an entire quarter out of the year doing nothing but planning, roleplaying, and actually working in the business problem domain. Once we got the plan together, the expectation was to proceed with it, with the steps executed in the order we agreed to, until we had to re-plan[2].
With rituals built in for measuring and re-assessing whether our plan was the right one (e.g., sprint retrospectives), we were generally able to work tomorrow's opportunity into the plan we had. Since successfully delivering everything we'd promised by the end of a sprint was understood to be a coin toss, succeeding a lot gave us the budget to blow a sprint or two on chasing FOMO and documenting what we learned.
0: How did we address bugs without a bug tracker? We had a support team that could pull our andon cord for us whenever they couldn't come up with a satisfactory workaround (judged by how agonizing it was for everyone involved) for behavior that was causing someone a problem. Their workarounds got added to the product documentation, and we got a product backlog item, usually put at the top of the backlog so it'd be addressed in the next sprint, to make sure that the workaround was, e.g., tested enough that it wouldn't break in subsequent revisions of the software. Bad enough bugs killed the sprint and sent us to re-plan. We tracked the product backlog with Excel.
1: Think pairing but scaled up. It's kinda cheesy at first, but with everyone working together like this, you really do get a lot done in a day, and mentoring comes for free.
2: As it went: Re-planning is re-work, and re-work is waste.
"What would you use if you cannot use Terraform for a project?"
Since it was a SENIOR position, I initially answered with a warning about mixing Terraform-managed and non-Terraform-managed infra, because it can lead to unforeseen issues, especially when there are less visible dependencies between the two. I then mentioned that it could anyway be done with Python + boto3, with AWS CLI + bash, with Pulumi, with CDK, and then, after some extra talk, also with Ansible.
They didn't want a long answer with lessons learned in real prod; they wanted a one-liner: Ansible. They then told me to be shorter in my next answers and proceeded to ask something like 30 questions in a row about bash, Linux, Terraform, and Kubernetes, all of which I answered correctly (and with one-liners).
The result: discarded, because I answered that first question "chaotically". Although I was somewhat offended, because I don't like being discarded, I think I dodged a bullet in this case.
"A simple poka-yoke example is demonstrated when a driver of the car equipped with a manual gearbox must press on the clutch pedal (a process step, therefore a poka-yoke) prior to starting an automobile."
You would typically put the car in neutral before starting it; the clutch isn't required.
Furthermore, if there were such a poka-yoke preventing start-up when the clutch isn't pressed, it would block a common safety procedure for when a car won't start and is sitting in a dangerous position, for example on railway tracks or on the motorway. In such situations you move the car with the starter motor alone: put it in first gear, release the clutch, and crank the starter, which inches the car forward slowly on the battery and starter motor.
0: This is basically everything after the three-on-the-tree/four-on-the-floor era. I have yet to drive anything with an overdrive gear that didn't require popping the clutch to crank the starter.
https://www.forbes.com/sites/realspin/2012/05/17/the-law-of-...
https://www.npr.org/sections/thetwo-way/2011/05/27/136718112...
https://www.theatlantic.com/business/archive/2011/06/georgia...
https://www.politico.com/story/2011/06/ga-immigrant-crackdow...
https://www.timesfreepress.com/news/2013/jul/07/immigration-...
Closely tracking things you cannot control may provide a sense of control to some.
Or the other way round: Crunching enough data and building reasonable predictions based on that takes away the element of surprise, and the element of surprise for some translates to anxiety.
For me the only things that scare me are in the "I have no data on that" category.
> For me the only things that scare me are in the "I have no data on that" category.
I feel exactly the same way. It means that I have no idea what those things will wind up costing me, and that's the anxiety trigger as far as I'm concerned.
Also: Different people, different coping strategies. I don't want to wait until the "Active Shooter!" cell broadcast message finds its way to my phone. :)
For some people, potentially especially those on the spectrum, having as much information as possible to work with might bring mental security and stability.
People are different, brains are diverse.
I know it's stuff I can't control, and that's sort of the point. I want to know what I can't control so that I can know what I can control, if that makes sense.
0: Otherwise known as "touching grass."
Because yeah, the database is your main source of invariants. But there is no good reason for your application environment not to query the invariants from there and test or prove your code around them.
We do DRY very badly, and the most vocal proponents are the worst... But I don't think this is a good example of the principle failing.
I largely agree, but...
> ... the database is you main source of invariants.
I guess my upbringing in strict typing discipline leaves me questioning this in particular. I'm able to encode these things in my types without consulting my database at build time, and statically verify that my data are as they should be as they traverse my system, without much extra ceremony.
Encoding that in the database is nice (and necessary), but in the interest of limiting network round-trips (particularly in our cloud-oriented world), I really would prefer that my app can get its act together first before crossing the machine boundary.
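As a sketch of what I mean by encoding an invariant in the types, here's the branded-type trick in TypeScript (the names and the email rule are made up for illustration): once a value has been parsed, the type system proves it's valid everywhere downstream, with no re-checking and no database round-trip.

```typescript
// A branded string: a plain string can't be passed where Email is required,
// so "unvalidated email" is unrepresentable past the parse boundary.
type Email = string & { readonly __brand: "Email" };

function parseEmail(raw: string): Email | null {
  // Mirrors the same invariant a DB CHECK constraint would enforce,
  // but applied before any network round-trip.
  return /^[^@\s]+@[^@\s]+$/.test(raw) ? (raw as Email) : null;
}

function sendWelcome(to: Email): string {
  // Anything reaching here has already been validated; no defensive re-check.
  return `welcome mail queued for ${to}`;
}

const maybe = parseEmail("ada@example.com");
if (maybe !== null) {
  console.log(sendWelcome(maybe)); // fine: the type proves validity
}
// sendWelcome("not-an-email"); // compile error: string is not Email
```

The database constraint still exists as the last line of defense; the type just lets the app get its act together before crossing the machine boundary.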