I've been tracking software quality metrics for 3 years as an engineering manager. The pattern is getting worse, not better:
- Apple Calculator: 32GB RAM leak
- Spotify on macOS: 79GB memory consumption
- CrowdStrike: One missing bounds check = 8.5M crashed computers
- macOS Spotlight: Wrote 26TB to SSDs overnight
Meanwhile Big Tech is spending $364B on infrastructure instead of fixing the code.
I wrote up the full analysis with citations: https://techtrenches.substack.com/p/the-great-software-quality-collapse
But the real question: When did we normalize this? What happened to basic quality standards?
What are you seeing in your organizations?
Everything human beings create is ephemeral. That restaurant you love will gradually drop standards and decay. That inspiring startup will take new sources of funding and chase new customers and leave you behind, on its own trajectory of eventual oblivion.
When I frame things this way, I conclude that it's not that "software quality" is collapsing, but the quality of specific programs and companies. Success breeds failure. Apple is almost 50 years old. Seems fair to stipulate that some entropy has entered it. Pressure is increasing for some creative destruction. Whose job is it to figure out what should replace your Apple Calculator or Spotify? I'll put it to you that it's your job, along with everyone else's. If a program doesn't work, go find a better program. Create one. Share what works better. Vote with your attention and your dollars and your actual votes for more accountability for big companies. And expect every team, org, company, country to decay in its own time.
Shameless plug: https://akkartik.name/freewheeling-apps
I've published a blog post urging [0] top programmers to quit for‑profit social media and rebuild better norms away from that noise.
[0] https://abner.page/post/exit-the-feed/
Look at trappist brewers. Long tradition of consistent quality. You just have to devote your life to the ascetic pursuit of monkhood. It attracts a completely different kind of person.
This resource allocation strategy seems rational though. We could consume all available resources endlessly polishing things and never get anything new shipped.
Honestly it seems like another typical example of the “cost center” vs. “revenue center” problem. How much should we spend on quality? It’s hard to tell up front. You don’t want to spend any more than the minimum needed to prevent whatever negative outcomes you think poor quality can cause. Is there any actual dollar return from building software to a higher standard than “acceptable”?
As a simple version think about it this way: if a customer can't tell the difference in quality at time of purchase then the only signal they have is price.
I think even here on HN, if we're being honest with ourselves, it's hard to tell quality prior to purchase. Let alone for the average nontechnical person. It's crazy hard to evaluate software even hands-on. Think of how much effort you need to put in these days: the difficulty of distinguishing sponsored "reviews" from legitimate ones, the fake reviews, or how Amazon allows changing a product while inheriting the reviews of the old one.
No one asks you because all the sellers rely too heavily on their metrics. It's not just AI people treat like black boxes, it's algorithms and metrics in general. But you can't use any of that effectively without context.
As engineers, I think we should be a bit more grumpy. Our job is to find problems and fix them. Be grumpy to find them. Don't let the little things slip, because even though one papercut isn't a big deal, a thousand are. Go in and fix bugs without being asked to. Push back against managers who don't understand. You're the technical expert, not them (even if they were once an engineer, those skills atrophy, and you get disconnected from a system when you aren't actively working on it). Don't let them make you invent arguments about some made-up monetary value for a feature or a fix. It's management's job to worry about money and our job to worry about the product.

There needs to be a healthy adversarial process here. When push comes to shove, we should prioritize the product over the profit while they do the opposite. This contention is a feature, not a bug. Because if we always prioritize profits, well, that's a race to the bottom. It kills innovation. It asks "what's the shittiest, cheapest thing we can sell that people will still buy?" It enables selling hype rather than selling products.

So please, be a grumpy engineer. It's in the best interest of the company. Maybe not for the quarter, but it is for the year and the decade. (You don't need to be an asshole or even fight with your boss. Simply raising concerns about foreseeable bugs can be a great place to start. Filing bug reports for errors you find, too! Or bugs your friends and family find. Or even helping draft them for people, like those here on HN, who raise concerns about a product your company works on. It doesn't need to be your specific team; file the bug report for someone who can't.)
And as the techies, we should hold high standards. Others rely on us for recommendations. We need to distill the nuances and communicate better with our nontechnical friends and family.
These won't solve everything, but I believe they are actionable, don't require large asks, and can push some progress. Better something than nothing; otherwise there will be no quality boots left to buy.
https://en.wikipedia.org/wiki/Boots_theory
The more loudly someone speaks up, the faster they are shown the door. As a result, most people keep their head down, pick their battles carefully, and try to keep their head above water so they can pay the rent.
I don’t think you can draw conclusions from that short a period.
As a counterpoint: in the ‘80s and early ‘90s, my brain was almost hardwired to hit the hotkey for “Save” every few seconds while working, even though that could mean applications became unresponsive for seconds, because I didn’t trust the application to not crash while idle.
Yes, part of that is because applications nowadays rarely run out of memory, and likely don’t have code that tries to keep things running in low-memory conditions, but that’s not all of it. A significant part was that applications were buggy. (Indirect) evidence for that is that they also were riddled with security holes.
> How do I make a link in a text submission?
> You can't. This is to prevent people from submitting a link with their comments in a privileged position at the top of the page. If you want to submit a link with comments, just submit the link, then add a regular comment.
https://news.ycombinator.com/newsfaq.html
Look at the construction industry. Many buildings on this planet were built hundreds, sometimes a thousand or more years ago. They still stand today because their build quality was excellent.
A house built today from cheap materials (i.e., poor-quality software engineers) as quickly as possible (i.e., urgent business timelines) will fall apart in 30 years, while older properties will continue to stand tall long after the "modern" house has crumbled.
These days software is often about being first to market, with quality (and, cough, security) a distant second priority.
However occasionally software does emerge as high quality and becomes a foundation for further software. Take Linux, FreeBSD and curl as examples of this. Their quality control is very high priority and time has proven this to be beneficial - for every user.
We’ve industrialized the process without industrializing the discipline. The result is mass-produced code built on shaky abstractions, fast to assemble, and faster to decay.
Linux and curl weren’t built on sprints or OKRs. They were built on ownership, long time horizons, and the idea that stability is innovation when everyone else is optimizing for speed.
True. And yet, far more buildings built then are not standing. We just don't notice them, because they aren't still here for us to notice.
So don't think that things were built better then. A few were; most weren't.
I remember the good old days when nobody unit tested, and there were no linters or any focus on quality tooling in IDEs. Gang of Four patterns we take for granted were considered esoteric gold plating.
Sure, memory usage is high, but hardware is cheap.
In the ’90s, inefficiency meant slower code. Today it means 32GB RAM leaks in calculator apps, billion-dollar outages from a missing array field, and 300% more vulnerabilities in AI-generated code.
We’ve automated guardrails, but we’ve also automated incompetence. The tooling got better, the results didn’t.
All of the above is multiplied 1.3x-1.5x now that LLMs offer ever-faster ways to get up to speed by iteratively indexing knowledge. I believe we are still reliant on those early engineers whose software took a while to build (like a marathon), not the short-sprinted, recyclable software we keep shipping on top of it. The difference is that not a lot of people want to be in those shoes (responsibility/comp tradeoffs).