Dan Morena, CTO at Upright.com, made the point that every startup is unique and therefore every startup has to find out what is best for it, while ignoring whatever is considered "best practice." I wrote down what he told me here (https://respectfulleadership.substack.com/p/dan-morena-is-a-...):
No army has ever conquered a country. An army conquers this muddy ditch over here, that open wheat field over there, and then the adjoining farm buildings. It conquers that copse of lush oak trees next to the large outcropping of granite rocks. An army seizes that grassy hill top, it digs in on the west side of this particular fast-flowing river, it gains control over the 12-story gray and red brick downtown office building, fighting room to room. If you are watching from a great distance, you might think that an army has conquered a country, but if you listen to the people who are involved in the struggle, then you are aware how much "a country" is an abstraction. The real work is made up of specifics: buildings, roads, trees, ditches, rivers, bushes, rocks, fields, houses. When a person talks in abstractions, it only shows how little they know. The people who have meaningful information talk about specifics.
Likewise, no one builds a startup. Instead, you build your startup, and your startup is completely unique, and possesses features that no other startup will ever have. Your success will depend on adapting to those attributes that make it unique.
Napoleon and his army would like to have a word with you…
I get the analogy, but I think it can be made a lot better, which would reduce the number of people who dismiss it because they get lost where the wording doesn't make sense. I'm pretty confident most would agree that country A conquered country B if country B was nothing but fire and rubble; it's pretty common usage, actually. Also, there are plenty of examples of countries ruled by militaries. Even the US president is the head of the military. As for "army," it's fairly synonymous with "military," only really diverging in recent usage.
Besides that, the Army Corps of Engineers is well known to build bridges, roads, housing, and all sorts of things. But on the topic of corps, that's part of the hierarchy. For your analogy, a battalion, regiment, company, or platoon may work much better. A platoon or squad might take control of a building. A company might control a hill or river. But it takes a whole army to conquer a country, because it is all these groups working together; even if often disconnected and not in unison, even with infighting and internal conflicts, they rally around the same end goals.
https://en.wikipedia.org/wiki/Military_organization
But I'm also not sure this fully aligns with what you say. It's true that the naive talk only at abstract levels, but it's common for experts too. Experts, though, almost always let specifics leak in, because their abstraction is derived from a nuanced understanding. We need to talk in both abstractions and in details. The necessity for abstraction only grows, but so does the whole pie.
It's a cute analogy, but like all analogies it breaks down under inspection. One might try to salvage it by observing that military "best practice" in the field and Best Practice at HQ need not be, and commonly are not, the same, either for reasons of scope or expediency. Moreover, lower-case "practice" tends to win more, more quickly. E.g., guerrillas tend to win battles quickly against hidebound formal armies.
For a startup, winning "battles, not wars," is what you need, because you have finite resources and have an exit in mind before you burn through them. For a large enterprise, "winning wars not battles" is important because you have big targets on your back (regulators, stock market, litigation).
One might paraphrase the whole shooting match with the ever-pithy statement that premature optimization is the root of all evil.
> If you are watching from a great distance, you might think that an army has conquered a country, but if you listen to the people who are involved in the struggle, then you are aware how much "a country" is an abstraction.
Most things of any value are abstractions. You take a country by persuading everyone you've taken a country; the implementation details of that argument might involve some grassy hill tops, some fields and farm buildings, but it's absolutely not the case that an army needs to control every field and every grassy hill top that makes up "a country" in order to take it. The abstraction is different from the sum of its specific parts.
If you try to invade a country by invading every concrete bit of it, you'll either fail to take it or have nothing of value at the end (i.e., fail in your objective). The only reason it has ever been useful or even possible to invade countries is because countries are abstractions, and it's the abstraction that is important.
> The real work is made up of specifics: buildings, roads, trees, ditches, rivers, bushes, rocks, fields, houses.
Specifics are important - failing to execute on specifics dooms any steps you might make to help achieve your objective, but if all you see is specifics you won't be able to come up with a coherent objective or choose a path that would stand a chance of getting you there.
The army that is conquering carries best-practice weapons and wears best-practice boots and fatigues, with best-practice tanks, trucks, etc.
They use best-practice aiming, shooting, walking, communicating, hiring (mercs), hiding, etc...
The people that are in the weeds are just doing the most simple things for their personal situation as they're taking over that granite rock or "copse of lush oak trees".
It's easy to use a lot of words to pretend your point has meaning, but often, like KH - it doesn't.
This is frequently not true. There are examples all through history of weaker and poorer armies defeating larger ones: from the Zulus, to the American Revolution, to the Great Emu War. Surely the birds were not more advanced than men armed with machine guns. But it only works when the smaller force can take advantage of and leverage what it has better than the other side. It's still best practices, but what's best is not universal: best for whom, best for when, best under what circumstances.
I think the word you're looking for is "nation", not "country". A country is the land area and would be conquered in that example, while a nation is the more abstract entity made of the people. It's why it makes sense to talk about countries after the government falls, or nations without a country.
Likewise, people do business with people, not with companies. Assert that “society” is merely an abstraction invoked for political gain to become an individualist.
> people do business with people, not with companies
Many of my interactions are with electronic systems deployed by companies or the state. It's rare that I deal with an actual person a lot of the time (which is sad, but that's another story).
It took me way too long to realize this, but my experience is that "zealots, idiots, and assholes" (as the author says) are going to abuse something and wield it as a bludgeon against other people. This appears to be a fixed, immutable property of the universe.
If you take it as a given that some number of people are going to get an idea lodged in their head, treat it like gospel, and beat as many other people in the head with it as they can... the best strategy you can adopt is to have the ideas in their head be at least somewhat useful.
Yes, reasonable people understand that "best practices" come with all sorts of context and caveats that need to be taken into account. But you won't always be dealing with reasonable people, and if you're dealing with an asshole, zealot, or idiot, I'd sure as hell prefer one who blindly believes in, say, test-first development versus believing that "test code isn't real code, you should spend all of your time writing code that ships to users" or some other even worse nonsense.
In any given culture (software development) you will have a set of traditions that may be used by the members of that culture to signify status, experience, pedigree and authority, and often some combination of all these and other descriptors. And, by virtue of creating a culture, you will also necessarily create a counter-culture, which rejects those things (performatively or otherwise) in an effort to do the exact same thing, but in the other direction. If you decide that proper software is developed with command-line in mind first, and UI second, you will in that effort necessarily create software devs who believe the exact opposite. This is core to our being as humans.
In my mind, this author is merely signaling software counter-culture, some of which I agree with, others I don't. And the people whom you describe above are signaling software culture, in a hostile and domineering way.
And of course, these two sides are not impermeable, forever persistent: culture and counter-culture shift constantly, often outright reversing from one another on a roughly 30 year timeline. And, they both have important things to offer. Best practices are best practices for a reason, and also telling stuffy people to chill out when they're focused so hard on best practices that they lose the plot of what the org is actually attempting to accomplish is also important.
The one thing that gives me pause is that I have seen stages of mastery where the base stage is repetition of and adherence to rules, to internalize them before understanding them and knowing when to break them.
If much of our industry is new, evangelizing these rules as harder and faster than they really are makes a lot of sense, to get people ready for the next stage. Then they learn the context and caveats over time.
That made me want to look up some link about Shu Ha Ri. Turns out that's actually been made popular in some corners of sw dev already. E.g. https://martinfowler.com/bliki/ShuHaRi.html
I think the rejection in this article is too strong. The idea of "best practices" comes from an established body of knowledge. There is one published for software development, called the SoftWare Engineering Body of Knowledge, or SWEBOK, published by the IEEE.
The author seems to be arguing for nuance: that these "laws" require context and shouldn't be applied blindly. I agree.
However, they shouldn't be rejected out of hand either, and the people recommending them aren't idiots.
Update: one problem with "best practices" that I think the article might have unwittingly implied is that most software developers aren't aware of SWEBOK and are repeating maxims and aphorisms they heard from others. Software development is often powered by folklore and hand-waving.
I think it is best to strongly reject the idea "best practices will always benefit you".
Most best practices that I have been told about were low local maxima at best, and very harmful at worst.
If someone quotes a best practice to you and can't cite a convincing "why", you should immediately reject it.
It might still be a good idea, but you shouldn't seriously consider it until you hear an actually convincing reason (not a "just so" explanation that skips several steps).
> It might still be a good idea, but you shouldn't seriously consider it until you hear an actually convincing reason (not a "just so" explanation that skips several steps).
If everyone follows that then every decision will be bikeshedded to death. I think part of the point of the concept of "best practices" is that some ideas should be at least somewhat entrenched, followed by default, and not overturned without good reason.
Ideally your records of best practices would include a rationale and scope for when they should be reexamined. But trying to reason everything out from first principles doesn't work great either.
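For instance, such a record might look like this (an invented example, not from any published standard):

    Practice:   Every change to shared production code gets a second review.
    Rationale:  Catches defects early and spreads knowledge of the system.
    Scope:      Services with more than one maintainer; not throwaway prototypes.
    Reexamine:  If the team shrinks below three people, or if median review
                latency exceeds one business day.

The rationale and scope lines are what let a future reader overturn the rule for good reason, instead of either obeying it or ignoring it blindly.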
> Most best practices that I have been told about were low local maxima at best, and very harmful at worst.
This matches my experience, though sometimes they indeed will be helpful, at least after some consideration.
> If someone quotes a best practice to you and can't cite a convincing "why", you should immediately reject it.
In certain environments this will get you labeled as someone who doesn't want to create quality software, because obviously best practices will lead to good code, and not wanting to follow those practices or questioning them means that you don't have enough experience or something. Ergo, you should just apply SOLID and DRY everywhere, even if it becomes more or less a cargo cult. Not that I agree with the idea, but that way of thinking is prevalent.
I definitely sympathize with the thrust of the article. I think the reality is somewhere in the middle: best practices are useful shortcuts, and people aren't always idiots for suggesting them. I've worked with folks who insist on Postel's law despite security research in recent years suggesting that parsers should be strict to prevent langsec attacks, for example. In those cases I would refute leniency...
Although I also do work in fintech and well... card payment systems are messy. The legal framework covers liability for when actors send bad data but your system still has to parse/process/accept those messages. So you need some leniency.
It does drive me up the wall sometimes when people will hand-wave away details and cite maxims or best-practices... but those are usually situations where the details matter a great deal: security, safety, liveness, etc. People generally have the best intentions in these scenarios and I don't fault them for having different experience/knowledge/wisdom that lead them to different conclusions than I do. They're not idiots for suggesting best practices... it's just a nuisance.
That's what I mean about the rejection being too strong. It should be considered that best practices are often useful and helpful. We don't have to re-develop our intuitions from first principles on every project. It would be tedious to do so. But a healthy dose of skepticism should be used... especially when it comes to Postel's Law which has some decent research to suggest avoiding it.
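To make the strict-versus-lenient contrast concrete, here is a minimal Java sketch; the amount format and names are invented for illustration, not taken from any real payment spec:

    import java.util.Optional;

    final class AmountParser {
        // Postel-style leniency: trim, strip stray symbols, coerce what we can.
        // Every "helpful" repair widens the accepted input language, which is
        // exactly what the langsec critique warns about.
        static Optional<Long> parseLenient(String raw) {
            if (raw == null) return Optional.empty();
            String cleaned = raw.trim().replace(",", "").replace("$", "");
            try {
                return Optional.of(Long.parseLong(cleaned));
            } catch (NumberFormatException e) {
                return Optional.empty();
            }
        }

        // Strict: accept only the documented format (1-12 digits) and reject
        // everything else before it reaches business logic.
        static Optional<Long> parseStrict(String raw) {
            if (raw == null || !raw.matches("\\d{1,12}")) return Optional.empty();
            return Optional.of(Long.parseLong(raw));
        }
    }

In the card-payment case above, scheme rules may force you toward something like parseLenient; that constraint is exactly the context a blanket "be liberal in what you accept" leaves out.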
I don't think anyone has ever thought that best practices will always benefit you. Nothing always works every single time in every single case.
This whole thing is really silly and obvious.
Of course you shouldn't blindly follow advice without thinking. But not following advice just because it might not always be right is also a bad idea.
My advice: In general, you should follow good advice from experienced people. If enough experts say this is the best way to do something, you should probably do that, most of the time.
But that advice will never trend on HN because it isn't clickbait or extreme, and requires using your noggin.
SWEBOK seems the opposite of that. A body of knowledge is not at all the same thing as a best practice. The only unapologetic best practice in SWEBOK is that professionals should be familiar with every topic in SWEBOK. Definitely not that you _should_ do everything in the book.
The book is quite sophisticated in this respect. It explicitly separates the technical knowledge from the judgments of which, when, and where to apply it. Most occurrences of "best practices" in the text are quoted, are references to other works, and describe the need to choose between different best-practice libraries depending on context. Others are part of a meta-conversation about the role of standards in engineering. Very little of SWEBOK is promoted as a "best practice" in itself.
Here's a quote from SWEBOK v4, 12-5:
> Foremost, software engineers should know the key software engineering standards that apply to their specific industry. As Iberle discussed [19], the practices software engineers use vary greatly depending on the industry, business model and organizational culture where they work.
In my view best practices emerge from a body of knowledge (or sometimes from the practice and wisdom of others that haven't been documented/accepted/etc yet) and are "shortcuts."
I'm not defending Postel's Law; I agree that, after years of practice and research, it leads to security issues and surprises.
However, the point is that these kinds of principles don't arise out of people's heads and become accepted wisdom for nothing; they're usually built off of an implied (or explicit) body of knowledge.
Does that make sense?
To put it simply, best practices are, at best, context-dependent. Best practices for avionics software are not the same as best practices for a CRUD form on your mailing-list signup page.
And to be fair, the best practices for designing a bridge or a skyscraper are not the same ones for designing a doghouse.
This! "Best practice" depends on the circumstances. Are "micro services" a best practice? What about "monolithic architecture"? Those choices are not best practices in and of themselves but may be best practices when considering organization/dev team size, application user count, computational demands on the application/system, etc. What are the goals and vision of the future? Let's future-proof and pre-optimize for problems we don't currently have nor will likely have! (And don't get me started on the number of folks that dream about "we're going to need to be able to scale!" for a fairly simple CRUD app that will most likely be used by hundreads, maybe thousands, or users and realistically need 100's of "simple" requests per second (most likely per minute... )
> In 2016, the IEEE Computer Society kicked off the SWEBOK Evolution effort to develop future iterations of the body of knowledge. The SWEBOK Evolution project resulted in the publication of SWEBOK Guide version 4 in October 2024.
So the thing called "SWEBOK Guide" is actually the reference text for SWEBOK.
The books that encode some standardized Xbok are always named "The guide to the Xbok".
The actual BOK isn't supposed to have a concrete representation. It's not supposed to be standardized either, but standard organizations always ignore that part.
> but because they’re mostly pounded by either 1) various types of zealots, idiots, and assholes who abuse these kind of “best practices” as an argument from authority, or 2) inexperienced programmers who lack the ability to judge the applicability,
The author might go on to make other points that are worth discussing, but lays out his supporting arguments clearly in the opening paragraph. Best practices do not necessarily do harm because they offer bad advice, they do harm because they are advocated for by zealots and the inexperienced.
My first reaction is how unfortunate it is that this particular developer has found himself in the presence of bad engineers and the inexperienced.
But then, the argument is automatically self-defeating. Why is the rest of the article even worth reading, if he states upfront what his arguments are and those arguments are very easy to refute?
It is deeply irrational to judge the merits of an idea based solely on who is advocating for that idea.
My advice to the author is to reflect on the types of positions that he accepts, the ones that have him so put off by the people that he works with that he is openly writing about abandoning what he admits could be sound engineering practice, solely based on who that advice is coming from and how it is being delivered.
Developing software is complicated. It is constant problem solving. When solutions to problems come about, and we abstract those solutions, it is quite easy for individuals to misapply the abstraction to an inappropriate concrete. To drop context and try to retrofit a lousy solution because that solution was appropriate to a slightly different problem. But at the end of the day, these abstractions exist to try and simplify the process. Any time you see a "best practice" or design pattern acting as a complicating force, it is not doing its job. At that point you can either be objective and exercise some professional curiosity in order to try and understand why the solution adopted is inappropriate ... or you can take the lazy way out and just assume that "best practices" are the opinions of zealots and the inexperienced who blindly follow because they don't know any better.
> Best practices do not necessarily do harm because they offer bad advice, they do harm because they are advocated for by zealots and the inexperienced.
I think the point is that blindly suggesting "best practices" often is bad advice.
It's a common form of bikeshedding—it allows someone to give their casual two cents without doing the hard work of thinking through the tradeoffs.
we don't have to give undue value to 'best practices', nor do we need to judge an idea based on its presenter. we just need to have a reasonable discussion about the idea in the context in which it's being applied. this simple approach has been largely eclipsed in the industry by the fetishization of tools, and the absurd notion that whole classes of approaches can be dismissed as 'antipattern'.
It's not very hard to weigh a suggestion: speculate about its costs, benefits, and risks.
I think the problem is that a lot of computer nerds are a bit OCD and like to apply one solution to everything. You see this in how they get worked up about strictly typed versus not strictly typed, one particular language for every application, or spaces vs. tabs. I was like that when I was younger, but as I get older I realize the world is a complex place, and since programs have to deal with the real world, there is no one solution that fits all, or a best practice that always applies. To become good at programming you need to be adaptable to the problem space. Best practices are great for juniors; once you've got a bit of experience, you should use that instead.
My job revolves around adopting software solutions to solve practical problems, and let me tell you, this mentality of one solution for everything goes beyond just the coding. I've encountered countless developers who seem to believe that the reality of a business should conform itself to how the developer believes your business/industry should operate.
Funny, my experience is the opposite. When I was younger I thought there was a time and place for everything, a right tool for the job, a need to carefully consider the circumstances. As I got older I realised that actually a lot of libraries, languages, and communities are simply bad, and an experienced programmer is better served by having a deep knowledge of a handful of good tools and applying them to everything.
The problem is that a lot of true things in the world are counter-intuitive. So insisting that all the rules "make sense" in an immediate way is clearly a non-starter. In the safety industry there are many examples of best practices that are bred from experience but end up being counter-intuitive to some. For instance, it might not make intuitive sense that a pilot who has gone through a take-off procedure thousands of times needs a checklist to remember all the steps, but we know that it actually helps.
It's hard because there is usually some information loss in summarisation, but we also have limited memory, so we can't really expect people to remember every case study that led to the distilled advice.
As a chemical engineer by training, though, I have constantly been amazed at how resistant software people are to the idea that their industry could benefit from the kind of standardisation that has improved my industry so much.
It will never happen outside of limited industries because it would appear to be a loss of "freedom." I think the current situation creates an illusory, anarchic freedom of informality that sometimes leads to proprietary lock-in, vulnerabilities, bugs, incompatibility churn, poorly prioritized feature development, and a tyranny of chaos and tech debt.
There are too many languages, too many tools, too many (conflicting) conventions (especially ones designed by committee), and too many options.
Systems, tools, and components that don't often change in compatibility-breaking ways, and that are formally verifiable far beyond the rigor of seL4 such that they are (basically) without (implementation) error, would be more valuable than tools that lack even basic testing or self-tests and lack digital signatures that would prove chain of custody. Being able to model and prove a program or library to a level where its behavior can be checked deeply, by both "whitebox" and "blackbox" methods, would let some code stand the test of time. Choosing a smaller number of standard languages, tools, and components makes all of this cheaper and easier to attempt.
Maybe in 100 years, out of necessity, there will be essentially 1 programming language that dominates all others (power law distribution) for humans, and it will be some sort of formal behavioral model specification language that an LLM will generate tests and machine code to implement, manage, and test against.
I disagree slightly here. There may be one (1) dominant formal language that's used as the glue code that gets run on machines and verified, but it will have numerous front-end languages that compile into it, for ease of typing and abstraction/domain fit.
Who drove that standardization in chemical engineering?
I ask because the intra-organizational dynamics of software have been ugly for standardization. Vendor lock-in, submarine patents, embrace-and-extend, etc. have meant that naive adoption of "best practices" was a one-way ticket to an expensive, obsolete system, with an eventually insolvent vendor.
That's an interesting question. I guess it's partly the fact that the chemical industry is very large-scale, often with one company in charge (think Shell or Total). The realities of having one organisation in charge of many large operations across many countries probably give a higher reward on standardisation. This is a bit like coding to "Google style guidelines" or whatever. The big organisation has more incentive to fund standardisation, but the small players can benefit from that effort, too.
The magnitude of impact also means that many industrial plants fall under government regulation, and in the safety field specifically there is a lot of knowledge sharing.
I think there is also a component about the inflexibility of real matter that factors into this. It's much harder to attach two incorrectly sized pipes together than it is to write a software shim, so the standardisation of pipe sizes gets pushed up to the original manufacturers, where it also happens to be more economical to produce lots of exact copies than individually crafted parts.
The advantage of best practices is that you have something you can follow without having to analyze the situation in depth. The disadvantage of best practices is that you may have to analyze the situation in depth to notice that they aren't the best choice in the specific situation. The harm that best practices can do is lessened by viewing them as rules of thumb conditioned on certain premises rather than as dogma.
“Dogma” is the key word in this situation, I believe (and in a lot of similar situations). There are very few examples for when dogmatic obedience is healthy, helpful, or appropriate. Sadly, the trends seem to be heading the wrong way, with more tribalism than pragmatism.
I like to use the term "golden path" instead of best practices.
On a golden path, lots of others have gone before you and figured out all the nuance. This doesn't mean the path is the best one for you, but it does mean you should have a good reason for straying from it.
Maybe the problem is the word Best. Perhaps Standard Practice is better — suggests "the usual way we do things" without presuming it is the best and only way to do things.
How do other engineering industries deal with this phenomenon? Why do those approaches not work for programming? I feel silly sometimes, because software development is a huge industry and we don't have consensus on the basics.
For example, I think that strict formatting is a good thing. Since I tried Prettier I've been using it and similar tools everywhere, and I like it. I can't do vertical alignment anymore, and it eats empty lines sometimes, but that's a good compromise.
Maybe there should be a similar compromise when it comes to "best practices"? Like: "DRY" is not always best, but it's always good enough, so extract common stuff every time, even if you feel it's not worth it.
I often deal with this dilemma when writing Java with the default IDEA inspections. They highlight duplicated code, and then I need to either disable that inspection in some way or extract a chunk of code that I don't really think should be extracted; but I can just do it and move on...
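As an invented illustration of that dilemma, consider two methods that are character-for-character identical today but answer to different business rules, so an extracted helper would couple code that is likely to diverge:

    import java.util.List;

    final class Totals {
        // Sums line-item prices for display in the shopping cart.
        static long cartTotalCents(List<Long> prices) {
            long total = 0;
            for (long p : prices) total += p;
            return total;
        }

        // Sums line-item prices for tax reporting. Identical today, but tax
        // rounding and exemption rules change on a different schedule than
        // display rules, so merging the two is mechanistic DRY.
        static long taxableTotalCents(List<Long> prices) {
            long total = 0;
            for (long p : prices) total += p;
            return total;
        }
    }

The duplicate-code inspection flags this pair; whether that flag is signal or noise is precisely the judgment call the "best practice" can't make for you.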
Those approaches do work with programming, but they don't make use of what makes programming different from other disciplines.
Software is usually quick to write, update and deploy. And errors usually have pretty low impact. Sure, your website may be down for a day and people will get grumpy, but you can hack together a quick fix and have it online with the push of a button.
Compare that to, say, electrical engineering, where there's often a long time between finishing a design and getting a manufactured prototype (let alone mass production.) And a fault could mean damage to equipment (or people) and the cost of having to replace everything. So you'll find that there's a lot more work done up-front and the general way of working tends to be more cautious.
There's also the idea of best practices as a form of communication. This also helps for programmers, as code that looks and acts the way you expect it is easier to follow. But code is primarily shared with other programmers. Other engineering disciplines (more) frequently need to collaborate with people from other domains. For example, a civil engineer's work could be shared with architects, government bureaucrats and construction managers, and best practices often provide a common familiar standard.
Compared to other engineering disciplines, software is a big unorganized mess. But it's also incredibly fast and cheap to make because of that.
You can destroy rockets, lethally irradiate people, fly planes upside down, or financially ruin a company because of software bugs, so avoiding faults can be critical for software as well.
It is just that high-velocity, low-reliability web and consumer application development is a very large niche. A lot of our best practices are about attempting to maintain high velocity (often with questionable results), more than about increasing reliability.
> How do other engineering industries deal with this phenomenon? Why do those approaches not work for programming?
A lot of engineering discipline is a way to prevent engineered works from causing unintentional injury, physical or fiscal.
Most software development is far away from physical injury. And fiscal injury from software failure is rarely assigned to any party.
There's no feedback loop pushing us toward standardized, cover-our-asses process; we'd all prefer to do things our own way. It's also pretty hard to do convincing studies to determine which methods are better. Few people are convinced by any of the studies, and there haven't been many "company X dominates the industry because of practice Y" stories, like you saw with Toyota's quality practices in the 80s and 90s.
Other engineering disciplines have certification, codes and regulations for specific domains, which are enforced by law.
DRY is a perfect example, though, of something which in moderation is a good idea but, as the article says, is vulnerable to 'inexperienced programmers who lack the ability to judge the applicability', and which, if over-eagerly applied, leads to over-abstraction and premature abstraction that does more harm than good.
Before regulations, other engineering disciplines have far more objective decisions and calculations than software engineering.
Consider a mechanical analogue of DRY: choosing between reusing identical parts to make design, assembly and repairs simpler or designing similar but different parts because they are worth optimizing (e.g. a whole IKEA cabinet with interchangeable screws or with short and long ones).
Unlike next month's shifting software requirements the cost and performance of this kind of alternative can be predicted easily and accurately, without involving gut feelings or authority.
>> inexperienced programmers who lack the ability to judge the applicability
In other words, the author knows better than you.
The author could have put forward precedent, principles, or examples. But instead he chose to make it about the people (inexperienced), not his arguments.
I think one thing the industry does not do properly is applying different practices and standards depending on context.
A retailer website is not the same as a trading platform, the same way that a house is not the same as a railway station. But we blindly try to apply the same "good practices" everywhere.
We also have another interesting phenomenon: our products can mutate over their lifetime, and our practices should follow (they often don't). An MVP can become a critical system, a small internal project can become a client-facing application, we can re-platform, re-write, etc. That's very rare in other industries.
What makes you so sure they do? Go to the hardware store and behold how many fasteners there are. Go down the rabbet hole of pipe fittings. Consider the optimal size of lumber, someday.
And then get ready for the horrors of electrical connections. Not necessarily in how many there are; the real horror is how many think there is a "one true answer" there.
You can find some solace in learning of focusing effects. But, focus isn't just getting harder for individuals. :(
In the end, other engineering areas also have lots of "it depends" situations, where often there are multiple correct answers, depending on availability, legislation, safety, physical constraints, etc.
Perhaps in software engineering people are just too quick or immature to judge.
A nit: DRY is probably not what you think it is. DRY is basically the same as SRP, framed differently. Under SRP, it's totally valid to have the code twice if it has different meanings from a user's point of view.
The problem with that definition is that it's subjective and cannot be checked automatically. So my comment was about mechanistic DRY, which is objective and checked by a linter.
I think we're more like Pythagoras: some useful theory about numbers, taken way too far and became an actual religion[0] listening to the Delphic[1] Oracle[2].
[0] Tabs or spaces? Vi or emacs? Is the singularity the rapture of the nerds, with Roko's Basilisk as the devil and ${insert name here according to personal taste} as the antichrist? SOLID or move fast and break things?
vs. really a real religion: https://en.wikipedia.org/wiki/Pythagoreanism
[1] Not https://en.wikipedia.org/wiki/Delphi_(software)
but rather https://en.wikipedia.org/wiki/Delphi
[2] Not https://en.wikipedia.org/wiki/Oracle_Corporation
but rather https://en.wikipedia.org/wiki/Oracle
Often the models and equations rely on making assumptions in order to simplify the problem (cue the joke about the physicist and the spherical cow). This is one of the reasons things are designed with tolerances and safety factors.
Software like CAD and particularly Computational Fluid Dynamics (CFD) packages can simulate the problem but at least with CFD you would typically perform other types of verification such as wind tunnel tests etc.
- I think SW needs much more creativity than other industries.
- Typically SW is not mission critical (in mission critical things, it IS pretty much regulated to uncomfortable extremes)
You could regulate it to death, and that would probably have some positive impact by some metric, but you would be easily overtaken by FOSS, where for sure there will be fewer restrictions.
"Engineering industry" is a misnomer. Other engineering areas have the same issues with best practices we have, industries apply best practices with a great amount of success (but not to totality).
Usually, engineering creates best practices for the industries to follow.
So outside of coworkers that have a hard time collaborating in general, is it a problem for others that their coworkers will not apply context? That has not been my experience.
I don't understand the question... If someone has a strong opinion, and they have arguments for their opinion, but don't recognize the significance of the context in which they've formed their opinions, they have blind spots they aren't aware of. Is that a problem? I dunno, that's up to you and your environment.
> most would agree that country A conquered country B if country B was nothing but fire and rubble
I think we can all agree that if that is the case, you've in fact conquered nothing.
Edit: Since we say opposite things, maybe we wouldn't agree.
There was recently a HN thread about it: https://news.ycombinator.com/item?id=41907412
Also it shouldn't be taken for granted that best practice is always "best/good" - there definitely are idiots recommending best practices.
https://ieeecs-media.computer.org/media/education/swebok/swe...
But, you know, I want the whole ordeal. I want the SWEBOK, not the "how to read the SWEBOK". Where can I find it?
> you should have a good reason for straying from it
Buddy, that's not a reason, that's a rationalization.
> rabbet hole
Nice pun ;)
(https://softwareengineering.stackexchange.com/questions/2207...)
"Byte for byte equivalent" doesn't necessarily mean it's a copy, if the semantics of it are different.
> How do other engineering industries deal with this phenomenon?
They don't. CAD, the "programming language" of most other engineering disciplines, is as much of a Wild West.
or Heat Exchanger efficiency calculations (https://en.wikipedia.org/wiki/Logarithmic_mean_temperature_d...) etc.
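(For reference, the calculation behind that link is the closed-form log-mean temperature difference,

    \Delta T_{\mathrm{lm}} = \frac{\Delta T_1 - \Delta T_2}{\ln(\Delta T_1 / \Delta T_2)}

where \Delta T_1 and \Delta T_2 are the temperature differences between the hot and cold streams at the two ends of the exchanger: exactly the kind of objective, standardized formula that software design decisions rarely have.)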
- small localized team vs big distributed team
- bug fixes and incremental improvements vs green field poc
- saas vs system scripts
Context matters, and if people aren't giving you the context in which they deem practices to be "best," they are myopically wrong.