Readit News
mattbuilds · 2 years ago
I really disagree with the pros listed for over-engineering, specifically "future-proofing" and "reusability". I doubt you can accurately predict the future, and whatever assumptions you make will likely be wrong in some way. Then you are stuck having to solve the problem that you created by trying to predict. As for reusability, it's similar: start with solving what you have to, then abstract as you see fit. Again, don't try to predict. Be thoughtful and really understand what is actually happening in your system. Don't follow some pattern you read online just because it seems like a good fit.

Realistically, you should engineer for the problem you have, or can reasonably expect to have pretty soon. You can solve future problems in the future. I'm also not saying to write horrible, unmaintainable code, but don't try to abstract away complexity you don't actually have yet. Abstractions, and where to separate things, should become apparent as you build the system, but it's really hard to know them until you are actually using it and see it come together.

sshine · 2 years ago
> I really disagree with the pros listed for over-engineering, specifically "future-proofing" and "reusability"

Yes, someone who argues that "over-engineering" leads to "future-proofing" has been caught by the bug.

When you future-proof something, that's called "engineering". Over-engineering is, by definition, failing to foresee future needs: imagining generic future needs ten steps ahead instead of the less ambitious future needs two steps ahead.

It is easier to modify early, simplistic assumptions than it is to walk back from premature generalisations over the wrong things.

Deestan · 2 years ago
Exactly. A thing so small and simple that you can rewrite it in an afternoon is more future-proof than any 8,000 LOC monstrosity.
mattbuilds · 2 years ago
Perfectly said, that was the exact point I was trying to make. I've seen so many bad decisions made in the name of "future proofing". Then the future comes and you are fighting those decisions. I wonder if people switch jobs and projects so often that they never get to see the results of all that future proofing.
tivert · 2 years ago
> I doubt you can accurately predict the future, and whatever assumptions you make will likely be wrong in some way. Then you are stuck having to solve the problem that you created by trying to predict. As for reusability, it's similar: start with solving what you have to, then abstract as you see fit.

Kinda sorta. It's not a binary: you can "predict the future," just not too far out and not with complete certainty. The art is figuring out what the practical limits are, and not going past them.

> Realistically you should engineer for the problem you have or can reasonably expect you are going to have pretty soon. You can solve future problems in the future.

Another factor is comprehensibility. Sometimes it makes sense to solve problems you don't technically have, because solving them makes the thing complete (or a better approximation thereof) and therefore easier to reason about later.

bpicolo · 2 years ago
Agreed. These feel backwards.

When I think "Under engineer", I think "keep it simple, because you can't predict the future". Simplicity is a great enabler of flexibility and tends to go hand in hand with scalability.

arp242 · 2 years ago
It's usually comparatively easy to make something that's too simple a bit more complex.

It's often much harder to make something that's too complex more simple.

indigochill · 2 years ago
It's interesting to try to fit what is often talked about as "future-proofing" and "reusability" into the development of a general-purpose CPU, since CPUs are in a sense the ultimate reusable system.

In an overly simplified textbook example of designing/building a CPU, you have an ISA you're building the CPU to support. The ISA defines a finite set of operations and their inputs, outputs, and side effects (like storing a value in a particular register). Then you build the CPU to fulfill those criteria.

In my experience, designers who want reusability usually can't say with enough precision how they want the system to be reused, so an ISA-like design can't be created.

And practically, it's the rare (I might even say non-existent) day-to-day business problem that needs CPU-like flexibility. Usually a system just needs to support a handful of use cases, like integrating with different payment providers. An interface will suffice.
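A minimal sketch of that "an interface will suffice" point, assuming Python; the provider name and `charge` method here are illustrative, not any real payment SDK:

```python
from abc import ABC, abstractmethod


class PaymentProvider(ABC):
    """The narrow seam: each provider only has to implement charge()."""

    @abstractmethod
    def charge(self, amount_cents: int, token: str) -> str:
        """Return a provider-specific transaction id."""


class ExampleProvider(PaymentProvider):
    def charge(self, amount_cents: int, token: str) -> str:
        # A real implementation would call the provider's SDK here.
        return f"example-txn-{token}"


def checkout(provider: PaymentProvider, amount_cents: int, token: str) -> str:
    # Call sites depend on the interface, not on any concrete provider,
    # so adding a new provider never touches this code.
    return provider.charge(amount_cents, token)
```

Supporting a second provider is one new subclass; no CPU-like generality required.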

sshine · 2 years ago
> Usually a system just needs to support a handful of use cases, like integrating with different payment providers.

Building EV chargers is a good dose of electrical engineering combined with talking to dozens of car models and their own particular quirky interpretations of common protocols, which is like designing websites for a market with dozens of unique browser implementations.

In spite of that, it seems half of the complexity is making sure people pay.

bluGill · 2 years ago
> I doubt you can accurately predict the future and whatever assumptions you make will likely be wrong in some way.

I doubt this holds for most of us. Computers have been around for a long time, and most of us are not working on new problems. By now we have a pretty good idea of what will be needed and what won't be. There are a lot of things left that haven't been done yet, but if you understand the problem at all, you should have a good idea of what those things will be. You won't be 100% correct, of course, and exactly when any particular thing you design for will actually get implemented is unknown, but at a high enough level you should already have a good idea of what your users will want.

Of course, if something genuinely new arrives, you will be wrong. 10 years ago I had no idea that LLM-type AI would affect my program; now it is foreseeable, even though I don't really know which of its capabilities will turn out useful versus just a passing fad. Science fiction has 3D displays, holographic interfaces, teleportation, and lots of other interesting ideas that may or may not work out.

Likewise, 20 years ago you could be forgiven for not foreseeing the effects that privacy legislation would have on your app, but today you had better assume it exists and that the laws will change.

angarg12 · 2 years ago
I think this is nicely captured by the concept of "cost of carrying".

Keeping code around is not free. Cost of carrying refers to the ongoing effort of maintaining code, not to mention side effects such as increased complexity and cognitive load.

If you over-engineer a system you aren't getting value out of the extra bits, but you are still paying the cost of keeping them around.

andrewvc · 2 years ago
One challenge with the term over-engineering is that it implies that the over-engineered solution would be generally superior to the under-engineered one were it not for the extra cost. The article rightly points out that this is not true, but it's something that really isn't discussed as much as it should be.

A good example of this would be AvE's teardown of the Juicero, in which IIRC he described it as under-engineered despite its expensive, ultra-durable components. The rationale was that rather than build a design suited to the purpose of squeezing juice out of bags, they built a machine specced for a much more demanding task, thus driving up costs and wasting materials. The implication being that if they'd spent more time or care engineering it, they wouldn't have ended up with poorly engineered, over-specced components.

Perhaps the preferred terms should be "well engineered" and "poorly engineered". A well engineered thing is well suited along a number of dimensions, including product capability, business needs, cost (and its impact on end users in terms of price), etc. That sometimes means ugly code, it sometimes means technical debt, but it always implies elegance at a higher level than just the code or components: an elegance that encompasses a holistic understanding of the context in which that code exists.

In the software world, one example of poor engineering might be using Kubernetes for a small internal app that could run well on a single VM or container. Or, in a different context, NOT using Kubernetes for the exact same app in an organization where k8s is standardized, thereby creating more inconsistency and driving up organizational complexity in order to reduce local complexity.

dwallin · 2 years ago
I very much agree with this. Over-engineering and under-engineering are poorly named, as they are not opposite ends of a spectrum. They are both badly engineered, and both lead to many of the same issues (which I believe this article gets wrong). Of the listed cons, both overlap on all of the following:

- Fragile code

- Technical debt

- Reduced Agility

- Understanding Complexity

"Over-engineering" can be abused as an excuse for poorly-engineered solutions and cutting corners. The future being hard to predict is often used as justification, but this swings both ways: you also often won't know what code is going to get built upon. Frequently an obscure one-off piece of code becomes more useful than expected, with functionality tacked on over time, until an entire product is resting on really shaky foundations.

Build a culture of quality engineering. Build the minimal solution but build it well. Have a strong (and flexible) product vision as a guiding light but always take small steps towards it. Optimize towards understandability and replaceability.

Deestan · 2 years ago
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." -John Gall (Systemantics)

While software developers are not its intended audience, Systemantics is one of the most educational books on software architecture in existence.

amannm · 2 years ago
The two terms imply "engineering" varies along only one dimension. I personally don't find these terms useful or constructive for anything apart from "talking smack" about engineering decisions outside of your control, influence, or understanding.
jjk166 · 2 years ago
The issue isn't over- or under-engineering by these definitions. A good solution needs to satisfy all requirements, some of which may be clear from the outset and some of which need to be discovered. The real issue is how one handles unknown requirements. Too often people either guess what unknown requirements will be (which is how this piece defines over-engineering) or ignore them (which is how this piece defines under-engineering). You should do neither. Instead, decouple your known from your unknown requirements so that you are agnostic about what needs to be implemented down the road.

You don't need things that can handle problems you don't have, but you do need to be able to easily rip out and replace parts of your solution as they become inadequate. You don't need to handle every edge case; you need to design things to fail safely by default. You don't need to hold off on making decisions until you have information; you need to make decisions you'll be happy with no matter what information you receive later. A robust solution can still be quite lean.
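The "fail safely by default" point can be sketched roughly like this (the `Action` enum and rule names are hypothetical, purely for illustration):

```python
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    DENY = "deny"


def decide(rule_name: str, rules: dict[str, Action]) -> Action:
    # An unknown requirement shows up here as a rule we've never heard of.
    # Rather than guessing what it should mean, fall back to the
    # conservative choice: deny.
    return rules.get(rule_name, Action.DENY)
```

The lookup table can be ripped out and replaced later; the callers never need to know which rules exist today.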
bearjaws · 2 years ago
I always live my life by: you can always make something more complicated, but once it's complicated (and, more likely, deeply reused and embedded in your system), you are going to have a hard time making it simple again.
rqtwteye · 2 years ago
Don't forget that under-engineered things can also be very complicated. "Under-engineered" doesn't mean "simple"
icedchai · 2 years ago
There is a balance. I've seen plenty of under-engineered software with the same code copy-and-paste dozens of times. I've also seen incredibly complex abstraction layers that only made sense to their original authors (long gone...) and were incredibly hard to navigate and maintain (class hierarchies 5 layers deep, etc.)
hinkley · 2 years ago
I practice something I sometimes call negative space, which sometimes ends up about as vague as it sounds.

My gold standard for well written tests is if the engineer who broke the test can fix their regression without looking at the test implementation, you have achieved nirvana.

My gold standard for well factored code is if people add a feature exactly where you would have added it. But that can be arrived at through socialization or by leaving spots where a feature would need to go if you actually need it.

You don’t need to build in conditionals for speculative features. You can just think about how you would start. What’s the first refactor? Can I arrange the code so that’s not a pain in the ass?

Bertrand Meyer felt that actions and decisions should not be mixed. For one, mixing them makes testing a pain. It also increases the lines of code in impure functions, which reduces the scalability of your system. A common effect of new features is adding more complexity to the decision process, and it's easier to add 3 lines to an 8-line function than to a 40-line function.
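Meyer's separation can look something like this in practice; the retry policy below is an invented example, not anything from the thread:

```python
# Pure "decision": no side effects, trivially unit-testable.
def should_retry(status_code: int, attempts: int, max_attempts: int = 3) -> bool:
    return status_code >= 500 and attempts < max_attempts


# Impure "action": all the I/O lives here, and it stays short because
# the branching logic was pulled out into the pure function above.
def fetch_with_retry(fetch, url: str, max_attempts: int = 3):
    attempts = 0
    while True:
        status, body = fetch(url)
        attempts += 1
        if status < 400 or not should_retry(status, attempts, max_attempts):
            return status, body
```

New features that change *when* to retry only grow the small pure function, not the I/O loop.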

bsaul · 2 years ago
There's a middle ground that many developers have trouble finding:

You can design your code so that it'll be easier to evolve into the most likely path. However, you don't actually implement the future cases.

Example: it doesn't take much more effort to create a configuration struct than to hardcode a value. However, you don't want to implement handling for any values other than the one you planned on using; you can simply throw a "value not supported" error if the configuration contains anything else.

This will greatly help any newcomer to the codebase understand what possibilities your component offers and how it can evolve.
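A minimal sketch of that idea (the struct, field, and format names are made up for illustration):

```python
from dataclasses import dataclass


@dataclass
class ExportConfig:
    # Only "csv" is implemented today, but the knob is visible,
    # which tells newcomers where the component is meant to evolve.
    output_format: str = "csv"


def export(rows: list[dict], config: ExportConfig) -> str:
    if config.output_format != "csv":
        # The unplanned path fails loudly instead of being half-handled.
        raise ValueError(f"output format not supported: {config.output_format}")
    header = ",".join(rows[0].keys())
    body = "\n".join(",".join(str(v) for v in row.values()) for row in rows)
    return header + "\n" + body
```

Supporting "json" later means implementing one branch, not redesigning the call sites.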

spacemadness · 2 years ago
How about some actual examples? This seems like a fluff piece article written in 10 minutes without much real content.
IncogniTech · 2 years ago
Agreed some visual examples or just written ones would help. However, it was a clear & concise post that I believe most of us can relate to.