This is one vector for complexity, to be sure. Saying "no" to a feature that is unnecessary, foists a lot of complexity on a system, or has a low power to weight ratio is one of the best skills a senior developer can develop.
One of my favorite real world examples is method overloading in Java. It's not a particularly useful feature (especially given alternative, less complicated features like default parameter values), interacts poorly with other language features (e.g. varargs) and ends up making all sorts of things far more complex than necessary: bytecode method invocation now needs to encode the entire type signature of a method, method resolution during compilation requires complex scoring, etc. The JVM language I worked on probably had about 10% of its total complexity dedicated to dealing with this "feature" of dubious value to end users.
Another vector, more dangerous for senior developers, is thinking that abstraction will necessarily work when dealing with complexity. I have seen OSGi projects achieve negative complexity savings, while chewing up decades of senior man-years, for example.
Well, what I mostly experienced in my years in the field is that developers, whether senior or not, feel obliged to create abstract solutions.
Somehow people feel that if they don't build a generic solution for the problem at hand, they have failed.
In reality the opposite is often true: when people try to make a generic solution, they fail to make something simple, quick, and easy for others to understand. Let alone the idea that abstraction will make the system flexible and easier to change in the future: nobody knows the future, and there always comes a plot twist that does not fit into the "perfect architecture". So I agree with the idea that abstraction is not always the best response to a complex system. Sometimes copy, paste, and change is the better approach.
Kevlin Henney makes interesting points on this. We often assume that abstractions make things harder to understand, at the benefit of making the architecture more flexible. When inserting an abstraction, it is supposed to do both, if at all possible. Abstracting should not only help the architecture, but also the understanding of the code itself. If it doesn't do the latter, then you should immediately question whether it is necessary, or if a better abstraction exists.
The take-away I took from it is that as developers, we love to solve problems using technical solutions. Sometimes, the real problem is one of narration. As we evolve our languages, better technical abstractions become available. But that's not going to prevent 'enterprisey' code from making things look and sound more difficult. Just look at any other field where abstractions aren't limited by technicalities: the same overcomplicated mess forms. Bad narrators narrate poorly, even when they are not limited.
I think we forget that while “engineering” is about maximizing the gain for a given investment of resources, it can be stated another way as building the least amount of bridge that (reliably) satisfies the requirements.
Abstraction can be used to evoke category theoretical things, but more often it’s used to avoid making a decision. It’s fear based. It’s overbuilding the bridge to avoid problems we don’t know or understand. And that is not Engineering.
I find sometimes that it helps me to think of it as a fighter or martial artist might. It is only necessary to not be where the strike is, when the strike happens. Anything more might take you farther from your goal. Cut off your options.
Or a horticulturist: here is a list of things this plant requires, and they cannot all happen at once, so we will do these two now, and assuming plans don’t change, we will do the rest next year. But plans always change, and sometimes for the better.
In Chess, in Go, in Jazz, in coaching gymnastics, even in gun safety, there are things that could happen. You have to be in the right spot in case they do, but you hope they don’t. And if they don’t, you still did the right thing. Just enough, but not too much.
What is hard for outsiders to see is all the preparation for actions that never happen. We talk about mindfulness as if it hasn’t been there all along, in every skilled trade, taking up the space that looks like waste. People learn about the preparation and follow through, instead of waiting. Waiting doesn’t look impressive.
I think the thing that comes with ~~seniority~~ experience is being better able to predict where abstraction is likely to be valuable, by becoming: more familiar with and able to recognize common classes of problems; better able to seek out and use domain knowledge to match (and anticipate) domain problems with engineering problem classes.
I’m self taught so the former has been more challenging than it might be if I’d gone through a rigorous CS program, but I’ve benefited from learning among peers who had that talent in spades. The latter talent is one I find unfortunately lacking in many engineers regardless of their experience.
I’m also coming from a perspective where I started frontend and moved full stack til I was basically backend, but I never lost touch with my instinct to put user intent front and center. When designing a system, it’s been indispensable for anticipating abstraction opportunities.
I’m not saying it’s a perfect recipe, I certainly get astronaut credits from time to time, but more often than not I have a good instinct for “this should be generalized” vs “this should be domain specific and direct” because I make a point to know where the domain has common patterns and I make a point to go learn the fundamentals if I haven’t already.
I agree that premature abstraction is bad. Except when using a mature off-the-shelf tool, e.g. Keycloak. Sometimes if you know that you need to implement a standard and are not willing to put in the effort for an in-house solution, that level of complexity just comes with the territory, and you can choose to only use a subset of the mature tool's functionality.
I also have a lot of experience starting with very lo-fi and manual scripting prototypes to validate user needs and run a process like release management or db admin, which would then need to be wrapped in some light abstractions to hide some of the messy details to share with non-maintainers.
Problem is, I've noticed that more junior developers tend to look at a complex prototype that hits all the use cases and see it as being complicated. Then they go shopping for some shiny toy that can only support a fraction of the necessary cases, and then I have to spend an inordinate amount of time explaining why it's not sufficient and why all the past work should be leveraged with a little bit of abstraction if they don't like the number of steps in the prototype.
So, not-generic can also end up failing from a team-dynamics perspective. Unless everyone can understand the complexity, somebody is going to come along and massively oversimplify the problem, which is a siren song. Cue the tech debt and rewrite circle of life.
Sure, over-abstraction is a problem. And sometimes duplication is better than dependency hell.
But other times more abstraction is better.
In truth it's an optimisation problem, where under-abstracting, over-abstracting, or choosing the wrong abstractions all lead to less optimal outcomes.
To get more optimal outcomes it helps to know what your optimisation targets are: less code, faster compilation, lower maintenance costs, performance, ease of code review, adapting quickly to market demands, passing legally required risk evaluations, or any number of others.
So understand your target, and choose your abstractions with your eyes open.
I’ve dealt with copy paste hell and inheritance hell. Better is the middle way.
I would like to be able to upvote this answer 10 times.
I often remember that old joke:
When asked to pass you the salt, 1% of developers will actually give it to you, 70% will build a machine to pass you a small object (with a XML configuration file to request the salt), and the rest will build a machine to generate machines that can pass any small object from voice command - the latter being bootstrapped by passing itself to other machines.
Also makes me remember the old saying
- junior programmers find complex solutions to simple problems
- senior programmers find simple solutions to simple problems, and complex solutions to complex problems
- great programmers find simple solutions to complex problems
To refocus on the original question, I often find the following misconceptions/traps even in senior programmers' architectures:
1) a complex problem can be solved with a declarative form of the problem + a solving engine (i.e. a framework approach). People think that complexity can be hidden in the engine, while the simple declarative DSL/configuration that the user will input will keep things apparently simple.
End result:
The system becomes opaque to the user, who has no way to understand how things work.
The abstraction quickly leaks in the worst possible way: the configuration file soon requires 100 obscure parameters, and the DSL becomes a Turing-complete language.
2) We have to plan for future use cases, and abstract general concepts in the implementation.
End result:
The abstraction cost is not worth it. You are dealing with a complex implementation for no reason since the potential future use cases of the system are not implemented yet.
3) We should factor out as much code as possible to avoid duplication.
End result:
Overly factored code is very hard to read and follow. There is a sane threshold of factoring that should not be exceeded. Otherwise the system becomes such spaghetti that understanding a small part requires untangling dozens and dozens of 3-line functions.
---
When I have to argue about these topics with other developers, I often make them remember the worst codebase they had to work on.
Most of the time, if you work on a codebase that is _too_ simplistic and you need to add a new feature to it, it's a breeze.
The hard part is when you have an already complex system and you need to make a new feature fit in there.
I'd rather work on a codebase that's too simple than one that's too complex.
> Another vector, more dangerous for senior developers, is thinking that abstraction will necessarily work when dealing with complexity.
I'm pretty good at fighting off features that add too much complexity, but the abstraction trap has gotten me more than once. Usually, a moderate amount of abstraction works great. I've even done well with some really clever abstractions.
Abstraction can be seductive, because it can have a big payoff in reducing complexity. So it's often hard to draw the line, particularly when working in a language with a type of abstraction I've not worked much with before.
Often the danger point comes when you understand how to use an abstraction competently, but you don't yet have the experience needed to be an expert at it.
Yes, but remember Sanchez's Law of Abstraction[0]: abstraction doesn't actually remove complexity, it just puts off having to deal with it.
This may be a price worth paying: transformations that actually reduce complexity are much easier to perform on an abstraction of a program than on the whole mess with all its gory details. It's just something to keep in mind.
As a Java end user I'm really glad that method overloading exists. The two largest libraries I ever built would have been huge messes without overloading. But I take your point that method overloading might be a net negative for the Java platform as a whole.
Yes, java would be a mess without overloading (particularly for telescoping args), but that's because it doesn't include other, simpler features that address the same problems. Namely:
- default parameter values
- union types
- named arguments
I would also throw in list and map literals, to do away with something like varargs.
All of these are much simpler, implementation-wise, than method overloading. None would require anywhere near the compiler or bytecode-level support that method overloading does. It just has a very low power to weight ratio, when other language features are considered. And, unfortunately, it makes implementing all those other features (useful on their own) extremely difficult.
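To make the trade-off concrete, here is a sketch of the "telescoping overloads" pattern that default parameter values would eliminate (the `Greeter` class and its methods are hypothetical, not from any real library):

```java
// Hypothetical example: telescoping overloads in Java, where each
// overload exists only to supply a default argument value.
public class Greeter {
    // The only method that does real work.
    public static String greet(String name, String greeting, boolean shout) {
        String msg = greeting + ", " + name + "!";
        return shout ? msg.toUpperCase() : msg;
    }

    // These overloads would be unnecessary in a language with default
    // parameter values, where the full method could be declared once as
    // e.g. greet(name, greeting = "Hello", shout = false).
    public static String greet(String name, String greeting) {
        return greet(name, greeting, false);
    }

    public static String greet(String name) {
        return greet(name, "Hello");
    }

    public static void main(String[] args) {
        System.out.println(greet("Ada"));             // Hello, Ada!
        System.out.println(greet("Ada", "Hi", true)); // HI, ADA!
    }
}
```

Three method signatures end up in the bytecode where one would do; the compiler then has to resolve among them at every call site.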
> This is one vector for complexity, to be sure. Saying "no" to a feature that is unnecessary, foists a lot of complexity on a system, or has a low power to weight ratio is one of the best skills a senior developer can develop.
I don't consider myself to be an exceptional developer, but this alone has launched my career much faster than if I were merely technically competent. Ultimately, this is a sense of business understanding. The more senior/ranking you are at a company, the more important it is to have this well tuned.
It can be really, really hard to say no at first, but over time the people asking you to build things adapt. Features become smaller, use cases become stronger, and teams generally operate happier. It's much better to build one really strong feature and fill in the gaps with small enhancements than it is to build everything. Eventually, you might build "everything", but you certainly don't need it now. If your product can't exist without "everything", you don't have a strong enough business proposition.
----
Note: No, doesn't mean literally "I'm/we're not building this". It can mean two things:
* Forcing a priority. This is the easiest way to say no, and people won't even notice it. Force a priority for your next sprint. Build a bunch of stuff in that sprint. Force a priority for another sprint. Almost inevitably, new features will be prioritized over the unimportant leftovers. On a 9 month project, I have a 3 month backlog of things that simply became less of a priority. We may build them, but there's a good chance nobody is missing them. Even if we build half of them, that still puts my team 1.5 months ahead. For a full year, that's almost like getting 2 additional months of build time.
* Suggesting an easier alternative. Designers have good hearts and intentions, but don't always know how technically difficult something will be. I'm very aggressive about proposing 80/20 features - aka, we can accomplish almost this in a much cheaper way. Do this on 1 to 3 features a sprint and suddenly, you're churning out noticeably more value.
> I have seen OSGi projects achieve negative complexity savings, while chewing up decades of senior man-years, for example.
I'm not surprised; that and a lot of the Java "culture" in general seems to revolve around creating solutions to problems which are either self-inflicted or don't actually exist in practice, ultimately being oriented towards extracting the most (personal) profit for a given task. In other words: why make simple solutions when more complex ones will allow developers to spend more time and thus be paid more for the solution? When questioned, they can always point to an answer laced with popular buzzwords like "maintainability", "reusability", "extensibility", etc.
I always found it surprising that Java implemented method overloading, but not operator overloading for arithmetic/logical operators. It's such a useful feature for a lot of types and really cleans up code, and the only real reason it's hard to do is because it relies on method overloading. But once you have that, why not just sugar "a + b" into "a.__add(b)" (or whatever).
You don't have to go all C++ insane with it and allow overloading of everything, but just being able to do arithmetic with non-primitive types would be very nice.
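For a sense of what that sugar would buy: arithmetic on non-primitive types in Java today has to be spelled out as a chain of method calls. This sketch uses the real `java.math.BigInteger` API; the `a * a + b` surface syntax it comments on is the hypothetical sugared form, not valid Java:

```java
import java.math.BigInteger;

public class OpSugar {
    public static BigInteger poly(BigInteger a, BigInteger b) {
        // Today: "a*a + b" must be written as explicit method calls.
        // With the hypothetical sugar, the expression a * a + b would
        // simply desugar into exactly this chain.
        return a.multiply(a).add(b);
    }

    public static void main(String[] args) {
        // 7*7 + 5 = 54
        System.out.println(poly(BigInteger.valueOf(7), BigInteger.valueOf(5)));
    }
}
```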
Operator overloading is deliberately more limited in D, with an eye towards discouraging its use for anything other than arithmetic.
A couple of operator overloading abominations in C++ are iostreams, and a regex DSL (where operators are re-purposed to kinda sorta look like regex expressions).
One in-between option I have kicked around w/ people is offering interfaces that allow you to implement operator overloading (e.g. Numeric). Then you wouldn't have one-off or cutesy operator overloading, but would rather need to meet some minimum for the feature. (Or at least feel bad that you have a lot of UnsupportedOperationExceptions)
Java had/has a ton of potential, but they kept/keep adding features that make no sense to me, making the language much more complex without some obvious day-to-day stuff like list literals, map literals, map access syntax, etc.
All three features, useful on their own, could be added to java at maybe 10% of the complexity of method overloading. With method overloading, they are all exponentially more complicated.
It's crazy how many places method overloading ends up rearing its ugly head if you are dealing with the JVM.
I imagine it refers to the comparison of argument types to find the best match among overloaded methods with the same arity. e.g. when the machine has got "void foo(java.lang.Object)" and "void foo(java.lang.Number)" to choose from.
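That case can be shown in a few lines (class and method names made up for illustration):

```java
public class Resolution {
    static String foo(Object o) { return "Object"; }
    static String foo(Number n) { return "Number"; }

    public static void main(String[] args) {
        // Both overloads are applicable to an int (after boxing to
        // Integer), so the compiler must pick the most specific one:
        // Number is a subtype of Object, so foo(Number) wins.
        System.out.println(foo(42));     // Number
        // Only foo(Object) is applicable to a String.
        System.out.println(foo("salt")); // Object
    }
}
```

This "most specific method" scoring is exactly the compile-time machinery the grandparent comment is pointing at; it grows much hairier once varargs, boxing, and generics all interact.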
> Default parameters are far more complicated than method overloading.
I've implemented both, I disagree. Unless you are talking about default arguments in the presence of method overloading, which is insane, and which I have also implemented.
Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer. Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term.
You have to develop an opinion of the thing you're building/maintaining and what it should be able to do and not do. You can't count on project managers to do that.
The trick to doing this effectively is to find out the problem the feature is actually trying to solve and providing a better solution.
Usually the request is from end users of software and they have identified the problem (we need to do xyz) and prescribed a solution (put the ability to do xyz in a modal on this page.) But if you can look to what other software has done, do a UX review and find a way to add a feature in that solves their problem in a way that makes sense in the larger context of the software, they won't have a problem with it since it solves their problem and the codebase will take less of a hit.
Unfortunately, it's a lot easier to just add the modal without complaint.
> Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term.
This. This is why you want senior devs too. You want people who can stand up to poorly conceived features. The most important job is to say “no”, or “how about X instead?”. I get furious when I see senior colleagues defend horrible design decisions with “it was what was specified”. Your job as a developer is to take the spec from the product owner and suggest one that actually fits the system in terms of maintainability, performance etc.
Blindly implementing a specification is a horrible idea.
I'd like to add that it also depends on the culture of the team.
In team A, I challenged many ideas coming from the designer and the product owner. I was also pushing back on technical decisions coming from the CTO. They always listened: I would change their minds a couple of times, sometimes I realized I was wrong. Only a few times could we not resolve the issue, but in the end I felt that they heard me and considered my point of view.
In team B, I started out with the same mindset and tried to challenge the validity of their product decisions. It was superficially acknowledged, but 98% of the time my input was basically ignored, and I felt like a party-pooper for pointing out contradictions and mistakes in their thinking. After months of trying to be listened to, I realized I was there to be a coding monkey: they didn't want my input on product problems or on technical ones. I learned to just nod and smile and cheer them on in their bad decisions; they felt great because they felt validated. It was also better for my happiness short term, as it's not a great feeling to feel that I'm bumming them out.
Long term, I started looking for new positions, and since then quit already. I still feel it's a shame as the "idea" had great potential.
"In the beginning was the word". Language shapes reality. As software engineers, the second we accept that 'product owner' is a legitimate title, that second we lost agency to push back on poorly conceived features. Say it loud and clear: you also have a stake in the product.
This is one of my pet peeves when it comes to software development. I _really_ think that software development project managers ought to be able to spot the difference between a good architectural decision and a bad architectural decision, a good design decision and a bad design decision, a well implemented function and a badly implemented function. It sinks my heart, as a software development professional, having to work for project managers who, in many cases, would be hard pressed to explain what a byte is. It's just so wrong.
It's like working for a newspaper editor who does not know how to read or write. It does not mean that you cannot produce a newspaper, but it depends upon the workers stepping in and making all the strategic technical decisions behind the project manager's back. As an engineer you can live with it for some time, but eventually it ends up feeling fake, like a masquerade.
I'm much more in favor of hands on leadership types like Microsoft's Dave Cutler, with the technical skills to actually lead, and not just superficially 'manage'.
> Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer.
Example from my current project: 1. Inherit half-finished large software with lots of features. 2. It contains bugs and is impossible to effectively maintain/develop with the allocated manpower. 3. Management still wants all the features. 4. Be brave and don't work on anything except essentials until they're sorted out. Lie to management if you have to, e.g. that you found serious bugs that must be fixed (which is kind of true, but they wouldn't understand).
I've also seen the opposite: leads who push for minimal features to the point that, IMO, the product would fail.
I don't know what good examples would be. Maybe a word processor without support for bold and italics. Maybe a code compiler with no error messages. Maybe an email client with no threading.
Does a word processor need every feature of Word? No. But does it need more than notepad? Yes!
Basically you get one chance to make a good first impression. If the customers and press label your product a certain way, it will take a ton of effort to overcome the inertia of the anchor you've given them.
It's also faster to just add the modal. When you are asked to do xyz ASAP because it has been sold to a customer and should have been deployed a week ago, you don't feel the need to do a UX review.
Missing a deployment by a week (or more accurately, deploying on that long of a timeframe), and only doing UX reviews when you "feel the need" both speak to larger organizational problems that probably aren't going to get solved by having a senior dev push back on a poorly thought-out feature.
What you describe is a lack of a designer/architect in the loop. Devs are supposed to implement what is requested, as requested. Designers and architects are supposed to figure out what to request to the devs, based on the customers' needs. And this indeed entails figuring out the customers' actual problem, rather than parroting their solutions (which they are almost always unqualified to design).
Your job description of devs is very limiting. Software developers should be close to the customer problem, work to understand it, and develop for it. When you silo them away this way and expect them to be order takers you add bloat to the team and inefficiencies.
Within a problem space, there are two kinds of complexity: inherent complexity, and accidental complexity. This article is about accidental complexity.
There is, as far as I can tell, an enormous amount of accidental complexity in software. Far more than there is inherent complexity. From my personal experience, this largely arises when no time has been taken to fully understand the problem space, and the first potential solution is the one used.
In that case, the solution will be discovered to be partially deficient in some manner, and more code will simply be tacked on to address the newfound shortcomings. I'm not referring here to later expansion of the feature set or addressing corner cases, either. I'm referring to code that was not constructed to appropriately model the desired behavior, so that instances of branching logic must be embedded within the system all over the place, or perhaps some class hierarchy is injected and reflection is used in an attempt to make the poor design decisions function.
I don't think adding features makes software more complex, unless those features are somehow non-systemic; that is, there is no way to add them into the existing representation of available behaviors. Perhaps an example would be a set of workflows a user can navigate, and adding a new workflow simply entails the construction of that workflow and making it available via the addition to some list. That would be a systemic feature. On the other hand if the entirety of the behaviors embedded within the workflow were instead presented as commands or buttons or various options that needed to be scattered throughout the application, that would be a non-systemic addition, and introduce accidental complexity.
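A minimal sketch of the systemic version of that workflow example (all names hypothetical): every workflow implements one interface and lives in a single registry, so adding a feature means writing one implementation and adding it to one list.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WorkflowDemo {
    // The shared shape every workflow must fit.
    interface Workflow {
        String run(String input);
    }

    // Adding a feature = writing one Workflow and registering it here.
    // Nothing else in the application needs to change.
    static final Map<String, Workflow> REGISTRY = new LinkedHashMap<>();
    static {
        REGISTRY.put("shout", in -> in.toUpperCase());
        REGISTRY.put("reverse", in -> new StringBuilder(in).reverse().toString());
    }

    static String run(String name, String input) {
        return REGISTRY.get(name).run(input);
    }

    public static void main(String[] args) {
        System.out.println(run("shout", "hi"));    // HI
        System.out.println(run("reverse", "abc")); // cba
    }
}
```

The non-systemic alternative would be scattering `if (feature == ...)` branches through the UI and business logic for each new behavior, which is where the accidental complexity creeps in.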
One things I've noticed about building software is that the most appropriate contours of the problem space often only become clear with hindsight.
Even if you start off with the best intentions about not putting in too many features it won't always help.
This is why the second mover can also have an advantage in some areas. If they recognize the appropriate contours they can avoid the crufty features and more directly and effectively tackle the main problem.
While there is accidental complexity, we can not measure what is or isn’t accidental. So I think the statement that the majority of complexity is accidental is completely made up. I also think it’s wrong.
The majority of complexity in software is unavoidable. Accidental complexity just makes it even worse.
I feel that a lot of people misunderstand "complexity" vs "complicated". There's nothing wrong with complex. It's the nature of life that things are complex. Complicated though is almost always a negative. Complex code is fine, it's probably solving a real problem. Complicated code is not, it's just hard to work with.
My experience is that "complicated" vs. "complex", as you define them, changes depending on who is looking at the code.
If someone has a philosophical aversion to something like abstraction, then they will label it "complicated," but I use abstraction, all the time, to insert low-cost pivot points in a design. I just did it this morning, as I was developing a design to aggregate search results from multiple servers. My abstraction will afford myself, or other future developers, the ability to add more data sources in the future, in a low-risk fashion.
I also design frameworks in "layers," that often implement "philosophical realms," as opposed to practical ones. Think OSI layers.
That can mean that adding a new command to the REST API, for example, may require that I implement the actual leverage in a very low-level layer, then add support in subsequent layers to pass the command through.
That can add complexity, and quality problems. When I do something like that, I need to test carefully. The reason that I do that, is so, at an indeterminate point in the future, I can "swap out" entire layers, and replace them with newer tech. If I don't think that will be important, then I may want to rethink my design.
That is the philosophy behind the OSI layers. They allow drastically different (and interchangeable) implementation at each layer, with clear interface points, above and below.
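The "low-cost pivot point" from the search-aggregation example earlier in this comment can be sketched like this (a minimal, hypothetical version with made-up names): one small interface is the abstraction, and adding a data source later is one new class.

```java
import java.util.ArrayList;
import java.util.List;

public class SearchDemo {
    // The pivot point: the aggregator only knows this interface.
    interface SearchSource {
        List<String> search(String query);
    }

    // This code never changes when a new server or data source is
    // added; a new source is just one more SearchSource implementation.
    static List<String> aggregate(String query, List<SearchSource> sources) {
        List<String> results = new ArrayList<>();
        for (SearchSource s : sources) {
            results.addAll(s.search(query));
        }
        return results;
    }

    public static void main(String[] args) {
        SearchSource serverA = q -> List.of("A:" + q);
        SearchSource serverB = q -> List.of("B:" + q);
        System.out.println(aggregate("salt", List.of(serverA, serverB)));
    }
}
```

The cost is one extra interface; the payoff only materializes if new sources actually show up, which is the crux of the parent's question.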
To be clear: I'm responding to your approach to frameworks, not your first example of search result aggregation. I also want to emphasise I'm posting this out of genuine interest, not contrarianism or antagonism.
Is there overlap between your philosophical layers and practical utility? The kinds of things that have been required to change in my career so far were base assumptions in the business domain, which no amount of premature abstraction could have prepared me for.
I've never witnessed a need to "swap out" an entire layer. Have you? In what scenario did you need to swap out... what exactly? Did these philosophical abstractions turn out to be the correct ones when a need did arise? Did they make the transition to the new reality easier? Does the transition cost outweigh the slower development cost incurred from the abstractions' overhead?
I keep seeing people claim your approach is a good one, and I'm genuinely curious if there is any evidence backing it up. I'll gladly take anecdata.
And quite often the philosophy doesn't line up with reality. The OSI layers have little relation to how networking actually works and it would be next to impossible to replace some of those layers.
This is a good point. I once worked on a system that checked projects met various legal standards and rules before allowing changes to be saved. This system was complex because the rules were complex, the only way to make it simpler would have been to convince the government to make the rules simpler.
I agree with your comment. However, a good tool for controlling complexity is deciding what your system is going to do. As I said in a sibling comment, consider method overloading in java: this is a real world feature, not uncommon in other languages. There are arguments for and against it (I am against it.)
The implementation of it may be amazing code, but none the less it makes the java compiler and runtime far more complicated that it would be if the feature were omitted.
So, again, I agree with you, but I also agree with the articles point that choosing features carefully is an important tool in controlling complexity.
I'd say that the only reason software seems too complex, rather than as complex as it needs to be, is that every programmer thinks he can rewrite it in a simpler way; but when he's done, it's as complex as that which he rewrote.
I've seen it happen so many times, and I've done it. It's the very same principle that leads to almost every construction project running behind schedule — a man simply underestimates the complexity of nigh every task he endeavors to complete.
I see your points, and I see the merit of "rewrite syndrome", and lean strongly towards automated-test backed refactoring, and all in all I disagree with your thesis.
Sometimes, software patches and new features get tacked on and tacked on and the system loses all semblance of cohesion or integrity. Thinking of the system as a whole, iterating with the confidence brought by tests of some sort, one can begin to detangle all the unnecessary intermixing and duplicate work and begin to make the system sensible.
I completely disagree. Certainly, in a standard software organization, the chances are pretty good that a rewritten version will be just as broken as the old one, but in a new and different way.
But I've taken several projects in the hundreds of thousands of lines and translated them into projects with equivalent functionality and between 1 and 2 decimal orders of magnitude less source code.
That's not an argument for rewriting in all circumstances - I just think at least half of most mature software is 'junk DNA': useless boilerplate, unused paths, poor abstractions, etc.
Well, when Wayland started they went in on the assumption that they could cast away quite a bit that “no one was using” or that “wasn't necessary”.
And then they had to include more of what they cast away because they underestimated the number of consumers of things they personally weren't using.
libinput originally did not have a way to disable or configure pointer acceleration, I believe because the developer thought there was no reason to ever turn it off. He was not a gamer and was largely ignorant of how essential disabling it is for the level of accuracy required by video games.
That doesn't work in reality because for any large, complex piece of software it's impossible to rediscover all of the requirements. There are always hidden requirements which were never properly documented but somehow ended up as code in the legacy system.
People do get promoted and paid for simplifying and refactoring code. Heck, I got promoted and am currently being paid to do just that. It often involves adding value in other ways simultaneously, but what you said is just false.
Funny that I'm nobody again. I work for a big German retailer. A big part of my work includes simplifying and refactoring code, so that we have lower maintenance costs and can move faster as an organisational unit. This was also a big factor in my last promotion.
We also increased profit by lowering our runtime cost by some of those optimizations.
Not directly. But simplifying and refactoring can help you understand the code better than doing routine maintenance. This helps you solve bugs and write new features faster and with more stability, and also gives you better input during meetings.
So, indirectly, yes: you can get promoted and collect revenue by simplifying and refactoring.
Most of the complexity and bugs I see in software are not because of the problem domain, but rather because of over-abstraction, under-abstraction and abstraction leaks, and also because of limitations and complexities introduced by the programming model or environment.
(unless you consider that "supporting five operating systems and the language must be X" is part of "essential complexity")
Of course, the more complex your domain is, the bigger the program. But the non-essential complexity that exists due to the bureaucracy of languages/libraries/frameworks is a much bigger factor in adding complexity, bugs and lines of code. Some examples:
- Manual allocation and deallocation of memory is a good example of something that we might think of as essential, since it's intertwined with our domain code, but turns out to be unnecessary (even though the replacement has downsides). The billion-dollar mistake (nulls) is another one.
- Supporting multiple environments/browsers/platforms. Competition is good, but the cost is steep for third-party developers. Using multiplatform frameworks partially solves but also has drawbacks: performance, limitations, bugs, leaky abstractions. If you need to be closer to metal, then different OSs have different threading models or API styles. Sometimes they don't even expose the main loop to you. You need to work around those limitations.
- In most environments we still don't have a nice way of handling async operations without leaking the abstraction. The current solution is adding "isLoading" data everywhere (or creating some abstraction around both buttons and the fetching mechanism). Concurrent Mode in React is probably the best thing we have so far.
- Most modern Javascript toolchains need multiple parsing steps: in the transpiler (to convert to ES5), in the bundler (to detect dependencies), in the linter, in the prettifier, and in tests. Compatibility between them is not guaranteed, and you might get conflicts which have to be solved by finding a middle ground, and that sometimes takes more time than writing features.
- Dogmatism is another issue. I remember in one workplace years ago there was an "ORM only" rule, and most of us would work out the SQL and then convert it to Rails ActiveRecord (or worse: Arel). In the end it was a complete waste of time and the results were impossible to maintain.
- I also think that the old Peter Norvig quote that "design patterns are missing language features" still stands. Go has proven that it's possible to have "dumb, simple code", but in other languages our best practices involve adding non-essential complexity to products.
The only exception to that in my experience is SQL: if a query is too big, it's not due to some bureaucracy of the language, but rather due to the complexity of the domain itself.
For me, what you're describing is more mess than complexity. I see your point though, but I'd also say that some of the things you describe here are a direct result of the real-world problem domain being complex, and changing over time.
For example, wrong abstractions. As long as the engineers writing the software are competent (if they aren't, that's a totally different story), they'll try to choose the right level of abstraction for the current understanding of the problem. Years down the road, a lot of their choices may turn out to be mistakes, very often because the understanding of the problem changed, or the problem itself changed, and something that was a nice and clean solution isn't one anymore. If the problem isn't fully fixed and 100% understood, you'll never be able to make all the right, future-proof decisions about abstractions.
Here's my approach. I think many feature requests fall under the X/Y problem.
- view a new feature request as a new user capability
- extend the model that the software implements, to encompass that capability - regardless of how the feature was implemented in the requester's head.
- extend the software to match the new model. This may require refactoring as the model may have had to undergo shifts to encompass the new capability
For example:
I have a car. I model the car as four wheels, an engine, a chassis, and a lever. The engine drives the wheels, the wheels support the chassis, the chassis contains the engine. A lever in the chassis sets the engine in motion. It's a simple model and is capable of 1. sitting still and 2. moving forwards and backwards. This is all the capabilities we've needed so far.
Now suppose a user requests that the car be fitted with mecanum wheels. The default industry response is to either implement the change as requested, or reject it. I propose that the correct move instead is to ask the user WHY they want mecanum wheels. They reveal that they want the car to move in 2 dimensions, rather than one. From that understanding you can extend the model of the car to encompass the feature - you may add the mecanum wheels and a mechanism to control them, you may add a steering wheel and rack-and-pinion, you may do something completely different - totally depending on how and why the user wants 2d movement (depending on further questioning, i.e. "5 whys"). But you are working to the capability, not the feature. By extending the model, you can then change the software to match this new model.
I think as software engineers we have a tendency to forget the model and focus only on the code. A request for mecanum wheels becomes a question of how to change the software to encompass that feature. But we must always remember the existence of the model, and the user's relationship to it.
In my humble opinion, a lot of projects go quietly bad when they experience some new requirement whose architectural impact gets underestimated by project management. At such times senior devs have either already moved on, or have their eye off the ball, such that new features get incorporated without the necessary architectural support. These inflection points can themselves introduce complexity, but often become the gateway for all sorts of subsequent small things that explode in size. In short: don't miss architecture moments.
The other thing that happens, and really dooms a project is when the senior people leave. Eventually you end up with a team that doesn't really understand the code.
This leads them to just tack on features while changing as little as possible. This will grow into something truly unmaintainable, virtually guaranteeing no competent work will be done on the project again.
The take-away I took from it is that as developers, we love to solve problems using technical solutions. Sometimes, the real problem is one of narration. As we evolve our languages, better technical abstractions become available. But that's not going to prevent 'enterprisey' code from making things look and sound more difficult. Just look at any other field where abstractions aren't limited by technicalities: the same overcomplicated mess forms. Bad narrators narrate poorly, even when they are not limited.
Abstraction can be used to evoke category theoretical things, but more often it’s used to avoid making a decision. It’s fear based. It’s overbuilding the bridge to avoid problems we don’t know or understand. And that is not Engineering.
I find sometimes that it helps me to think of it as a fighter or martial artist might. It is only necessary to not be where the strike is, when the strike happens. Anything more might take you farther from your goal. Cut off your options.
Or a horticulturist: here is a list of things this plant requires, and they cannot all happen at once, so we will do these two now, and assuming plans don’t change, we will do the rest next year. But plans always change, and sometimes for the better.
In Chess, in Go, in Jazz, in coaching gymnastics, even in gun safety, there are things that could happen. You have to be in the right spot in case they do, but you hope they don’t. And if they don’t, you still did the right thing. Just enough, but not too much.
What is hard for outsiders to see is all the preparation for actions that never happen. We talk about mindfulness as if it hasn’t been there all along, in every skilled trade, taking up the space that looks like waste. People learn about the preparation and follow through, instead of waiting. Waiting doesn’t look impressive.
I’m self taught so the former has been more challenging than it might be if I’d gone through a rigorous CS program, but I’ve benefited from learning among peers who had that talent in spades. The latter talent is one I find unfortunately lacking in many engineers regardless of their experience.
I’m also coming from a perspective where I started frontend and moved full stack til I was basically backend, but I never lost touch with my instinct to put user intent front and center. When designing a system, it’s been indispensable for anticipating abstraction opportunities.
I’m not saying it’s a perfect recipe, I certainly get astronaut credits from time to time, but more often than not I have a good instinct for “this should be generalized” vs “this should be domain specific and direct” because I make a point to know where the domain has common patterns and I make a point to go learn the fundamentals if I haven’t already.
I also have a lot of experience starting with very lo-fi and manual scripting prototypes to validate user needs and run a process like release management or db admin, which would then need to be wrapped in some light abstractions to hide some of the messy details to share with non-maintainers.
Problem is, I've noticed that more junior developers tend to look at a complex prototype that hits all the user cases, and see it as being complicated. Then they go shopping for some shiny toy that can only support a fraction of the necessary cases, and then I have to spend an inordinate amount of time explaining why it's not sufficient and that all the past work should be leveraged with a little bit of abstraction if they don't like the number of steps in the prototype.
So, not-generic can also end up failing from a team dynamic perspective. Unless everyone can understand the complexity, somebody is going to come along and massively oversimplify the problem, which is a siren song. Cue the tech debt and rewrite circle of life.
In truth it’s an optimisation problem, where both under- and over-abstracting, or choosing the wrong abstractions, lead to less optimal outcomes.
To get more optimal outcomes it helps to know what your optimisation targets are: less code, faster compilation, lower maintenance costs, performance, ease of code review, adapting quickly to market demands, passing legally required risk evaluations, or any number of others.
So understand your target, and choose your abstractions with your eyes open.
I’ve dealt with copy paste hell and inheritance hell. Better is the middle way.
I often remember that old joke:
When asked to pass you the salt, 1% of developers will actually give it to you, 70% will build a machine to pass you a small object (with an XML configuration file to request the salt), and the rest will build a machine to generate machines that can pass any small object from voice command - the latter being bootstrapped by passing itself to other machines.
Also makes me remember the old saying
- junior programmers find complex solutions to simple problems
- senior programmers find simple solutions to simple problems, and complex solutions to complex problems
- great programmers find simple solutions to complex problems
To refocus on the original question, I often find the following misconceptions/traps in even senior programmers architecture:
1) a complex problem can be solved with a declarative form of the problem + a solving engine (i.e. a framework approach). People think that complexity can be hidden in the engine, while the simple declarative DSL/configuration that the user will input will keep things apparently simple.
End result:
The system becomes opaque to the user, who has no way to understand how things work.
The abstraction quickly leaks in the worst possible way: the configuration file soon requires 100 obscure parameters, and the DSL becomes a Turing-complete language.
2) We have to plan for future use cases, and abstract general concepts in the implementation.
End result:
The abstraction cost is not worth it. You are dealing with a complex implementation for no reason since the potential future use cases of the system are not implemented yet.
3) We should factor out as much code as possible to avoid duplication.
End result:
Overly factored code is very hard to read and follow. There is a sane threshold in the amount of factoring that should not be exceeded; otherwise the system becomes so spaghettified that understanding a small part requires untangling dozens and dozens of three-line functions.
---
When I have to argue about these topics with other developers, I often make them remember the worst codebase they had to work on.
Most of the time, if you work on a codebase that is _too_ simplistic and you need to add a new feature to it, it's a breeze.
The hard part is when you have an already complex system and you need to make a new feature fit in there.
I'd rather work on a codebase that's too simple rather than too complex.
I'm pretty good at fighting off features that add too much complexity, but the abstraction trap has gotten me more than once. Usually, a moderate amount of abstraction works great. I've even done well with some really clever abstractions.
Abstraction can be seductive, because it can have a big payoff in reducing complexity. So it's often hard to draw the line, particularly when working in a language with a type of abstraction I've not worked much with before.
Often the danger point comes when you understand how to use an abstraction competently, but you don't yet have the experience needed to be an expert at it.
This may be a price worth paying: transformations that actually reduce complexity are much easier to perform on an abstraction of a program than on the whole mess with all its gory details. It's just something to keep in mind.
[0] https://news.ycombinator.com/item?id=22601623
- default parameter values
- union types
- named arguments
I would also throw in list and map literals, to do away with something like varargs.
All of these are much simpler, implementation-wise, than method overloading. None would require anywhere near the compiler or bytecode-level support that method overloading does. It just has a very low power-to-weight ratio when other language features are considered. And, unfortunately, it makes implementing all those other features (useful on their own) extremely difficult.
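To make the comparison concrete, here is a minimal sketch (all names invented) of the telescoping-overload boilerplate Java needs today to simulate a single method with default parameter values:

```java
// Hypothetical example: simulating default parameter values with
// telescoping overloads. In a language with real default parameters,
// all three declarations below collapse into one.
public class Telescoping {
    static String connect(String host, int port, int timeoutMs) {
        return host + ":" + port + " (timeout=" + timeoutMs + "ms)";
    }

    // Each "default" costs one extra overload that just forwards along.
    static String connect(String host, int port) {
        return connect(host, port, 30_000);
    }

    static String connect(String host) {
        return connect(host, 443);
    }

    public static void main(String[] args) {
        System.out.println(connect("example.com"));
        // prints: example.com:443 (timeout=30000ms)
    }
}
```

A language with default parameter values expresses all three signatures as a single declaration, with no extra entries for the resolution machinery to score.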
But unfortunately, it is way overused. It takes a long time to develop good judgement. I'm still working on it.
I don't consider myself to be an exceptional developer, but this alone has launched my career much faster than it would have if I were purely technically competent. Ultimately, this is a sense of business understanding. The more senior/ranking you are at a company, the more important it is to have this tuned in well.
It can be really, really hard to say no at first, but over time the people asking you to build things adapt. Features become smaller, use cases become stronger, and teams generally operate happier. It's much better to build one really strong feature and fill in the gaps with small enhancements than it is to build everything. Eventually, you might build "everything", but you certainly don't need it now. If your product can't exist without "everything", you don't have a strong enough business proposition.
----
Note: No, doesn't mean literally "I'm/we're not building this". It can mean two things:
* Forcing a priority. This is the easiest way to say no and people won't even notice it. Force a priority for your next sprint. Build a bunch of stuff in a sprint. Force a priority for another sprint. Almost inevitably, new features will be prioritized over the unimportant leftovers. On a 9 month project, I have a 3 month backlog of things that simply became less of a priority. We may build them, but there's a good chance nobody is missing them. Even if we build half of them, that still puts my team 1.5 months ahead. For a full year, that's almost like getting 2 additional months of build time.
* Suggesting an easier alternative. Designers have good hearts and intentions, but don't always know how technically difficult something will be. I'm very aggressive about proposing 80/20 features - aka, we can accomplish almost this in a much cheaper way. Do this on 1 to 3 features a sprint and suddenly, you're churning out noticeably more value.
I'm not surprised; that and a lot of the Java "culture" in general seems to revolve around creating solutions to problems which are either self-inflicted or don't actually exist in practice, ultimately being oriented towards extracting the most (personal) profit for a given task. In other words: why make simple solutions when more complex ones will allow developers to spend more time and thus be paid more for the solution? When questioned, they can always point to an answer laced with popular buzzwords like "maintainability", "reusability", "extensibility", etc.
You don't have to go all C++ insane with it and allow overloading of everything, but just being able to do arithmetic with non-primitive types would be very nice.
A couple of operator overloading abominations in C++ are iostreams, and a regex DSL (where operators are re-purposed to kinda sorta look like regex expressions).
One in-between option I have kicked around w/ people is offering interfaces that allow you to implement operator overloading (e.g. Numeric). Then you wouldn't have one-off or cutesy operator overloading, but would rather need to meet some minimum for the feature. (Or at least feel bad that you have a lot of UnsupportedOperationExceptions)
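A rough sketch of that in-between idea; note that `Numeric` here is an invented name, not a real JDK interface. The hypothetical compiler rule would desugar `a + b` into `a.plus(b)` only when the static type implements the interface, ruling out iostream-style repurposing:

```java
// Hypothetical gating interface for operator support; all names are
// invented for illustration and are not part of any real JDK.
interface Numeric<T> {
    T plus(T other);
    T times(T other);
}

// A type earns arithmetic operators by meeting the whole contract.
final class Complex implements Numeric<Complex> {
    final double re, im;

    Complex(double re, double im) { this.re = re; this.im = im; }

    public Complex plus(Complex o) {
        return new Complex(re + o.re, im + o.im);
    }

    public Complex times(Complex o) {
        // (a+bi)(c+di) = (ac - bd) + (ad + bc)i
        return new Complex(re * o.re - im * o.im,
                           re * o.im + im * o.re);
    }
}
```

Under that rule, `new Complex(1, 2) + new Complex(3, 4)` would be sugar for `.plus(...)`, while a stream or regex class could never acquire `<<` or `+` just by declaring a cute method.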
Java had/has a ton of potential, but they kept/keep adding features that make no sense to me, making the language much more complex without some obvious day-to-day stuff like list literals, map literals, map access syntax, etc.
Oh well.
All three features, useful on their own, could be added to java at maybe 10% of the complexity of method overloading. With method overloading, they are all exponentially more complicated.
It's crazy how many places method overloading ends up rearing its ugly head if you are dealing with the JVM.
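A small hypothetical illustration of that resolution complexity: given both a varargs overload and a widening-compatible one, the spec's resolution phases try primitive widening before variable arity, so the seemingly obvious varargs match loses:

```java
// Hypothetical example of overload resolution picking a surprising
// candidate. The varargs overload looks like the natural match for
// f(1), but resolution considers widening (int -> long) in an earlier
// phase than varargs, so f(long) wins.
public class Resolution {
    static String f(long x) {
        return "f(long)";
    }

    static String f(int x, int... rest) {
        return "f(int, int...)";
    }

    public static void main(String[] args) {
        System.out.println(f(1));  // prints: f(long)
    }
}
```

Every compiler (and every human reader) for the platform has to internalize these phase rules, which is exactly the kind of cost the feature imposes everywhere it touches.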
That said, I fully agree on the OSGi point. Makes me worried about a lot of the new features I hear are on the way. :(
I've implemented both, I disagree. Unless you are talking about default arguments in the presence of method overloading, which is insane, and which I have also implemented.
You have to develop an opinion of the thing you're building/maintaining and what it should be able to do and not do. You can't count on project managers to do that.
The trick to doing this effectively is to find out the problem the feature is actually trying to solve and providing a better solution.
Usually the request is from end users of software and they have identified the problem (we need to do xyz) and prescribed a solution (put the ability to do xyz in a modal on this page.) But if you can look to what other software has done, do a UX review and find a way to add a feature in that solves their problem in a way that makes sense in the larger context of the software, they won't have a problem with it since it solves their problem and the codebase will take less of a hit.
Unfortunately, it's a lot easier to just add the modal without complaint.
This. This is why you want senior devs too. You want people who can stand up to poorly conceived features. The most important job is to say “no”, or “how about X instead?”. I get furious when I see senior colleagues defend horrible design decisions with “it was what was specified”. Your job as a developer is to take the spec from the product owner and suggest one that actually fits the system in terms of maintainability, performance etc. Blindly implementing a specification is a horrible idea.
In team A, I challenged many ideas coming from the designer and the product owner. I was also pushing back on technical decisions coming from the CTO. They always listened: I would change their minds a couple of times, sometimes I realized I was wrong. Only a few times could we not resolve the issue, but in the end I felt that they heard me and considered my point of view.
In team B, I started out with the same mindset and was trying to challenge the validity of their product decisions. It was superficially acknowledged, but 98% of the time my input was basically ignored, and I felt like a party-pooper for pointing out contradictions and mistakes in their thinking. After months of trying to be listened to, I realized I was there to be a coding monkey; they didn't want my input on product or on technical problems. I learned to just nod and smile and cheer them on in their bad decisions; they felt great because they felt validated. It was also better for my happiness short term, as it's not a great feeling to feel that I'm bumming them out.
Long term, I started looking for new positions, and since then quit already. I still feel it's a shame as the "idea" had great potential.
This is one of my pet peeves when it comes to software development. I _really_ think that software development project managers ought to be able to spot the difference between a good architectural decision and a bad architectural decision, a good design decision and a bad design decision, a well implemented function and a badly implemented function. It makes my heart sink, as a software development professional, having to work for project managers who, in many cases, would be hard pressed to explain what a byte is. It's just so wrong.
It's like working for a newspaper editor who does not know how to read or write. It does not mean that you cannot produce a newspaper, but it depends upon the workers stepping in and doing all the strategic technical decision-making behind the project managers' backs. As an engineer you can live with it for some time, but eventually it ends up feeling fake, and like a masquerade.
I'm much more in favor of hands on leadership types like Microsoft's Dave Cutler, with the technical skills to actually lead, and not just superficially 'manage'.
Example from my current project: 1. Inherit half-finished large software with lots of features. 2. It contains bugs and is impossible to effectively maintain/develop with the allocated manpower. 3. Management still wants all features. 4. Be brave and don't work on anything except essentials until they're sorted out. Lie to management if you have to, e.g. that you found serious bugs that must be fixed (which is kind of true, but they wouldn't understand).
I don't know what good examples would be. Maybe a word processor without support for bold and italics. Maybe a code compiler with no error messages. Maybe an email client with no threading.
Does a word processor need every feature of Word? No. But does it need more than notepad? Yes!
Basically you get one chance to make a good first impression. If the customers and press label your product a certain way, it will take a ton of effort to change the inertia of the anchor you've given them.
There is, as far as I can tell, an enormous amount of accidental complexity in software. Far more than there is inherent complexity. From my personal experience, this largely arises when no time has been taken to fully understand the problem space, and the first potential solution is the one used.
In that case, the solution will be discovered as partially deficient in some manner, and more code will simply be tacked on to address the newfound shortcomings. I'm not referring here to later expansion of the feature set or addressing corner cases, either. I'm referring to code that was not constructed to appropriately model the desired behavior, so instances of branching logic must be embedded within the system all over the place, or perhaps some class hierarchy is injected, and reflection is used in an attempt to make the poor design decisions function.
I don't think adding features makes software more complex, unless those features are somehow non-systemic; that is, there is no way to add them into the existing representation of available behaviors. Perhaps an example would be a set of workflows a user can navigate, and adding a new workflow simply entails the construction of that workflow and making it available via the addition to some list. That would be a systemic feature. On the other hand if the entirety of the behaviors embedded within the workflow were instead presented as commands or buttons or various options that needed to be scattered throughout the application, that would be a non-systemic addition, and introduce accidental complexity.
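A minimal sketch of the systemic case described above (all names invented): adding a workflow touches exactly one registration site, instead of scattering branches through the application:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of a "systemic" feature addition: each workflow is a
// self-contained unit, looked up from a single registry.
interface Workflow {
    List<String> steps();
}

class WorkflowRegistry {
    private final Map<String, Workflow> byName = new LinkedHashMap<>();

    void register(String name, Workflow w) {
        byName.put(name, w);
    }

    List<String> stepsFor(String name) {
        return byName.get(name).steps();
    }
}

class Demo {
    public static void main(String[] args) {
        WorkflowRegistry registry = new WorkflowRegistry();
        // A new feature is one more entry here, not new branches
        // embedded throughout the codebase.
        registry.register("onboarding",
                () -> List.of("collect-info", "verify", "activate"));
        System.out.println(registry.stepsFor("onboarding"));
    }
}
```

The non-systemic alternative would surface each workflow's behaviors as commands and buttons wired in ad hoc across the UI, which is where the accidental complexity creeps in.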
Even if you start off with the best intentions about not putting in too many features it won't always help.
This is why the second mover can also have an advantage in some areas. If they recognize the appropriate contours they can avoid the crufty features and more directly and effectively tackle the main problem.
The majority of complexity in software is unavoidable. Accidental complexity just makes it even worse.
Relatedly, I have a simple new maths for counting complexity, so that you can compare two "complex" solutions and pick the less "complicated" one: https://github.com/treenotation/research/blob/master/papers/...
SVG form: https://treenotation.org/demos/complexityCanBeCounted.svg
If someone has a philosophical aversion to something like abstraction, then they will label it "complicated," but I use abstraction, all the time, to insert low-cost pivot points in a design. I just did it this morning, as I was developing a design to aggregate search results from multiple servers. My abstraction will afford myself, or other future developers, the ability to add more data sources in the future, in a low-risk fashion.
I also design frameworks in "layers," that often implement "philosophical realms," as opposed to practical ones. Think OSI layers.
That can mean that adding a new command to the REST API, for example, may require that I implement the actual leverage in a very low-level layer, then add support in subsequent layers to pass the command through.
That can add complexity, and quality problems. When I do something like that, I need to test carefully. The reason that I do that, is so, at an indeterminate point in the future, I can "swap out" entire layers, and replace them with newer tech. If I don't think that will be important, then I may want to rethink my design.
That is the philosophy behind the OSI layers. They allow drastically different (and interchangeable) implementation at each layer, with clear interface points, above and below.
Is there overlap between your philosophical layers and practical utility? The kinds of things that have been required to change in my career so far were base assumptions in the business domain, which no amount of premature abstraction could have prepared me for.
I've never witnessed a need to "swap out" an entire layer. Have you? In what scenario did you need to swap out... what exactly? Did these philosophical abstractions turn out to be the correct ones when a need did arise? Did they make the transition to the new reality easier? Does the transition cost outweigh the slower development cost incurred from the abstractions' overhead?
I keep seeing people claim your approach is a good one, and I'm genuinely curious if there is any evidence backing it up. I'll gladly take anecdata.
What you're describing sounds like "essential complexity" vs. "accidental complexity." See "No Silver Bullet."
Sorry, this is pedantic, but using the incorrect terms adds accidental complexity to a topic that is already essentially complex. ;)
No. Complex is the opposite of simple. Complicated is the opposite of easy.
Simple and easy aren’t synonyms - see the talk by Rich Hickey linked in a sibling comment.
The implementation of it may be amazing code, but none the less it makes the java compiler and runtime far more complicated that it would be if the feature were omitted.
So, again, I agree with you, but I also agree with the articles point that choosing features carefully is an important tool in controlling complexity.
Deleted Comment
I've seen it happen so many times, and I've done it. It's the very same principle that leads to almost every construction project running behind schedule — a man simply underestimates the complexity of nigh every task he endeavors to complete.
Sometimes, software patches and new features get tacked on and tacked on and the system loses all semblance of cohesion or integrity. Thinking of the system as a whole, iterating with the confidence brought by tests of some sort, one can begin to detangle all the unncessary intermixing and duplicate work and begin to make the system sensible.
but I've taken several projects in the hundreds of thousands of lines and translated them into projects with equivalent functionality spanning one to two orders of magnitude less source code.
that's not an argument for rewriting in all circumstances - I just think at least half of most mature software is 'junk DNA': useless boilerplate, unused paths, poor abstractions, etc.
It can be very frustrating to modify low quality and ugly code so I feel much better after a rewrite.
If the requirements stay exactly the same, then yeah, there’s no point.
And then they had to include more of what they cast away because they underestimated the number of consumers of things they personally weren't using.
libinput originally did not have a way to disable or configure pointer acceleration, I believe because the developer thought there was no reason to ever turn it off. He was not a gamer and was largely unaware of how essential disabling it is for the level of accuracy required in video games.
We also increased profit by lowering our runtime cost by some of those optimizations.
So, indirectly, yes: you can get promoted and collect revenue by simplifying and refactoring.
Most of the complexity and bugs I see in software are not because of the problem domain, but rather because of over-abstraction, under-abstraction and abstraction leaks, and also because of limitations and complexities introduced by the programming model or environment.
(unless you consider that "supporting five operating systems and the language must be X" is part of "essential complexity")
Of course, the more complex your domain is, the bigger the program. But the non-essential complexity that exists due to the bureaucracy of languages/libraries/frameworks is a much bigger factor in adding complexity, bugs, and lines of code. Some examples:
- Manual allocation and deallocation of memory is a good example of something we might think of as essential, since it's intertwined with our domain code, but it turns out to be unnecessary (even though the replacement has downsides). The billion-dollar mistake (null references) is another one.
- Supporting multiple environments/browsers/platforms. Competition is good, but the cost is steep for third-party developers. Using multiplatform frameworks partially solves the problem but has its own drawbacks: performance, limitations, bugs, leaky abstractions. If you need to be closer to the metal, different OSs have different threading models or API styles. Sometimes they don't even expose the main loop to you, and you have to work around those limitations.
- In most environments we still don't have a nice way of handling async operations without leaking the abstraction. The current solution is adding "isLoading" data everywhere (or creating some abstraction around both buttons and the fetching mechanism). Concurrent Mode in React is probably the best thing we have so far.
- Most modern JavaScript toolchains need multiple parsing steps: in the transpiler (to convert to ES5), in the bundler (to detect dependencies), in the linter, in the prettifier, and in tests. Compatibility between them is not guaranteed, and you might get conflicts that have to be resolved by finding a middle ground, which sometimes takes more time than writing features.
- Dogmatism is another issue. I remember that in one workplace years ago there was an "ORM only" rule, and most of us would work out the SQL first and then convert it to Rails ActiveRecord (or worse: Arel). In the end it was a complete waste of time and the results were impossible to maintain.
- I also think that the old Peter Norvig quote that "design patterns are missing language features" still stands. Go has proven that it's possible to have "dumb, simple code", but in other languages our best practices involve adding non-essential complexity to products.
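The null point above can be made concrete with a small sketch. This assumes a hypothetical config lookup (the map, keys, and defaults are all invented); the contrast is between absence as a caller-side convention and absence encoded in the type via `java.util.Optional`:

```java
import java.util.Map;
import java.util.Optional;

public class NullDemo {
    // Hypothetical config map for illustration; "port" is deliberately absent.
    static final Map<String, String> CONFIG = Map.of("host", "localhost");

    // Null-based style: absence is a convention every caller must remember,
    // and forgetting the check is a latent NullPointerException.
    static String portOrDefaultNull() {
        String v = CONFIG.get("port"); // may silently be null
        return (v != null) ? v : "8080";
    }

    // Optional-based style: absence is part of the return type, so the
    // "what if it's missing" branch is forced at the point of use.
    static String portOrDefaultOptional() {
        return Optional.ofNullable(CONFIG.get("port")).orElse("8080");
    }

    public static void main(String[] args) {
        System.out.println(portOrDefaultNull());
        System.out.println(portOrDefaultOptional());
    }
}
```

Neither variant is domain logic; the null checks are pure bureaucracy that the language's design forces into every program.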
The only exception to that in my experience is SQL: if a query is too big, it's not due to some bureaucracy of the language but to the complexity of the domain itself.
For example, wrong abstractions. As long as the engineers writing the software are competent (if they aren't, that's a totally different story), they'll try to choose the right level of abstraction for the current understanding of the problem. Years down the road, many of those choices may turn out to be mistakes, very often because the understanding of the problem changed, or the problem itself changed, and something that was a nice, clean solution isn't one anymore. If the problem isn't fully fixed and 100% understood, you'll never be able to make all the right, future-proof decisions about abstractions.
- view a new feature request as a new user capability
- extend the model that the software implements, to encompass that capability - regardless of how the feature was implemented in the requester's head.
- extend the software to match the new model. This may require refactoring as the model may have had to undergo shifts to encompass the new capability
For example:
I have a car. I model the car as four wheels, an engine, a chassis, and a lever. The engine drives the wheels, the wheels support the chassis, the chassis contains the engine. A lever in the chassis sets the engine in motion. It's a simple model, capable of 1. sitting still and 2. moving forwards and backwards. These are all the capabilities we've needed so far.
A user requests a new feature where the wheels are instead mecanum wheels (https://en.wikipedia.org/wiki/Mecanum_wheel).
The default industry response is to either implement the change as requested or reject it. I propose that the correct move is instead to ask the user WHY they want mecanum wheels. They reveal that they want the car to move in two dimensions rather than one. From that understanding you can extend the model of the car to encompass the feature: you may add the mecanum wheels and a mechanism to control them, you may add a steering wheel and rack-and-pinion, or you may do something completely different, depending entirely on how and why the user wants 2D movement (uncovered by further questioning, i.e. the "5 whys"). But you are working to the capability, not the feature. By extending the model, you can then change the software to match this new model.
I think as software engineers we have a tendency to forget the model and focus only on the code. A request for mecanum wheels becomes a question of how to change the software to encompass that feature. But we must always remember the existence of the model, and the user's relationship to it.
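The "extend the model, then the code" idea above can be sketched in code. Every name here is hypothetical, invented purely to mirror the car example: the model exposes the *capability* (where the car can move), and mecanum wheels become just one implementation of it:

```java
// The capability the model exposes: where can the car move?
interface Drivetrain {
    String move(double forward, double sideways);
}

// Original model: plain wheels, one-dimensional movement only.
class PlainWheels implements Drivetrain {
    public String move(double forward, double sideways) {
        if (sideways != 0) {
            throw new UnsupportedOperationException("plain wheels cannot strafe");
        }
        return "rolling " + forward;
    }
}

// Extended model: the user's underlying need was 2D movement; mecanum
// wheels are merely one way to provide that capability.
class MecanumWheels implements Drivetrain {
    public String move(double forward, double sideways) {
        return "moving " + forward + " forward, " + sideways + " sideways";
    }
}

public class CarModelSketch {
    public static void main(String[] args) {
        Drivetrain oldCar = new PlainWheels();
        Drivetrain newCar = new MecanumWheels();
        System.out.println(oldCar.move(1, 0));
        System.out.println(newCar.move(1, 2));
    }
}
```

Had the feature been bolted on as requested, mecanum-specific logic would leak through the codebase; modeling the capability first keeps the rest of the car ignorant of which wheels it has.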
This leads them to just tack on features while changing as little as possible. This will grow into something truly unmaintainable, virtually guaranteeing no competent work will be done on the project again.