JoeAltmaier · 5 years ago
Oh this, I've known for years. I learned it early, as a new OS engineer at Convergent Technologies. There was a nasty part of the kernel, where programmable timers lived, that was a large part of the change history of the OS. Looking at it, I saw folks had been thrashing for some time, to get all the issues resolved. But it was a '10 lbs of feathers in a 5 lb box' kind of thing. One change would cause another issue, or regress a bug.

So I took 2 days, made a chart of every path and pattern of events (restarting a timer from a timer callback; another time interval expiring while processing the previous; restarting while processing as another interval expires; a restarted timer expiring before the previous expiration completes; and on and on). Then I wrote exhaustive code to deal with every case, and ran every degenerate case in test code until it survived for hours.
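
To make the approach concrete, here is a minimal sketch of that kind of exhaustive enumeration in TypeScript. The states, events, and transition rules are invented for illustration and are certainly not what the original kernel used:

    // Hypothetical timer states and events -- illustrative only.
    type TimerState = "idle" | "armed" | "inCallback" | "rearmedInCallback";
    type TimerEvent = "start" | "expire" | "callbackDone" | "cancel";

    // One explicit entry per (state, event) pair, so every combination
    // from the chart is a deliberate decision rather than an accident.
    const transition: Record<TimerState, Record<TimerEvent, TimerState>> = {
      idle:              { start: "armed", expire: "idle", callbackDone: "idle", cancel: "idle" },
      armed:             { start: "armed", expire: "inCallback", callbackDone: "armed", cancel: "idle" },
      // restarting from inside the callback is its own state...
      inCallback:        { start: "rearmedInCallback", expire: "inCallback", callbackDone: "idle", cancel: "inCallback" },
      // ...so an expiry that races the previous callback cannot be lost.
      rearmedInCallback: { start: "rearmedInCallback", expire: "rearmedInCallback", callbackDone: "armed", cancel: "inCallback" },
    };

The point of the table form is that a missing cell is a compile error, not a latent bug.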

It never had to be addressed again. But it did have to be addressed. So many folks are unwilling to face the music with complexity.

stagger87 · 5 years ago
Regarding being 'unwilling to face the music', is it possible that you were only able to perform this refactor due to all the 'thrashing' that came before you? Is it possible that the majority of commits leading up to your refactor were dealing with unforeseen complexities and addressing real bug fixes that you were able to conveniently take in and synthesize all at once?

Refactoring too soon, before understanding the entire system (and all the possible changes/additions/issues that could arise), is probably worse (time-wise) than thrashing for a bit.

JoeAltmaier · 5 years ago
As I recall (and it was a long time ago) the history gave some insight into the complexity. But nobody had exhaustively enumerated all the cases. I remember the chart had more conditions than had ever been addressed.
senderista · 5 years ago
A great example of eliminating accidental complexity by embracing and clarifying the essential complexity. I have seen many such systems where a single exhaustive state machine could replace a mess of buggy and incomprehensible spaghetti code.
hinkley · 5 years ago
I think a critical skill for a developer is being able to realize when you are frustrated, distance yourself from the problem, and ask what is going on, 5 Whys style.

Developers know about the Boiled Frog problem. But we sometimes don't know when we are the frog. We keep hill-climbing one solution because we are only trying to do one little thing, so of course we should be able to do a small-to-medium thing to accomplish it.

We also send out mixed messages on rewrites, and I'm guilty of it too. But there's gotta be some litmus test where we compare the number of interacting states in the requirements versus the number of interacting states in the code and just call Time of Death.

heymijo · 5 years ago
> One change would cause another issue, or regress a bug.

> made a chart of every path and pattern of events

> It never had to be addressed again

Am I reading you correctly--every possible issue was knowable and preventable with enough prior thought?

pjc50 · 5 years ago
State machines are a surprisingly powerful tool. They do require you to capture all the possible states of the system; the combinatorial explosion of "X while Y but not while Z" can get a bit nasty, but that can often force a redesign simply to reduce the state space.

If all your logic is immutable apart from a small area around the state machine, the state gets much easier to manage. You write everything as f(state, inputs) => next_state. You can at any point inspect or print the state of the state machine, and record its transitions so you have a clearer idea of how it got there if there was indeed a bug.
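
As a loose sketch of that shape in TypeScript (the states, inputs, and rules are made up for the example):

    // f(state, input) => next_state, with no mutation anywhere else.
    type State = { mode: "idle" | "armed" | "firing"; pending: number };
    type Input = "start" | "expired" | "done";

    function next(state: State, input: Input): State {
      switch (input) {
        case "start":
          return { ...state, mode: "armed" };
        case "expired":
          return { mode: "firing", pending: state.pending + 1 };
        case "done":
          return state.pending > 1
            ? { mode: "firing", pending: state.pending - 1 }
            : { mode: "idle", pending: 0 };
      }
    }

    // Because transitions are pure, recording them yields a replayable
    // trace of exactly how the machine reached any given state.
    const trace: Array<{ input: Input; state: State }> = [];
    let state: State = { mode: "idle", pending: 0 };
    for (const input of ["start", "expired", "done"] as Input[]) {
      state = next(state, input);
      trace.push({ input, state });
    }
    console.log(trace);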

blasterjeff151 · 5 years ago
This is what RxJS was born for . . . :)
UweSchmidt · 5 years ago
Complexity is often frivolously created during specification. If the true consequences of complexity (wherever it may live) were understood, we'd simplify things a lot. But they want that next button, they want ancient emails to be searchable instead of having them archived, and they have no idea that their wish leads to new servers installed in a data center, new bugs introduced, and development time lost for each new release.

There are also ways to design things better (or worse) with regard to how they handle complexity. Reuse UI, patterns, workflows, and languages. Keep things consistent, make things discoverable.

Point is, the slogan "Complexity has to live somewhere" could also be used as an excuse to do sloppy work. Then again, this keeps us all employed.

majormajor · 5 years ago
> Complexity is often frivolously created during specification. If the true consequences of complexity (wherever it may live) were understood, we'd simplify things a lot. But they want that next button, they want ancient emails to be searchable instead of having them archived, and they have no idea that their wish leads to new servers installed in a data center, new bugs introduced, and development time lost for each new release.

I think features are definitely worth servers being installed in a data center, and most people coming up with product requirements are fine with the monetary costs.

Features leading to bugs and wasted development time... that's where the point of the article lives. There are ways to build software that can more easily accommodate changes.

However, it's more common that developers try to hide the complexity of features behind abstractions that hinder more than they help. And that's what results in breakages and lost time.

breischl · 5 years ago
I agree with this. At perhaps a slightly higher level, I've been toying around with the idea of "business process debt" to describe a different-but-similar concept. Basically, the business/product people keep adding new features, new methods, new nifty little ways to make money. And that's all fine, but they come with a certain amount of inherent complexity - sometimes quite a bit. Eventually that complexity will drown your development process, and it can look like a software problem even though it largely isn't.

Of course your software will add some accidental complexity as well (perhaps a lot), but ultimately the software can't be any simpler than the business process it's trying to model. And if that process is a spaghetti nightmare, well, you're just stuck with that.

Nemi · 5 years ago
You may not mean it this way, but your post makes it sound like complexity is created out of whole cloth (so to speak) by adding (in your scenario) searchable old emails.

And true to the article, that is only half of the equation. Complexity is not created so much as it is moved around. Adding code that makes old emails searchable instead of simply archiving them moves the business complexity of customers with this specific edge case into your code base.

It is then a business decision. And I agree with you that it is often not worth it, but the art of business is in understanding when taking on the added complexity is worth it to differentiate your product from your competitors. Or, stated another way, strategically deciding to reduce the complexity of your customer's business while increasing the complexity of your own.

This is not always an obvious decision, hence why the developer rank and file often look at it from one side only, and the product manager's/sales staff often look at it from their side.

It takes a strong leader who understands both sides to make the right strategic decision. When this decision is not made thoughtfully you end up with an overburden of complexity for a subset of customers that don't really move the needle on your business.

quicklyfrozen · 5 years ago
I find this is a strong reason to build test cases as early as possible in the dev process. I find myself removing 'nice to have, but not strictly necessary' options all the time when I realize they'll double (or worse) the number of test cases.
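
The arithmetic behind that instinct is simple; each independent option multiplies the input space:

    // n independent boolean options -> 2^n combinations to cover.
    const combinations = (options: number): number => 2 ** options;

    console.log(combinations(4)); // 16
    console.log(combinations(5)); // 32 -- one "nice to have" flag doubles the test matrix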
bertmuthalaly · 5 years ago
I think the conversation has shifted slightly this decade to avoiding incidental (accidental) complexity (I didn’t realize Fred Brooks popularized the term!), which I wish the author would address.

Otherwise this essay is spot on when it comes to essential complexity.

Incidentally, the question of “where complexity lives” is one of the focal points of “A Philosophy of Software Design,” which comes highly recommended if you’re trying to come up with your strategy for managing complexity from first principles.

mononcqc · 5 years ago
I think the most contentious part of the post is that I just simply assert that people are an inherent part of software. You often avoid the incidental complexity in code by indirectly shifting it to the people working the software.

Their mental models and their understanding of everything are not fungible, but they are still real and often what lets us shift the complexity outside of the software.

The teachings of disciplines like resilience engineering and of models like naturalistic decision making are that this tacit knowledge and expertise can be surfaced, trained, and given the right environment to grow and gain effectiveness. It expresses itself in the active adaptation of organizations.

But as long as you look at the software as a system of its own that is independent from the people who use, write, and maintain it, it looks like the complexity just vanishes if it's not in the code.

munificent · 5 years ago
> You often avoid the incidental complexity in code by indirectly shifting it to the people working the software.

Yes, this is Larry Wall's waterbed theory: https://en.wikipedia.org/wiki/Waterbed_theory

I do think it's important to distinguish accidental and essential complexity. Some complexity is inherent and if you think you've eliminated it, all you have really done is made it someone else's problem.

But there is also a lot of complexity that is simply unnecessary and can be eliminated entirely with effort. Humans make mistakes and some of those mistakes end up in code. Software that does something that no one ever intended can be simplified by having that behavior removed.

spacedcowboy · 5 years ago
Complexity is like energy: it cannot be created or destroyed; it is a fundamental property of any process or thing.

However, complexity can be managed. Humans do this using abstraction as a tool. We divide the complex problem up into a sequence of simpler stages, and we find it easier to understand the simpler problems along with the sequence within which they lie.

Good software uses this same approach to reduce complex issues to a manageable process. A good tool makes the simple things easy and the complex things possible; the design of the tool reflects the effort and work that the designer put into understanding the problem they’re trying to solve, and into producing something that helps guide others along that self-same path of understanding without them having to put in the same level of effort. It establishes the golden path through the marshes and bogs of difficulties that the problem domain throws up.

“Embracing complexity” is a measure of last resort, IMHO. It means the tool developer could not analyze the problem and come up with a good solution; it means “here, you figure it out”; it means giving up on one of the fundamental reasons for the tool’s existence.

Sometimes, embracing complexity and the ensuing struggle that this necessitates is simply what you have to do, but not often. Maybe, maybe this is one of those times, but I always start off with a critical eye when someone tells me that a complicated thing is “the only way it can be done”. Colour me sceptical.

ninjapenguin54 · 5 years ago
Complexity can easily be manufactured. Comparing it to something as fundamental as energy is pure bollocks.

Here's an amusing and simple example of manufactured complexity: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

dmreedy · 5 years ago
There is a distinction between necessary complexity and "busy beaver" artificial complexity. And it turns out that there's a pretty compelling mapping between energy and information and fundamental complexity. There's a reason they both use a concept of 'entropy'.

That said, there's no denying that there's probably a lot of unnecessary complexity in most software.

pron · 5 years ago
If only there were some scientific discipline that studied complexity and proved that problems possess some essential, minimal, complexity below which no implementation can go...

If anyone proved such a result, I'm sure they would be awarded a prize of some kind.

majkinetor · 5 years ago
That is not exactly as you claim it to be.

Introduce enterprise into any system and complexity goes through the roof ASAP - suddenly it's not only your machine but a TEAM of people, history, dashboards with metrics, logs, automation, security, etc...

This one isn't even approaching any of that except the dev side and CI.

bproctor · 5 years ago
I think the analogy to energy is pretty good. Like the FizzBuzz thing, you can also build a machine that has a lot of energy but doesn't accomplish anything useful.

taneq · 5 years ago
Complexity is like entropy, not energy. It increases, unless you spend a significant amount of energy removing it from a small area, and in the process increase the complexity somewhere else by a larger amount.

However, any degree of functionality requires at least a matching degree of complexity. In that sense, some complexity is 'essential' and that complexity does need to live somewhere in your project.

timw4mail · 5 years ago
I see embracing complexity as acknowledging it exists, and attempting to represent the complexity in parts and layers.

There are always cross-cutting concerns that aren't easily transformed, or become that way due to code changes. I think this area is where managing complexity is most difficult.

ashtonkem · 5 years ago
Manufacturing complexity is absolutely doable; have you never heard of a Rube Goldberg machine?
crimsonalucard · 5 years ago
In a certain sense complexity CAN be destroyed. Yes, there is a minimal level of complexity that must exist, but given an entity with a purpose, in many cases it is possible to reduce the complexity of that entity without changing its purpose.

Take for example this function which adds one to an integer:

   function addOne(x: number): number {
       return (x * 1) + 0 - x + x + 1;
   }
can be reduced to:

   function addOne(x: number): number {
       return x + 1;
   }

The examples are obvious, but from them you can deduce that there are many instances in software engineering where such reductions exist but are not obvious at all.

jerf · 5 years ago
I phrase it as "The solution can be as simple as the problem but no simpler", which means you can "create" complexity. We see this all the time in code, and it's something we must account for.

But you can't make the solution simpler than the problem. There's kind of a "by definition" aspect to this, so it's kinda circular, but it's still a valuable observation, because we also see this in code all the time. For instance, it plays a significant role in the leakiness of abstractions; that leakiness is typically the attempt to make a solution that is simpler than the entire problem space. You can be fine as long as your particular problem fits in the space of the leaky solutions, but once your problem pokes out of that you can be in big trouble.

spacedcowboy · 5 years ago
I take your point, but the complexity I’m talking about is what remains after you’ve done any readily applicable simplifications.

Any normal task has an inherent level of complexity (“add one” is a very simple task, so can be reduced down as you point out). “Invert a matrix” is somewhat less reducible. “Calculate a quaternion” even less so, etc.

We generally string dozens or hundreds of these simpler tasks into sequences to get what we want, and human understanding being what it is, there’s quite a difference in conceptual complexity between “fly the plane to [here]” and doing all that maths.

There’s probably a relationship to subitising here as well, the brain does seem to be able to process small groups of things, even if it knows those things represent larger concepts, better than larger groups of things. This is just speculation on my part though.

jes · 5 years ago
With respect, I’m not seeing how your first addOne example returns an integer one greater than its integer argument.

Am I missing something?

deathanatos · 5 years ago
I wonder if the author has read Out of the Tar Pit[1]¹. See, in particular, section 6.

Essentially, the author is, I think, arguing about what the paper calls "Essential complexity", complexity inherent to the problem one is trying to solve. And with that, I agree.

I think the author should acknowledge accidental complexity (or provide some argument as to why that must live somewhere), and I think a lot of comments here on HN are pointing out the fact that accidental complexity exists, and doesn't have to live somewhere. But my guess is that that's not what the author is saying, and that the author is only arguing about essential complexity.

[1]: https://github.com/papers-we-love/papers-we-love/blob/master...

¹I personally found this paper somewhat mixed. The definitions of complexity are what make it worth reading. Its conclusion of "functional programming languages will fix all the woes" I think is not practical.

emilecantin · 5 years ago
Yes, there's definitely a kind of complexity that _doesn't_ stem from the problem domain, and it can often be eliminated.

Stuff like over-abstracting, insane default values, duplicated state that needs to be synchronized (e.g. in React components), or just overly-repetitive code, for example.
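
For example, the duplicated-state case tends to look like this (a simplified sketch, not tied to any particular framework):

    // Duplicated state: `count` must be kept in sync with `items` by hand,
    // so every code path that touches `items` is a chance to let them drift.
    interface CartWithDuplication { items: string[]; count: number }

    // Derived state: a single source of truth; the count cannot go stale.
    interface Cart { items: string[] }
    const count = (cart: Cart): number => cart.items.length;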

elteto · 5 years ago
> Yes, there's definitely a kind of complexity that _doesn't_ stem from the problem domain, and it can often be eliminated.

Yes, it can be eliminated, but only to some extent. Programming languages add a baseline of complexity by themselves that _can't_ be removed: programming languages are not infinitely flexible so there will always be problem characteristics (invariants, data structures, etc) that will not be easily expressible. Those are new sources of complexity.

Think how Java simplifies manual memory management when compared with C or C++. Or how the Rust borrow-checker provably prevents a whole class of bugs. But these are all tradeoffs; both Java and Rust are ill-suited to expressing other problems.

msla · 5 years ago
Manual memory management is accidental complexity unless very strict memory usage rules are part of the specification.

Some kinds of type systems similarly lead to accidental complexity, for example in the form of repetitive code because the type system makes sufficient abstraction impossible. It's possible for some kinds of requirements to mandate very concrete types with no room for abstraction in some parts of a program, but even then the compiler can do most of the work with making sufficiently abstract code concrete and performant.

dasyatidprime · 5 years ago
I notice that there's a kind of complexity that seems to straddle the bounds, related to the way it easily turns into effort-reallocation politics. Would it be useful to consider “coordinative complexity” a separate type that's intermediary on the essential–accidental axis but also has other properties?
msla · 5 years ago
> Would it be useful to consider “coordinative complexity” a separate type that's intermediary on the essential–accidental axis but also has other properties?

The complexity of getting different pieces to talk to each other? I can see how that's only conditionally "essential" in that, if you have enough control over the environment, you can make the various distributed components as similar as possible, never deal with version skew, and never deal with varying interpretations of the same protocol standard.

OTOH, if you're writing a web page, and you have to make that page useful across multiple browser types and versions, some way of dealing with the vagaries of different browser implementations becomes an essential complexity: You either push it into a library or framework or you handle it in the main codebase, but something has to know what MSIE can and cannot handle.
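
That library boundary is usually a capability check. A rough sketch of the shape (the attachEvent path is the real legacy MSIE event API; the wrapper name is mine):

    // Feature-detect at the boundary so the rest of the codebase never
    // has to know which browser it is running in.
    function addListener(el: any, event: string, handler: (e: unknown) => void): void {
      if (typeof el.addEventListener === "function") {
        el.addEventListener(event, handler);      // standards path
      } else if (typeof el.attachEvent === "function") {
        el.attachEvent("on" + event, handler);    // legacy MSIE path
      }
    }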

x3haloed · 5 years ago
“The trap is insidious in software architecture. When we adopt something like microservices, we try to make it so that each service is individually simple. But unless this simplicity is so constraining that your actual application inherits it and is forced into simplicity, it still has to go somewhere. If it's not in the individual microservices, then where is it?”

The failing of this point is that much of what we call complexity is disorganization. Certainly, there is a fundamental level of logic in any desired system that cannot be willed away with cute patterns, but to consider all complexity of an existing system to be necessary is a fallacy. Dividing systems into problem domains does not inherently reduce the total complexity of a system. It probably usually adds to it. But organizing systems this way can drastically reduce the scope of complexity into manageable pieces so that mere mortals can work on it without having to hold the entire system in their mind at one time.

It’s like saying that you can’t make a garage full of junk any less complicated, because no matter how you arrange it, it will still contain all the same junk. In fact, organizing all the junk into manageable storage can make it much easier to understand, work with, sort through, clean, and identify items that may be unnecessary.

pm · 5 years ago
Indeed, but we need to differentiate between the inherent complexity of the problem (which can't be mitigated, but only shifted around as the article points out), and the incidental complexity added when we create the solution.

The inherent complexity can be taken on by the solution (and it may be hidden behind a simple interface, or it may be introduced in the interface, but that's another problem), or it gets cut out of the problem domain and then needs to be dealt with by the user.

There's no correct answer; half the problem is defining the appropriate problem domain, and even then there are only better or worse solutions. The incidental complexity of the system comes purely down to the implementation, which often comes down to how well the problem domain is defined in the first place.

x3haloed · 5 years ago
Yes. That’s exactly what I’m saying. Although, from the way I read the article, it doesn’t seem to acknowledge the possibility of significant overhead in a chosen implementation. It sounds like they’re saying: no matter what you choose, it’s all the same. Don’t even try.
Nemi · 5 years ago
While I agree with you, I think of it as moving the complexity more than removing it. The simplest way to store junk is to throw it all in one space. This is categorically the simplest thing to do. However, your complexity is then moved to the point of retrieval.

If you want to reduce the complexity of retrieving items out of the garage, you can move some of that complexity to when you store things in the garage by organizing them in a certain way. Without a doubt this is more complex when storing, but we would all agree that it creates a good balance of complexity between tasks.

However, I would argue that if you only ever store 10 items in a garage, then spending time organizing them in a way that reduces retrieval time would be an utter waste of time, and hence (to completely jump out of my/our analogy) why a good business person takes all of this into consideration when making decisions on how to structure their code and business and what complexity to take on.

worik · 5 years ago
It is saying that there is a lower bound to the complexity of the junk in your garage.
x3haloed · 5 years ago
Didn’t I acknowledge that?
cutemonster · 5 years ago
I like the word "disorganization"; it's simpler to understand than "accidental complexity", especially for those who haven't heard that phrase before.
crazygringo · 5 years ago
This is a great essay.

Along the same lines, there's a great quote from many years ago that I unfortunately can't find the exact text of, but it goes like this (paraphrasing):

"Most Microsoft Word users only use 5% of its features."

"So why don't we get rid of the other 95%, since it's so bloated and complex?"

"Because each user uses a different 5%."

ebiester · 5 years ago
It was Joel Spolsky - https://www.joelonsoftware.com/2001/03/23/strategy-letter-iv...

It was 80/20, but the sentiment holds.

tydok · 5 years ago
Yeah, but perhaps 90% of each 5% is common to all users.
marcosdumay · 5 years ago
As every discussion about bullshit jobs, or frontend frameworks, or hardware-abstraction VMs (recently, WASM) will easily show, we are swimming in accidental complexity. And even when it's not obvious, every general advance in science or technology is fundamentally the removal of some complexity that everybody had just adapted to as if it were essential.

So, no, it doesn't have to live somewhere. It can be created and destroyed, this happens every day. Probably some of it can not be destroyed, but nobody knows what part so any article about it will be useless.