Readit News
ChrisMarshallNY · 6 years ago
This is a no-brainer.

As a development manager for a quarter-century, and an active software developer for a lot longer than that, I can definitely say that every place there's a "meeting of the minds" is a place for bugs.

In the software itself, the more complex the design, the more of these "trouble nodes" (what I call them) there are. Each interface, each class, each data interface, is a bug farm.

That's why I'm a skeptic of a number of modern development practices that deliberately increase the complexity of software. I won't name them, because I will then get a bunch of pithy responses.

These practices are often a response to the need to do Big Stuff.

In order to do Big Stuff, you need a Big Team.

In order to work with a Big Team, you need a Big Plan.

That Big Plan needs to have delegation to all the team members; usually by giving each one a specific domain, and specifying how they will interact with each other in a Big Integration Plan.

Problem is, you need this "Big" stuff. It's crazy to do without it.

The way that I have found works for me, is to have an aggregate of much more sequestered small parts, each treated as a separate full product. It's a lot more work, and takes a lot more time, with a lot more overhead, but it results in a really high-quality product, and also has a great deal of resiliency and flexibility.

There is no magic bullet.

Software development is hard.

jiggawatts · 6 years ago
So, just a day ago, I got dragged into a meeting where many people were involved in a discussion about the new Cloud Enterprise Application Architecture Template. Or whatever.

It had a 3-tier architecture.

I asked: Why?

And they answered: Why not?

I answered: Because layers must only be introduced if needed. Is there a need?

They answered: The standard design is the need.

I clarified: Is there a technical requirement? Or perhaps an organisation one, such as disparate teams working on the two components?

They answered: No! Of course not! It's a unified codebase for a single app written by a single person! But it is not Enterprise enough! It must be split into layers! And then, you see, it will match our pattern and belong.

I verified the insanity: Are you saying that this finished, working application isn't currently split into layers, but you want it split into layers simply so that it can have layers?

They chorused: Yes.

hyperman1 · 6 years ago
The 3-tier architecture was a reaction to VB and RAD-like tools, where things like data validation and database I/O were coupled directly to the input component. It was common for these frameworks to not even have an object for data transfer sitting between UI and DB.

This was the timeframe where more and more manual work was automated. Hence it was a common situation where input used to be given by a human, but now comes from another application. The simplest way to do that kind of retrofit was to drive the UI from the application: The application fills in its own gui fields which triggers the validation, then simulates a click on OK.

This caused all kinds of ungodly messes. You needed a GUI for background processes, reliability was low, etc. The 3-tier architecture was a way to say 'never again' to this style of programming. Forcing people into it was necessary.

But that was another time. Mindlessly applying an architecture without understanding why is of course dumb. But not applying an architecture without understanding its pros and cons is just as dumb. It all depends on the quality of the architects in question.

Not that I want to call you dumb, of course. IT today is different from 20 years ago.

afriendofjungs · 6 years ago
You're lucky you got some form of discussion out of the people you work with. Whenever I try to steer conversation towards "technical accountability" where I ask what I think are legitimate questions concerning application design and architecture, the mood is as if no-one else believes it's of any importance and I am that weird guy putting spokes in the wheels of a great project. My arguments aren't refuted, they're conveniently ignored. People are honestly dumbfounded and prefer to change subject at best, at worst they lash out with what seems to be simplistic thinly-veiled "culture" attacks -- "oh, please, not with that abstraction/encapsulation argument again, we've got work to do"-sort of utterances.
Vinnl · 6 years ago
> I answered: Because layers must only be introduced if needed. Is there a need?

It might have gone better if you had also stated why that is the case, e.g. "every additional layer exponentially increases the likelihood of bugs being introduced, so their introduction must be worth that risk, or the higher cost of mitigating measures".

Of course, the challenge will still be that "likelihood of bugs" is rather abstract, and often people believe they can be prevented just by paying more attention, and assume that that will happen of its own accord.
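To make that abstract "likelihood of bugs" concrete, here is a purely illustrative toy model: if each layer independently handles a request correctly with probability p, reliability decays geometrically with the number of layers (the 0.99 figure is an arbitrary assumption, not data from the article):

```python
# Toy model: a request must traverse n layers; each layer works
# correctly with independent probability p, so the end-to-end
# success probability is p**n.

def system_reliability(per_layer_reliability: float, layers: int) -> float:
    """Probability that a request crosses every layer without hitting a defect."""
    return per_layer_reliability ** layers

for layers in (1, 3, 5):
    print(layers, round(system_reliability(0.99, layers), 4))
```

Even at 99% per layer, five layers already cost roughly five times the defect exposure of one, which is the risk each new layer has to justify.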

marmaduke · 6 years ago
I went through this, more or less, several years ago. But now I see the uselessly layered app being almost effortlessly retrofitted with new top and bottom layers, keeping only the original middle layer.

If they hadn't layered it, the new retrofitting wouldn't even be possible and the company wouldn't have that contract.

Was it insanity?

organsnyder · 6 years ago
I did a project like that at my old job. It was a single screen with a single purpose (log the user in via OAuth SSO, then collect a couple of pieces of input and submit them to a backend webservice), so it was assigned to me rather than creating a project team. To make it match the rest of our application architecture I created a REST layer in C# that communicated via SOAP to an ESB that forwarded the SOAP request to a Java webservice that finally forwarded it to the backend system. I did depart from our organizational norm by doing the frontend with plain HTML+JS (plus Bootstrap, IIRC) rather than AngularJS.

Yes, it was complicated, but I think there is a benefit: it's very clear where certain functionality should live. The C# REST layer was application-facing, so it took care of SSO and basic validation. The Java webservice contained the business logic to validate things from a broader enterprise perspective. The ESB was a piece of trash that did provide authnz so the Java webservice didn't have to.

Was it worth the complexity? Probably not, in this case. But those sorts of applications tend to have long lifespans and evolving requirements, so the standardization can be helpful.

wayoutthere · 6 years ago
I would say that this could very easily be justified as standardizing and de-skilling development and production operations. That's a very good business reason to standardize architecture patterns, because it's cheaper to use a pattern that is sometimes overkill than it is to have a bespoke architecture for every app.
NicoJuicy · 6 years ago
I actually like DDD, where code is executed per domain with a bounded context.

Not for a single person though, but it will force developers to think more deeply.

Otherwise it could easily result in a code-mesh/hell.

debt · 6 years ago
I’m trying an experiment in my new role where I deliberately avoid these types of projects.

They usually don’t deliver on time and are too stressful to work with. It’s not worth it both personally and from a career perspective.

However if they manage to deliver a complex design on time, I’ll have lost a great career opportunity. It’s a gamble either way, but high complexity both in organization and design, usually yields a high failure rate on just about every metric I can think of except the metric of “I’m going to use this complex design to get another job and bail myself out of the dumpster fire I’m creating before I have to deliver anything of value”.

spookthesunset · 6 years ago
I’ve had those “well, your project is fucked” meetings. The ones where all the leads are architecture astronauts inventing requirements as they go. “We have to have the ability to pass the original authentication the entire way down the stack, including the rabbitmq instance too.”

Why?

“Security”

Okay....

... astronaut proceeds to fill two whiteboards with ungrounded nonsense.

All you can do is smirk and just wait until the whole project gets mysteriously canceled because no mortal could ever implement it successfully.

Honestly, as long as I don’t need to directly depend on them, these are some of my favorite meetings. So surreal. That and when the team infights the entire time.

Deleted Comment

ZainRiz · 6 years ago
The number of code repositories I've seen where the database layer was (leakily) abstracted away "in case we ever need to migrate" is WAY greater than the number of code bases where there was actually a need to migrate.

Yet they all still introduced the headaches of having to update the abstraction layer whenever you wanted to make schema changes
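A minimal sketch of the headache being described (all names invented for illustration, not from the original comment): with a "just in case we ever migrate" repository layer, adding one schema field means touching the table, the DTO, and the mapping code, even if the migration never comes.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class UserRecord:  # DTO that mirrors the table row
    user_id: int
    email: str
    # a new column means adding a field here too...

class UserRepository:
    """The 'in case we ever migrate' layer: every schema change must
    also be mirrored in the SQL and mapping below."""
    def __init__(self, db: sqlite3.Connection):
        self.db = db

    def get(self, user_id: int) -> UserRecord:
        row = self.db.execute(
            "SELECT user_id, email FROM users WHERE user_id = ?",
            (user_id,),
        ).fetchone()
        # ...and updating this mapping, for a migration that never happens.
        return UserRecord(user_id=row[0], email=row[1])

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id INTEGER, email TEXT)")
db.execute("INSERT INTO users VALUES (1, 'a@example.com')")
repo = UserRepository(db)
print(repo.get(1))
```

Three edit sites per column is the ongoing tax; the hypothetical migration is the benefit that, as the comment notes, usually never arrives.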

yellowapple · 6 years ago
Are these three tiers respectively called "model", "view", and "controller", by chance?
trophycase · 6 years ago
Decoupling and modular design is pointless?

Deleted Comment

he0001 · 6 years ago
What do you propose instead?
the_af · 6 years ago
> That's why I'm a skeptic of a number of modern development practices that deliberately increase the complexity of software

Maybe I'm nitpicking, but the article points out the #1 predictor of software bugs is not the complexity of the software but of the organization itself. A single person can make a hugely complex piece of software, and a relatively large team can make a conceptually simple system.

As for software complexity itself, there's an interesting research result that the thing that matters the most is line count. Not cyclomatic complexity, not the type system, not modularity or test coverage, not the programming language -- looking at the line count alone trumped all the other metrics in predicting flaws. (I can't look for this paper now, but I'm sure with a bit of googling anyone can).

iainmerrick · 6 years ago
> A single person can make a hugely complex piece of software, and a relatively large team can make a conceptually simple system.

In practice I think you tend to hit Conway’s law -- organizations build software that mirrors their own organizational structure. So it’s hard for large teams to make simple designs.

I’m very skeptical about that line count metric; in my observation bugs tend to sit between modules due to bad interface design. But I could certainly believe that bad modularisation is correlated with line count (in the form of excessive boilerplate).

ChrisMarshallNY · 6 years ago
That makes sense. Every line is a potential bug.

But it's not the only factor, and, quite frequently, it's a matter of correlation, as opposed to causation.

That kind of thing can be very tricky to determine.

When I write software, my first stab at a function tends to be a fairly linear, high-LoC solution, which I then refactor in stages, reducing LoC each time and ensuring that the quality level remains consistent, or improves.

As far as quality goes, my first, naive stab was just fine, and I have actually introduced bugs during my refactoring reduction.

0x445442 · 6 years ago
Do you know if the paper controlled for all the variables in tandem? For example, I wouldn't think line count is necessarily independent of modularization. That is to say, I'd expect well-modularized code to reduce line count generally.
andydavieswork · 6 years ago
> In order to do Big Stuff, you need a Big Team.

Depends on your definition of Big Stuff. If you mean send a rocket to Mars, then yes. But the vast majority of us are working on simple web apps that might call a few apis, yet these seem to require Big Teams. Compare that to what a single game developer might produce, and compare the complexity and performance of the product.

I think we need Big Teams for Small Stuff precisely _because_ of these 'modern development practices' that you mention. Getting things done in these paradigms takes _forever_, so you need a Big Team.

ChrisMarshallNY · 6 years ago
That's true. Like I said, I can only speak from my experience.

I do think that we are in a sort of "dependency hell," that is sorting itself out. In the end, a few really good dependencies will still be standing in the blasted wasteland.

Dependencies mean that a small team can do Big Stuff, but that relies on the dependency being good.

"Good" means a lot of things. Low bug count is one metric, but so is clear documentation, community support, developer support, and even tribal knowledge. It doesn't necessarily mean "buzzword-compliant," but sometimes aligning to modern "buzz" means that it benefits from the tribal knowledge that exists for that term, and you can deprecate some things like documentation and training.

People often think that I'm a dependency curmudgeon. I'm not. I am, however, a dependency skeptic.

I will rely on operating system frameworks and utilities almost without question, but I won't just add any old data processor to my project because it's "cool." I need to be convinced that it has good quality, good support, and a high "bus coefficient," not to mention that it meets my needs, and doesn't introduce a lot of extra overhead.

Nothing sucks more than building a system, based on a subsystem that disintegrates a year down the road. I suspect many folks that have built systems based on various Google tech, can relate. I have had that experience with Apple tech, over the years (Can you say "OpenDoc"? I knew you could!).

chiefalchemist · 6 years ago
> I think we need Big Teams for Small Stuff precisely _because_ of these 'modern development practices' that you mention.

Perhaps. But what I've also seen is the head count of a given project is a direct reflection of the intra-org status of the person heading the project.

There's a belief - that's a myth - that if 3 ppl is good then 6 is twice as good and time will be cut in half. I think we also know - with rare exception - that productivity slides as heads increase.

That's a given.

Then there's also a belief - again a myth - that some mod dev practices can fix the increased head count issue. It might mitigate it here and there. But MDP can only do so much to fix a dysfunctional org/group.

Ultimately it's a leadership/management issue. Process and technology are too often lipstick on a pig.

pjc50 · 6 years ago
In order to get Big VC Money, before doing a Big IPO, you need a Big Team doing a Big Plan. Nobody's going to give you a billion dollars for something that's simple enough for one person to do.

You may say, "but this problem doesn't need a billion dollars!", to which I say "your corporate ownership structure isn't complicated enough, you need to make sure that as much of the billion dollars sticks to your hands as possible after you fail". WeWork passim.

bob33212 · 6 years ago
I interviewed at a well funded company for engineering manager. The interview centered around how they were going to build a 100 person team so the job would include lots of interviewing and hiring.

I assume that some investor was told that "We are going to have 100 developers while our competitor has only 20" and the investor bought into that plan.

blowski · 6 years ago
Plus managers who define their worth by easily measurable values, like how many zeros are on their P&L statement and how many lines report to them on the org chart. Success of the project is much more subjective.

Deleted Comment

0x445442 · 6 years ago
I agree with everything you have said. But the trends in "The Enterprise" are to eschew these ideas in favor of overly complex serverless architectures with poorly specified interfaces where any pluggable developer/resource can perform hit and runs on any component. What I've seen this lead to is the most brittle, bug ridden, low quality software in my career. And the irony is it's led to none of the perceived benefits of serverless architectures but has enhanced all the drawbacks of monolith architectures.

My best guess as to why things have become this way is that middle management in "The Enterprise" reckoned "Agile" an opportunity to commoditize software development.

streetcat1 · 6 years ago
You might be ignoring the productivity gains in software dev over the last 10 years.

With open-source, languages like go/rust, excellent IDEs and basically free compute, the amount that a single developer can produce is 10x/20x more.

abraxas · 6 years ago
This is pure hogwash. All that productivity and more has been available for two decades with Java and C#. It's just that hipsters rejected it wholesale because those are their parents' programming languages.
ChrisMarshallNY · 6 years ago
Tell me about it.

I write in Swift. I love it.

I started with machine code (not ASM - machine code).

Also, all those lovely system frameworks are wonderful.

I used to use MacApp (Google it), and PowerPlant (Same).

AppKit and UIKit knock them into a cocked hat. SwiftUI shows promise, but it may be a year or two before it can really match the standards.

afarrell · 6 years ago
> This is a no-brainer.

It is NOT obvious to someone who hasn't thought about it for a while. Suppose someone is trying to persuade another person and just assumes that they already realize the costs of organisational complexity. There's a good chance they'll run into a wall and not get the message across.

If you think realizing it is a no-brainer, then your 25 years of experience is showing.

ZainRiz · 6 years ago
There's a name for the bias where something seems obvious once you've been told about it, but I can't remember what it's called (I don't think it's hindsight bias...)
jve · 6 years ago
> I'm a skeptic of a number of modern development practices that deliberately increase the complexity of software. I won't name them, because I will then get a bunch of pithy responses.

Bring it on. Share your experience with youngsters. And let the elite confront you with methodologies you maybe didn't have experience with.

carlmr · 6 years ago
I'd also be interested. Some drivers of complexity I would think of:

* Using many of the GoF/OOP patterns, because you may need extensibility at some point. Basically YAGNI.

* Complex, hard to mentally map, build systems (e.g. CMake).

* Designing for purity over simplicity (I'm actually big on FP, here I'm thinking of the Haskell crowd which IMHO sometimes overdoes it).

* Writing a complex architecture without prototyping. Often your prototype will tell you what you need. If you start architecting too much beforehand then you often waste time on some details that don't matter, and even worse, afterwards you try to force it into your architecture which doesn't actually fit the problem. The beauty of software is that it's easy to change things. Architecture on buildings is different because you need to make sure that you're not building the wrong thing. In software building the wrong thing can give you the right insights and still be faster than planning for every eventuality.

ChrisMarshallNY · 6 years ago
Nah, that's OK. Thanks.

Just to clarify. I have been down this road. I am not interested in sacred cows or third rails.

I'm trying to do all my writing and commenting, based only on my own experience and insight.

I'm done with fighting on the Internet. I don't have the energy for it anymore.

Meai · 6 years ago
I've started to see it that way too. Every place where there is source code, the complexity grows instantly - it's like a virus. The only firewall you can have for this is to split it all into different projects that each need to justify themselves on all merits: financially, technically, etc.
calineczka · 6 years ago
> an aggregate of much more sequestered small parts, each treated as a separate full product

What does it mean exactly? I feel you are trying to share a nice idea but I can't comprehend it. What are those small parts? Classes? Modules? Services? What does it mean to treat them as a separate full product within an organization?

asdfman123 · 6 years ago
One simple way of hermetically sealing organizational complexity: microservices.

Many people don't seem to understand when to use microservices. They're not for small teams.

I believe the real benefit of them is that you can have a team at say, Amazon, who works on their product prediction engine. They have well defined input data, and they have well defined data consumers need as an output. Beyond that, they just have to coordinate within their own team to build what they need to.

They don't have to meet with stakeholders across the organization and get into debates with ten other guys in other departments about adding a database field. They have their own database, of their own design, and they do with it what they want. If they need more data they query some other microservice.
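The boundary being described can be sketched in a few lines (service names and data are invented for illustration; real microservices would talk over the network, not in-process): each team owns its data store outright and coordinates with others only on a narrow, well-defined contract.

```python
# Hypothetical sketch of the microservice boundary: the recommendation
# team owns its data; consumers depend only on the recommend() contract.

class RecommendationService:
    """Owns its own store; no other team touches it directly."""
    def __init__(self):
        # Private to this team - schema changes need no cross-team meeting.
        self._purchases = {"alice": ["book", "lamp"]}

    def recommend(self, user: str) -> list[str]:
        # Well-defined output: a list of product names.
        bought = self._purchases.get(user, [])
        return [f"{item}-accessory" for item in bought]

class StorefrontService:
    """A consumer: coordinates with the other team only via recommend()."""
    def __init__(self, recommender: RecommendationService):
        self.recommender = recommender

    def homepage(self, user: str) -> str:
        return ", ".join(self.recommender.recommend(user))

front = StorefrontService(RecommendationService())
print(front.homepage("alice"))  # book-accessory, lamp-accessory
```

The point is where the meeting happens: only the `recommend()` signature requires cross-team agreement; everything behind it is one team's private concern.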

carlmr · 6 years ago
Not OP, but to me this is the Unix philosophy of having many small tools that work well and interact well.

Even if your modules are very separated, if you can't individually use and play around with them they become a part of a big blob of software. Services may be products, but only if they're independently usable.

If you have a small product that's useful in and of itself (e.g. git) you can much more easily make it work well and then integrate with other good tools and replace those if necessary (e.g. if you have problems with Bitbucket/Jira/Confluence, you can switch them out for other solutions, e.g. Gitea).

But if you have a huge complex product then at some point it becomes organizationally impossible to move away from it.

ChrisMarshallNY · 6 years ago
TL;DR: Lots of small modules (they could be drivers or classes), with narrow, well-defined, highly agnostic intersections.

Most of my coding work has been done as a lone programmer. Even when I was working as a member of a large team, I was fairly isolated, and my work was done within a team of 3 or fewer (including Yours Troolie).

I have also been doing device interface development for most of my career, so I am quite familiar with drivers and OS modules.

When I say "sequestered," I am generally talking about a functional domain, as opposed to a specific software pattern.

Drivers are a perfect example. They tend to be quite autonomous, require incredible quality, and have highly constrained and well-specified interfaces. These interfaces are usually extremely robust, change-resistant and well-understood.

They are also usually extremely limited; restricting what can go through them.

The CGI spec is sort of an example. It's a truly "opaque" interface, completely agnostic to the tech on either side.

There are no CGI libraries required to write stuff that supports it, there's no requirement for languages, other than the linking/stack requirements for the server, etc.

It's also a big fat pain to work with, and I don't miss writing CGI stuff at all.

It is possible to write a big project this way, but it is pretty agonizing. I've done it. Most programmers would blanch at the idea. Many managers and beancounters would, as well. It does not jibe well with "Move fast and break things."

But there are some really big projects that work extremely well, that don't do this. It's just my experience in writing stuff.

When you write device control stuff, you have the luxury of a fairly restricted scope, but you also have the danger of Bad Things Happening, if you screw up (I've written film scanner drivers that have reformatted customer disks, for instance -FUN).

YMMV

winrid · 6 years ago
Based on my understanding a common name would probably be Slimlane, or Blade...
downerending · 6 years ago
> Problem is, you need this "Big" stuff. It's crazy to do without it.

Sometimes you do. But many times big stuff gets written for reasons other than need. One of the best wins in our industry is to recognize that the big stuff isn't needed and to never start the project in the first place.

habosa · 6 years ago
This rings true for anyone who has ever worked at a big tech company (I work at Google).

At Google when your project begins to scale up you can ask for more money, more people, or both. Most teams ask for both.

What you can't ask for is different people. You can't solve your distributed systems problems by adding 5 more mid-level software engineers to your team who have not worked in the domain. Yet due to how hiring works, this is what's offered to you unless you want to do the recruiting yourself. Google views all software engineers as interchangeable at their level. I have seen people being sent to work on an Android app with hundreds of millions of users despite never having done mobile development before. That normally goes about as well as you'd expect.

So you end up with teams of 20 people slowly doing work that could be done quickly by 5 experts. In some cases all you lose is speed. In other cases this is fatal. Some things simply cannot be done without the right team.

natalyarostova · 6 years ago
I see the same thing, and the only way I can reconcile this is that the benefit to sr. leadership of treating SDEs as fungible is so massive that it is still worth the huge productivity loss from assuming exchangeability.
amznthrowaway5 · 6 years ago
What are the benefits to treating SDEs as fungible?

At Amazon, Sr. Leadership and HR love to pretend all SDEs at a given level are interchangeable, level actually indicates competence, and leetcoding external hires with zero domain knowledge have far more worth than internal promos. All of the above assumptions seem completely insane to me and have resulted in the destruction of many projects.

MontyCarloHall · 6 years ago
Looking at the metrics used in the publication[0], it seems most of them focus on the absolute number of engineers working on a given component. This makes sense — more engineers touching a component introduces more opportunities for bugs. (Edit: as other commenters have pointed out, total lines of code, highly correlated to number of engineers, is likely the best first-order predictor of bugginess.)

I bet we can improve predictive power by considering the degree of overengineering, i.e., the number of engineers working on a task (edit: or lines of code) relative to the complexity of the task they’re working on. 100 people working on a task that could be accomplished by a single person will result in a much buggier product than 100 people working on a task that actually requires 100 people. The complexity of code expands to fill available engineering capacity, regardless of how simple the underlying task is; put 100 people to work on FizzBuzz and you’ll get a FizzBuzz with the complexity of a 100 person project[1]. Unnecessary complexity results in buggier code than necessary complexity because unnecessary components have inherently unclear roles.

Edit: substitute "100 people" with "10 million lines of code" and "1 person" with "1000 lines of code" and my statement should still hold true.

[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...

[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

ezzzzz · 6 years ago
My own (current) personal hell is essentially the inverse of this assumption. I work in an area that was essentially run by 1 extremely overworked developer. The result? Now there are 40 people (including management, PMs, Scrumlords, QA and devs) doing the work originally done by a single person. The tech-debt and cruft is unbelievable (considering the domain is also quite complex). Every decision made before the 40 people were hired was made just to check off 1 of 1000 things on this person's plate... I could probably write a book about this if we are ever able to turn things around, which requires explaining to management why the things that have been 'working' for over a decade are no longer working.

The sad part is, it would seem like all the engineers we have are overkill, but in my little silo, we could easily split our work into even more sub-teams, hire 12 more people, and still keep churning just to stay afloat. Sorry for the rant, I'm not sure exactly what I'm driving at. I guess I'm just trying to give a cautionary example of how not to manage large-scale software projects.

spookthesunset · 6 years ago
Oftentimes, I've found situations like the one you describe (where you need to throw infinite developers at a problem to fix it) to be a smell that what your "product" does isn't in line with how your business delivers value. Such things are almost always candidates to be replaced by third-party software of some kind.

Maybe I'm wrong though. When you are charting new ground, building new shit that has never been built before--which is what your product teams should be doing--you don't have years-long backlogs because you can't see that far out. Good, productive feature work is iterative.

If you can see with a high degree of clarity what you will be working on 5 years from now, it probably means it's been done before and you are better off cutting a check for it.

Hopefully this makes sense :-)

augustl · 6 years ago
That's super interesting! What would be a good way to measure the complexity of a task in some objective way?

Also, the study doesn't really take "tasks" into account at all, it seems. Just modules and data relating to the modules.

MontyCarloHall · 6 years ago
>What would be a good way to measure the complexity of a task in some objective way?

From an existing codebase this would be very difficult to objectively assess. I think you’d have to study it empirically — come up with a set of tasks (“A”) that each takes a single programmer “P” on average a week to complete. Then come up with a set of tasks (“B”) that each takes a team “T” of 10 programmers on average a week to complete (ensure that 10 programmers is a lower bound, i.e. decreasing the number of coders causes the project to take longer). Across multiple solo programmers and teams, compare the quality of the code produced by programmers P on tasks A, teams T on tasks B, and teams T on tasks A. I’d bet P/A > T/B > T/A.

kqr · 6 years ago
I wonder if number of engineers is strongly correlated with lines of code. This would indicate maybe lines of code is still the best predictor... will read the report to see if they bring it up!
MontyCarloHall · 6 years ago
I agree that lines of code is probably the overall best predictor to first order (didn't mean to imply otherwise; I've edited my original post to clarify). I just meant that x lines of overengineered code will almost always be buggier than x lines of non-overengineered code.
specialist · 6 years ago
I like the "surface area" metaphor. Every where I've worked took a divide & conquer approach.

Some day, I'd love to participate in the NASA / JPL style. Everyone reviews the entire code base together. Bugs are assumed a failure of process. I guess the thinking is all bugs are shallow given enough eyeballs.

Realizing now that I'm a hypocrite (again). I hate pair programming. But do kind of enjoy code reviews. Now I don't know what I believe.

MontyCarloHall · 6 years ago
P.S. I’m an idiot who can’t keep orders of magnitude straight. “10 million LoC” should be “100k LoC.”
he0001 · 6 years ago
How do you measure/quantify over engineering?
he0001 · 6 years ago
Isn’t this aligned with Conway’s law? I mean, a complex business model requires, most likely or eventually, a complex solution? If not, the two systems are at odds, and the computer system is even more complex/buggy since it doesn’t follow the organization's complexity, it doesn’t do what the organization need it to do.

That at least is my experience anyway.

tekmaven · 6 years ago
I was surprised that Conway's law was not mentioned.
MontyCarloHall · 6 years ago
In the original publication that’s the subject of the article, it was: https://www.microsoft.com/en-us/research/wp-content/uploads/...
thenewnewguy · 6 years ago
Just skimmed over the post, so it's possible they pointed this out and I didn't notice - but I think this is misleading. The title makes it sound like organizational complexity _causes_ bugs, but in reality I think both are simply effects of a more underlying cause.

Larger and more complicated software both requires a bigger team (therefore more organizational complexity) and is more likely to contain bugs.

kitd · 6 years ago
Steve McConnell identified it as the number of lines of communication in the team or dept creating the module, including dependents and dependers.

It's why Conway's Law exists, and points towards the importance of well-designed and -specified APIs.

daveslash · 6 years ago
If you consider people on a team as nodes in a graph, and lines of communication as edges, then a team of n people has n(n-1)/2 potential lines of communication. I try to express to people that the more potential lines of communication you have, the greater the chance of miscommunication. I think this is also called out in Brooks' The Mythical Man Month.
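The n(n-1)/2 figure is just the number of edges in a complete graph over the team. A minimal sketch (the function name is mine, not from any of the sources discussed):

```python
def communication_paths(n: int) -> int:
    """Potential lines of communication in a team of n people:
    every pair of people is one edge in a complete graph."""
    return n * (n - 1) // 2

# Growth is quadratic: doubling the team roughly quadruples the paths.
for team_size in (2, 5, 10, 50):
    print(team_size, communication_paths(team_size))
```

This quadratic growth is why Brooks argued that adding people to a late project makes it later: the coordination overhead grows much faster than the headcount.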
ChrisMarshallNY · 6 years ago
Upvote for mentioning Steve McConnell.
augustl · 6 years ago
I think that's a fair point. But organizational complexity was compared against other measures, such as the complexity of the software itself, the number of dependencies, etc. (i.e. the size of the software itself), and the study found that organizational complexity is still the #1 predictor.
ChrisMarshallNY · 6 years ago
I agree with this assessment. It's basically an observation of parallel occurrences.

Big projects require big teams, and also have a lot more "trouble nodes," so there are many more places for bugs.

The big team is not the cause. It is simply a natural coincidence.

artsyca · 6 years ago
Conway's principle
the_gipsy · 6 years ago
There surely also exists some large and complicated software that was not developed by large and complex organizations.
Merrill · 6 years ago
>Organizational Complexity. Measures number of developers working on the module, number of ex-developers that used to work on the module but no longer do, how big a fraction of the organization as a whole that works or has worked on the module, the distance in the organization between the developer and the decision maker, etc.

After one of the early big software project failures (maybe Multics?) there was a quote about software projects going around (maybe John R Pierce?) that "If it can't be done by two people in six months, it can't be done."

One of the functions of good software design is to break the system down into pieces that a couple of people can complete in a reasonable length of time.

hos234 · 6 years ago
Herbert Simon is who you read first when you're thinking about orgs - https://en.m.wikipedia.org/wiki/Satisficing

That will take you to healthy and productive places.

wycy · 6 years ago
The article examines this through the lens of Windows Vista. Am I the only person who actually did like Vista and didn't have any problems with it? I gathered that most of the issues people had with it were caused by incompatible third-party software and hardware.
sp332 · 6 years ago
Microsoft tried not to increase the minimum specs required over XP very much. This led to certifying a bunch of underpowered hardware as "built for Vista," which led to many, many people having super slow computers. On top of that, the UAC used to be a security boundary, and it would prompt very often because Windows software had been written to behave like it was on a single-user system and mess with system files all the time, so it was really annoying.
kryptiskt · 6 years ago
I also had little trouble with Vista and liked it well enough. But I had plenty of RAM, I believe it performed badly with 1 GB or less. And of course people got hung up on UAC.
swiley · 6 years ago
That’s kind of crazy. Firefox and LibreOffice don’t do well in 1 GB of RAM, but at least they’re doing something. You can run a decent DE and a few decent apps in 1 GB just fine with most Linux distros.
the_af · 6 years ago
Obviously you're not the only person, but the data points to a flawed release.

For me, Vista was slow as molasses, which was enough to upset me and make me hate it. For a lot of people, it also had driver problems.

finnjohnsen2 · 6 years ago
No. It was Windows Longhorn. Microsoft abandoned it.