jmull · 4 years ago
Nice, interesting article.

But I would stress that "Seeing Like a State" -- that is, a top-down, global solution -- was not the problem.

The problem was that "Tim" didn't really understand the problem he was trying to solve (well, none of us truly understand very much at all, but he didn't understand it better than many of the teams associated with the individual services).

"Tim"'s proposal probably solved some problems but created various other problems.

The best solution, though, (IMO) isn't that Tim should be smarter and better informed than everyone else combined, nor that every team should continue to create an independent solution. Instead "Tim" could propose a solution, and the 100 microservice teams would be tasked with responding constructively. Iterations would ensue. You still really, really need "Tim", though, because multiple teams, even sincere and proficient ones, will not arrive at a coherent solution without leadership/direction.

> A global solution, by necessity, has to ignore local conditions.

That's just flat wrong. A global solution can solve global concerns and also allow for local conditions.

hosh · 4 years ago
> That's just flat wrong. A global solution can solve global concerns and also allow for local conditions.

Past a certain level of complexity, that's no longer true.

_Seeing Like a State_ is a great introduction to this, but I think Carol Sanford's work goes much more into detail. The main thing with the high-modernist view that James Scott was critiquing is that it comes from what Sanford would call the Machine World View. This is where the entire system can be understood by how all of its parts interact. This view breaks down at a certain level of complexity, of which James Scott's book is rife with examples.

Sanford then proposes a worldview she calls the Living Systems World View. Such a system is capable of self-healing, regenerating (such as ecologies, watersheds, communities, polities), and changing on its own. In such a system, you don't effect changes by using direct actions like you do with machines. You use indirect actions.

Kubernetes is a great example. If you're trying to map how everything works together, it can become very complex. I've met smart people who have trouble grasping just how Horizontal Pod Autoscaling works, let alone understanding its operational characteristics in live environments. Furthermore, it can be disconcerting to be troubleshooting something and then have the HPA reverse changes you are trying to make ... if you are viewing this through the Machine World View. But viewed through the Living Systems World View, it bears many similarities to cultivating a garden. Every living thing is going to grow on its own, and you cannot control for every single variable or condition.
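For a concrete sense of the indirect control involved, the HPA's core scaling rule is roughly a ratio. This is only a sketch of the documented algorithm -- the real controller adds tolerance bands, stabilization windows, and min/max replica bounds, which is a big part of why its live behavior surprises people:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Rough HPA rule: scale the replica count by the ratio of the
    observed metric to the target metric, rounding up."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> grow to 6 pods.
print(desired_replicas(4, 90, 60))   # 6
# 3 pods at 50% against a 100% target -> shrink to 2 pods.
print(desired_replicas(3, 50, 100))  # 2
```

You set the target and let the controller converge on it; you don't set the replica count directly. That's the indirect action.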

For similar ideas (which I won't go into in detail), there is Christopher Alexander's work on Living Architecture. He is a building architect who greatly influenced how people think about Object Oriented Programming (http://www.patternlanguage.com/archive/ieee.html) and Human-Computer Interface design (which the startup world uses to great effect in product design).

Another is the Cynefin framework (https://en.wikipedia.org/wiki/Cynefin_framework). Cynefin identifies different domains -- Simple, Complicated, Complex, and Chaos. Engineers are used to working in the Complicated domain, but when the level of complexity phase-shifts into the Complex domain, the strategies and ways of problem-solving that engineers are used to will no longer work. This includes clinging to the idea that for any given problem, there is a global solution which will satisfy all local conditions.

astrange · 4 years ago
> He is a building architect that greatly influenced how people think about Object Oriented Programming (http://www.patternlanguage.com/archive/ieee.html)

The funny part about this speech is that he's just telling everyone they did it wrong, and Richard Gabriel agrees:

https://dreamsongs.com/Files/DoingItWrong.pdf

The point of his pattern language is to enable people to create their own architecture for their own needs. The point of OOP design patterns is to lock you in a prison of enterprise Java programming in the Kingdom of Nouns. Of course, I think everyone realized this like a decade ago.

tcgv · 4 years ago
> This includes clinging to the idea that for any given problem, there is a global solution which will satisfy all local conditions.

The parent comment wasn't stating this. It was stating that there could be a partial global solution that would benefit all microservices, a solution which teams would have to adapt to cover local conditions as well. A middle ground, so to speak.

Thanks for sharing the "Living Systems World View" btw, very interesting!

jmull · 4 years ago
Truly, a fascinating perspective, thank you.

Just for context: I would say I'm a natural, intuitive bottom-upper, except that I can't help but reconsider everything my intuitive self learns in a strongly analytical, top-down way.

From that perspective and 30+ years of experience (where I like to think I'm at least open to being completely wrong about anything and everything), I think top-down, prescriptive solutions can be useful and effective, but they need to understand and carve out holes (conservatively large ones at that) for "local" concerns. BTW, "local" often just means lower, where the lower level itself can have its own "global" and "local" concerns.

Now, I know this often doesn't happen, so let's lay out how it can work:

- there's a top-down person -- "Tim" in the article -- who has responsibility for developing a solution

- there are the separate teams, who are responsible for communicating feedback on potential solutions.

Also, I wish I didn't need to point this out, but "responsibility for" === "authority/control over".

(If that's not the case, then never mind: you essentially have a "free-for-all" organization, and just better hope someone who's not too crazy wins the cage-match and can hang on long enough to be a net positive.)

salixrosa · 4 years ago
Thanks for the reading recommendation! Learning about the Cynefin framework and thinking about those kinds of problems led me to James Scott and to Hayek, but I haven't come across Sanford's work before.
hosh · 4 years ago
Oh yeah, and I just remembered -- Go. It's a great way of training strategic analysis and decision-making. After moving past the basics, what one explores are global vs. local, influence vs. territory, discerning urgent vs. big moves, direction of play, and so forth. It is theoretically a perfect-information game, but it is sufficiently complex for humans that it simulates the fog of war and having to make decisions in the face of uncertainty.
phkahler · 4 years ago
>> > A global solution, by necessity, has to ignore local conditions.

>> That's just flat wrong. A global solution can solve global concerns and also allow for local conditions.

So let's rephrase that. A global solution that ignores local conditions will have problems and will likely fail.

bentcorner · 4 years ago
Makes sense. In my work I've seen this when trying to get developers on my team using certain patterns, styles, types, conventions, or tools (or the inverse - deprecating them).

Suggestions are usually well grounded (e.g., "let's migrate to this `std` class instead of this old home-rolled wrapper"), but sometimes there's some nuance to how something is currently done, and deep discussion of the proposal can work through these bits.

whakim · 4 years ago
> That's just flat wrong. A global solution can solve global concerns and also allow for local conditions.

In theory, yes. But in practice, no (and this is the author's point, I think). In theory, the more "local conditions" you have to account for, the more exponentially complex your "global solution" becomes. (This is the "state" metaphor.) In practice, you can't build that impossibly complex system (and it might not be desirable, anyways!) - so you're likely to try to change local practices in service of a more streamlined global solution. The more you do that, the farther away you move from respecting local conditions.

sokoloff · 4 years ago
That depends a lot on the cardinality of the set of Tims.

If there’s one Tim per team, you’ll have 100 Tims proposing different global improvements and 100 teams needing to respond intelligently to those suggestions.

vlovich123 · 4 years ago
That's when the business needs to assign one Tim, or a team of Tims as dictator(s). Everyone else can provide feedback but isn't the decision maker.
ryukoposting · 4 years ago
Through enough iteration, all problems can be solved. But, how many iterations will be required to reach a solution that works for everyone? At that point, is there a solid business case for the project?
chasil · 4 years ago
The scenario reminds me of this story:

https://mwl.io/archives/605

GiorgioG · 4 years ago
> A global solution can solve global concerns and also allow for local conditions.

Not if standardization is the priority.

CogitoCogito · 4 years ago
> Not if standardization is the priority.

Is standardization always a priority?

darkerside · 4 years ago
I think the key difference is whether the local teams have the choice to opt out or not, and my belief is that they should. If they can, they can solve their own problem if the global solution doesn't work. If the global solution wants to keep them as consumers, they must adapt. If they can't leave, the global team will almost certainly stop responding to their needs over time. Like communism, a global solution is terrific in theory, but human behavior causes it to break down in practice.

Caveat, for small enough problems, good enough solutions, and charismatic enough leaders, global solutions can work. But they all break eventually.

NateEag · 4 years ago
For anyone interested in social systems that help avoid this top-down, centralized failure mode, I cannot recommend RFC 7282 enough:

https://datatracker.ietf.org/doc/html/rfc7282

A whole lot of wisdom is captured in that document, including a deep understanding of the differences between unanimity, majority rule, and consensus.

If you're involved in standardization efforts in any way, whether it's deciding where your team will put braces in source code or running software architecture for a Fortune 100, it will well repay your reading time.

svilen_dobrev · 4 years ago
Interesting. For a long time I've found that negative logic is more powerful/overarching than positive logic -- #ifndef NOT_THIS is more powerful than #if THIS -- and this article applies that even to agreeing vs. not-disagreeing.
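A toy sketch of that asymmetry applied to RFC 7282's idea (helper and participant names are made up, purely to contrast agreeing with not-disagreeing):

```python
def rough_consensus(participants, objections):
    """Negative logic: the proposal passes when no sustained
    objection remains -- silence counts toward consensus."""
    return len(objections) == 0

def unanimity(participants, approvals):
    """Positive logic: every participant must actively approve --
    silence blocks the decision."""
    return set(participants) <= set(approvals)

team = ["ana", "bo", "cy"]
print(rough_consensus(team, objections=[]))      # True: nobody disagreed
print(unanimity(team, approvals=["ana", "bo"]))  # False: cy never weighed in
```

The not-disagreeing check scales because it only needs the (usually short) list of objections, not an affirmative signal from everyone.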
sudhirj · 4 years ago
This seems to be the hallmark of a "Middle" developer. Not so junior that they couldn't build a working solution they assume everyone should use, but not senior enough to think twice about whether they should be building it.

The “we should make a common framework for this” line is the dominant thought at this level. Never just a library. A framework. Everyone must do it this way.

The more senior people share concepts and maybe libraries, and allow the team to use them if they see fit.

shuntress · 4 years ago
It's the large-scale version of taking "DRY" too literally.

Junior devs just repeat themselves because they don't know better.

Middle devs rush into an incomplete abstraction by overzealously not-repeating-themselves.

Senior devs just repeat themselves because they know they don't understand what it would take to abstract out the solution.

Like everything... "It Depends". Don't Repeat Yourself Too Much.

novembermike · 4 years ago
One thing to remember here is that a senior dev might be at the beginner stage for org wide changes.
chmod600 · 4 years ago
Part of the motivation behind DRY was to avoid the mess of repeating yourself within a task by updating 17 levels of classes and factories to add a field somewhere. This is mainly solved by using a sane language and not having useless intermediary code.

But you are right: applying DRY between tasks requires good judgement, and sometimes it's best to just copy some similar code around than to prematurely invent an abstraction.
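A contrived sketch of that judgement call (all names hypothetical): two small functions that repeat a little are often easier to live with than one "unified" function that sprouts a flag for every caller.

```python
# Premature abstraction: one function accumulates options until
# no caller can follow it.
#   def export(rows, sep=",", header=False, legacy_dates=False, ...): ...

# Often better: repeat yourself with two tiny, obvious functions.
def export_csv(rows):
    """Render rows of values as comma-separated lines."""
    return "\n".join(",".join(map(str, r)) for r in rows)

def export_tsv(rows):
    """Render rows of values as tab-separated lines."""
    return "\n".join("\t".join(map(str, r)) for r in rows)

print(export_csv([[1, 2], [3, 4]]))  # 1,2 / 3,4 on two lines
```

If a third and fourth format show up, that's the point where an abstraction has earned its keep.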

Deleted Comment

GiorgioG · 4 years ago
I've worked at bigger companies and there are plenty of folks much higher than 'middle dev' forcing these types of things down the organization's throat.
pc86 · 4 years ago
You can certainly be a mid-level in skill but be a senior/staff/principal at the company, or a senior/staff/principal in technical skill but middle or junior in strategic or design skill.
lawn · 4 years ago
Maybe even the CEO or CTO.
1123581321 · 4 years ago
This kind of consequential decision can happen at high levels. Obviously less often when a truly brilliant developer ends up in a small organization (but that has its own risks.)

In the example, it was determined that they could not afford to let each service solve its individual bottlenecks ad hoc. So a corresponding strategic error was also made/forced at the senior business level.

It's easy to speculate in hindsight, but in this case I could imagine a globally enforced throughput mandate supported by a widely visible and frequently reviewed dashboard, new tools/libraries as needed, and an optional central queue service required to 'compete for business' of the individual service teams.

I can see potential problems with that too, though. In a sense, failure has already happened when growth management is deemed to be too important to be left to capable individuals on decentralized teams.

Enjoyed the article.

Fiahil · 4 years ago
I agree.

People use the most practical things at their disposal. If Tim had opted to publish a repository of easy and _simple_ recipes for managing kafka and postgres integrations, while retaining the ability to use original libraries, then I see no reason why it would not have gained traction.

yellowstuff · 4 years ago
This article does a good job describing one failure mode that's not understood well, but the opposite failure mode is much more common in my experience- having lots of ways to do the same thing can be very inefficient and brittle, even at small companies. The right answer is not "never unify systems" or "always unify systems", but develop judgement about when things should be unified.
didibus · 4 years ago
Agree, I too have seen the lack of unification more often than not, because business projects are always local. This client wants feature Y; why build it for all clients right now if only one client wants it? I only want to pay for getting the feature out to the client as cheaply and quickly as possible. And now you've got a single-use feature. Then the next client comes along, and you can't reuse the feature, so you build it again in a slightly different way, by different people, maybe even in a different team. Rinse and repeat. I see that all the time. And that's just one example of how people grind their velocity down to a crawl over time. The only solution then is to hire more and more engineers, until you're a huge engineering department maintaining a single product.

Of course, this is such a rampant problem in the software industry that a whole market for reusable standard generic solutions was created. That's why we got the cloud, and the array of SaaS, PaaS, IaaS, etc. And don't forget the entire open source is about standards, being able to reuse existing components and frameworks.

What I think the article doesn't mention is that unifying and creating a standard solution is a harder task than creating custom solutions one after the other for each use case/local context. In practice I've seen people try and fail, but often it's not the person with the most experience trying, or the business isn't truly willing to put in the effort to succeed; either of these can sabotage things. And again, because it is hard, you have to be willing to fail the first time, but use those learnings to try again, and again, until you crack it. And doing that is often worth it long term, cause when you crack it the efficiency and scale will go through the roof. If your business is smart, you might even realize what you have is more valuable than your current business, and pivot to being a SaaS vendor haha. Or you can keep it secret as a competitive advantage.

travisgriggs · 4 years ago
Lots of resonant points here. It’s worth making it to the end.

I work at a company where there are a number of different little less-than-one-man projects, and there's a lot of variety, and so a couple of non-tech types, frustrated with resource allocation (having the right kind of skills at the right place at the right time in the right amount), want to standardize and simplify.

What I’ve observed though is that when you tell your house painters they can only work with black paint, they can only give your customers black walls, and when your customer wants wood panel, or textured fuschia, then you can’t earn revenue from that market demand.

kerblang · 4 years ago
In general, "unity" is something software developers routinely pursue just for the sake of unity itself, failing to understand that unity comes with significant tradeoffs. It is much harder to build a unified solution than a localized, one-off solution. Divide-and-conquer is often a much better engineering strategy: DAC might create more work than unity, but the work is more likely to succeed instead of falling apart because we failed to anticipate all the use cases within the unified framework, especially when we lack experience in the domain.

Also refer to Jeff Atwood's Rule of Threes (which he borrowed from someone else) here.

eternityforest · 4 years ago
I've noticed that ALL beginners seem to have a reinvented global solution phase.

Everyone who does electronics might say "Oh I'm going to use this one connector for everything". And it's either ok, if it's a standard connector, or a giant pile of crap that means they can't use a lot of existing stuff because they insisted on this insane DIY grand scheme.

Usually such things have an element of "I want to do Y, so I'll build a modular kit X and use that to Y". And then X becomes the real project and Y is never finished.

The insidious part is how the new product is often a tiny bit better than what's out there. But it doesn't matter. The mediocre standard solution is still way less trouble than the beautiful perfect custom thing. I'd rather have Just Works tech than tech that's Just Right. Anything that seems perfect and beautiful and simple, I don't trust, because it was probably made for one specific task, not to be a general standard you don't have to think about.

I think most of the failures with global solutions happen because someone attempted them on a small scale, or because they have to do with natural systems.

Fully top down planning of manmade things by a giant industry consortium is most of why tech is great. Otherwise we would have no USB C, and 12 different CPU architectures.

Sometimes design-by-committee protocols suck, but usually because the committee didn't have enough control, and instead of a protocol, they deliver a description language for companies to make their own protocols, with every feature optional, so that compliance does not necessarily mean compatibility.

When you do it internally it can suck because it's more effort than it's worth to replace all your existing stuff.

kingdomcome50 · 4 years ago
Counter example: Tom standardized a bunch of services... and it worked! Everything is easier and more efficient now.

I agree with the thrust of this post: Changing something that is not understood is a dubious undertaking. But the author fails to make a compelling connection between the above and software development. A poor solution may be a result of not understanding enough of the system as a whole, or it may not. We simply can't tell.

Standardization (i.e. simplification) is generally a good thing in software development. How would Tim's system look if they had opted for his approach from the start? How does the 3rd iteration of the system compare to the 1st iteration? Maybe Tim's solution is a stepping-stone to something better. Impossible to tell.

EnKopVand · 4 years ago
> Counter example: Tom standardized a bunch of services... and it worked! Everything is easier and more efficient now.

I’m sorry, but that isn’t really a counter point unless you have some cases to back it up.

In my completely anecdotal experience standardisation never really works. I say this as someone who’s worked on enterprise architecture at the national level in Denmark and has co-written standardisations and principles on how to define things from common building blocks.

The idea was that something like a journal of your health can be defined as a model that can be used by everyone who ever needs to define a journal for health data. And for some cases it works well, it lets thousands of companies define what a “person” is as an example and which parts are the person and which parts are the employee and so on, and it lets them exchange data between systems.

Until it doesn’t. Because all of a sudden an employee is two different things depending on what time of day it is, because a nurse has different responsibilities while patients are awake, in some hospitals, and not in others. But because the “standardisation” doesn’t account for this, 50 years of enterprise architecture in the Danish public sector is yet to really pay off.

Some of our best and most successful public sector projects are the ones that didn’t do fanatical standardisation but built things with single responsibilities, so that they could easily be chained together to fit a myriad of unique needs.

Now, I’m not against standardisation in any way, but sometimes it just doesn’t make sense and sometimes it does. The issue is that the standardisation approach tends to begin before anyone knows which situation you are actually in.

kingdomcome50 · 4 years ago
> I’m sorry, but that isn’t really a counter point unless you have some cases to back it up.

My counter example is about exactly as detailed as the author's example. Of course I was being tongue-in-cheek, but clearly standardization has worked in software.

You can toss your example right on top of all of the other failed attempts at standardization. It in no way supports the conclusion that "standardization" is a problem. Like I said, I agree with the author's argument, but their conclusion is not supported by that argument. There are many failure modes to large projects.

stonemetal12 · 4 years ago
As far as I can tell how well standardization works depends on "how close to the humans" it is.

HTTP, TCP, json, xml: all standardize pretty well. Want to standardize your microservices on nginx with data in json? It will work swimmingly and save time because it is one less decision to be made, and over time everyone will become familiar with how nginx is set up. Standardizing on which json libs to use, so that everyone can dig into the json marshalling code without a lot of head scratching, would be another big win.

Trying to standardize people never works because they want to do things their own way and view whatever standard you try to impose as wrong.

jamesfinlayson · 4 years ago
> How would Tim's system look if they had opted for his approach from the start? How does the 3rd iteration of the system compare to the 1st iteration? Maybe Tim's solution is stepping-stone to something better. Impossible to tell.

Reminds me of something a senior developer once told me about rewriting systems: the first iteration is ad hoc and messy; the second iteration is well thought out but completely over-engineered; and the third iteration gets it right, because the developers have seen both extremes and know where the correct middle ground is.

a1445c8b · 4 years ago
This has been my experience as well! This is why, if I have enough time to do it, I'd normally go through at least 2 throw-away prototypes before settling on a design to implement.