Readit News
jedberg · 3 years ago
I am one of the biggest proponents of microservices. I helped build the platform at Netflix, I've literally traveled around the world extolling the virtues of microservices.

But I also advise a lot of startups, and you know what I tell them nearly every time?

Build a monolith.

It's so much easier to start out with one codebase and one database, and that will scale for a while. Especially if you use a key/value store like DynamoDB (although you will lose relational functionality that can be helpful at the start). And did you know that you can deploy a monolith to Lambda and still get all the benefits of Lambda without building services?

And then, when you start growing, that's when you break out an independently scalable part of the system into a microservice (with its own data store!). The one that may need to scale independently, or that you want to be able to deploy separately.

Microservices take at least 25% of your engineering time just to maintain the platform. It's not worth it unless you can recoup that 25% in efficiency.

crooked-v · 3 years ago
"A while" is underselling it. As long as you have people who are half-decent with SQL, "just put Postgres on a big db server" will get you to 50 million row tables before you have to start thinking about even hiring a real DBA.
baq · 3 years ago
50M rows is something that can easily be handled by a single dev who understands that SQL is more than select, insert, and update, armed with the manual, Google, and ChatGPT.

You can get really damn far with a fat postgres box.

jedberg · 3 years ago
True. "A while" for some large value of time. And if you configure autovacuum correctly from the start, you can go even further!
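As a rough illustration of what "configuring autovacuum from the start" might look like, here's a hedged postgresql.conf sketch. These are illustrative starting points for a large, write-heavy box, not tuned recommendations; the right values depend entirely on your workload:

```ini
# postgresql.conf -- illustrative starting points, not tuned recommendations
autovacuum = on                         # the default, but never turn it off
autovacuum_vacuum_scale_factor = 0.05   # vacuum after ~5% dead rows (default 0.2,
                                        # far too lazy for a 50M-row table)
autovacuum_analyze_scale_factor = 0.02  # keep planner stats fresh on big tables
autovacuum_max_workers = 4
autovacuum_vacuum_cost_limit = 1000     # let workers do more work per cycle
```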
jiggawatts · 3 years ago
It’s so cute that you think 50M rows is big. Your phone can handle many times that, and update it tens of thousands of times per second.
n0us · 3 years ago
> As long as you have people who are half-decent with SQL

:/

aledalgrande · 3 years ago
> And then, when you start growing, that's when you break out an independently scalable part of the system into a microservice

Respectfully disagree. At this point you start refactoring your monolith into components and actually look at performance measurements via tracing.

Do not do microservices when you're growing (or ever, most of the time).

And by the love of god, don't split your data. Data is much more complex to manage than code.

sethammons · 3 years ago
At several places, the biggest challenge to engineering velocity and the ability to innovate is that models are joined in interesting ways that prevent them from being decoupled. A user model tied to the package model preventing independent scaling of either. Every module reaching into all the tables. Joins, joins, and joins. Vast scans. The queries become inefficient and even the mighty postgres slows down at 2k rps. It is just too tempting to reach into another team's datastore. Now the two teams are in lockstep and can't alter their own datastore because others have assumptions on how it is stored and they access it directly.

At SendGrid, we had a few tables to deal with IP management. Over time, those five or so tables were in use by 15+ services owned by different teams with competing priorities. When we finally had to scale the database, it took over three quarters of working with other teams while we supported both legacy and the new hotness.

It is hard to see early on where whole teams can sprout up and have domain ownership and to know which areas will be common/platform-like for other teams. But as soon as two services share a table, you should see a train coming at you.

Aeolun · 3 years ago
> And by the love of god, don't split your data.

I think people feel that if they introduce a new data store, they won’t have to deal with the existing nearly unusable massive data store.

Of course that just exacerbates the problem.

theshrike79 · 3 years ago
Splitting data is just a hardcore way of splitting the responsibility for the data.

You can't have 42 different classes directly poking the User-table for example. You need one clear location that has the responsibility for the data.

If you move the User-table to a different database schema, other places CANNOT touch it because they won't have access to it =)
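As an in-process sketch of that ownership boundary (all names here are illustrative, not from the thread): one repository class holds the only database handle, standing in for the schema permissions described above, and every other part of the code must go through its API.

```python
import sqlite3

class UserRepository:
    """The one place allowed to touch the users table. Nobody else gets a
    database handle -- this sketch's stand-in for schema-level permissions."""

    def __init__(self):
        self._conn = sqlite3.connect(":memory:")  # private; never exported
        self._conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, user_id, name):
        self._conn.execute(
            "INSERT INTO users VALUES (?, ?)", (user_id, name))

    def name_of(self, user_id):
        row = self._conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

repo = UserRepository()
repo.add(1, "ada")
print(repo.name_of(1))  # every caller goes through the repository API
```

The 42 classes that used to poke the table directly now have exactly one door to knock on.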

Copenjin · 3 years ago
> And by the love of god, don't split your data.

Most data problems can be fixed on the frontend with a few relatively simple graphql queries. /s

ngc248 · 3 years ago
There is a way to split data. The service which owns the data always does the writes, and others who need to read that data can store replicas. Of course, the complication then is in replicating the data, but this enables services to scale massively and eliminates SPOFs.
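A minimal in-process sketch of that pattern (service names and the event shape are made up for illustration): the owning service applies all writes and emits change events, and a reader keeps its own local replica by consuming them, so its reads never cross a service boundary.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UserService:
    """Owns the user data: all writes go through here."""
    _users: dict = field(default_factory=dict)
    _subscribers: list = field(default_factory=list)

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self._subscribers.append(handler)

    def update_email(self, user_id: int, email: str) -> None:
        self._users[user_id] = {"id": user_id, "email": email}
        event = {"type": "user_updated", "user": dict(self._users[user_id])}
        for handler in self._subscribers:  # in production: a queue or stream
            handler(event)

@dataclass
class BillingService:
    """Keeps a local read replica; never writes user data itself."""
    _replica: dict = field(default_factory=dict)

    def on_event(self, event: dict) -> None:
        if event["type"] == "user_updated":
            self._replica[event["user"]["id"]] = event["user"]

    def email_for(self, user_id: int) -> str:
        # local read: no cross-service call, no shared table
        return self._replica[user_id]["email"]

users = UserService()
billing = BillingService()
users.subscribe(billing.on_event)
users.update_email(7, "a@example.com")
print(billing.email_for(7))  # the replica caught up via the event
```

The replication lag is the price; in exchange, billing can reshape or index its replica however it likes without coordinating with the user team.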
rewmie · 3 years ago
> Respectfully disagree. At this point you start refactoring your monolith into components and actually look at performance measurements via tracing.

No matter how hard you modularize and instrument your service, that won't give you regional deployments, which you absolutely need to have to be able to drive down customer-facing latencies from many hundreds of milliseconds to a few tens of milliseconds.

How do you pull that off with a monolith when you have a global userbase?

Deleted Comment

usrbinbash · 3 years ago
Wise words. They boil down to a very simple truth that was as accurate half a century ago as it is today:

Make things as simple as possible, and as complex as necessary.

I can always make something more complex than it is now. As you say, we can take out things from a monolith and make them into a service. It can be hard to do so, sure. But nowhere near as hard, as trying to get complexity OUT of a system once it's in. Everyone who ever tried to revert a bunch of microservices back into a Monolith knows exactly what I am talking about. It usually amounts to the same work as a ground-up rebuild.

arp242 · 3 years ago
> It's so much easier to start out with one codebase and one database, and that will scale for a while.

Lots of businesses don't even need to scale anyway; Netflix is a "high customer, low revenue per customer" type of business, but there's lots of "low customer, high revenue per customer" businesses too, perhaps even more than the first one. These are often the type of products where you could quite literally run production on your laptop if you wanted to.

At the last place I worked they built all this microservice bonanza for ... 300 customers... If they ever got a few thousand customers it would be quite successful, and tens of thousands would be hugely successful. What usually happens is that they don't spend that "25% of your engineering time just maintaining the platform", so the platform was impossible to run locally, and production was hanging together with duct tape.

(Aside: in a recent interview I was asked to design a system for "500 million concurrent users" – this is for a business that's mostly in the low-customer/high-revenue type. I still don't know if the test was to point out that 500M is an utterly bonkers number – about 10% of the internet-connected people on the planet – or that they really thought this was somehow a vaguely realistic number for a startup. I answered with "build the simplest thing possible, a monolith, and focus on features and making sure you're building something people want". I didn't get hired, so I guess they were somehow under the misapprehension that you need to design things for 500M users right from the start?)

elliotec · 3 years ago
Ah, it’s you!

You’ve got a hell of a resume. And been accidentally incredibly convincing in getting many people into many early messes.

Jokes aside, thanks for at least coming around to advise startups sanely.

I’d personally never advocate for microservices until at the scale of Netflix or Amazon or Reddit, and even then only with in-house expertise at your level. Otherwise it’s a nightmare.

Thanks for everything, especially your contributions of sanity.

jedberg · 3 years ago
Aww, thanks for the kind words. I apologize if I caused you any harm with my talks. I did in fact start out saying everyone should use microservices, but I pulled back as I saw how damaging that can be to a small startup, or even a large enterprise that doesn't actually need it.

We all make mistakes!

hliyan · 3 years ago
I wrote this 8 years ago: Microservices vs. "air-gapped" modules https://www.linkedin.com/pulse/maintainable-software-archite.... You can achieve the same end as a microservice using a module, by simply having lint rules that prevent you from importing other application level modules. This way the only possible comms interface is to pass events with a payload of primitive typed parameters.
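The linked article describes the idea in full; as a rough sketch of what such a lint rule could look like (module names here are hypothetical, and real projects would likely reach for a tool like import-linter instead), a small AST walk can flag any direct import of a sibling application-level module:

```python
import ast

# Hypothetical top-level application modules in this codebase.
APP_MODULES = {"billing", "users", "shipping"}

def forbidden_imports(source: str, current_module: str) -> list[str]:
    """Return the sibling app modules that `source` imports directly."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            root = name.split(".")[0]
            if root in APP_MODULES and root != current_module:
                hits.append(root)
    return hits

# billing code reaching straight into sibling modules: flagged
bad = "import users\nfrom shipping.rates import quote\n"
print(forbidden_imports(bad, "billing"))  # ['users', 'shipping']

# importing only your own module (e.g. to publish an event): clean
good = "from billing.events import publish\n"
print(forbidden_imports(good, "billing"))  # []
```

Run as a CI gate, a check like this forces all cross-module communication through the event-passing interface the comment describes.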
jedberg · 3 years ago
An interesting idea. Sort of a good halfway point. But one issue I see is that you can still have a shared data store. That means you can accidentally (or intentionally) use the database to pass back-channel messages: have one module store data and another read it.

That could lead to hard to find bugs. Did this ever come up for you?

3np · 3 years ago
I think if we also "air-gap" (loosely) the dependencies, we get a typical monorepo in, say, JS or Golang? That is, a module in a monorepo is a special case of your airgapped modules?
MattRogish · 3 years ago
I agree - microservices are a technical solution to a people problem. Stevey's seminal "Platforms Rant" (https://gist.github.com/chitchcock/1281611) touches on this - the reason why Dread Pirate Bezos mandated services was not because Kubernetes was cool[1], but because their teams were too interconnected, moving too slowly, and Conway's law was getting in the way.

Splitting into services siloed each team and allowed them to move independently and by side effect, faster. Not due to inherent properties of (micro) services but because one goes faster by removing the things that slow you down.

As a startup, you do not have this problem. You likely will _never_ have this problem as your default future state is 'dead'.

Do the simplest thing that can possibly work - build a monolith using tools you know that give you the ability to predictably, rapidly iterate on the product.

After you hit some semblance of product/market fit and need to scale, you can do that. Scaling is a solved problem. Premature scaling is not.

[1. This is a joke. Kubernetes wasn't even a thought in Google's eye at this point in history]

rewmie · 3 years ago
> the reason why Dread Pirate Bezos mandated services was not because Kubernetes was cool[1], but because their teams were too interconnected, moving too slowly, and Conway's law was getting in the way.

Not quite. Amazon's problem was that they were experiencing too much impedance between teams, and some teams were even siloing themselves to the extent that they were creating problems for everyone around them. Bezos' diktat was intended to break through a bunch of petty office-politics bullshit that was dragging down the company and preventing teams from delivering their work. The diktat boiled down to basically ordering everyone to grant access to the service they owned to any external team that needed it, no exceptions, and they would be held liable if they failed to provide it, intentionally or not.

glimshe · 3 years ago
Most people don't understand the point of microservices. They look at the idea and are attracted by the power of modular interfaces, customizable scalability, and independent deployment, without really thinking about the additional engineering overhead that is required to unleash these capabilities.

I've seen ex-Netflix software engineers taking jobs elsewhere and proposing microservices for systems that would receive little to no benefit from them. In practice, the implementation of microservices in these contexts becomes a costly solution looking for a problem.

johnboy123 · 3 years ago
Just chipping in with thoughts on DynamoDB (although I have worked on much smaller-scale systems).

I am a long term dev, done lots of SQL, but for the past few years I have been using DynamoDB, and I am using it for my new startup (So I rate it).

Cons:

- You have to be very aware of your query patterns, and not having ad-hoc queries is a pain.

Plus sides:

- With on-demand billing, it's free if you aren't using it
- Built correctly, it will scale
- No schema upgrades (this one is massive for me)

On the last point, I really do appreciate not having to worry about keeping schemas up to date across all devs and environments.

We use quite a simple pattern of a table per entity, as opposed to single-table design, because it allows us to just use the API at the highest level of abstraction, where you just write your objects to DynamoDB. (You can still do lower-level requests to poke values and the like.)

jwestbury · 3 years ago
> No Schema upgrades (This one is massive for me)

At Amazon, relational databases are banned unless you get an explicit exemption from senior leadership. This is the primary reason why. Too many cases of schema upgrades causing outages.

The problem with DDB or other NoSQL applications, like you say, is how much you need to consider your query patterns. The last major project I worked on using DDB, we spent a couple of days just thinking through our query patterns so we could come up with the right database design and data structures. (We still believe it was the right choice, though.)

Copenjin · 3 years ago
I wish you good luck with your redemption arc, every monolith counts.
nijave · 3 years ago
>Microservices takes at least 25% of your engineering time just maintaining the platform. It's not worth it unless you can recoup that 25% in efficiency.

My take is, microservices multiply your operational problems so you need a really solid platform/infrastructure/developer experience team otherwise you'll introduce a bunch of new headaches. These include things like CI/CD (building, storing artifacts, testing artifacts, promoting artifacts between environments), observability (metrics, logs, distributed tracing, error monitoring), distributed systems quirks like cascading failures, release/change management and communication, service discovery/routing, automated infra provisioning for things like datastores. Unless you have a really good handle on all these pieces (which I think most startups don't), you end up in an operational nightmare spending tons of time trying to keep the system going.

Of course, the next step is throwing on 3rd party products to try to solve this which adds even more complexity. Throw in Datadog for observability, Istio service mesh for networking/traffic, ArgoCD/kustomize/Helm/Kubernetes to manage all this infra, etc etc

lifeisstillgood · 3 years ago
How would you split the total overhead between monolith and monorepo?

(sorry this ran away with me)

A dumb example is that if I start with my single codebase running on a single server, I am likely to have a single git repo (foo).

Then I have a genius idea and put all the email handling code into foo.mail and soon I have foo.web and foo.payments.

All is fine as long as I am just checking out HEAD each time. The code running in the runtime is still one big set of code.

If I get creative and put in load balancers it's still a monolith.

But if I split out the web servers from the email servers, then I kind of start to see microservices appear.

I am trying not to be pedantic, but I am truly interested in experienced views on where the pain really starts to appear.

At this point (server to server comms), I should look at mTLS, and centralised logging and all the good stuff to manage microservices.

But how much pain was there before?

What if I was a large company and so hired a dev or two per repo (you know, to get that nine-women-one-month effect)? Coordinating multiple devs over different repos, even with one monolithic runtime, is painful (experience tells me).

So I am interested in where the break points are, and whether there are easier paths up the mountain?

afiori · 3 years ago
Splitting out the webserver (assuming it is the main entrypoint for users) seems more like an infrastructure choice than application architecture and having an independent email-sending-service looks more like replacing a third-party offer (like turboSMTP) with an in-house service.

I do not think that this is what people mean by microservices.

danielovichdk · 3 years ago
I honestly believe that doing small services is long term better approach than a monolith. But as you I strongly believe it's not where you want to start.

But I do not agree that one is more time-consuming than the other. It's just time spent on different matters. As for money spent, I have not seen any data on how a monolith outperforms small services. For maintaining the platform, I think 25% sounds a bit high.

Money spent should be measured in many different aspects. One is definitely productivity. And I have seen more stale monoliths than I have seen stale small services. Whether it's better or not is not up to me to judge. I just know what I prefer.

Having been exposed to small service architecture where it has been working really well and very poor, there are few things that stand out.

- Conquer and divide (your monolith over time).
- Responsibility boundaries are easier to cope with for developers in a small service, since they often don't have generic and YAGNI abstractions.
- Less code to comprehend for a small service, and the cognitive load decreases.
- Build time, test time, deployment time.
- Should lean up against a direct business measurement and value.
- Group chatting services into one.

I have yet to see a monolith that does not really deteriorate over time, but I haven't worked with SO or Shopify. We could also argue that Cobol and Fortran are still of good use, but time has changed, leaving that style of system development exposed as old and dusty.

But like any software development occurrence, it takes responsibility, mandate, and proper leadership to get things done in a decent manner. So if you start by making 3 services that have to chat to find a user profile, you probably don't know what you're doing. And 25 people (the Threads image) is not a small team IMO.

Good luck

ddalex · 3 years ago
My go to page about this is http://widgetsandshit.com/teddziuba/2008/04/im-going-to-scal... which perfectly summarizes the problem AND solution
1letterunixname · 3 years ago
The issue isn't blindly adopting microservices or a monolith as a religion or bikeshedding thereabouts.

Architectural and ops concerns lead to dividing an overall service into sufficient and necessary units of abstraction. Efficiently running more than one service leads to standardizing and automating the concerns of application platform infrastructure:

- stateful data backup, protection, recovery

- configuration management

- OS security

- authentication, authorization, audit, encryption, and identity management

- monitoring

- analytics

- rate limiting

- A/B, etc. feature tests

- sharded deployment

Chopping up 1 service into many more services doesn't make the above concerns go away. Neither does collapsing many into 1.

The internal pieces (and often interfaces between business units) need to be broken down into decoupled units of abstraction. If people want to call that "microservices" or a "monolith", it's kind of irrelevant. Containers of abstraction should serve a purpose rather than hinder it.

rewmie · 3 years ago
> And then, when you start growing, that's when you break out an independently scalable part of the system into a microservice (with it's own data store!).

Wasn't this the rule of thumb for microservices from the very start? That the main driving force is to ensure separate teams work independently on stand-alone projects they own exclusively? Scaling tends to only become an issue when the project grows considerably.

jedberg · 3 years ago
Yes. But that idea got lost in the fray. Even I tried to build a system from scratch with microservices at one point.
thinkmassive · 3 years ago
> break out an independently scalable part of the system into a microservice … The one that may need to scale independently, or that you want to be able to deploy separately.

How often would you estimate that security concerns are a valid reason to isolate functionality into a microservice? Separation of concerns, reduced attack surface and blast radius, etc

Gud · 3 years ago
Wise. Every time I’ve started a project trying to make an amazing platform with everything segregated with micro services etc. I’ve spent so much more time on the _platform_ than on the actual product.

Now I go with SQLite and some basic python script and pivot from there. Ironically that’s how I used to do it before the micro service fad.

dysoco · 3 years ago
> And did you know that you can deploy a monolith to Lambda and still get all the benefits of Lambda without building services

Could I get more info on this? All I've found so far is that it's an anti-pattern. Isn't the startup time a killer?

jedberg · 3 years ago
> Isn't the startup time a killer?

It depends on your programming language and how many libraries you are importing. If you use Python you can get startup times less than 200ms. Also, if your app is active, most startups won't be cold (Lambda reuses the VM a bunch of times before recycling it if it's staying active). You can also add a health check API and just poke that once a minute to keep it warm if you want (at the cost of increased invocations).

But yes, if you're doing something that requires a lot of setup, then this pattern isn't for you. But it turns out most web apps are simple enough it can be done this way, especially if you're already using a pattern where the frontend is doing a lot of the heavy lifting and your API is mostly just marshaling data from the client to the database with some security and small logic in between.

Shinchy · 3 years ago
Absolutely agree. I've seen far too many companies spend far too much time working out the infrastructural relationships of microservices, and very little time working on the actual application's needs.
HerculePoirot · 3 years ago
The problem is not with the devs but with the investors, I believe. They're looking for unicorns; unicorns attract millions of customers, so they need to work at scale, hence microservices.
hobs · 3 years ago
Well, in the previous cycle it was clear that the investors were looking for buzzwords and marketing hype because they often don't understand and frankly don't care — they just wanted to make you look good enough to sell to the next guy, as they were all assuming you were unprofitable in and unprofitable out.

They all just want a story that you are going to be the next google so they can sell that story to the next guy.

Will that hold in the non-zero-interest-rate world? The AI hype doesn't make me think the world has changed that much.

gremlinunderway · 3 years ago
Make up a new word for monolith that sounds sexy instead of big, bulky, and scary.

Something like "uniform-services" or "harmonized-services". Vertically-distributed harmonized services.

shortlived · 3 years ago
> And did you know that you can deploy a monolith to Lambda and still get all the benefits of Lambda without building services

I did not know. Does anyone have pointers or examples on this?

jedberg · 3 years ago
As an example, you'd write your entire API as a flask app, and then deploy that app to Lambda. Then send all requests to that one Lambda. As long as your startup time is quick (and your datastore is elsewhere, like in DynamoDB) then it will work great for quite a while. Lambda will basically run your app enough times to handle all the requests.

You have to design it carefully so that you don't rely on subsequent requests coming to the same machine, but you can also design it so that if they do come to the same machine, it will work, using a tiered cache.
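A rough sketch of the pattern, with a bare WSGI callable standing in for the Flask monolith (the event shape is a simplified API-Gateway-style payload, and real deployments would use an adapter library such as apig-wsgi or Mangum rather than this hand-rolled one):

```python
import io

# Stand-in for the monolith: any WSGI app (a Flask app is one) works here.
def app(environ, start_response):
    body = f"hello from {environ['PATH_INFO']}".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

def lambda_handler(event, context=None):
    """Translate a simplified API-Gateway-style event into one WSGI call,
    so a single Lambda serves every route of the monolith."""
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "QUERY_STRING": "",
        "SERVER_NAME": "lambda", "SERVER_PORT": "443",
        "wsgi.version": (1, 0), "wsgi.url_scheme": "https",
        "wsgi.input": io.BytesIO((event.get("body") or "").encode()),
        "wsgi.errors": io.StringIO(),
    }
    captured = {}
    def start_response(status, headers):
        captured["status"], captured["headers"] = status, headers
    chunks = app(environ, start_response)
    return {
        "statusCode": int(captured["status"].split()[0]),
        "headers": dict(captured["headers"]),
        "body": b"".join(chunks).decode(),
    }

print(lambda_handler({"httpMethod": "GET", "path": "/users/42"}))
```

Every route of the app rides through the same handler, which is what makes the "monolith on Lambda" deployment a single unit.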

spixy · 3 years ago
I saved the URL of your comment into my favourites, in case I need to convince people in the future.
wutwutwat · 3 years ago
Obligatory "PostgreSQL when it's not your job" share. It'll get many shops pretty damn far imo

https://www.pgcon.org/2016/schedule/attachments/437_not-your...

amelius · 3 years ago
Of course it is even better to build a distributed monolith...
bradhe · 3 years ago
I work on a massive monolith that has about 800 contributors, and it's just as complex to add something as simple as a user's birthday; it's just that not all the complexity is technological. It requires "organizational alignment" since you're touching everyone's code.

There will be endless iterations on design and review. Sign-off required by at least 2 architects. It will get added to multiple planning iterations. The actual code will take an afternoon or less. We'll have to ensure we hit our 90% test coverage during code review, but because of all the tests, it'll be too big for one PR, so it will need to be broken up into multiple PRs, probably landing over multiple weekly releases. To facilitate that, we'll put it behind a feature flag (of which there are currently 13,000). Once it hits production, and the dashboards/monitoring are put in place, it will get enabled and disabled over and over again as we're not totally sure why our birthdate feature broke the metering service, but we think that's the root cause — need to do a few weeks of analysis.

Then, finally, in a year the engineer who was tasked with it will get a good performance rating, maybe be up for a promo! Just in time for him to jump into another project that’s failing horribly headed into its 3rd year in development: Allowing the user to set their timezone.

srvaroa · 3 years ago
This comment shows IMO that the real issue here is not really microservices or not microservices, but what the article calls "The apostles of the Church of Complexity".

Neither microservices or monolith are a golden hammer, silver bullet or whatever (nor the opposite). They are tools each with their tradeoffs, which combine with the many context-dependent tradeoffs of each organization. They are not the problem.

The problem is a) engineers' fascination with complexity, and confusing "simple" with "hack", and b) how organizations keep cargo-culting tools, architectural patterns, and so forth — applying architectures, design patterns, whatever, naively, based on the belief that usage alone will deliver benefits. It doesn't.

appplication · 3 years ago
I don’t think complexity is usually intentional though. Really, writing complex code is easy. It’s writing simple code that’s hard.

I’m in the process of wrapping up a refactor of one of our systems that I wrote a few years ago. Largely the refactor has been an exercise in just simplifying everything. The only reason I can effectively do this now is because we have a much better idea of our use case and how the entire system fits together. At the time, the complexity seemed necessary, though I’m not quite sure why. I think I just didn’t really understand what exactly this was really going to do when we were building it.

I think this probably applies to companies growing quickly as well. Beyond core product, there’s a lot of things you can do, but what benefit do they bring? How do they fit into the overall strategy? It’s easier to build something if you just carve off your own little area, ignore the rest of the system, and assume you’ll eventually understand better how to make it integrate nicely or add value.

theshrike79 · 3 years ago
Some people who advocate for huge monorepos, like Google has, tend to forget that Google has whole teams building tools just to wrangle the huge singular codebase for refactoring and testing.
bradhe · 3 years ago
Bingo. Monorepos work because tooling makes it work. Just putting all your code in one place doesn't make it a monorepo--just makes it a mess.
ramraj07 · 3 years ago
This sounds like you work at a place that’ll bungle any tech stack paradigm.
camgunz · 3 years ago
It sounds like this company doesn't have a competitive need to ship code changes, but when it does (i.e. a challenger appears or the moat disappears) it'll be trouble.

I've tried--with varying levels of success--to really take to heart that software engineering is a never ending battle against complexity. You have to do it all the time and it has to be your paramount value, otherwise stuff like this happens. I don't think there's an architecture or ideology that ends the war; this is just the nature of the job.

danmaz74 · 3 years ago
Having 13,000 feature flags is just insane. I'm so happy that at my previous job I insisted that we use "release" flags for every new feature where there wasn't an explicit requirement to be able to enable it for some customers and disable it for others. We made it so that you couldn't enable one of these release flags in production, and developers were required to remove the code for the flag before final tests and release. It requires a little more work for every feature which requires a flag, but it really pays off in terms of reduced codebase complexity in just a few months.
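A tiny sketch of that rule (flag names and the two-kind taxonomy are made up for illustration, not from the comment): short-lived "release" flags are refused in production by construction, while long-lived "customer" flags work everywhere.

```python
import os

# Hypothetical flag registry: "release" flags must be deleted before final
# release and can never be enabled in production; "customer" flags are
# long-lived per-customer toggles and are allowed anywhere.
FLAGS = {
    "new_billing_page": "release",
    "beta_reporting": "customer",
}

def flag_enabled(name, requested, env=None):
    """Resolve a flag, refusing release flags outside dev/staging."""
    env = env or os.environ.get("APP_ENV", "development")
    if FLAGS[name] == "release" and env == "production":
        return False  # release flags are dev/staging-only by construction
    return requested

print(flag_enabled("new_billing_page", True, env="staging"))     # True
print(flag_enabled("new_billing_page", True, env="production"))  # False
print(flag_enabled("beta_reporting", True, env="production"))    # True
```

Because a release flag can never reach production, shipping the feature forces you to delete the flag and its dead branch, which is exactly the cleanup pressure described above.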
paledot · 3 years ago
It is indeed insane, but there is some value to leaving coarse feature flags in production. I've seen outages resolved by flipping a feature flag that was accidentally left in production months after deployment, where without it we would've had to resort to more drastic or hackish measures to restore service. (Eg. If one query is overloading the database, you can disable the feature that runs the query and investigate the real problem at your leisure while the database stabilizes.)

That only works in moderation, though - with 13k flags, there's no way the particular combination that you're about to release directly on production has been properly tested.

deterministic · 3 years ago
That sounds crazy. Imagine how much worse it would be if it was a microservices system!
scrlk · 3 years ago
"Why is it so hard to display the birthday date on the settings page? Why can't we get it done this quarter?"

"Look...I'm sorry, we've been over this, it's the design of our backend."

https://www.youtube.com/watch?v=y8OnoxKotPQ

ris58h · 3 years ago
Totally agree. I don't see how microservices would solve this. Also, a monorepo doesn't mean that the service is a monolith.
aledalgrande · 3 years ago
OMG this sounds painful ;) both in processes and state of the codebase

I can ship something in a couple of hours on a monolith where 3000+ engs work and deploy every day, but to be fair that's not what I've seen in any other company I worked at.

PartiallyTyped · 3 years ago
Where do you work at, and, are you hiring?

Deleted Comment

andrewstuart · 3 years ago
Ugh that would make me hate software.
crabbone · 3 years ago
Sounds like DB2?
bradhe · 3 years ago
Sounds like DB2.
martypitt · 3 years ago
I think this is a great point, and shouldn't be just hand-waved away like you hit some bizarre edge case of Monoliths.

Monoliths can have crippling downsides -- just different flavours of downsides from Microservices. What you gain in network latency and DRYness, you can lose in Autonomy and Breadth of codebase.

Microservices vs Monoliths - just like everything else - is a question of tradeoffs, and making informed choices about when to apply them, and how to mitigate their downsides.

The slightly more nuanced point in the OP's article is that adopting any engineering practice and blindly following as though your identity is linked to it, is a bad move.

synack · 3 years ago
Microservices are a solution to a social problem, not a technical one.

A team of N engineers requires on the order of N² coordination paths. Large teams get mired in endless meetings, email, design reviews. Small teams are more effective, but struggle to maintain large systems.

Splitting a system into subsystems allows each team to focus on their piece of the puzzle while minimizing the amount of peer-to-peer coordination.

Yes, microservices add complexity and overhead, but this approach enables a large organization to build and iterate on large systems quickly.

brhsagain · 3 years ago
> Splitting a system into subsystems allows each team to focus on their piece of the puzzle while minimizing the amount of peer-to-peer coordination.

This does not happen at all. When you break a system into subsystems, all the previous connections that get remapped to new connections between subsystems still need to happen, in order to solve the fundamental problem that the system solves — except now instead of just making the connection directly, there has to be a "cross-functional" meeting between teams and a complicated communication layer between the systems. And if somehow you find a breakdown that requires minimal connections between subsystems, then those connections wouldn't have existed in the original system either, and the N² problem doesn't exist.

devoutsalsa · 3 years ago
It's all fun & games until product wants to add a feature that doesn't map cleanly to your microservices architecture. Then you end up hard-coding your services into a macrolith. Good times.
jedberg · 3 years ago
If that's the experience then you're doing services wrong. Each service should have its own datastore and a single API. The interface between services should be a single connection.

There should be maybe one meeting where the caller defines what they need the service to return to them.
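A minimal sketch of what that single-connection contract can look like (the service name, fields, and function below are hypothetical, purely to illustrate the shape agreed on in that one meeting):

```python
from dataclasses import dataclass

# Hypothetical contract agreed between caller and service: the caller
# sends a user id, the billing service returns a summary. Nothing else
# crosses the boundary, and the service's datastore stays private to it.

@dataclass(frozen=True)
class InvoiceSummary:
    user_id: str
    total_cents: int
    currency: str

def get_invoice_summary(user_id: str) -> InvoiceSummary:
    # In a real service this would query the service's own datastore;
    # here a stub stands in for it.
    return InvoiceSummary(user_id=user_id, total_cents=0, currency="USD")
```

The point is that the whole inter-team surface fits on one screen; everything behind it can change freely.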

threeseed · 3 years ago
> there has to be a "cross-functional" meeting between teams and a complicated communication layer between the systems

Of all the things wrong with micro-services this isn't one of them.

The "complicated" communication layer is always either REST/JSON or GRPC.

Both of which are simple, easy to debug, proven and require nothing more than a simple discussion over an API contract.

baobabKoodaa · 3 years ago
I've never seen teams organized around microservices like that. What I've seen, again and again, is one huge team where "everyone is responsible for all the microservices" (meaning, no-one is responsible for anything).

On a theory level I would agree with you - I've just never seen that happen in practice.

misja111 · 3 years ago
I'm not that much of a supporter of microservices, but my experience is the opposite: in every company I've worked for that used microservices, each team had their own set of microservices they were responsible for.
mirekrusin · 3 years ago
People seem to forget they can create separate directories in their codebase.

They solve "people problem" by converting trivial technical problem into complex distributed system problem.

Well, now you have a _Problem_.

mattacular · 3 years ago
The complexities of software development could be solved with this one weird trick - if only programmers remembered FOLDERS. What an absurd thing to suggest.
mmcnl · 3 years ago
Do teams in large organizations really allow other teams to randomly create folders? Also, I've rarely seen an internal code base that is easy to grasp. You can create your own microservice and set up an API in 1/5th of the time it takes to understand a foreign code base and make a small adjustment.
JackMorgan · 3 years ago
I'm honestly not even super convinced that small teams struggle to maintain large systems. I've been on a team that was only 7 good engineers maintaining a 3.5 million line project that had both a web UI and thick client. It supported 2 different databases and had a horizontally scalable job runner.

At one point it was 35 engineers, but layoffs took it down to 7, at which point we started to get a lot more done. There was just so much less time spent keeping everyone aligned. So many fewer meetings, sign-offs, reviews, plannings, retrospectives, management meetings, etc. Developers had a lot more agency, so they just got stuff done. Technical debt repayment became 50% of our time, as we easily knocked out features in the other half of the time. We kept ruthlessly cutting complexity, so it got faster to add new features.

I'm sure some projects just need more bodies, but I think there's an upper bound to how much complexity can be added to a system in a given unit of time. Adding developers over a threshold will result in the same amount of features per week, just everyone does a little less and spends a little more time on communication.

Repeat up to thousands of developers where adding a single field takes months.

tonyedgecombe · 3 years ago
>At one point it was 35 engineers, but layoffs took it down to 7, at which point we started to get a lot more done.

Years ago I did two back to back contracts for two different pharmaceutical companies. They were both about the same size but one had an IT group that was ten times the size of the other. You can guess which project was late and painful.

corethree · 3 years ago
Why do you have to split things into services?

How about moving things into different folders. Have you thought about that?

Why do you have to modularize it with a whole new repo, a whole new docker set up? Just use a folder bro.

atoav · 3 years ago
The problem microservices try (tried?) to solve isn't about namespaces, it is about too tight coupling between code. Whether that tightly coupled code sits in a subdirectory or in a different repo doesn't matter.

It can be beneficial to maintain well-defined interfaces at the boundaries between certain parts of your code. It can also produce a lot of work and add complexity. But beyond a certain scale, adding systemic boundaries and honoring them isn't something you should avoid.

Devs who do microservices just tend to go too far too early.

nroms · 3 years ago
> Just use a folder bro

This will be my new go-to response when discussing microservices

hot_gril · 3 years ago
Usually the most important designation of the separate system is that it uses a separate database. You only have immediate consistency within one service.
abhiyerra · 3 years ago
I found Django apps to be a good middle ground.
meowtimemania · 3 years ago
if you have a well modularized monolith, you can get best of both worlds.
croo · 3 years ago
That is a big if, one that can easily be broken by a new guy who doesn't know the rules, or if the pace is fast enough that you cannot review everything.

If the codebases are separate you can force the separation, not just ask nicely to keep the code well modularized.

jayd16 · 3 years ago
Not really. You can't really reduce the blast radius of crashes or bad deployments. You need to have the discipline of a good CI/CD instead of siloed but decoupled workflows.

Just keeping things neat doesn't go nearly as far as a separate process on separate machines. Monolith might be better but I don't think it's a situation where you can have it all.

synack · 3 years ago
Agreed! Software modules and libraries can achieve the same independence of subsystems without implying a network topology.
mmcnl · 3 years ago
Yes, only it takes a while to figure out the right level of abstraction for your organization. It's difficult to start right out of the box with the right type of modularization.
gscott · 3 years ago
Maybe a solution to an anti-social problem!

FT.com did a recorded seminar session on their microservices architecture, and one of the benefits they extolled is that if someone wanted to improve on a feature they could just make it all over again and replace the old microservice with a new one. No need to look at the last person's code, just blow it away like it never existed.

I gathered their site is actually a black box filled with hundreds of black boxes of microservices. All a mystery, they either work or they don't and if they don't they fail gracefully quickly.

https://www.youtube.com/watch?v=_qakAUjXiek

crabbone · 3 years ago
Microservices are not a solution to the problem you describe. Nothing in your problem description requires the micro part.

Microservices are about splitting the application into very fine-grained sub-applications. It's not about modularity in general, it's about making things as modular as possible (up to one function per service). That's why they are micro. Otherwise we'd just call them "services" and nobody would have any problem with that.

Deleted Comment

nijave · 3 years ago
>Splitting a system into subsystems allows each team to focus on their piece of the puzzle while minimizing the amount of peer-to-peer coordination.

Assuming coupling is reasonable. If you have a "distributed monolith", you still end up with all the meetings because every microservice change risks breaking interfaces other people are using.

In the context of coupling, I'd argue the same applies to monoliths. Multiple teams can successfully work on a monolith given an architecture where they're not constantly stepping on each other's toes (each team works mostly in their own modules/classes/packages).

mmcnl · 3 years ago
Exactly, I missed this completely in the article. In my experience, microservices attempt to solve organizational problems, not technical problems. There are technical downsides to microservices that may be outweighed by organizational benefits. With monoliths you might carry a large amount of hidden opportunity cost that never technically surfaces.

I do agree that this is less of a problem for startups so it doesn't make sense to start with a complex microservice architecture. But in large organizations, especially corporates, it absolutely makes sense.

vinay_ys · 3 years ago
Ummm, extremely large teams develop monolith (single binary) things called OSes or databases etc. The modularity for scaling developers cross-communication comes from.... modules! duh! Aka libraries.
hot_gril · 3 years ago
Yep, I work in a large org that used to be a monolith (which means single DB really). Was a mess for the reasons you'd expect. Even our subteam of 10 needs to split things up more.
dahwolf · 3 years ago
"Splitting a system into subsystems allows each team to focus on their piece of the puzzle while minimizing the amount of peer-to-peer coordination."

The coordination is still there because a microservice team does not live in a vacuum. They build services based on demand from other teams that typically build web apps, mobile apps, sometimes server-to-server.

Hence, the "independent" team now becomes a roadblock for higher order features.

DrScientist · 3 years ago
I get the point about creating boundaries to document the dependencies - however if your language supports packages and private keywords you can do that without having microservices.

And once you've split up your app into lots of independent microservices who owns the arrows between the microservice boxes? ( The actual app ).

politelemon · 3 years ago
Shouldn't it be n(n-1)/2 coordination? Assuming you mean communication channels?
jjgreen · 3 years ago
I'd assumed the OP meant O(n^2), and n(n-1)/2 = O(n^2)
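For concreteness, a quick Python check of how splitting shrinks the channel count (the team sizes below are arbitrary examples):

```python
from math import comb

def channels(n: int) -> int:
    """Pairwise communication channels among n engineers: n choose 2."""
    return comb(n, 2)  # == n * (n - 1) // 2

# Splitting one 30-person team into three 10-person teams collapses
# most pairwise channels into a handful of team-to-team interfaces.
whole_team = channels(30)                      # 435 channels
split_teams = 3 * channels(10) + channels(3)   # 135 within teams + 3 between
```

Either way the growth is quadratic, which is the point both formulas make.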
chalcolithic · 3 years ago
I wonder why people ever see it differently?
switch007 · 3 years ago
A common pattern I've seen is:

- Current CTO/VPs built/helped build original monolith

- Nobody wants to tell the CTO that their code is shit (and/or is from a different era and needs a complete overhaul), unrelated to the fact that it's a monolith. CTOs are too busy doing marketing/getting funding to make a decision on microservices vs monolith, so the newly-hired architects get to call the shots

- Everyone cheers on microservices because it fits within the story of a fast growing, serious, technical company and nobody wants to be that lone dissenting opinion/criticise the CTO.

Nobody is seriously and truthfully recommending microservices because they believe them to be the best trade off and superior choice. It's because they like their job, they like hiring people, and it fits within the narrative.

And it just so happens during the massive overhaul that you get to rewrite a ton of code and improve it, while just calling it a migration to microservices

So it's a way of not hurting the feelings of the CTO, going along with the crowd and a way of rewriting a ton of old bad code with an excuse supported by almost everyone

kirse · 3 years ago
Nobody wants to tell the CTO that their code is shit

It's a pattern because it's a factual inevitability. Whether you're an individual lead engineer or a CTO/Founder, eventually you always look back and conclude things could have been done better and watch in pleasure/horror as you reap the benefits/drawbacks of how you laid down patterns and processes that the team dutifully followed.

benmercerdev · 3 years ago
Even with good patterns and processes, the shifting landscape of requirements and priorities and business objectives can lead an application into the weeds.

Initially, you may think you're building a single, focused organ like a liver. A few months in you realize that liver needs to power a specialized toaster for English muffins. Eventually, your core offering becomes a conveyor belt that takes toasted rye bread to the buttering rack, and now you need the toaster that only fits two round English muffins per minute to produce 10,000 square pieces of rye per minute while still relying on the liver for power.

ConcernedCoder · 3 years ago
you just described the last 5 years of stumbleupon...
Attrecomet · 3 years ago
Wait, that's still around?
crabbone · 3 years ago
I worked there.
hnarayanan · 3 years ago
Oh my goodness, I feel like you’ve nailed it.
mojuba · 3 years ago
(God, how I love confirmation bias when it's my own bias.)

I've been saying this for years: the microservices insanity is just an excuse for mediocre engineers to be in demand. It is fueled by mediocrity, but it is also what keeps so many tech companies going.

There are simply not enough competent engineers who master UNIX and who can build beautiful minimalist systems like StackOverflow's or a bunch of others' mentioned in this article. Therefore microservices as a smoke screen for mediocrity is here to stay, it's not going away any time soon, especially considering that cloud providers like AWS promote themselves via all possible channels and encourage the decision makers to take that route anyway.

dan_mctree · 3 years ago
I don't know why but no one hires or listens to the architect who recommends sticking to safe monolithic-esque systems. If you're not talking cloud and microservices and the newest unproven frameworks, you're an old fogey who needs to get with the times. Even though those types of systems are rarely the most efficient, powerful or safe systems

I work for a company that builds software to be used internally by other business. We have like 200 people tops using it simultaneously with no usage spikes, a perfect environment for regular web servers as we'll never have unexpected scaling issues, yet everyone is dying to go to the cloud. Why? I think it's just because our management, programmers and even customers are convinced by cloud provider marketing that the cloud is cool

misja111 · 3 years ago
Architects that propose pragmatic and boring solutions are usually not hired. It's the emperor's new clothes, companies like the idea of an architect who comes with some revolutionary new concept that will lead to a breakthrough that finally will make everybody rich. If they don't understand the architect's idea, so much the better because then it must be really state of the art.
danjac · 3 years ago
While there are definitely developers who are enthusiastic about microservices for all the wrong reasons (e.g. it looks good on the resume) I think it's more about how companies deal with complexity.

Companies don't just ship their org chart, they ship all their dysfunction and historical baggage. A beautiful, well-architected platform from 5 years ago, with an efficient, thought-out data model and API and well-written and tested code, might be a total mess after 5 years of constant pivots from the CEO, last minute customer requests the sales team have pushed through, product managers doing their hydrant-meets-dog act of adding features nobody needed or asked for, and never enough people and time to do as good a job as the developers would like.

One day you wake up with a big pile of tech debt and fixing bugs and adding features takes way longer than it should, and microservices are that siren call that promises a solution that doesn't involve burning the whole thing to the ground and starting over.

tonyedgecombe · 3 years ago
>There are simply not enough competent engineers who master UNIX and who can build beautiful minimalist systems like StackOverflow's

StackOverflow is built on Windows not UNIX.

I know I am nitpicking here but there is a larger point. The technology choice is rarely the bottleneck.

danmaz74 · 3 years ago
In my experience, a mediocre engineer can build a decent monolith, especially if they're using an opinionated framework (like Ruby on Rails). On the other hand, building microservices is much more complex as there are more moving parts and more failure modes, so the mediocre engineer - and even many non mediocre ones - is much more likely to create problems.
Tade0 · 3 years ago
> I've been saying this for years, the microservices insanity it's just an excuse for mediocre engineers to be in demand.

Or just inexperienced ones who want to pad their CVs.

I think we failed to educate the current generation on what's important in this job.

PartiallyTyped · 3 years ago
The other senior on my team and I agree with you.

We are wasting 20+ people's time because we were forced to rush into something that resulted in 5-6 different microservices where a monolith written by 6 competent engineers over 6 months would have sufficed.

What you describe regarding microservices and AWS is also a thing internally...

cfeduke · 3 years ago
I feel this, working on a small team moving things to microservices. My primary problem is observability. It's become a huge chore figuring out what, exactly, is going wrong in production when something goes wrong. It's not enough to tail the logs of one distributed application; I need to tail the logs of several distributed applications whose messages are interspersed with one another. I suppose when we get some way to visualize these traces - tooling - it'll be okay. But, small team, limited human bandwidth, and we don't have this tooling in place yet.

The monolith, in contrast, had NewRelic integrated years ago. There were performance problems with this monolith which have been mostly solved through indexes and a couple of materialized views. Trivial to figure out what is going wrong. The code may be old and full of race conditions, but solving problems isn't difficult.

I dread dealing with multiple separate database instances each backing their own microservice when it comes time to upgrade those databases instances. I was hoping for a single database instance with multiple databases, but that particular architecture isn't on the menu. :\

jskrablin · 3 years ago
Take a look at the OTEL (OpenTelemetry) tooling and libraries. Or the Grafana stack/offering with Prometheus, Tempo and Loki. Centralized logging and service call/code execution tracing is not exactly new. It is often an afterthought... and then you get yourself into these kinds of unpleasant situations.

And since you didn't implement correct tooling from the start, your team is even smaller and more limited... because you have little to zero idea on what your services are up to.

As per db instances... you upgrade them one by one. Unless there's some really bad bugs present (security or otherwise) there's no rush in upgrading stuff just because.

antonvs · 3 years ago
Are you not using cloud? Because the cloud providers provide centralized logging, so all you need to do is pass a request id between services and include that id in log entries, and you can trace requests across services.
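A rough sketch of that pattern in Python (the header name and handler are illustrative; `X-Request-ID` is a common convention, not a requirement of any particular cloud):

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("svc")

# Common-convention header used to carry the id across service hops.
HEADER = "X-Request-ID"

def handle(headers: dict) -> dict:
    # Reuse the caller's id if present, otherwise start a new trace.
    request_id = headers.get(HEADER, str(uuid.uuid4()))
    log.info("request_id=%s handling request", request_id)
    # Return the headers to attach to every downstream call,
    # so the same id threads through all services' logs.
    return {HEADER: request_id}
```

Filtering the centralized logs on one request id then reconstructs the request's path across services.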
ConcernedCoder · 3 years ago
I find it weirdly therapeutic to read things like this, and reminisce about loudly proclaiming the same a decade ago while being shushed as a non-believer...

Please excuse me if the paradigm of "microservices" has left a bad taste in my mouth, but I have real-world experience with the repercussions of whole-heartedly embracing the latest tech du jour without completely understanding the tradeoffs...

Many years ago I was hired at StumbleUpon around the time the leading compsci doctor decided to take a working and profitable monolithic PHP app and turn it into a Scala/Java microservices architecture... in fact, part of the new-hire process was a weird one-on-one with said mad compscientist, where he extolled the many merits of microservices and skillfully dodged questions like "why would you build a distributed service that just adds a list of numbers?" with a bunch of "you wouldn't understand why it's so much better..." type hand-waving. Fast-forward through 30+ new hires and 4+ long years of intense development, and the no-longer-profitable company was left with a new slower, buggier, impossible-to-debug distributed hellscape... as the main designer/architect of it all decided it was a great time to take a "sabbatical"... it wasn't long after that the Nth round of investor money ran out and we were all looking for work.

CraigJPerry · 3 years ago
This article is off the rails (to borrow the author's Amtrak metaphor).

The author posits that if you make pile of crap microservices then all you had to do instead was make a monolith and magically it’ll all be fine.

You can put in the same kind of design and engineering work that results in a pile of crap microservices, but if you target a monolith instead then, apparently, you'll be golden.

The author enjoys stroking his own ego as he goes (the comment about full stack js devs for example). And yet, there’s very little here in terms of actual engineering. Want some measurements or some data to back up the waffle? Well the author says tough! You just get a diatribe instead.

Keeping the cost to change a system low by managing complexity is a fine goal, but that’s not what’s being proposed here. This article could have been better if it recognised this. This article could have been better if it gave some data - hell even anecdata, a single motivating example, would have been a start.

On my team, I’ll take on a bright enthusiastic front end dev who decided they want to spread their wings and grow into a full stack dev over someone who believes they already know everything and has no growing left to do.

bradhe · 3 years ago
Agreed, the dichotomy presented is so reductive that it makes you question the author's credibility. Any architecture that isn't well maintained will become crushing over time.

Ask me how I know.

ConcernedCoder · 3 years ago
upvoted, but honestly, the other side of that coin is:

"The author posits that if you make pile of crap monolith then all you had to do instead was make microservices and magically it’ll all be fine."