What an appallingly bad article. It starts with a premise backed only by an unsubstantiated and outright false appeal to authority ("the likes of Amazon are moving to monoliths!1") and proceeds to list a few traits so wrong they fall into "not even wrong" territory. For example, things like "incorrect boundary domains" and circular dependencies are hardly related to how distributed services are designed.
This nonsense reads like badly prompted machine-generated text.
> ("the likes of Amazon are moving to monoliths!1")
I've been at an Amazon-scale company, and the thing is: yes, such companies do use a service-oriented architecture... but they also split those services into microservices, because that means they can a) further optimise throughput/latency and b) delegate responsibilities (i.e. split teams when they get too large).
The throughput gains you can get when your software only does one thing are really incredible. FAANG-sized companies optimize everything: software, operating systems, hardware. And they can do that because their software is highly specialized. But most non-FAANG companies? They barely optimize the software, and they hardly consider optimizing the OS or the hardware at all.
Outside of the FAANGs, many companies split things into microservices mostly because they want to be trendy and keep up with whatever the latest craze is, and only secondarily to delegate responsibility and split teams.
I think most "microservices" could be a module or a library within a monolith. The boundary would be largely the same (API contracts), minus the operating overhead. Integration testing would cover the usual issues, and, needless to say, there would be fewer distributed-systems headaches.

Don't get me wrong, I'm not against microservices: it's just that the pattern is often overused, in my opinion.
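As a sketch of that point: the boundary can be expressed as an in-process contract first, and only promoted to a network service if there's a real reason. This is a hypothetical example; the `InvoiceService` name and shape are illustrative, not from the thread:

```python
from abc import ABC, abstractmethod

class InvoiceService(ABC):
    """The API contract: the same interface a microservice would expose
    over HTTP, expressed as an in-process module boundary."""

    @abstractmethod
    def total_due(self, customer_id: str) -> int:
        """Total outstanding amount in cents for a customer."""

class LocalInvoiceService(InvoiceService):
    """Monolith implementation: a plain function call, no network hop,
    no serialization, and integration tests can exercise it directly."""

    def __init__(self, invoices: dict[str, list[int]]):
        self._invoices = invoices

    def total_due(self, customer_id: str) -> int:
        return sum(self._invoices.get(customer_id, []))

svc: InvoiceService = LocalInvoiceService({"acme": [1500, 250]})
print(svc.total_due("acme"))  # 1750
```

A remote implementation of the same interface, backed by HTTP, could be dropped in later without touching callers; the contract is the boundary either way.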
I agree that microservices are vastly overused, and I would add that they are often misused.
If you can't set up a development environment without running a bunch of local microservices, then you are probably misusing the concept. The services are too tightly coupled to run independently, so they probably should not be separated.

All that does is slow everything down by introducing network requests where there shouldn't be any, imo.
It also leads to situations where layoffs leave behind services that are still running and mission-critical but no longer have an owner anywhere in the company.
Is data integrity strictly a microservices problem? I swear I've seen a ton of monoliths with denormalized databases where it's fairly confusing which column is correct, since there's a bunch of conflicting entries.
I'm a bit surprised that data migration is considered a problem for microservices. Is any "class" in the application allowed to create SQL statements? It seems like it'd be super easy to lift the DB functions into a microservice and replace the monolith's SQL calls with HTTP (?) calls.
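To illustrate the "lift and replace" idea (hypothetically; `SqlUserStore`, `HttpUserStore`, and the endpoint shape are made up): if every SQL statement already goes through a single data-access layer, swapping it for an HTTP-backed implementation is mechanical. If SQL is scattered across arbitrary classes, it isn't.

```python
import json
import urllib.request

class SqlUserStore:
    """Monolith version: talks to the database directly."""

    def __init__(self, conn):
        self._conn = conn

    def get_email(self, user_id: int) -> str:
        row = self._conn.execute(
            "SELECT email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0]

class HttpUserStore:
    """Microservice version: same method signature, but an HTTP call
    underneath (assumes a hypothetical GET /users/<id> endpoint)."""

    def __init__(self, base_url: str):
        self._base_url = base_url

    def get_email(self, user_id: int) -> str:
        with urllib.request.urlopen(f"{self._base_url}/users/{user_id}") as r:
            return json.load(r)["email"]
```

Callers depend only on `get_email`, so which store sits behind it is a deployment decision rather than a code change.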
I'm an independent developer right now, building systems for businesses, and there is literally no better way to deliver line-of-business internal applications than via a monolith.
Even the biggest corps still have only 20k employees or so; that's far from the limits of a single VM's capabilities.
> biggest corps still have only 20k employees or so
While I agree with your general statement, corps can easily have 100-500k people and still be a fairly unknown brand.
Employee count is not the same as user count. I worked for a not-so-big insurance company that had 1,500 employees, but the number of users of our systems (not clients! people using the apps all day) was around 50k.
One challenge in the systems I have to integrate with now is that, with 10k users, they saturate any network interface the corp throws at them, even though there are hundreds of separate instances (which don't have to sync with each other). And we are still talking about an on-prem web app on decent hardware. Not exactly a standard one, but still.
> I'm an independent developer right now, building systems for businesses and there is literally no better way to deliver line-of-business internal applications than via a monolith.
This is the same sort of myopic, naive, clueless take that led people armed with this blend of specious reasoning to dive head-first into microservices architectures without looking at what they were doing or thinking about the problems they were solving.
The main problems that microservices solve are a) organizational, b) resilience, c) scalability.
If you work on single-person "teams" maintaining something that is barely used, has no SLAs, and can be shut down for hours, then there's nothing preventing you from keeping all your eggs in a single basket.
If you work in a professional environment where distinct features are owned by separate teams, then you are way better off running separate services, and perhaps peeling shared responsibilities out into a separate support service. This is a fact.
But let's take it a step further. You want to provide a service, but some of the features are already provided by a separate service, either one from a third party or one whose project you can simply download and run as part of your deployment. Does this count as a microservices architecture to you, or is it a monolith?
Consider also that your client teams have a very specific set of requirements and they rolled out services to provide them. Is this a microservices architecture or a monolith?
Consider also that you start with a monolith and soon notice that some endpoints trigger workflows so computationally demanding that they cause brownouts, and to mitigate that, these are peeled out of the monolith into dedicated services to help manage load. Is this a monolith or microservices?
Consider that you run a monolith and suddenly have a new set of requirements that forces a major rewrite. You start off with a clone of the original monolith and gradually change its functionality, and to avoid regressions you deploy both instances and route all traffic through an API gateway so you can dial traffic up to the new version gradually. Is this microservices or a monolith?
The main problem with these vacuous complaints about monoliths is that they start from a place of clueless buzzwords, without understanding what they are talking about or what problems are being addressed and solved. This blend of specious reasoning invariably jumps from one absolutism to another. And they are always wrong.

I mean, if problems are framed in terms of fashion tips, how can they possibly be right?
> If you work on single-person "teams" maintaining something that is barely used and does not even have SLAs and can be shut down for hours then there's nothing preventing you from keeping all your eggs into a single basket.
There's a whole spectrum between that and "needs to go down for less than a minute per year". For every project/job/app that needs AWS levels of resilience and availability, there are maybe a few hundred thousand that don't, and none of those are the "barely used, down for hours" type of thing either.
Having been a developer since the mid-90s, I am always fascinated by the thought that computer, server and/or network resilience is something that humanity only discovered in the last 15 years.
The global network handling payments and transactions has run for 30-odd years at millions of transactions per second, globally, and it was resilient enough to support that without noticeable or expensive downtime.
I keep hearing that monoliths are returning, without any indication of what has improved to make big single apps more manageable. All the same problems are still there, so this seems like just a fashion trend, tbh.
Yes, if you have some small SaaS app, don't go crazy with tiny interdependent services prematurely. But if/as you grow in team size and functionality, get out of here with your "majestic monoliths".
It's just the pendulum swinging back the other way. Nothing has changed; we just have a generation of engineers who grew up buying into the microservices-as-panacea narrative, who are now more mature and are teaching the next generation not to make the mistakes they did, without the awareness that that's exactly what the generation before them was trying to do for them.
Both approaches are valid; there is no silver bullet. Critical thinking will always be necessary to choose the right approach, though perhaps an AI will be the one doing that for the next next generation.
Package management has become a turnkey solution even in large enterprises. I know Microsoft is just a myth and doesn't exist, but Azure DevOps has a built-in artefact repository that plugs directly into Visual Studio with MFA+SSO authentication for secure private access. It Just Works, making it much easier to deploy a monolith where the versioning and rollback of individual components can be independently managed in a sane way.
Similarly, as long as deployment pipelines are fully automated and run fast enough, it doesn't really matter when different teams deploy their module changes.
Just increment or decrement a version number in some package.json type file and press play on the pipeline!
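For illustration, such a pin on an internally published package might look like this hypothetical package.json fragment (package and project names made up):

```json
{
  "name": "billing-frontend",
  "dependencies": {
    "@corp/auth-client": "2.4.1"
  }
}
```

Rolling a component forward or back is then just changing "2.4.1" and re-running the pipeline.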
This works well enough up to the scale of a few hundred engineers.
Microservices start to make a lot of sense past a thousand developers, especially when individual services start to require the equivalent capacity of a large number of servers.
According to him, startups work better in monorepos. Microservices work better for medium to large size organizations. Monorepos again for Google size mega orgs.
In the end, it's a dumb debate, because it's a tooling problem, not a code-organization problem. It doesn't matter which you pick; your success or failure with either organization pattern depends entirely on how much you invest in tooling, and whether that is microservices tooling or monorepo tooling matters not. Any time you see someone complaining about one or the other and you ask them why, the answer can always be resolved with better tooling.
Even what's referred to as a "monolith" is usually implemented as multiple services. Most Django and Rails apps have at least a database on the backend and an http server/reverse proxy on the frontend. And there are good, logical reasons why 3 tiers works - the data, logic, and network resources all have different operational and development cycles.
But 4 or 5 or 10 tiers might be just as appropriate for your problem domain. A data boundary may force a split. A different language or different hardware required (GPU) might force a split. Split services off for an actual reason, not because of fashion trends! It's not rocket science but it does require critical thinking.
From my perspective, the Internet is strung together by a bunch of micro services - DNS here, email there, etc. etc. And that seems to work. They even seem to work together for the most part. I'm surprised that there have been all these problems with micro services, have they been overdoing the "micro" part?
One person tells me that your average product team in a company should work on exactly one microservice. The next person shows me a similarly sized team working on dozens of microservices. I don't know what's right.
> From my perspective, the Internet is strung together by a bunch of micro services - DNS here, email there, etc. etc. And that seems to work.
But that was true long before the term "microservice" was invented.

And, arguably, a pet server running a full-on DNS server, logging locally rather than to stdout/stderr, and using its own DB isn't a microservice, while another DNS server that is containerized, uses standard logging, and keeps its DB outside the container is a microservice.

So yeah, it's all disputable, but to me the Internet was working just fine on pet servers years, if not decades, before microservices communicating through APIs were invented.
Many microservices have no reason to exist in the first place; they are a solution looking for a problem. Many folks never mastered modular programming, or love monkey patching, so a network layer or OS IPC gets placed between the modules instead.

However, those same folks usually never learned about distributed computing either, so now their spaghetti code runs over OS IPC or network calls, with all the "fun" that is debugging distributed systems.
They are also a way to sell cloud business and consulting hours (I have myself to blame here as well), so if that is what folks want, that is what they get.
The database is denormalized. The developers are demoralized.
This stuff isn’t rocket science
Those engineers are probably remembering a time when microservices were The Way.
Any given application—monolith or otherwise—may make use of a number of different small or dedicated services like these.
But "microservices" refers to a specific architecture for solving, generally, business/domain-layer problems.