threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
procaryote · 8 hours ago
If you've run a microservice stack, or N of them, at scale with good results, someone saying it's impossible doesn't look pragmatic
threethirtytwo · 6 hours ago
I’m not commenting on the pragmatic part.

My thesis is logical and derived from axioms. You will have fundamental incompatibilities between services' APIs if one service changes its API. That's a given. It's 1 + 1 = 2.

Now, I agree there are plenty of ways to successfully deal with these problems, like API backwards compatibility, coordinated deploys, etc., and it's a given that thousands of companies have done this successfully. This is the pragmatic part, but that's not ultimately my argument.

My argument is that none of the pragmatisms and methodologies for dealing with those issues need to exist in a monolithic architecture, because the problem itself doesn't exist in a monolith.

Nowhere did I say microservices can't be successfully deployed. I only stated that there are fundamental issues with microservices that must, by definition, occur. The issue is that people are biased. They tie their identity to an architecture because they advocated it for too long. The funniest thing is that I didn't even take a side. I never said microservices were better or worse. I was only talking about one fundamental problem with microservices. There are many ways in which microservices are better, but I just didn't happen to bring them up. A lot of people got defensive, hence the karma.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
procaryote · 8 hours ago
> or they had other requirements that necessitated microservices

Scale

Both in people, and in "how do we make this service handle the load". A monolith is easy if you have few developers and not a lot of load.

With more developers it gets hard, as they start affecting each other across the monolith.

With more load it gets difficult, as the usage profile of a backend server becomes very varied and performance issues become hard to even find. What looks like a performance loss in one area might just be another unrelated part of the monolith eating your resources.

threethirtytwo · 7 hours ago
Exactly, performance can make it necessary to move away from a monolith.

But everyone should know that microservices are more complex systems, harder to deal with, and come with a bunch of safety and correctness issues as well.

The problem is that not many people know this. Some people think going to microservices makes your code better, when, as I'm saying here, you give up safety and correctness as a result.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
Seattle3503 · 8 hours ago
> Yeah. that's a bad thing right? Maintaining backward compatibility to the end of time in the name of safety.

This is what I don't get about some comments in this thread. Choosing internal backwards compatibility for services managed by a team of three engineers doesn't make a lot of sense to me. You (should) have the organizational agility to make big changes quickly; not a lot of consensus building should be required.

For the S3 APIs? Sure, maintaining backwards compatibility on those makes sense.

threethirtytwo · 7 hours ago
Backwards compatibility is for customers. If customers don't want to change APIs… you provide backwards compatibility as a service.

If you're using backwards compatibility as a safety mechanism, and it prevents you from making a desired upgrade to an API, that's an entirely different thing. That is backwards compatibility as a restriction, and a weakness in the overall paradigm, while the other is backwards compatibility as a feature. Completely orthogonal concerns, imo.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
procaryote · 8 hours ago
You usually can't simultaneously deploy two services. You can try, but in a non-trivial environment there are multiple machines and you'll want a rolling upgrade, which causes an old client to talk to a new service or vice versa. Putting the code into a monorepo does nothing to fix this.

This is much less of a problem than it seems.

You can use a serialisation format that allows easy backward compatible additions. The new service that has a new feature adds a field for it. The old client, responsibly coded, gracefully ignores the field it doesn't understand.
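
For instance, a minimal sketch in TypeScript, assuming a JSON-over-HTTP API (the type and endpoint are made up):

    // Old client, compiled against the v1 shape only.
    interface UserV1 {
      id: string;
      name: string;
    }

    async function fetchUser(id: string): Promise<UserV1> {
      const res = await fetch(`/users/${id}`);
      // The newer service may also send fields like `avatarUrl`;
      // this client parses them but simply never looks at them.
      const body = (await res.json()) as UserV1;
      return { id: body.id, name: body.name };
    }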

You can version the API to allow for breaking changes, and serve old clients old responses, and new clients newer responses. This is a bit of work to start and sometimes overkill, given the first point.
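
A rough sketch of the idea (the v1/v2 shapes and fields are illustrative, not any particular API):

    // One service, two response shapes: /v1 keeps the old contract,
    // /v2 is free to change it.
    type UserV1 = { id: string; name: string };
    type UserV2 = { id: string; fullName: string; createdAt: string };

    function render(
      path: string,
      u: { id: string; name: string; created: Date }
    ): UserV1 | UserV2 {
      if (path.startsWith("/v2/")) {
        return { id: u.id, fullName: u.name, createdAt: u.created.toISOString() };
      }
      // Old clients keep receiving the shape they were built against.
      return { id: u.id, name: u.name };
    }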

If you only need very rare breaking changes, you can deploy new-version-tolerant clients first, then when that's fully done, deploy the new-version service. It's a bit of faff, but if it's very rare and internal, it's often easier than implementing full versioning.
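
Sketched in TypeScript, assuming a hypothetical rename of `name` to `fullName`:

    // Step 1: roll out a client that tolerates both the old and the
    // new field name. Step 2: once every client runs this, roll out
    // the service that makes the breaking rename.
    type OldUser = { id: string; name: string };
    type NewUser = { id: string; fullName: string };

    function displayName(u: OldUser | NewUser): string {
      return "fullName" in u ? u.fullName : u.name;
    }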

threethirtytwo · 7 hours ago
> You usually can't simultaneously deploy two services

Yeah, it's a roundabout solution to create something that deploys two things simultaneously. Agreed.

> Putting the code into a monorepo does nothing to fix this.

It helps mitigate the issue somewhat. If it were a polyrepo, you'd suffer from an identical problem with the type checker or the integration tests. The checkers basically need all services to be at the same version to do a full and valid check, so if you have different teams and different repos, the checkers will never know if team A made a breaking change that will affect team B, because the integration tests and type checker can't stretch to another repo. Even if they could stretch to another repo, you would need to do a "simultaneous" merge… in a sense, polyrepos suffer from the same issue as microservices at the CI verification layer.
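
To illustrate what the monorepo buys you, here's a toy TypeScript example (the paths and names are hypothetical):

    // shared/types.ts -- the one version of the contract everyone sees
    export interface OrderEvent {
      orderId: string;
      amountCents: number; // if team A renames this to `amount`...
    }

    // service-b/consumer.ts
    import { OrderEvent } from "../shared/types";

    export function handleOrder(e: OrderEvent): number {
      // ...tsc fails right here, in the same CI run, because both
      // services are checked against the same commit.
      return e.amountCents / 100;
    }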

So if you have microservices and polyrepos, you are suffering from a twofold problem. Your static checks and integration tests are never fully correct: they're either always failing and preventing you from merging, or deliberately crippled so as to not validate things across repos. At the same time, your deploys are also guaranteed to break if a breaking API change is made. You literally give up safety in testing, safety in type checking, and working deploys by going with microservices and polyrepos.

Like you said, it can be fixed with backward compatibility, but restricting your code that way is a bad thing.

> This is much less of a problem than it seems.

It is not "much less of a problem than it seems", because big companies have developed methods to do simultaneous deploys. See Netflix. If they took the time to develop a solution, it means it's not a trivial issue.

Additionally, are you aware of any API issues in communication between functions within your local code in a single app? Do you have any problems there that you have to be aware of and devise ways to deal with? No. In a monolith the problem is nonexistent; it doesn't even register. You are not aware this problem exists until you move to microservices. That's the difference here.

> You can use a serialisation format that allows easy backward compatible additions.

Mentioned a dozen times in this thread: backwards compatibility is a bad thing. It's a restriction that freezes all technical debt into your code. Imagine if Python 3 had stayed backward compatible with 2, or if the current version of macOS were still compatible with binaries from the first Mac.

> You can version the API to allow for breaking changes, and serve old clients old responses, and new clients newer responses. This is a bit of work to start and sometimes overkill, given the first point.

Can you honestly tell me this is a good thing? The fact that you have to pay attention to this in microservices, while in a monolith you don't even need to be aware there's an issue, tells you all you need to know. You're just coming up with behavioral workarounds and coping mechanisms to make microservices work in this area. You're right, it does work. But it's a worse solution to this problem than a monolith, which doesn't need these workarounds, because these problems don't exist in monoliths.

> If you only need very rare breaking changes, you can deploy new-version-tolerant clients first, then when that's fully done, deploy the new-version service. It's a bit of faff, but if it's very rare and internal, it's often easier than implementing full versioning.

It's only very rare in microservices because the architecture is weaker: you deliberately make breaking changes rare because of this problem. Is it rare to change a type in a monolith? No, it happens regularly. See the problem? You may not realize it, but everything you're bringing up is a behavioral action to cope with an aspect that is fundamentally weaker in microservices.

Let me conclude by saying there are many reasons why microservices are picked over monoliths. But what we are talking about here is definitively worse: once you go microservices, you give up safety and correctness and replace them with workarounds. There is no trade-off for this problem; it is a logical consequence of using microservices.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
kccqzy · 15 hours ago
> The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate.

This is correct.

> Neither of these scenarios is practical.

This is not. When you choose appropriate tools (protobuf being an example), it is extremely easy to make a non-breaking change to the communication channel, and it is also extremely easy to prevent breaking changes from ever being made.

threethirtytwo · 14 hours ago
I don't agree.

Protobuf works best if you have a monorepo. If each of your services lives within its own repo, then an upgrade to one repo can be merged onto the main branch in a way that potentially breaks things in other repos. Protobuf cannot check for this.

Second, the other safety check protobuf relies on is backwards compatibility. But that's an arbitrary restriction, right? It's better to not even have to worry about backwards compatibility at all than it is to maintain it.

Categorically these problems don't even exist in the monolith world. I'm not taking a side in the monolith vs. microservices debate. All I'm saying is for this aspect monoliths are categorically better.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
mjr00 · 15 hours ago
> The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate. Neither of these scenarios is practical.

IMO the fundamental point of disagreement here is that you believe it is effectively impossible to evolve APIs without breaking changes.

I don't know what to tell you other than, I've seen it happen, at scale, in multiple organizations.

I can't say that EC2 will never make a breaking change that causes RDS, Lambda, or auto-scaling to break, but if they do, it'll be front page news.

threethirtytwo · 15 hours ago
>IMO the fundamental point of disagreement here is that you believe it is effectively impossible to evolve APIs without breaking changes.

No, it's certainly possible. You can evolve Linux, macOS, and Windows forever without any breaking changes and keep all APIs backward compatible for all time. Keep going forever and ever. But you see there's a huge downside to this, right? This downside becomes more and more magnified as time goes on. In the early stages it's fine, and it's not like this growing problem will stop everything in its tracks. I've seen organizations hobble along with tech debt that keeps increasing for decades.

The downside won't kill an organization. I'm just saying there is a way that is better.

>I don't know what to tell you other than, I've seen it happen, at scale, in multiple organizations.

I have as well. I'm not saying it doesn't work or can't be done. For example, TypeScript is better than JavaScript, but you can still build a huge organization around JavaScript. What I'm saying is that one is intrinsically better than the other, but that doesn't mean you can't build something on a technology or architecture that is inferior.

And I also want to say that I'm not claiming monoliths are better than microservices. I'm saying that for this one aspect, monoliths are definitively better. There is no tradeoff for this aspect of the debate.

>I can't say that EC2 will never make a breaking change that causes RDS, Lambda, or auto-scaling to break, but if they do, it'll be front page news.

Didn't a breakage happen recently? Barring that… there are behavioral ways to mitigate this, right? Like what you mentioned: backward-compatible APIs, always. But it's better to set up your system such that the problem just doesn't exist, period, rather than to set up ways to deal with the problem.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
ricardobeat · 17 hours ago
I believe that in the original Amazon service architecture, which grew into AWS (see the "Bezos API mandate" from 2002), backwards compatibility is expected for all service APIs. You treat internal services as if they were external.

That means consumers can keep using old API versions (and their types) with a very long deprecation window. This results in loose coupling. Most companies doing microservices do not operate like this, which leads to these lockstep issues.

threethirtytwo · 15 hours ago
Yeah. that's a bad thing right? Maintaining backward compatibility to the end of time in the name of safety.

I'm not saying monoliths are better than microservices.

I'm saying for THIS specific issue, you will not even need to think about API compatibility with monoliths. It's a concept you can throw out the window, because type checkers and integration tests catch this FOR YOU automatically, and the single deployment ensures that compatibility will never break.

If you choose a monolith you are CHOOSING this convenience; if you choose microservices you are CHOOSING the possibility for things to break. AWS chose this, and chose to introduce a backwards-compatibility restriction to deal with the problem.

I use "choose" loosely here. More likely the AWS people just didn't think about this problem at the time (it's not obvious), or they had other requirements that necessitated microservices. The point is, this problem is in essence a logical consequence of the choice.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
kccqzy · 17 hours ago
> That shared type will break at the communication channels if you do not simultaneously deploy the two services.

No. Your shared type is too brittle to be used in microservices. Tools like the venerable protobuf solved this problem decades ago. You have a foundational wire format that does not change. Then you have a schema layer that can change in backwards-compatible ways. Every new addition is optional.
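
Roughly, mirrored in TypeScript (JSON standing in for the wire format; the message type is made up):

    // The schema grows only by optional additions, so old messages
    // still decode; missing fields fall back to defaults, the way
    // proto3 falls back to zero values.
    interface PaymentV2 {
      id: string;
      cents: number;
      currency?: string; // added later, optional
    }

    function decodePayment(raw: string): Required<PaymentV2> {
      const msg = JSON.parse(raw) as PaymentV2;
      return { ...msg, currency: msg.currency ?? "USD" };
    }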

Here’s an analogy. Forget microservices. Suppose you have a monolithic app and a SQL database. The situation is just like when you change the schema of the SQL database: of course you have application code that correctly deals with both the previous schema and the new schema during the ALTER TABLE. And the foundational wire format that you use to talk to the SQL database does not change. It’s at a layer below the schema.
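
A sketch of what that application code looks like during the migration window (column names hypothetical):

    // A row may or may not have the new column yet, so the
    // application reads both shapes.
    type UserRow = { email: string; contact_email?: string | null };

    function contactEmail(row: UserRow): string {
      // Prefer the new column once the backfill has reached this row.
      return row.contact_email ?? row.email;
    }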

This is entirely a solved problem. If you think this is a fundamental problem of microservices, then you do not grok microservices. If you think having microservices means simultaneous deployments, you also do not grok microservices.

threethirtytwo · 15 hours ago
False. Protobuf solves nothing.

1. Protobuf requires a monorepo to work correctly. Shared types must be checked across all repos and services simultaneously. Without a monorepo, or some crazy workaround mechanism, this won't work. Think about it: these type checkers need everything at the same version to correctly check everything.

2. Even with a monorepo, deployment is a problem. Unless you do simultaneous deploys, if one team upgrades their service and another team doesn't, the shared type is incompatible, simply because you used microservices and polyrepos to let teams move asynchronously instead of in sync. It's a race condition in distributed systems, and it's true as a theorem. Not solved at all, because by logic and math it can't be solved.

Just kidding. It can be solved, but you're going to have to change the definitions of your axioms, aka what currently counts as a microservice, monolith, monorepo, and polyrepo. If you allow simultaneous deploys or pushes to microservices and polyrepos, these problems can be solved, but then can you still call those things microservices or polyrepos? They look more like monorepos or monoliths… hmm, maybe I'll call it a "distributed monolith"… see, we are hitting this problem already.

>Here’s an analogy. Suppose you have a monolithic app and a SQL database. The situation is just like when you change the schema of the SQL database: of course you have application code that correctly deals with the previous schema and the new schema during the ALTER TABLE. And the foundational wire format that you use to talk to the SQL database does not change. It’s at a layer below the schema.

You are just describing the problem I presented. We call "monoliths" monoliths, but technically a monolith must interact with a secondary service called a database; we have no choice in the matter. The monolith vs. microservices debate of course does not refer to that boundary, which SUFFERS from all the same problems as microservices.

>This is entirely a solved problem. If you think this is a fundamental problem of microservices, then you do not grok microservices. If you think having microservices means simultaneous deployments, you also do not grok microservices.

No, it's not. Not at all. It's a problem that's lived with. Say I have two modules in a monolith. ANY change that goes into the mainline branch or a deploy is type checked and integration tested for maximum safety, because integration tests and type checkers can check the two modules simultaneously.

Now imagine those two modules as microservices. Because they can be deployed at any time asynchronously, and merged to the mainline branch at any time asynchronously, they cannot be type checked or integration tested together. Why? If I upgrade A, which requires an upgrade to B, but B is not upgraded yet, how do I type check both A and B at the same time? Axiomatically impossible. Nothing is solved; there are just behavioral coping mechanisms to deal with the issue. That's the key phrase: behavioral coping mechanisms, as opposed to automated, statically checked safety based on mathematical proof. Most of the arguments from your side will consist of this: "behavioral coping mechanisms".

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
mjr00 · 16 hours ago
> If you communicate with one another you are serializing and deserializing a shared type.

Yes, this is absolutely correct. The objects you send over the wire are part of an API which forms a contract the server implementing the API is expected to provide. If the API changes in a way which is not backwards compatible, this will break things.

> That shared type will break at the communication channels if you do not simultaneously deploy the two services.

This is only true if you change the shared type in a way which is not backwards compatible. One of the major tenets of services is that you must not introduce backwards incompatible changes. If you want to make a fundamental change, the process isn't "change APIv1 to APIv2", it's "deploy APIv2 alongside APIv1, mark APIv1 as deprecated, migrate clients to APIv2, remove APIv1 when there's no usage."
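
A minimal sketch of that process, e.g. with Express (the routes, header usage, and shapes are illustrative):

    import express from "express";

    const app = express();

    // v1 stays up but tells consumers it is going away.
    app.get("/v1/users/:id", (req, res) => {
      res.set("Deprecation", "true"); // or advertise a Sunset date
      res.json({ id: req.params.id, name: "Ada" });
    });

    // v2 is free to break the shape.
    app.get("/v2/users/:id", (req, res) => {
      res.json({ id: req.params.id, fullName: "Ada Lovelace" });
    });

    app.listen(3000);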

This may seem arduous, but the reality is that most monoliths already deal with this limitation! Don't believe me? Think about a typical n-tier architecture with a backend that talks to a database; how do you do a naive, simple rename of a database column in e.g. MySQL in a zero-downtime manner? You can't. You need to have some strategy for dealing with the backwards incompatibility which exists when your code and your database do not match. The strategy might be a simple add new column->migrate code->remove old column, including some thought on how to deal with data added in the interim. It might be to use views. It might be some insane strategy of duplicating the full stack, using change data capture to catch changes and flipping a switch.[0] It doesn't really matter, the point is that even within a monolith, you have two separate services, a database and a backend server, and you cannot deploy them truly simultaneously, so you need to have some strategy for dealing with that; or more generally, you need to be conscious of breaking API changes, in exactly the same way you would with independent services.
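
For example, the add-new-column strategy sketched with node-postgres (table and column names hypothetical; in reality each step lands in a separate deploy, with code changes in between):

    import { Client } from "pg";

    // Expand -> migrate -> contract, so the code and the schema
    // never disagree at any single moment.
    async function renameNameToFullName(db: Client): Promise<void> {
      // 1. Expand: add the new column; current code keeps using `name`.
      await db.query("ALTER TABLE users ADD COLUMN full_name TEXT");
      // 2. Deploy code that writes both columns, then backfill old rows.
      await db.query("UPDATE users SET full_name = name WHERE full_name IS NULL");
      // 3. Contract: once nothing reads `name` anymore, drop it.
      await db.query("ALTER TABLE users DROP COLUMN name");
    }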

> The logical outcome of this is virtually identical to a distributed monolith.

Having seen the logical outcome of this at AWS, Hootsuite, Splunk, among others: no this isn't true at all really. e.g. The RDS team operated services independently of the EC2 team, despite calling out to EC2 in the backend; in no way was it a distributed monolith.

[0] I have seen this done. It was as crazy as it sounds.

threethirtytwo · 15 hours ago
>This is only true if you change the shared type in a way which is not backwards compatible. One of the major tenets of services is that you must not introduce backwards incompatible changes. If you want to make a fundamental change, the process isn't "change APIv1 to APIv2", it's "deploy APIv2 alongside APIv1, mark APIv1 as deprecated, migrate clients to APIv2, remove APIv1 when there's no usage."

Agreed, and this is a negative. Backwards compatibility is a restriction made to deal with something fundamentally broken.

Additionally, in any system of services you will eventually have to make a breaking change. Backwards compatibility is a behavioral coping mechanism to deal with a fundamental issue of microservices.

>This may seem arduous, but the reality is that most monoliths already deal with this limitation! Don't believe me? Think about a typical n-tier architecture with a backend that talks to a database; how do you do a naive, simple rename of a database column in e.g. MySQL in a zero-downtime manner? You can't. You need to have some strategy for dealing with the backwards incompatibility.

I believe you, and I'm already aware. It's a limitation that exists intrinsically: you have NO choice, since a database and a monolith need to exist as separate services. The thing I'm addressing here is the microservices vs. monolith debate. If you choose microservices, you are CHOOSING for this additional problem to exist. If you choose a monolith, then within that monolith you are CHOOSING for those problems to not exist.

I am saying that regardless of the other issues with either architecture, this one is an invariant: for this specific thing, the monolith is categorically better.

>Having seen the logical outcome of this at AWS, Hootsuite, Splunk, among others: no this isn't true at all really. e.g. The RDS team operated services independently of the EC2 team, despite calling out to EC2 in the backend; in no way was it a distributed monolith.

No, you're categorically wrong. If they did this in ANY of the companies you worked at, then they are LIVING with this issue. What I'm saying here isn't an opinion. It is a theorem-derived consequence that will occur IF the axioms are satisfied: namely, two or more services that communicate with each other and are NOT deployed simultaneously. This is logic.

The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate. Neither of these scenarios is practical.

threethirtytwo commented on Why Twilio Segment moved from microservices back to a monolith   twilio.com/en-us/blog/dev... · Posted by u/birdculture
wowohwow · 17 hours ago
Bingo. Couldn't agree more. The other posters in this comment chain seem to view things from a dogmatic approach vs a pragmatic approach. It's important to do both, but individuals should call out when they are discussing something that is practiced vs preached.
threethirtytwo · 17 hours ago
Agreed. What I'm describing here isn't solely pragmatic; it's axiomatic as well. If you model this as a distributed system with a graph, all microservices by definition will always reach a state where the APIs are broken.

Most microservice companies either live with that fact or have roundabout ways to deal with it, including simultaneous deploys across multiple services, simultaneous merging, and CI and type checking across different repos.
