bob1029 · 4 years ago
I have done the full theme park ride on monolith->microservices->monolith. Both have ups and downs.

The most important thing I learned is that "microservices" in absolutely no way necessitates "bullshit spread across multiple cloud vendors and other scenarios involving more than 1 computer". What part of microservices says things must be separated by way of an arbitrary wire protocol?

We now have a "monolith" (process) that is composed of many "microservices" (class files), each of which is responsible for its own database, business logic, types, etc. These services are able to communicate amongst themselves using the simple path of direct method invocation.
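
To make that concrete, here is a minimal Go-flavoured sketch (the language and the Billing/Order service names are my invention, not bob1029's actual stack): two "services" living in one process, each owning its own state, talking by plain method calls instead of a wire protocol.

    package main

    import "fmt"

    // BillingService owns its own data and logic.
    type BillingService struct {
        invoices map[string]float64
    }

    func (b *BillingService) Charge(customerID string, amount float64) {
        b.invoices[customerID] += amount
    }

    // OrderService depends on BillingService via a plain method call,
    // not an arbitrary wire protocol.
    type OrderService struct {
        billing *BillingService
    }

    func (o *OrderService) PlaceOrder(customerID string, total float64) {
        o.billing.Charge(customerID, total) // in-process "service call"
        fmt.Println("order placed for", customerID)
    }

    func main() {
        billing := &BillingService{invoices: map[string]float64{}}
        orders := &OrderService{billing: billing}
        orders.PlaceOrder("cust-42", 19.99)
    }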

If you are looking to scale up your capacity or even make things more resilient, microservices vs monolith is really not the conversation you need to be having. If you are trying to better organize your codebase, you might be in the right place but you also need to be super careful with how you proceed.

We wasted a solid 3 years trying to make microservices solve all of our problems. Looking back, we spent more time worrying about trying to put Humpty Dumpty back together again with JSON APIs and other technological duct tape than what our customers were complaining about.

rapind · 4 years ago
Pretty sure microservices will be remembered as a horribly convoluted stopgap once we have better language modularization.

They encourage modular patterns, which is usually good, but all that plumbing will eventually become unnecessary. I can’t help but remember building 6 versions of each class in the old EJB days whenever I hear microservices hype.

steve_taylor · 4 years ago
OSGi was supposed to solve this problem. You could start by running all bundles together in the same JVM, then split them across a cluster later if required. But it never quite took off beyond being an implementation detail of various application servers.
beefsack · 4 years ago
Even with language modularisation you lose the ability to provision modules independently if they are bundled together in a single service.

Microservices and monoliths are opposing extremes on a scale. There is a range somewhere in the middle which is entirely sensible.

I feel like I've found my greatest success with my service abstractions by separating things out based on the resource and availability needs of the functionality.

throwaway1777 · 4 years ago
Example of better language modularization? Who is working on this?
withinboredom · 4 years ago
Currently at my day job, we have a monolithic code base with multiple entry points. If you update a library, you have to update the client or keep the changes backwards compatible. I really prefer this over multiple micro services.
ruffrey · 4 years ago
I’ve done monoliths and microservices. Having a monolithic codebase with many entry points is my preferred approach.
blakehaswell · 4 years ago
What exactly do you mean by multiple entry points? Do you have multiple processes which run independently but are co-located in the same repository or are you talking about something else?
chmod600 · 4 years ago
"What part of microservices says things must be separated by way of an arbitrary wire protocol?"

That's not a requirement, cf. Erlang/OTP.

But to qualify as a microservice, you need it decoupled enough that you can independently deploy and/or restart individual services. Pretty sure that's a minimum to call yourself microservice-based, and hard to do by just using classes in C++ or something.

Arguably you should also have quite a bit of control over dependencies and their versions. If your service isn't ready for python3, it can keep using python2, or whatever. Taking this to the extreme would generally mean a black box that serializes everything over a network, so that any service can be implemented on any stack. I'm not sure that's fundamental to the idea of microservices, though.

brundolf · 4 years ago
You could even have a mono-repo that holds services which run as separate processes, if you need that. That way you still get the benefits of sharing code, types, version numbers, build processes, etc, which seem like the main headaches with the usual approach.
bob1029 · 4 years ago
We passed right by this mode of operation while migrating from microservices=>monolith. Everything is a separate process but running on the same box. Ran that way for quite some time. We pulled services into the main process one-by-one. It went about as smoothly as one could have hoped for.
jamesfinlayson · 4 years ago
I've inherited a project like this - lots of common code in a shared library that gets used by multiple services. Doesn't work too badly but things have been split out more than they need to be.
jayd16 · 4 years ago
Farming out libraries or subprojects to separate teams is an OK way to go as long as you don't mind a single language/build system. It's not as flexible, but you may get some performance out of it.

You don't get the convenience of a single datastore with simple transactions but it sounds like you prefer the flexibility of services owning their own data.

As it turns out there's no silver bullet. Do what works for you.

frozenport · 4 years ago
>> single language/build system

Imagine designing your entire architecture around your build system.

NicoJuicy · 4 years ago
That's exactly what I'm doing for my (almost full-time) side project.

While we do microservices at work (distributed team, and Conway's law does make sense: https://ardalis.com/conways-law-ddd-and-microservices/ )

It doesn't make sense for me personally. So I applied all DDD logic and implemented it in my app.

In dotnet there is a useful feature where functionality from referenced projects is included in your main WebApp, which makes everything pretty clean.

I expose application logic over 2 "gateways", where a gateway is a dummy solution that contains Swagger, global things for that project (e.g. auth verification) and uses the controllers/logic from the referenced projects.

- SPA - Angular, with an Ocelot gateway on the same domain (= no CORS issues)

- ApiGateway - contains all the backend API logic and custom integrations with partners.

- ShopGateway - is included in the Frontend.

- Frontend - the shop itself. Currently in the process of making this DB-less (everything over API/messages or via the gateway; the gateway is currently a referenced project (see above), so everything is pretty fast).

In the meantime, I can make "reasonably" quick adjustments and the logic is really flexible.

Overview of the solution: https://ibb.co/k5wn07x

Note: The DB migration from "1 project" to this is not complete yet :(. But it's one of the bigger shifts, and I'm already 90% finished before I can start the actual shift to MartenDb from EF/Dapper.

If you want a quick summary of the logic behind this, DDD would be the answer.

crdrost · 4 years ago
Yeah, the biggest problem that people get into with microservices is that they allow the communication structure to dictate the app structure. A monolith gives you some refactorability because you can run what would have been "integration" tests locally, as you massage your module boundaries to match the problem that you're solving. So a monolith can become a clean monolith, and then a clean monolith can maybe become microservices if you need the scale attributes.

The basic problem is that before you know where the module boundary is, you cannot have a clear module boundary. So three popular approaches emerge:

(a) DDD "milliservices". Basically, define the different sorts of users: come up with a clustering of the different sorts of people that you think will be using your app. People are considered to be in the same cluster if they use the same jargon to refer to things, or in different clusters if they both use some word but mean subtly different things by it.

(b) Gut-feeling microservices that become feature microservices. I should probably have an "auth service," I don't know what it does but I'm going to be doing auth so that's probably a service. I need to import a git library to contact GitHub, probably there should be a "git service" that handles all communication to GitHub or other git repos. Each of these things exposes some swagger/openAPI docs, maybe we should have a docs service? -- that sort of thing. The danger is that the things that are easy to break off are usually not the core competency of the product, and so there emerges some sort of "core services," one or two big honkers that basically are monoliths. People are aware that they aren't supposed to keep adding to the core services, and so new services emerge named after feature requests, hello "sharing service," hello "wallet service." Except the core usually is tightly coupled to these new feature services and they all kind of connect to each other. The idea of splitting the core services into other services to fit the newfound module boundaries becomes complicated by all these ties to nearby features; you are not actually loosely coupled, because you do RPCs and probably bake the expectation of success into those RPCs as if they were method calls.

(c) Every noun becomes a "nanoservice". This is a service that watches just one or two tables (or NoSQL document types or what have you) and exposes a CRUD API for that noun, plus a couple auxiliary verbs to do actual business needs with those nouns. So if you were implementing Git there would be a file service, a tree service, a branch service, a commit service, probably a working tree service... But you just have to "know" that logs and rebases and cherrypicks live in CommitService while diffing for some reason lives in TreeService and adding a file to your working tree requires first creating it in FileService and then handing that link off to the WorkingTreeService which will make needed calls to TreeService, creating your own tree is exposed via TreeService's CRUD but the developers tell you that down that road There Be Dragons and you should not have been messing with that.

Any of these three can be successful but only insofar as you can create new module boundaries and move module boundaries and test to make sure that your users will not notice any performance regressions. Of them I would only recommend option (a), because it gives a really nice place for these tests to live and an intuition that each test should document a user journey for the system.

jamesfinlayson · 4 years ago
A company I worked at went all in on option c - there was a random number service (that just wrapped a random number generation library), and sending an email was split across multiple services - one to pull the email request from the database, one to generate the email from the template, one to actually send the email and one to save the templated email to the database.
twic · 4 years ago
> Gut-feeling microservices

I'm in this comment and I don't like it.

My current team has this pattern. I think we've ended up with two core services, with a fairly thin channel between them, but a constellation of tightly-coupled peripheral services around each core.

To be honest, it works pretty well. It wasn't intentional, and it grew in exactly the haphazard way you describe. But it's not a disaster, or at least doesn't feel like one.

Maybe we should rebrand this as "natural microservices"?

tacone · 4 years ago
"Gut-feeling microservices" is a fantastic definition, thanks for that.
iamcreasy · 4 years ago
> simple path of direct method invocation

Does it mean calling a method of a class?

antishatter · 4 years ago
Pretty clearly the pros and cons come down to the application and necessary stability. Need ultra stable? More and smaller microservices usually better. Need fairly stable? Bigger services is fine. Need it to work generally? Build it however you can.
manuelabeledo · 4 years ago
> Need ultra stable? More and smaller microservices usually better.

I don't think I agree with this statement. "More and smaller" also means potentially more routing and communication paths, thus increased latency and complexity.

sorokod · 4 years ago
More smaller microservices bring along a combinatorial explosion of failure scenarios.
jiggawatts · 4 years ago
Architecture astronauts love microservices.

I'm watching a government customer take simple, cohesive systems developed by a small team (4-5 people) and split them up into tiny little pieces scattered across different hosting platforms and even clouds.

Why?

Because it's "fun" for the architects and pads out their resume.

Just now, I'm watching a "digital interactions" project that will have dozens of components across two clouds, including multiple ETL steps. In all seriousness, half of that could be replaced by a database index, and the other half with a trigger.

They're seriously going to deploy clusters of stuff on Kubernetes for 100 MB of data to make sure it "scales"... to 200 MB. Maybe. Eventually.

What kills me is that now that they've made the decision to over-engineer the thing, my consultancy firm can't bid on the tender because we don't have the appropriate experience building over-engineered monstrosities!

The simple and effective solutions to problems we've delivered in the past are disqualifying us from work.

You guessed it: my next project will be an architect's wet dream and will be over-engineered just so that we can say on tender applications that "yes, we have the relevant experience".

Gotta play the game by the rules...

Nextgrid · 4 years ago
This reflects my opinion on the current state of the software industry.

The seemingly-infinite VC money being thrown around means you can create a startup, raise some money and provide a comfortable salary & industry experience for yourself and your friends for a few years, regardless of whether the company "makes it" in the end or whether the business problem is even solvable.

At this point solving the business problem is no longer the primary objective. The longer you can drag out the process of "solving" this problem, the longer you & your employees can enjoy the salary and build "experience". So instead of a simple solution that needs a team of 5, you end up with an engineering playground that needs 50 people (and associated managers, scrum masters, etc.) spread across many teams just to keep the lights on. The outcome is endless busywork, self-inflicted problems to brag about solving on your engineering blog (every startup has got to have one, obviously) and at the next AWS conference, to attract more employees looking for this kind of environment and to make yourself look "serious" in the eyes of the VCs that keep bankrolling the disaster.

The business problem, if it gets solved at all, is a secondary concern since the VCs happily keep throwing more money into the dumpster fire and some bigger idiot might even buy out the company regardless of its inefficiency.

pphysch · 4 years ago
I'm involved in a (greenfield) project right now where the lead insists on gluing a bunch of Google Sheets/Forms/Docs/whatnot together with triggers and custom APIs (roadmap TBD) instead of just building a simple website in $framework.

Fingers crossed he doesn't discover k8s...

wmichelin · 4 years ago
Do you feel those hosted solutions are worse than rolling your own website? Why stand up your own website and architecture if you can use off the shelf solutions? I am genuinely curious.
kitd · 4 years ago
> I'm watching a government customer take simple, cohesive systems developed by a small team (4-5 people) and split them up into tiny little pieces scattered across different hosting platforms and even clouds.

Governments are generally driven by a set of requirements that most of us have no inkling of. Distributed deployments are often not just for performance or resilience, but also politics and/or playing the vendors off against each other. It seems daft to us mere minions but there's often N-level chess shenanigans going on near the top.

scanr · 4 years ago
I have a theory that auto-wired Dependency Injection in a single DI container is partly to blame for monolith spaghetti. Once an app reaches a certain size and anything can depend on anything else, reasoning about the whole can become difficult.

I think there is value in wiring a monolith together in such a way that each coarse-grained subcomponent exposes a constrained interface to the rest of the system (payments, orders, shipping, customers, etc.) before needing to break it into distributed micro-services.

Note: I quite like Dependency Injection, I just think the 1 giant bag of dependencies can lead to complexity at scale.
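
As an illustration of that shape, here's a minimal sketch (Go used for illustration; the Payments/Shipping/Orders names are hypothetical): each coarse-grained subcomponent hides its implementation behind a narrow interface, and the wiring is explicit rather than one auto-wired container where anything can depend on anything.

    package main

    import "fmt"

    // Each coarse-grained subcomponent exposes only a constrained interface.
    type Payments interface {
        Charge(orderID string, cents int) error
    }

    type Shipping interface {
        Dispatch(orderID string) error
    }

    // Concrete implementations stay private to their module.
    type stripePayments struct{}

    func (stripePayments) Charge(orderID string, cents int) error {
        fmt.Printf("charged %d cents for %s\n", cents, orderID)
        return nil
    }

    type warehouseShipping struct{}

    func (warehouseShipping) Dispatch(orderID string) error {
        fmt.Printf("dispatched %s\n", orderID)
        return nil
    }

    // Orders can only reach the rest of the system through those
    // interfaces, so the dependency graph stays legible.
    type Orders struct {
        payments Payments
        shipping Shipping
    }

    func (o Orders) Place(orderID string, cents int) error {
        if err := o.payments.Charge(orderID, cents); err != nil {
            return err
        }
        return o.shipping.Dispatch(orderID)
    }

    func main() {
        orders := Orders{payments: stripePayments{}, shipping: warehouseShipping{}}
        _ = orders.Place("order-1", 1999)
    }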

malort · 4 years ago
This is exactly what Shopify does with their monolithic Rails app [1]. I worked at a company with ~200 engineers that used the same general architecture and I really enjoyed it. We got a lot of the benefits (clear interfaces, teams able to work on their system without having to understand the whole platform, build optimization, etc) without any of the operational headaches that come with microservices.

[1]: https://shopify.engineering/shopify-monolith

lmilcin · 4 years ago
I wouldn't blame (badly implemented) DI for all the problems of monoliths.

I think the real main issue is lack of discipline when dividing the application into modules.

Spaghetti is basically defined as an application where real modularisation does not exist and everything talks to everything.

It is much easier to work with an application when you can abstract parts of it away while you are solving your problem. You effectively work on a much smaller part of the application.

Spaghetti == you effectively have to take into account the possibility of the piece of code you are looking at interacting with any other piece of code in the application.

Well modularised application == you only need to take into account the contents of your current module and the interface of the other modules you are using.

One reason why microservices sort of work (when done well) is because they force people to think about APIs and how those services talk to each other.

In most cases you could just put these microservices as modules in a monolithic application and expend the effort on ensuring APIs and application structure.

I have successfully rolled back a couple of microservice initiatives by integrating the services into monoliths. This usually results in the team getting back a lot of their time because their environment suddenly becomes much simpler. Fewer applications to manage, less network communication, fewer possible ways for things to break, fewer frameworks, fewer resources needed to run the application, fewer processes (like processes around deployment, release management, etc.), less boilerplate code, and so on. The list is very long.

Of course, when you work on a large monolith vs a lot of small microservices, it is now important to be able to structure your applications. But there is also an opportunity for improvement.

jstimpfle · 4 years ago
Ravioli == you have so many small distinct things that are hard to stick together, which makes it hard to build a larger structure out of them.
throwaway894345 · 4 years ago
I’ve never been part of a monolith that used a DI framework, but I’ve seen quite a few monoliths fail (as in “the project becomes too convoluted and iteration slows to a crawl until the project is canceled or effectively rewritten”) and I certainly believe that one important reason microservices do well is that they enforce the modularity that you describe. That said, a lot of critics of microservices describe similar issues of indiscernible chaos, so either I’ve been very fortunate or microservice critics are gaslighting us. :)
BatteryMountain · 4 years ago
At that point it becomes useful to look at some indirection/decoupling patterns like the Mediator pattern.

Then instead of having an IService with 15 methods and a constructor with 20 dependencies listed (multiply that by, say, 50 IServices in your project - mmmm, spaghetti project), you end up with an IHandler with a single method and a constructor that only has, say, 3 injected dependencies - only what it needs. The trade-off is that you now have hundreds of small Handler files; some people don't like that, BUT you now have a pleasant git commit history per file, and most of the code of a Handler file fits on one screen and is easy to digest. You can also let a handler trigger other handlers if it needs to (via the same mediator). It also fits with SRP.

Auto-wiring 95% of your dependencies still stays intact as the above plumbing will need it to work.
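
The comment is describing the C#/MediatR style; here's a rough sketch of the same idea in Go for illustration (the Mediator, CreateOrder, and handler names are made up): one small handler per request type, each declaring only its own few dependencies, reached only through the mediator.

    package main

    import (
        "errors"
        "fmt"
        "reflect"
    )

    // One request type per use case, and one handler per request.
    type CreateOrder struct{ CustomerID string }

    type createOrderHandler struct{} // would hold its 2-3 dependencies

    func (createOrderHandler) Handle(cmd CreateOrder) error {
        fmt.Println("order created for", cmd.CustomerID)
        return nil
    }

    // The mediator routes a request to the single handler registered
    // for its type, so callers depend on the mediator, not on 20 services.
    type Mediator struct {
        handlers map[reflect.Type]func(any) error
    }

    func Register[T any](m *Mediator, h func(T) error) {
        var zero T
        m.handlers[reflect.TypeOf(zero)] = func(v any) error { return h(v.(T)) }
    }

    func (m *Mediator) Send(req any) error {
        h, ok := m.handlers[reflect.TypeOf(req)]
        if !ok {
            return errors.New("no handler for request type")
        }
        return h(req)
    }

    func main() {
        m := &Mediator{handlers: map[reflect.Type]func(any) error{}}
        Register(m, createOrderHandler{}.Handle)
        _ = m.Send(CreateOrder{CustomerID: "cust-42"})
    }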

ceesharp · 4 years ago
If your IService has 15 methods and your ctor has 20 dependencies injected, then you have other issues. :)
rzzzt · 4 years ago
There was another article about building this structure into a single, self-contained executable and deciding via command-line arguments which piece(s) to use at startup. For some reason I remember "microlith", but it must be another clever word combination because I can not find any relevant HN posts, just one about archaeology...

You can run a single copy of the resulting binary (eg. for testing) that spins up all sub-components, or copy it to multiple machines and start individual parts. The ones which happen to run together can use in-process communication as well, others will have to dial in via remoting.
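
A sketch of how that single-binary arrangement might look (illustrative Go; the component names are invented): one executable, with a flag deciding which sub-components a given copy starts.

    package main

    import (
        "flag"
        "fmt"
        "strings"
    )

    func main() {
        // One binary; --services decides which pieces this instance runs.
        services := flag.String("services", "index,search,api",
            "comma-separated components to start")
        flag.Parse()

        start := map[string]func(){
            "index":  func() { fmt.Println("index started") },
            "search": func() { fmt.Println("search started") },
            "api":    func() { fmt.Println("api started") },
        }

        for _, name := range strings.Split(*services, ",") {
            if run, ok := start[name]; ok {
                run() // real components would each serve from a goroutine
            } else {
                fmt.Println("unknown component:", name)
            }
        }
    }

The same binary then runs the whole stack locally (--services=index,search,api) or a single piece on a dedicated box (--services=index).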

jdlshore · 4 years ago
I’ve coined the term “microlith,” but I’m probably not the only one and it may not be what you’re thinking of. I wrote about it in the new edition of my book, and also discussed it here: https://www.jamesshore.com/v2/projects/lunch-and-learn/micro...
4rb1t · 4 years ago
In theory it all sounds great and makes total sense, but when the company grows into a billion-dollar firm and hires engineers to work on the said monolith, that's when things break down. The founding team was a closely-knit team and ensured there was no spaghetti code, but the moment you move on from that closely-knit group it's hard to enforce constraints.

Have worked at multiple companies that started out as a monolith and are still running the monolith in some form or the other while breaking it down into micro services.

mcv · 4 years ago
I notice there are a lot of comments here saying this exact same thing, and it's also what seems most sensible to me. Yet microservices are getting all the hype. Should we hype thorough modular design more?
heavyset_go · 4 years ago
Yes, but microservices maximize cloud vendors' revenue compared to monoliths.
truffdog · 4 years ago
The nice thing about microservices is that you don't need iron discipline to maintain modular boundaries; instead it's just how things work.

I like to analogize it to assembly vs structured programming languages, or C vs Java - you can write the same programs in either, you can write bugs in either, but there are whole classes of pitfalls that get erased by moving from one language to another.

loevborg · 4 years ago
Check out Polylith for a description of this great pattern https://polylith.gitbook.io/polylith/
bob1029 · 4 years ago
> Once an app reaches a certain size and anything can depend on anything else, reasoning about the whole can become difficult.

I have experienced this pain so many times and in so many different varieties. Often, you cannot meaningfully subdivide the problem space without ruining the logical semantics per the business (i.e. bowl of spaghetti).

An alternative is to embrace the reality that circular dependencies are actually inevitable and appropriate ways to model many things.

The example scenario I like to use is that of a typical banking customer. One customer likely has a checking and savings account. For each of those accounts, there are potentially multiple customers (joint ownership). Neither of these logical business types will ever "win" in the real world DI graph. Certainly you can start to invent bullshit like AccountCustomer and CustomerAccount, but that only gets you 1 layer deeper into an infinitely-recursive rabbit hole of pain and suffering. There also exists the relational path, but I have heavily advocated for that elsewhere and it is not always applicable when talking about code-time BL method implementation. Being able to model things just as they are in the real world is a big key to success in the more complicated problem domains.

Instead of trying to control what depends on what, I shifted my thinking to:

> What needs to start up and in what order?

Turns out, most things don't really care in practice. The only thing I have to explicitly initialize before everything else in my current code base is my domain model working set (recovery from snapshot/event logs) and settings. I decided to not use DI for any business services. Instead, all services become a static class with static members that can be invoked from anywhere. This also includes the domain model instance which is used as the in-memory working set. This type just contains an array of every subordinate domain type (Customers, Accounts, etc.). By having the working set available as a public static type, every service can directly access it without requiring method injection. If I was working with a different problem domain (or certain bounded context within this one), I might prefer method-level injection.

Yes - according to every book on programming style you ever read, this is an abominable practice. Unit testing this would be difficult/impossible. But you know what? It works. It's simple. I can teach a novice how to add a new service in an hour. A project manager stumbling into AccountService might actually walk away enlightened. You can circularly-reference things at runtime if you need to. I've got some call graphs that bounce back and forth between customer & account services 5+ times. And it totally makes sense to model it that way too as far as the business is concerned. Everyone is happy.
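
For illustration, here's a condensed Go-flavoured sketch of the shape described above (bob1029 is describing C#-style static classes; the domain names here are hypothetical): package-level state as the working set, services as plain functions that can call across the Customer/Account boundary in both directions.

    package main

    import "fmt"

    // One customer has many accounts, and one account (joint
    // ownership) has many customers - neither side "wins".
    type Customer struct {
        ID       int
        Accounts []int
    }

    type Account struct {
        ID     int
        Owners []int
    }

    // The in-memory working set is package-level state, visible to
    // every "service" function without any injection.
    var (
        customers = map[int]*Customer{}
        accounts  = map[int]*Account{}
    )

    // The "services" are plain functions that may call each other in
    // both directions, mirroring the real-world relationship.
    func OwnersOf(accountID int) []*Customer {
        var out []*Customer
        for _, id := range accounts[accountID].Owners {
            out = append(out, customers[id])
        }
        return out
    }

    func AccountsOf(customerID int) []*Account {
        var out []*Account
        for _, id := range customers[customerID].Accounts {
            out = append(out, accounts[id])
        }
        return out
    }

    func main() {
        customers[1] = &Customer{ID: 1, Accounts: []int{10}}
        customers[2] = &Customer{ID: 2, Accounts: []int{10}}
        accounts[10] = &Account{ID: 10, Owners: []int{1, 2}}
        fmt.Println(len(OwnersOf(10)), "owners of the joint account")
    }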

tmstieff · 4 years ago
I think I agree with you as well, although it is hard for me to picture exactly how you've structured your dependency graph. In any case, extracting the business logic into some sort of static method / class has definitely been one of the only useful things I can carry across projects that works in nearly all use cases. It also makes unit testing the actual business logic extremely easy. That said, you can end up with a static method that takes 20 parameters, which is always fun. But in those cases you are left dealing with complexity that is intrinsic to the business, rather than complexity introduced through some bad architectural decision, so at least it is isolated.
marcosdumay · 4 years ago
Do you know what kind of software has an absurd level of horizontal scalability by default? Web servers.

The idea of splitting your web servers into multi-tiered web servers for scalability is, well, weird. Yet, somehow it's the main reason people keep pushing it. Even this article repeats this.

There's nothing in microservices that adds scalability. They make it convenient to deal with data partitioning, but it's an ergonomics change, not one of capability.

JamesSwift · 4 years ago
It seems over time some nuance has been lost in translation on this. Microservices weren't 'more scalable' than monoliths, they were 'more appropriately scalable'. In other words, you could scale parts independently and shape your infrastructure more effectively for your workload. e.g. If your bottleneck is logins, you scale the LoginService and don't need extra copies of the AppointmentService running to keep up.
twic · 4 years ago
Even this premise is fundamentally mistaken. If you have a monolith, and the LoginModule is overloaded, you add more hardware, and it's the LoginModule which makes use of the additional resources. The AppointmentModule isn't going to suddenly start using more resources just because they're available.
aflag · 4 years ago
Sure, but I think the point is that it's unlikely the bottleneck will be in the web server, but in whatever database you're using for your logins. Just because you have a single program, it doesn't mean it is required to use a single database.

But even if the webserver is the problem, it feels unlikely that you'll end up saving much in infrastructure cost by scaling just the login service rather than just deploying more copies of your webserver.

Not saying it is impossible, but it's unlikely for that to be a good justification to adopt microservices. I'm not saying that there aren't good reasons to use microservices, but I agree that scalability is not a good selling point. I'd actually argue that it's harder to build scalable microservices than monoliths.

Nextgrid · 4 years ago
I never got the “scalability” argument of microservices. You can trivially deploy multiple instances of your monolithic web application - chances are you’re already doing so by running multiple workers/threads in your application server. Spreading that to other machines is trivial.

The real issue is in scaling the data store. Microservices typically work around that problem by each having their own separate database, but nothing prevents your monolith from also talking to different DBs.

pas · 4 years ago
A monolith becomes a problem when it gets too big to build/test/deploy/debug in a sane fashion.

And when you have alternatives, great. But for example game devs don't. Or operating system devs. Though of course these are all active areas of ongoing research (for decades!).

If the problem/subject were that easy we would have already solved it and it wouldn't be a hot topic.

And since it's likely a nonlinear problem it's hard to map the problem-solution space, hard to get a good mental model for it, so we use the next best thing: stories. We have success stories and parables on what not to do. (And whole conferences dedicated to telling them :))

Tehnix · 4 years ago
It's in fact exactly with horizontal scalability that it becomes important to have control over exactly which resources you need to scale :)

Let's take a typical monolith. You'll be serving endpoints that:

- Are CPU intensive

- Are memory intensive

- Are I/O intensive

They are almost always heavy on some part, but not all. And they definitely don't all rely uniformly on the same resources.

Now, you scale out horizontally, adding instances of the same size, because each of these monolithic instances needs to be capable of serving the entire domain without impacting tail latencies. Some part of your application becoming more resource intensive? You'll be bumping up the sizes of the entire monolith, because you essentially need to do the equivalent of provisioning for peak load.

Contrast this with microservices (which I'm not arguing are a silver bullet): you can run your memory-intensive parts on memory-optimized instances, and run your compute-intensive parts on compute-optimized instances.

That's the part about scalability.

Now, the more interesting part (for me, personally) is reliability. You are decreasing your blast radius of a bad component/thread/process taking down your entire monolith service, and instead compartmentalizing it into limited subparts of your entire API or application.

Finally, as you mentioned, microservices do help guide you towards better choices when it comes to structuring your data. You can do this with monoliths as well (and should), but it doesn't come naturally, and having a single data store is the main reason I see teams run into scalability issues :)

---

Addition: Something I don't see many people talk about is the ability to address tech debt in microservices. My experience has been that having made your domain smaller is an enormous benefit when it comes to making changes that would otherwise require sweeping across the whole codebase. Examples include upgrading language versions (e.g. Scala 2 -> 3, Haskell 8.10 -> 9.2), upgrading frameworks, and introducing stricter compilation checks (e.g. TypeScript and strict: true).

These easily end up being either insurmountable or year-long projects in a monolith, where it's very hard to incrementally benefit from the work because they are often by nature all or nothing changes.

baash05 · 4 years ago
Totally... and adding read replicas into the mix too. Even Rails allows you to stipulate the DB each model lives in, in a rather trivial manner.
antihero · 4 years ago
I guess the idea is to split the workload into parts so if one part of the workload is proving to use more resources, it can then get a larger dedicated pool of resources.

Plus being able to choose different tech for different types of work, and also being able to split between teams when your org gets more vast.

Though I guess this is more about using services than microservices, per se.

smrtinsert · 4 years ago
It's scalability with regard to parallelism of work streams and business units. One module can respond to the market faster if all the other business units and teams don't have to weigh in.
delusional · 4 years ago
> but it's an ergonomics change, not one of capability.

Isn't that a bit of an empty argument? Literally anything Turing complete is an ergonomics change when compared to anything else Turing complete.

We are writing for the same computer. There's nothing different (fundamentally) about the kernel and my program.

marcosdumay · 4 years ago
Hum... You won't model scalability and concurrency using Turing machines. The Turing model is all about theoretical computation and those two parameters are all about real limits.

What you have on your desk isn't equivalent to a Turing machine.

monocasa · 4 years ago
I agree with you for the most part, but to play devil's advocate, I believe the argument is that the data stores don't tend to horizontally scale the same way except in the most 'CDN'-like data flows.
m4l3x · 4 years ago
In my opinion, implementing strong interfaces and good modularization is something we should talk about more than doing microservices. In the end it might be easy to rip out a microservice, when needed, if the code is well structured.
jerf · 4 years ago
This is what I do in practice. I've seen it called a "distributed monolith".

One of the good reasons to spend time with Erlang or Elixir is it'll force you to learn how to write your programs with a variety of actors. Actors are generally easy cut points for turning into microservices if necessary. As with many programming languages I appreciate not being forced to use that paradigm everywhere, but it's great to be forced to do it for a while to learn how, so you can more comfortably adapt the paradigm elsewhere. My Go code is not 100% split into actors everywhere, but it definitely has actors embedded into it where useful, and even on my small team I can cite multiple cases where "actors" got pulled out into microservices. It wasn't "trivial", but it was relatively easy.
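
A minimal version of such an embedded actor in Go (illustrative only, not jerf's actual code): state owned by a single goroutine, reachable only through its mailbox channel - which is exactly the seam you'd cut along if the actor later needed to become a microservice.

    package main

    import "fmt"

    type deposit struct {
        amount int
        reply  chan int // new balance
    }

    // startAccountActor spawns an actor whose state is private to one
    // goroutine; no locks needed, all access goes through the mailbox.
    func startAccountActor() chan<- deposit {
        mailbox := make(chan deposit)
        go func() {
            balance := 0
            for msg := range mailbox {
                balance += msg.amount
                msg.reply <- balance
            }
        }()
        return mailbox
    }

    func main() {
        account := startAccountActor()
        reply := make(chan int)
        account <- deposit{amount: 100, reply: reply}
        fmt.Println("balance:", <-reply)
    }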

ravi-delia · 4 years ago
That's where a lot of the fun of Elixir comes from I think. It's viscerally satisfying to split off functionality into what almost feels like an independent machine that happens to be in the same codebase. It clicked in a way normal object oriented programming never did for me, I guess since it's not feasible to mint 10,000 genservers to use as more complicated structs.
fatnoah · 4 years ago
> In my opinion implementing strong interfaces and good modularization is something, that we should talk about more, than doing Microservices.

This is my position as well. Strong interfaces, good logical separation of function under the covers, etc. should allow splitting off things at whatever level of micro you prefer.

alexvoda · 4 years ago
Indeed, the principle seems to often be forgotten. The why of monolith vs citadel vs microservices is ignored by some people.

This results in K8s-driven-development instead of microservices.

baash05 · 4 years ago
I love that sentence.
jayceedenton · 4 years ago
The example given, of adding new instances of a session service that consumes from a Kafka topic, is completely wrong.

Kafka producers use a partition key of your choice, so the UserLoggedInEvent and UserActivityEvent that relate to the same userId will always be written to the same partition. This is how horizontal scaling of Kafka consumers works, without ordering problems. Anyone that isn't aware of this has very limited experience with Kafka and event-driven microservices.
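
For illustration, keyed production looks something like this with the segmentio/kafka-go client (the topic name and payloads are made up): both event types share the userId key, so they hash to the same partition and arrive in order for a single consumer.

    package main

    import (
        "context"

        "github.com/segmentio/kafka-go"
    )

    func main() {
        // Keying both event types by userId means they land on the same
        // partition, so consumers see them in order.
        w := &kafka.Writer{
            Addr:     kafka.TCP("localhost:9092"),
            Topic:    "user-events",
            Balancer: &kafka.Hash{}, // partition = hash(key)
        }
        defer w.Close()

        userID := []byte("user-123")
        _ = w.WriteMessages(context.Background(), // error handling elided
            kafka.Message{Key: userID, Value: []byte(`{"type":"UserLoggedInEvent"}`)},
            kafka.Message{Key: userID, Value: []byte(`{"type":"UserActivityEvent"}`)},
        )
    }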

I respect the author's attempt to give some balance, but I think some parts aren't well informed.

It's important to understand the benefits of microservices AND the costs. Small focused projects have small, fast test suites, can be wholly owned and developed by a single team amongst many, can be released and deployed without fear of breaking unrelated features across a vast platform, can be retired easily when obsolete. They also introduce the complexity of communication across many systems to complete an end-to-end journey, and require incredibly careful design and carefully chosen boundaries and responsibilities if you want to allow each one to evolve independently. This takes time and experience, and many organisations get themselves into a big mess with no governance, consistency or cohesion across a confusing sea of services and teams.

There's no free lunch, but let's get beyond fashion-driven lurching from one extreme to another. This cycle of having a few years where the costs of an approach aren't acknowledged, to a few years where the benefits aren't acknowledged, is very lame.

Orou · 4 years ago
> This cycle of having a few years where the costs of an approach aren't acknowledged, to a few years where the benefits aren't acknowledged, is very lame.

Well said. I do sympathize with some of the criticisms but only insofar as people who didn't have experience with microservices get sold on a lot of hype, the tradeoffs aren't made clear, and they get burned. The cycle seems to be "This is the silver bullet for programming complexity!" for a few years, followed by a few years of "This isn't a silver bullet!" before the Next Big Thing (TM) comes along and the cycle starts over.

mbrodersen · 4 years ago
If you don’t know how to manage the complexity of a monolith (by using modules/libraries/clean interfaces) then you will have even more problems managing the complexity of micro-services. Since micro-services include all the complexity of the monolith with added networking and deployment complexity.
marginalia_nu · 4 years ago
Lifecycle management is also a big part of it, I think.

My search engine is a hybrid architecture, with some monoliths and some microservices.

* Index - 5+ minute start time, uses 60% of system RAM

* Search Server (query parser, etc.) - 30 second start time due to having to load a bunch of term frequency data, low resources, ephemeral state

* Assistant Server (dictionary lookups, spell checks, etc.) - fast start, medium resources, stateless

* Crawler - only runs sometimes, high resources, stateful

* Archive Server - fast start, low resources, ephemeral state

* Public API gateway - 5 second start time, low resources, ephemeral state

+ a few others

A lot of the ways it's divided is along the lines of minimizing disruption when deploying changes. I don't have the server resources to run secondary instances of any of these, so I'm working with what I've got. If I patch the crawler, I don't want the search function to go down. If I patch the search function, I don't want to have to wait 5 minutes to restart the index.

It would certainly be cleaner to, say, break apart the Index service in terms of design, as it does several disparate things, but those things have a resource synergy which means I can't - not without buying another server and running a small 100 GbE network between them. Seems silly for a living room operation.