shitloadofbooks · 8 years ago
I think "microservices" is so appealing because so many Developers love the idea of tearing down the "old" (written >12 months ago), "crusty" (using a language they don't like/isn't in vogue) and "bloated" (using a pattern/model they don't agree with) "monolith" and turning it into a swarm of microservices.

As an Infrastructure guy, the pattern I've seen time and time again is Developers thinking the previous generation had no idea what they were doing and they'll do it way better. They usually nail the first 80%, then hit a new edge case not well handled by their architecture/model (but was by the old system) and/or start adding swathes of new features during the rewrite.

In my opinion, only the extremely good developers seem to comprehend that they are almost always writing what will be considered the "technical debt" of 5 years from now when paradigms shift again.

dreamcompiler · 8 years ago
I call this the painting problem. Painting the walls of a room seems easy to an amateur: You just buy a few gallons at Home Depot and slap it on. But a professional knows that prep, trim, and cleanup are 80% of the job and they take skill. Anybody can slap paint onto the middle of a wall. What's difficult and time-consuming are making the edges sharp and keeping paint off the damn carpet.
cortesoft · 8 years ago
So you are saying edge and corner cases are the most difficult?
brisance · 8 years ago
This is Parkinson's Law of Triviality. https://en.wikipedia.org/wiki/Law_of_triviality
danieltillett · 8 years ago
My experience is a careful amateur painter is 1000% better than an average professional. Professionals are certainly a lot faster, but if you look carefully at their work it is in the main very shoddy.

If you want a good result don’t skimp on the tools. Buy good quality brushes, rollers, filler, throws and paint. Also buy an edger to cut in the walls and ceilings. One final tip buy some of the disposable plastic liners for the roller tray so you don’t have to spend time washing out the tray at the end of the day.

kuschku · 8 years ago
That may be a bad analogy, considering painting your own apartment is something a great many people do, often with good success.

My parents (not in any way experts on that field) painted their entire house themselves, except for two rooms that were painted by a professional painter, and the professional painter left much worse corners than my parents. This was the paid-for result (ignore the dark corner at the bottom, that’s caused by the flash): https://i.imgur.com/s1VHV2W.jpg

markatkinson · 8 years ago
Great analogy. I tried to paint my own flat. Was a disaster.
madshiva · 8 years ago
A real hacker can fix his car, paint that wall, and can code too.

There is no problem painting for me and many others who know how to use their hands.

It's amazing when you talk to people and they're like: Did you do that? How do you know how to do that?

Learn, try, and you can do anything. As people did at every step.

JamesBarney · 8 years ago
I really think microservices are a process win, not a technical win. It's easier and better to have 5 teams of 10 managing 5 services than to have one team of 50 managing one super service.

When I see a team of 7 deciding to go with microservices for a new project I know they're gonna be in for a world of unnecessary pain.

brandonbloom · 8 years ago
I agree that it's _easier_ for teams to have their own little fiefdoms, but not necessarily _better_. Shipping the org-chart is often a symptom of a leadership problem. When natural service boundaries exist, good leadership may choose to ship the org-chart, but too often extrinsic factors such as the arrangement of devs' desks dictate the architecture.
zenonu · 8 years ago
The key to microservices is a framework and tooling around them to make them work for you. Release management, AuthN/AuthZ, compilation, composition, service lookup, etc. should all be "out-of-the-box" before microservices should ever be considered. Otherwise the O(n) gains you get in modularity turn into O(n) FML.
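To make the "service lookup" piece above concrete, here is a minimal Python sketch against Consul's health endpoint (`/v1/health/service/<name>?passing=true` is Consul's real API; the addresses, port, and random picking strategy are illustrative assumptions):

```python
import json
import random

def pick_instance(payload: str) -> str:
    """Pick a healthy instance from a Consul
    /v1/health/service/<name>?passing=true response body."""
    entries = json.loads(payload)
    if not entries:
        raise LookupError("no healthy instances registered")
    entry = random.choice(entries)
    svc = entry["Service"]
    # Consul leaves Service.Address empty when the service
    # inherits its node's address.
    host = svc["Address"] or entry["Node"]["Address"]
    return f"{host}:{svc['Port']}"

# Offline example with a canned response body (two healthy instances):
sample = json.dumps([
    {"Node": {"Address": "10.0.0.1"}, "Service": {"Address": "", "Port": 8080}},
    {"Node": {"Address": "10.0.0.2"}, "Service": {"Address": "10.0.0.2", "Port": 8080}},
])
print(pick_instance(sample))
```

The point is less the ten lines of parsing than that this kind of thing has to exist, be shared, and be trusted before the first microservice ships.
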
mgkimsal · 8 years ago
> When I see a team of 7 deciding to go with microservices for a new project I know they're gonna be in for a world of unnecessary pain.

Faced this a couple of years ago, and I was the lone dissenting voice suggesting this was not going to go well. Then I learned that "microservice" in reality just meant everything was going to be one nodejs process running endpoints, with ios and android clients hitting it, which... didn't really fit my understanding of "microservice"; that's just "service".

scarface74 · 8 years ago
I'm the dev lead for a largish company with a small development shop of 4-9 people (contractors come and go). I went for a microservice-like hub-and-spoke model where a bunch of small services integrate with a central Mongo database, with all of the CRUD managed and validated via an API. Cross-cutting concerns like configuration (a wrapper around Consul), logging (structured logging via Serilog), and job scheduling (Nomad) are handled via a common package.

I chose this approach because the developers who were already there were relatively new to C#, and I knew we were going to have to ramp up contractors relatively fast.

Our dev ops process revolves around creating build and release processes by simply cloning an existing build and release pipeline in Visual Studio Team Services - the hosted version of Team Foundation Services - and changing a variable. Every service is a separate repo. Each dev is responsible for releasing their own service.

The advantages:

1. All green field development for a new dev. They always start with an empty repo when creating a new service.

2. Maintenance is easier. You know going in all you have to do is use a few documented Postman calls to run your program if you need to make changes. Also, it's easy to see what the program does and if you make a mistake, it doesn't affect too many other people if you keep the interface the same.

3. The release process is fast. Once we get the necessary approvals, we can log on to VSTS from anywhere and press a button.

4. Bad code doesn't infest the entire system. The permanent junior employees are getting better by the month and we are all learning what works and doesn't work as we build out the system. Each service is taking our lessons learned into account. We aren't forced to keep living with bad decisions we made earlier and building on top of it.

A microservice strategy only works if you have the support system around it.

In our case: an easy-to-use continuous integration and continuous deployment system (VSTS), easy configuration (Consul), service discovery and recovery (Consul with watches), automated unit and integration tests, and a method to standardize cross-cutting concerns.

And finally, Hashicorp's Nomad has been a god send for orchestration. Our "services" are really just a bunch of apps. Nomad works with shell scripts, batch files, Docker containers, and raw executables. It was much easier to set up and configure than kubernetes.
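As a rough illustration of the "wrapper around Consul" style of configuration mentioned above, here is a hedged Python sketch against Consul's KV HTTP API (`/v1/kv/<prefix>?recurse` is real; the `app/db_host` key and the localhost agent address are made-up examples):

```python
import base64
import json
from urllib.request import urlopen

CONSUL = "http://localhost:8500"  # assumed local Consul agent

def decode_kv(body: str) -> dict:
    """Turn a Consul /v1/kv/<prefix>?recurse response into a plain dict.
    Consul base64-encodes every stored value."""
    return {
        item["Key"]: base64.b64decode(item["Value"] or b"").decode()
        for item in json.loads(body)
    }

def get_config(prefix: str) -> dict:
    """Fetch all keys under a prefix from the local agent."""
    with urlopen(f"{CONSUL}/v1/kv/{prefix}?recurse") as resp:
        return decode_kv(resp.read().decode())

# Offline example of the decoding step:
body = json.dumps([{"Key": "app/db_host",
                    "Value": base64.b64encode(b"db1.internal").decode()}])
print(decode_kv(body))
```

Wrapping this once in a common package, as the comment describes, means every service reads configuration the same way instead of reinventing the HTTP/base64 dance.
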

pjmlp · 8 years ago
They can manage those teams by writing libraries, no need for microservices.
beamatronic · 8 years ago
What if you expect the team of 7 to grow to 20 or more? In a larger context, almost any company would hire more developers if they could find and hire them.
ohyes · 8 years ago
Jesus, 10 people on a team? What are you doing that you need 10 people to work on it? I manage like 8 services by myself.
meddlepal · 8 years ago
I think it has to do with control. I've written microservices and microservices tooling for a long time now... so long that when I started we were just calling it something like (isolated-responsibility) SOA and we didn't have a fancy buzzword.

Developers want to own their thing. The desire for microservices springs up because of a lack of communication culture and a desire for siloification in a company's organization, to keep various interests from bothering the developers. Those almost always point to a failure of management in my mind rather than a technical failure.

bri3d · 8 years ago
Is this truly a management failure?

If a few things are true, I could see this as a win:

* I can isolate my developers from outside interests using microservices.
* My developers are more effective in each dimension (quality, retention/happiness, velocity) because they are isolated from outside interests.
* My software is easier to operate and more reliable because it is a microservice.

If any of these three things aren't true, then I agree. But I'm not sure that a "communication culture" can scale to a large organization and I'd like to see a truly large company (1000+ developers) successfully doing so. I've seen more success come from separation of concerns and well-deployed microservices seem to be fairly effective to this end.

tyingq · 8 years ago
I wouldn't dismiss the effect of résumé based development either. Even if subconscious, having the latest buzzwords on your CV is a motivator.

somberi · 8 years ago
"Microservices desire springs up because of a lack of communication culture and desire for siloification" - Would have upvoted this more than once if possible.
raspie · 8 years ago
To put it in an even less flattering way, the real problem is the developers, not the paradigm. Your company's code will be as good as your developers are, regardless of the paradigm.

Microservices will not help you if your developers have the same level of skill and foresight as whoever wrote the monolith, which is probably true if those devs were selected by the same hiring process that your company has today, subject to the same organizational effectiveness, etc.

realityenigma · 8 years ago
I think you actually have hit the proverbial nail on the head. I've seen firsthand the terrible talent pool, at least here in the south-west US. We've done ourselves a disservice trying to get everyone and their brother to become a programmer because...economy! and more accurately, I want more money.
tabtab · 8 years ago
Re: ...the real problem is the developers, not the paradigm. Your company's code will be as good as your developers are regardless of the paradigm.

Indeed! Good developers/architects spot repetition or weaknesses in current techniques and can often devise solutions that can be added to the stack or shop practices with minimal impact. You don't necessarily need paradigm or language overhauls to improve problem-areas spotted. Poor developers/architects will screw up the latest and greatest also.

qaq · 8 years ago
"Your company's code will be as good as your developers"? More like as good as your company's worst nn% of developers.
humanrebar · 8 years ago
> In my opinion, only the extremely good developers seem to comprehend that they are almost always writing what will be considered the "technical debt" of 5 years from now when paradigms shift again.

I've also seen really bad developers with that attitude: it's all crap, so just ship whatever already.

The good developers write code that can be replaced, rewritten, or rescaled later. Though, charitably, both monolithic service and microservice people are trying to do exactly that. It's just what sort of scale they're thinking about and what part of the software development lifecycle they think will be especially difficult going forward.

elgenie · 8 years ago
The critical distinction between "it's all crap, so just ship whatever already" and what the grandparent wrote is that "technical debt" doesn't reside in easily disposable code/components, but rather as may-need-to-be-rectified-but-maybe-not-right-now downsides in what is enormously useful and producing value.

Good developers create code that's prepared for the possibility of being modified repeatedly and becoming foundational; on the other hand, preparing for code/components to be thrown away is a no-op.

foo101 · 8 years ago
> The good developers write code that can be replaced, rewritten, or rescaled later.

You make a very good point.

Over the years, I learnt that almost nobody except the developer and maybe one or two peer developers cares about good quality code. Management just wants to ship services/products. They don't care how good the code is. All they care about is that they can meet their deadlines. Of course, good quality code can increase the chance of meeting deadlines, but working long hours can also increase the chance of meeting deadlines. Management does not understand code, but they understand working long hours.

If I ignore this and still care enough to write good quality code, in nearly all projects I am not going to be the only one to work on the code. There is going to be a time when someone else has to work on it (because responsibilities change, or because I am busy with some other project). As per my anecdata, the number of people who do not care about code quality far exceeds those who do. So this new person would most likely start developing their features on the existing clean code in a chaotic manner, without sufficient thought or design. So any carefully written code also tends to become ugly in the long term.

In many cases, you know that you yourself would move out the project/organization to another project/organization in a year or so, and the code would become ugly in the long term no matter what you do, so why bother writing good code in the first place!

It is very disappointing to me that the field of programming and software development that I once used to love so much out of passion has turned out to be such a commercial, artless, and dispassionate field. How do you retain your motivation to be a good developer in such a situation?

simias · 8 years ago
The #1 thing they should teach in any engineering school (or maybe any school period) is "you shouldn't remove/replace/change something until you've understood why it was done that way in the first place". Or maybe a somewhat equivalent version: "you're not as clever as you think you are and the people who came before you were not as dumb as you think they were".
dancek · 8 years ago
This principle is commonly known as Chesterton's fence. https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence
ZeroGravitas · 8 years ago
Would those graduates be rewarded or penalised for their professionalism? I would guess penalised in a depressing number of situations.
stult · 8 years ago
>"you shouldn't remove/replace/change something until you've understood why it was done that way in the first place".

That only works until you encounter something for which there was no rational reason in the first place.

kbuchanan · 8 years ago
A little over two years ago my little team of three (gasp!) succumbed to the allure of microservices. After six months of writing custom solutions for problems I'd created, I bagged it. The _one_ upside of our microservice architecture was how simple it was to consolidate back into a single app. Only took a couple months. I believe the theoretical benefits of microservices, namely, hard domain boundaries, remain compelling, but geez, I also learned its benefits are extremely circumstantial.
cookiecaper · 8 years ago
Props to you for realizing you were going down a bad path and cutting your losses. Many people are unwilling to honestly evaluate their projects until they've been in the rear view mirror for a long time. A great deal of software is driven by ego and fads, and it's great to see someone make a decision to change pace based on practicalities.
WalterBright · 8 years ago
I've been writing code since the 70's. The further back in time one looks, the worse my code is. The good news, I suppose, is that one is never finished learning how to write code better. Until they plant me, that is.
rtpg · 8 years ago
Do you legit think that people are all just doing this as a fad?

There are legitimate arguments for looking at these patterns, the big one being "isolation of concerns". The biggest counterargument is that the ops cost is much higher than assumed, of course.

The idea that the existing code base could have problems shouldn't be a surprise to anyone. Amazon almost fell over because of their code base. Twitter too. And it's not even about not doing it right, but simply that scales change. Or patterns change.

And in new companies, it _could_ be that people don't get it.

"Microservices as mass delusion" discounts a lot of people who are really thinking hard about how to handle the pros and the cons of things.

lmm · 8 years ago
> Do you legit think that people are all just doing this as a fad?

Yes. That's been my experience.

> The idea that the existing code base could have problems shouldn't be a surprise to anyone. Amazon almost fell over because of their code base. Twitter too.

Most organisations aren't Amazon. Most organisations aren't Twitter. And even these web-scale organisations aren't as all-in as the microservice advocates. (I worked at last.fm for a time and while we did many things that could be classed as "microservices" from a certain perspective, we didn't blindly "microservice all the things")

> "Microservices as mass delusion" discounts a lot of people who are really thinking hard about how to handle the pros and the cons of things.

Most fads start from a core of sensible design. The web really did revolutionise commerce, but many "x, but on the web" companies of the late '90s really were dumb.

nucleardog · 8 years ago
> Do you legit think that people are all just doing this as a fad?

As you say there are a lot of pros and cons to any architecture or paradigm, which is why we're still talking about it and saying things like "right tool for the job" and not just using the One True Method(TM).

I legit think that a lot of people are using the new hotness as a form of cargo cult programming, with no understanding of the methods they're considering or how they apply to the problems they're trying to solve.

It's not just microservices that are improperly applied. I've been in the industry long enough to see dozens of languages, technologies, paradigms, processes, and everything else hailed as the second coming of Christ and applied inappropriately all over the place until the shininess wore off.

And I mean... when we start talking about developing at Amazon scale, we're already talking about situations that don't apply to 99% of developers, which isn't a great argument that everyone else's use of the pattern is appropriate.

philwelch · 8 years ago
The ops cost of not using microservices is a lot higher than you'd think, too. At some point you have hundreds of engineers, your monolith is compiled together from libraries written by a dozen different teams, you have to try and make one heroic release per week, half the time it fails and you have to go back and fix it, and absolutely no one in the company can ship a new feature because you're blocked.
rb808 · 8 years ago
> the pattern I've seen time and time again is Developers thinking the previous generation had no idea what they were doing and they'll do it way better. They usually nail the first 80%, then hit a new edge case not well handled by their architecture/model

Perfect, I've seen this happen many times as well. I think you're generous on the 80% part. Usually they nail the first 50%, but by the time they get to 80% it's getting just as messy and some of the developers are planning another rewrite.

ibejoeb · 8 years ago
It's not exclusive to inheritance, though. There are lots of times when the original principal decides he could have done it better and winds up in exactly the same place. It could be progress, too, if it trades one set of failure modes for another that is less severe or less frequent.

This is why a team needs access to a good architect who's seen the paradigms shift, or even cycle. You're almost never starting from scratch, so you really need someone who's able to incorporate better or more suitable tech without throwing out the baby.

If you're microservices-based, that last part is easier, even if it falls into one of the described pitfalls, e.g., system-of-systems.

Cthulhu_ · 8 years ago
This is not wrong; I think this is also the reason behind the "framework of the month" craze in JS that seemed to be a thing last year / the year before. Implementing business value is boring; working with new technology is cool and fun. I'm not impervious to that either. I mean, I'm writing an HN comment while I should be building a feature :p
warrenm · 8 years ago
You need to be keeping a ledger on technical debt, the same way any and all other "debt" is tracked - http://www.hydrick.net/?p=2394

"Here’s the thing, most of the time we do something that incurs technical debt, we know it then. It’d be nice if there’s a way for us to log the decisions we made, and the debt we incurred so we can factor it into planning and regular development work, instead of trying to pay it off when there’s no other alternative."

mstade · 8 years ago
I was wondering how such a ledger might work, given that it should be about as difficult to quantify this kind of debt as it may be to estimate the time and effort required to implement certain changes in general. Later on in the post, there's this nugget of gold:

    So how would this ledger work? Well, for starters it has to
    track what we can’t do because of current technical debt.
    It should also be updatable to note any complications to 
    subsequent work or things you can’t do yet because for old 
    design decisions. At this point, you’re tracking the 
    “principle” (the original design decision causing technical 
    debt) and interest (the future work that was impacted by 
    the debt).
That's it, isn't it? You need to define the principle and the interest, and these two are actual tangible things in the form of specific decisions (principle) and things that are now adversely affected by them (interest). If these are linkable, then it becomes straightforward to put a number on this debt, whether that's simply the number of things adversely affected or some other aggregate like their combined estimated effort. This debt could probably be calculated in many different ways, but the fact that you can properly quantify it should make decisions on whether to tackle or ignore the debt much more informed.

This was an eye opener for me – thanks for sharing!
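One possible shape for such a ledger, as a hedged Python sketch linking each principle (the decision) to its accumulated interest (work it later made harder); the example entries and hour counts are entirely invented:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """The 'principle': a design decision that incurred debt."""
    desc: str
    impacts: list = field(default_factory=list)  # the 'interest'

    def charge(self, desc: str, hours: float) -> None:
        """Log later work that this decision made harder (or blocked)."""
        self.impacts.append((desc, hours))

    @property
    def interest(self) -> float:
        return sum(h for _, h in self.impacts)

ledger = [Decision("hard-coded currency to USD")]
ledger[0].charge("blocked EU launch until refactor", 40.0)
ledger[0].charge("duplicate price tables for CAD", 16.0)

# Rank debts by accumulated interest when deciding what to pay down:
worst = max(ledger, key=lambda d: d.interest)
print(worst.desc, worst.interest)
```

Even this crude version gives planning a number to argue about, which beats "we should refactor sometime."
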

chillidoor · 8 years ago
While I'm not arguing with you, I have had a different experience working with developers and microservices, perhaps because the teams have been more seasoned/experienced and there is more of a collaborative environment.

I found that a lot of the time developers start moving towards microservices when they find that a monolithic app becomes too difficult to work on. For example, multiple teams working on the same codebase will often have accidental code conflicts. Plus, scaling a monolithic app because one part of it is under load isn't always cost effective or logical. So, teams will start to break off components into microservices to make development easier and less painful. Naturally this has to be weighed up as microservices bring a different set of challenges, 'gotchas', etc, etc but in my experience the teams have done a proper job discussing the pros and cons.

bonesss · 8 years ago
I think that kind of process, slow decomposition based on performance and requirements, is the only sane approach.

It's also reflected in how we manage code at the micro-level: collecting related logic into a module until it becomes unwieldy and then separating out independent sub-functionality into their own modules and dependencies as they grow...

There is no right size for a class. Smaller is better, but the ideal is "right sizing". What's right? Well, that's tricky, but whatever doesn't hurt is pretty ok.

There's no right size for a service. Smaller is better, but...

efa · 8 years ago
I 100% agree. I even catch myself thinking I didn't know what I was doing when looking at code I wrote years before. I'll jump into rewriting it and discover it was written that way for a reason. I've learned to trust my former self wasn't an idiot.
tboyd47 · 8 years ago
That's absolutely true, but it's not specific to microservices. The pendulum could very well swing back the other way.

I've come to view microservices in the context of Conway's Law. If you have a team of developers working on a project who don't like to communicate or work with each other, do not understand version control, and all have different programming styles and technology choices, the only feasible architecture is one service per person.

I have no trouble believing that this is what's really behind Netflix's adoption of microservices. From what I've heard it's a sociopathogenic work culture, and if I worked there I would probably want to just disappear from everybody too.

corpMaverick · 8 years ago
You do have to use Conway's Law to your advantage. At least be very aware of its effects.
pweissbrod · 8 years ago
The monolith-first approach has always served me well. Nascent projects benefit from portability because they are still in a high state of flux. As they mature, let's assume they grow in scale and integrations, and somewhere along the line it becomes sensible to break off pieces into services.

To me the big benefit of microservices is scaling out components into flexible independent release cadences but the trouble comes with employing them too early.

https://martinfowler.com/bliki/MonolithFirst.html

maga_2020 · 8 years ago
I think these are appealing for 2 reasons:

1. There is a belief that component isolation (taken to the extreme by microservices) enables better productivity in the development department.

That is: more features, more prototypes, more people can be moved in and out of a given role. So those 5 crusty programmers are not a bottleneck for the 'next great idea' that a Product Manager or CIO reads up on.

2. There is a constant battle for the crown of "I am modern" (e.g. data science, microservices, big data) going on in every development or technology organization, where the closer your 'vision' is to Google or Netflix, the more 'modern' you are.

The rest of the folks are 'legacy'. So you get budgets, you get to hire, you get to 'lead'. Microservices are the enabler that helps win this battle (although, probably, only for the short term).

---

I personally do not believe that microservices bring anything new compared to previously used methods of run-time modularization:

  Plugins
  Web services
  RPC
  N-tier architectures

I do not think they replace standards like CORBA, although I think they will eventually end up replicating it, with better-thought-out standards and tools.

jlg23 · 8 years ago
I don't think microservices are loved by developers so much because they are technically superior, but because they allow for quicker/safer decision-making at the management level - just like "agile development" imho only helps trainee devs but shines in providing clear communication strategies between management and developers (read: keeps mgmt off devs' backs for at least 6h a day).

Abstractly speaking, I don't care whether you call f(x) directly, via IPC, RPC, or as a microservice. In my preferred programming languages there is not much of a difference anyway.
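To illustrate why the call site shouldn't care, here is a hedged Python sketch: two implementations of the same interface, one in-process and one that would go over HTTP to a hypothetical pricing microservice (the `/quote/<sku>` path and its JSON shape are assumptions, as are the SKU and prices):

```python
import json
import urllib.request

class LocalPricer:
    """In-process: f(x) is a plain function call."""
    def quote(self, sku: str) -> float:
        return {"widget": 9.99}.get(sku, 0.0)

class RemotePricer:
    """Same interface, but each call is an HTTP round trip to a
    (hypothetical) pricing microservice at base_url."""
    def __init__(self, base_url: str):
        self.base_url = base_url
    def quote(self, sku: str) -> float:
        with urllib.request.urlopen(f"{self.base_url}/quote/{sku}") as r:
            return json.load(r)["price"]

def checkout(pricer, sku: str) -> float:
    # The call site cannot tell which implementation it got.
    return round(pricer.quote(sku) * 1.2, 2)  # add 20% tax

print(checkout(LocalPricer(), "widget"))  # 11.99
```

The difference that does matter is operational (latency, partial failure, deploys), which is exactly the part the architecture diagram tends to hide.
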

tehlike · 8 years ago
If something isn't broken, don't fix it. I think part of it is that large companies can keep engineers happy by giving them rewrites. Otherwise, there aren't enough projects to keep everyone entertained.
deepsun · 8 years ago
Well, the best-architected project I ever worked on at Google was actually rewritten 4 times from scratch. I believe rewriting is always good for the project. Not always for the business, though. Fortunately, Google had the resources to allow rewrites to happen.

Rewrites also serve as thorough code review and security audit.

jrs95 · 8 years ago
This is usually true but I’ve recently been introduced to a legacy codebase which is so bad it can hardly be modified.

Mixed tabs and spaces, sometimes one space indentation or no indentation at all, 1000+ line Java methods, meaningless variable names, no comments or documentation. SQL transactions aren’t used, the database is just put into a bad state and hopefully the user finishes what they’re doing so it doesn’t stay that way. That’s just the server. The UI is just as bad and based on Flash (but compiled to HTML5 now)
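For the "database is just put into a bad state" problem above, wrapping multi-step writes in a transaction is the standard fix. A minimal sqlite3 sketch (the table, names, and amounts are invented) showing that an interrupted operation rolls back instead of leaving half-applied state:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
db.executemany("INSERT INTO accounts VALUES (?, ?)",
               [("alice", 100), ("bob", 0)])
db.commit()

def transfer(db, src, dst, amount):
    """Both updates commit together or not at all."""
    try:
        with db:  # opens a transaction; commits, or rolls back on exception
            db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                       (amount, src))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                       (amount, dst))
            # Simulate the user "not finishing what they're doing":
            if amount > 100:
                raise RuntimeError("crash mid-operation")
    except RuntimeError:
        pass  # the rollback already happened; state stays consistent

transfer(db, "alice", "bob", 500)  # fails partway, rolled back
row = db.execute("SELECT balance FROM accounts WHERE name='alice'").fetchone()
print(row[0])  # still 100
```

Retrofitting this onto a codebase with 1000-line methods is painful, but it is far cheaper than periodically repairing a corrupted database by hand.
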

tvanantwerp · 8 years ago
An important piece of advice I was given when embarking on my technical career:

> You don't solve problems. You take the problems you have, and exchange them for a different set of problems. If you're doing your job, the new problems won't be as bad as the old problems. That's all you can really do.

fulafel · 8 years ago
The reinvention and re-discovery of the problems can be very good for the new developers who are taking over a slice of the monolith's functionality. And it can happen rapidly, on the new developers' own terms. Depends on the case.
deepGem · 8 years ago
I wonder why. Isn't it so unproductive to keep tearing down stuff and rebuilding them with the next shiniest tool ? I mean, you are making very little progress on product features.
sply · 8 years ago
But in fact, K8s provides more robustness than the good old, more-or-less monolithic Pacemaker.
amrx101 · 8 years ago
Damn, you described my manager.

dvt · 8 years ago
Biggest issue with microservices: "Microservices can be monoliths in disguise" -- I'd omit the can and say 99% of the time are.

It's not a microservice if you have API dependencies. It's (probably) not a microservice if you access a global data store. A microservice should generally not have side effects. Microservices are supposed to be great not just because of ease of deployment; they're also supposed to make debugging easier. If you can't debug one (and only one) microservice at a time, then it's not really a microservice.

A lot of engineers think that just having a bunch of API endpoints written by different teams is a "microservice architecture" -- but they couldn't be more wrong.
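One way to picture the distinction: a service that owns its own store and only exchanges events can be exercised entirely in isolation. A hedged Python sketch with an invented inventory example (the event names and fields are made up):

```python
class InventoryService:
    """Owns its store outright; no calls out to other services,
    no shared global database."""
    def __init__(self):
        self._stock = {}  # private state, not a global data store

    def handle(self, event: dict) -> dict:
        """Consume an event, return a result event; no side effects
        beyond this service's own state."""
        if event["type"] == "restock":
            sku, qty = event["sku"], event["qty"]
            self._stock[sku] = self._stock.get(sku, 0) + qty
            return {"type": "restocked", "sku": sku}
        if event["type"] == "reserve":
            sku, qty = event["sku"], event["qty"]
            ok = self._stock.get(sku, 0) >= qty
            if ok:
                self._stock[sku] -= qty
            return {"type": "reserved" if ok else "rejected", "sku": sku}
        return {"type": "ignored"}

# Debugging one (and only one) service at a time, no mocks of
# other teams' APIs required:
svc = InventoryService()
svc.handle({"type": "restock", "sku": "widget", "qty": 5})
print(svc.handle({"type": "reserve", "sku": "widget", "qty": 3}))
```

If reproducing a bug instead requires standing up three other teams' endpoints and a shared database, that is the "monolith in disguise" the comment describes.
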

nemothekid · 8 years ago
Once when starting a new gig I inherited a "microservices" architecture.

They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM. Of course, if you were using microservices you needed Docker as well, so they had also developed a giant Docker container containing all 12 microservices, which they deployed to a single host (all managed by supervisord). Of course, since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM, so they used a larger instance. Everything was provisioned manually, by the way, because there was no service discovery or container orchestration - just a Docker container running on a host (an upgrade from running the production processes in a tmux instance). What they really had was a giant monolithic application with a complicated deployment process and an insane JVM overhead.

Moving to the larger instance likely solved the performance issues. In place they now had multiple over provisioned instances (for "HA"), and combined with other questionable decisions, were paying ~100k/year for a web backend that did no more than ~50 requests/minute at peak. But hey at least they were doing real devops like Netflix.

For me, I've become a bit more aware of cargo cult development. I can't say I'm completely immune to cargo cult driven development either (I once rewrote an entire Angular application in React because "Angular is dead") so it really opened my eyes how I could also implement "solutions" without truly understanding why they are useful.

eadmund · 8 years ago
> They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM.

I've dealt with an even worse system, with a dozen separate applications, each in its own repo, then with various repos containing shared code. But the whole thing was really one interconnected system, such that a change to one component often required changes to the shared code, which required updates to all the other services.

It was a nightmare. At least your folks had the good sense to use a single repository.

z3t4 · 8 years ago
You don't always have to know why, but it's somewhat frightening that so many "engineers" don't have a clue why they are doing something (beyond "because Google does it"). And I'm of course guilty of it myself, jumping on the hype train or uncritically taking advice from domain experts, only to find out years later that much of it was BS. Most of the time, though, you will not reach enlightenment. I guess it's in our nature to follow authority, hype, trends and groupthink.
thebeardedone · 8 years ago
I recently had a similar experience. Our product at work is a monolith, not in the greatest shape because of technical debt we inherited, and it's usually spoken of condescendingly when talking to other teams working on different products. To our surprise, when we started testing it with cloud deployments, it was really lightweight compared to just one of the 25 Java microservices from the other teams.

Their "microservices" suffered from the same JVM overhead, and to remedy this they are merging their functionality back together (initially they had 30-40 services).

Clubber · 8 years ago
>They were having performance problems and "needed" to migrate to microservices. They developed 12 separate applications, all in the same repo, each deployed independently in its own JVM.

9 times out of 10 it's because developers don't know how to properly design and index the underlying RDBMS. I've noticed a severe lack of that knowledge among average developers.

giancarlostoro · 8 years ago
Sounds like they don't understand why it's called a microservice to begin with. Microservices aren't supposed to be solutions to an entire piece of software, just dedicated bits -- at least that's what I'd figure from a name like "micro". When we adopted microservices at my job (I don't know if Azure Functions count or not), we did it because we had one task we needed taken out of our main application, both for performance concerns and because we knew it would involve way more work to implement (a .NET Framework codebase being ported to .NET Core, which meant the .NET Framework dependencies no longer worked). But we eventually turned it into a Web API instead, due to limitations of Azure Functions for what we wanted to do (process imagery of sorts).
merb · 8 years ago
> Of course since they had 12 different JVM applications, the services needed a host with at least 9GiB of RAM so they used a larger instance.

Well, Oracle solved that problem experimentally, somewhat: you can now use CDS and *.so files for some parts of your application. It probably doesn't eliminate every problem, but it helps a bit at least. Then again, it would've been easier to just use Apache Felix or similar to start all the applications in an OSGi container. That would've probably saved around 5-7 GiB of RAM.

friendly_chap · 8 years ago
> A microservice should generally not have side effects.

That's plainly wrong. I get the gist of what you are saying and I more or less agree with it but you expressed it poorly.

Having API dependencies is not an issue. As long as the microservices don't touch each others data and only communicate with each other through their API boundaries microservices can and should build on top of each other.

In fact that's one of the core promises of the open source microservices architecture we are building (https://github.com/1backend/1backend).

I think your bad experiences are due to microservice apps that are unnecessarily fragmented into a lot of services. Sometimes that can be a problem even when you respect service boundaries -- when you have to release a bunch of services to ship a feature, that's a sign that you have a distributed monolith on your hands.

I like to think of services, even my own services, as third-party ones I can't touch. When I view them this way, the urge to tailor them to the current feature I'm hacking on lessens, and it becomes easier to identify the correct microservice a given modification belongs to.

dvt · 8 years ago
> That's plainly wrong. I get the gist of what you are saying and I more or less agree with it but you expressed it poorly.

I'm not sure what you think side effects are, but I'm using the standard computer science definition you can look up on Wikipedia. If you have a microservice that modifies some hidden state, for example, it's a disaster waiting to happen. Having multiple microservices with database side effects will almost always end up with a race condition somewhere. Have fun debugging that.

sk5t · 8 years ago
> A microservice should generally not have side effects

I gotta ask, how is this realistic? A salient feature of most of the software I've worked on is that it has useful side effects.

endorphone · 8 years ago
It isn't realistic, and borders on absurd gatekeeping.
dragonwriter · 8 years ago
I think that it is accurate to say that in a system composed of microservices, a microservice should not affect the state of other microservices in the system other than by consuming them.

Whether it should consume other microservices is less clear, and gets into the choreography vs. orchestration issue; choreography provides lower coupling, but may be less scalable.

dvt · 8 years ago
A microservice, IMO, should just be a simple black box that takes in some input and returns some output (sometimes asynchronously). No side effects necessary. No fiddling with database flags or global state, and definitely no hitting other microservices. See @CryoLogic's post for a good example. You might object that this means you simply can't build some things -- like logging in a user -- using microservices, and you'd be right.
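To make the "black box" shape concrete, here is a hedged sketch (names invented for illustration, not from any real framework): everything the handler needs arrives in the request, everything it produces leaves in the response, and the same input always yields the same output.

```python
def handle(request: dict) -> dict:
    # Pure transformation: no database, no global state, no calls to
    # other services. The whole "world" is the request argument.
    text = request["text"]
    return {"words": len(text.split()), "chars": len(text)}

same_in = {"text": "no side effects here"}
# Calling it twice is indistinguishable from calling it once,
# which is exactly what makes it easy to debug in isolation.
first = handle(same_in)
second = handle(same_in)
```

Because the handler is referentially transparent, it can be tested, replayed, and scaled horizontally without any coordination.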
hinkley · 8 years ago

    a bunch of API endpoints written by different teams is a "microservice architecture"
Or chaos, or madness, or Bedlam.

Most people have enough trouble getting three methods in the same file to use the same argument semantics. Every service is an adventure unto itself.

We have a couple of services that use something in the vein of GraphQL, but some of the fields are calculated from other fields. If you request the derived field but not the source field, you get garbage output -- and they don't see the problem with this.

kumarvvr · 8 years ago
> It's not a micro-service if you have API dependencies

Just out of curiosity, what alternatives are there to avoid API dependencies? Is it really possible to make non-trivial apps while avoiding internal dependencies?

At some level, is it really possible to have a truly decoupled system?

corpMaverick · 8 years ago
> Just out of curiosity, what alternatives are there to avoid API dependencies?

It's very important how the boundaries are drawn. Generally, the more fragmented the microservices, the more API dependencies.

Also, look at the Bounded Context concept.

https://martinfowler.com/bliki/BoundedContext.html

And Conway's Law certainly plays a role.

http://www.melconway.com/Home/Conways_Law.html

> At some level, is it really possible to have a truly decoupled system?

You cannot avoid all the API dependencies, but you can reduce their number.

brown9-2 · 8 years ago
I’m confused. If a microservice doesn’t call the API of any other microservices, then who is sending requests to any of them?

A large purpose of service oriented architecture is encapsulation. If no other microservices can make requests to your microservice, then you really haven’t encapsulated much.

virmundi · 8 years ago
I tend to think that the job of invoking the services lies within a gateway. For example, you can have a microservice for recipes, but a web gateway that knows all of the various integrations necessary to generate a page. So the web gateway is essentially a monolith.

If and when you need to support mobile devices independently of your web UI, you can have a mobile gateway. Same idea. This gateway is optimized to know how to handle mobile traffic realities like smaller download sizes, etc.

tunesmith · 8 years ago
I'm thinking this concept improperly conflates synchronous requests with eventually-consistent asynchrony.

No, you definitely don't want microservices making synchronous requests to other microservices and depending on them that way.

But it still may be necessary for your services to depend on each other, and that's where you can allow that communication through asynchronous eventually consistent communication. Actor communications, queue submission/consumption, caching, etc.
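The asynchronous style described above can be sketched with nothing but the standard library (a plain in-process queue stands in for a real broker like Kafka, RabbitMQ, or SQS; all service names are invented):

```python
import queue

events = queue.Queue()  # stand-in for a message broker

def order_service_place_order(order_id: str) -> None:
    # No synchronous call to the billing service -- the producer
    # just publishes an event and moves on.
    events.put({"type": "order_placed", "order_id": order_id})

def billing_service_drain(invoices: list) -> None:
    # Runs on its own schedule; until it does, the system is
    # (harmlessly) inconsistent -- that's eventual consistency.
    while not events.empty():
        event = events.get()
        if event["type"] == "order_placed":
            invoices.append(event["order_id"])

invoices: list = []
order_service_place_order("o-1")
order_service_place_order("o-2")
billing_service_drain(invoices)
```

The producer never blocks on, or even knows about, the consumer, which is the decoupling the comment is pointing at.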

dreamfactored · 8 years ago
Microservices as the M in MVC?
outworlder · 8 years ago
I do have a case of monoliths-in-disguise-itis.

I just wish someone with "street cred" (or with a famous, recognizable name I could use for appeal to authority) could create a simple post saying "Hey, if you have a shared data store that all services depend on and are accessing directly, you are not doing microservices". "And you also don't have microservices if you have to update everything in one go as part of a "release"".

That way I could circulate it throughout the company and maybe get the point across. I've tried to argue unsuccessfully. After all, "we are doing K8s, so we have micro-services, each is a pod, duh!" No, you have a monolith, which happens to be running as multiple containers...

hibikir · 8 years ago
Microservices with a shared datasource are just service-oriented architecture, circa 2005. You might have a variety of middle-tier services, deployed on their own boxes, or at the very least in Java service containers, but ultimately talking to some giant Oracle DB behind it all. Microservices that share a database may not be deployed into a container running JBoss, and may use something more language-agnostic, but it's ultimately the same thing. All you have to do is quote the many criticisms of that era, when any significant DB change was either impossible or required dozens of teams to change things in unison.

The best imagery I know for this picture is a two-headed ogre. It might have multiple heads, but one digestive system: it doesn't matter which head is doing the eating, ultimately you have the same shit. I've heard semi-famous people talk about this at conferences, but there are few articles.

virmundi · 8 years ago
Martin Fowler on shared DB's: https://martinfowler.com/bliki/IntegrationDatabase.html

So yes, you now have an authority that says doing that is bad.

z3t4 · 8 years ago
yes! If you break something out into smaller parts, but they're still entangled, you have actually added complexity instead of reducing it.
franzwong · 8 years ago
> If you can't debug one (and only one) microservice at a time, then it's not really a microservice.

It depends on what you want to debug. It is like unit test vs integration test. If you are finding a bug related to integration between multiple services, you definitely need to debug on multiple services.

mathattack · 8 years ago
Do you have examples of microservices and good data warehouses working well side by side? Your point makes sense, but I keep hoping for a way to have One Data Source of Truth working side by side with the services that access it.
ianamartin · 8 years ago
A data warehouse really should be completely orthogonal to any architecture choices. Good data warehouses are fed by data engineering pipelines that don’t care if you have a single rdbms or multiple document stores or people dropping CSVs in an FTP directory.

I hate to burst your bubble, but you shouldn’t and can’t have truth working alongside the systems that access it. Data is messy and tends toward dishonesty. The only way to get clean truth for your organization is by thoughtfully applying rules, cleaning and filtering as you go. The more micro your architecture is, the more this is true, because there is no way 20 different teams are all going to have the same understanding of the business rules around what constitutes good, clean input data. Even if your company is very clear and well-documented about business and data rules, if you hand the same spec sheet to 20 different teams, you are going to get 20 variations on that spec.

The only way to get usable data that can be agreed upon by an entire company (or even business unit) is by separating your truth from your transactional data. That’s kind of the definition of a data warehouse.

If you let your transactional systems access and update data directly in your warehouse, you are in for a universe of pain.

bonesss · 8 years ago
You might want to look into Apache Kafka, with log compaction, which provides a model to accomplish exactly that while also handling message passing/data streaming.

Your data warehouse can suck facts from Kafka (with ETL on either side of the operation, or even integrated into Kafka if you so desire), and you can keep Kafka channels loaded with micro-"Truth"s (current accounts, current employees, etc). That way apps get basically real-time simplified access to the data warehouse while your data warehouse gets a client streaming story that's wicked scalable. And no coupling in between...

It's a different approach than some mainstream solutions, but IMO hits a nice goldilocks zone between application and service communication and making data warehousing in parallel realistic and digestible. YMMV, naturally :)
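For readers unfamiliar with log compaction, its semantics can be sketched in a few lines without Kafka -- a compacted "topic" keeps only the latest value per key, so replaying it from the start yields current state ("current accounts", "current employees"). This is an illustration of the idea, not Kafka's actual implementation:

```python
def compact(log):
    """Replay a key/value log, retaining only the last value per key."""
    state = {}
    for key, value in log:
        if value is None:        # Kafka-style tombstone: deletes the key
            state.pop(key, None)
        else:
            state[key] = value   # newer record supersedes older ones
    return state

log = [
    ("acct-1", {"balance": 10}),
    ("acct-2", {"balance": 5}),
    ("acct-1", {"balance": 25}),  # supersedes the first acct-1 record
    ("acct-2", None),             # tombstone: acct-2 is gone
]
current = compact(log)
```

A consumer reading the compacted log therefore sees a snapshot, while also receiving every subsequent change as a stream.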

threatofrain · 8 years ago
You might as well just say a library of static functions.
manigandham · 8 years ago
There is no such thing as "microservices", it's just services, otherwise known as a service-oriented architecture (SOA). A service is a logical grouping of functionality as it makes sense in your business domain. A small service for a large company can be bigger than the entire product of a startup; there is no standard unit of measure.

Computers also don't care how code is deployed and different services can be bounded by classes, or namespaces, or assemblies, or packages, or processes, or completely separate APIs reached over the internet on the other side of the planet.

Microservices can perhaps be defined as more of a deployment model but even then it's 99% about the team and organization structure. As companies get larger, there is a trend towards smaller teams in charge of separate functionalities that create, deploy, and operate their own service. This can be effective in managing complexity and creating efficiency, although definitely not absolutely necessary.

All that being said, outside of the major software companies, I have seen exactly 0 uses of microservices where the benefits were worth the effort, if any benefits even appeared at all.

lulmerchant · 8 years ago
"Microservice" is certainly a buzzword, but it's not just service-oriented architecture. Microservice architecture is modular design, with all the advantages of effectively infinite scale, incredibly flexible orchestration, and the resilience provided by the public cloud providers. It has its own set of challenges, and it isn't the right solution for every problem. But it can be fantastic when used properly. I've written plenty of well-performing API endpoints as microservices, and I've also done a fair bit of business process automation with them.
manigandham · 8 years ago
All software is modular, whether those modules are individual functions or separate processes called over the network. Like I said, the computer doesn't care so it's really up to the dev team and organization to divide functionality as they see fit.

There's no inherent "infinite scale" that magically shows up; proper architecture and design does that. "Microservices" again just goes back to being a rather badly defined description of a certain way of deployment. Every time I've seen these used (in smaller companies), there's no benefit over just having separate assemblies talking in the same process instead.

hueving · 8 years ago
It has nothing to do with the resilience of cloud providers. If the cloud providers were resilient, then you wouldn't need all of the wonderful scheduling tooling from stuff like kubernetes to deal with unstable individual machines. You can also certainly have a microservice architecture without running anything on the cloud at all.

Microservices make it possible to simply deal with unstable environments. Cattle, not pets.

oblio · 8 years ago
And how is that not also service-oriented architecture? That's also supposed to be modular, and one of the reasons for splitting out services is to scale. You can then place those services in a cloud...
klodolph · 8 years ago
Background… I’ve been on good and bad projects that used microservices, and good and bad monolithic projects.

The madness is going away but the microservices are staying. There are some rationales for microservices that are conspicuously missing.

1. Fault isolation. Transcoder stuck in a crash loop? Upload service using too much RAM? With microservices, you don't even really have to figure out what's going on, you can often just roll back the affected component.

2. Data isolation. Only certain, privileged components can access certain types of data. Using a separate service for handling authentication is the classic example.

3. Better scheduling. A service made of microservices is easier to schedule using bin packing. Low priority components can be deprioritized by the scheduler very easily. This is important for services with large resource footprints.

The criticisms remind me of the problems with object-oriented programming. In some sense, the transition is similar: objects are self-contained code and data with references to other objects. The 90s saw an explosion of bad OO design and cargo cult architectures. It wasn't a problem with OO design itself. Eventually people figured out how to do it well. You don't have to make everything an object any more than you have to make everything a microservice.

maga_2020 · 8 years ago
WRT the #2 data isolation argument:

It is not clear to me why data isolation, in your view, is exclusive to microservices.

I have built non-trivial RBAC+ABAC authorization platforms, using a PDP and embeddable PEPs, and did not find that they were useful only for microservices, nor did I feel that they could only be called via a 'microservice' pipeline.

In a way the authorization is a separate service, yes, but it should offer an embeddable PEP (policy enforcement point) that one can embed (link, or call out-of-process if needed) from pretty much anywhere (a monolith, or any runtime component).

Authorization decisions require very very low latency, as you are authorizing pretty much every data or function interaction.

In fact, for data interaction, authorization engines offer SQL-rewriting/filtering -- so that the actual 'enforcement' happens at the layer of database you are using, not even at the layer of the component that's accessing the data.

klodolph · 8 years ago
I think you may have misread my comment. I said "authentication" and you are talking about "authorization".

Authentication can be very easily centralized in a separate service, authorization is a completely different beast. Authentication often involves access to high-value data such as hashed passwords, authorization does not.

andyfleming · 8 years ago
Authorization and authentication are two different discussions. Protecting the data necessary for authentication is a valid rationale. That same service could provide read-only data to another service in a single response, allowing all subsequent authorization logic to be done without any additional latency. Additionally, the data necessary for authorization may not be sensitive like the data used for authentication.
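A hedged sketch of that split (all names and data here are invented for illustration): the authentication service is consulted once and returns the caller's claims, and every later authorization check is a pure local lookup with no extra network hop.

```python
def authenticate(token: str) -> dict:
    # Stand-in for the central auth service; a real one would verify a
    # signature or hit a session store holding the sensitive credentials.
    fake_sessions = {"t-123": {"user": "alice", "roles": {"editor"}}}
    return fake_sessions.get(token, {"user": None, "roles": set()})

def can_edit(claims: dict) -> bool:
    # Local authorization decision: operates only on the claims already
    # in hand, so it never calls back to the auth service.
    return "editor" in claims["roles"]

claims = authenticate("t-123")  # one round trip, up front
```

Only `authenticate` ever touches the high-value credential data; `can_edit` and its siblings work from the non-sensitive claims.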
ingas · 8 years ago
But "OO design itself" had (and still has) a major flaw: it was never clearly defined. Everybody had his/her own unique vision of "OO design".
bpicolo · 8 years ago
> Better scheduling. A service made of microservices is easier to schedule using bin packing. Low priority components can be deprioritized by the scheduler very easily. This is important for services with large resource footprints.

That's harder scheduling, not easier. With a monolith you just give it all the resources and threads will use resources as is necessary. After that it's a matter of load balancing appropriately.

klodolph · 8 years ago
> That's harder scheduling, not easier. With a monolith you just give it all the resources and threads will use resources as is necessary. After that it's a matter of load balancing appropriately.

The key was "with bin packing". If you "just give it all the resources" then you're not bin packing and you're barely scheduling. At that point, your scheduler is only capable of scheduling based on CPU and IO usage, and not (for example) based on RAM. That last one is tricky, because most runtime environments won't return memory to the operating system (e.g. free() won't munmap()), and we're currently in the middle of a RAM shortage. Your machines will almost always have a different shape from your processes, it's just something you have to live with.

A bin packing scheduler is not useful for all companies and all services. It depends on the size of your resource footprint, with very large services benefiting the most.

So, microservices give you better scheduling in the sense that you can use fewer machines to run the same set of services. However, this is not important to everyone.

This stuff is built into e.g. Kubernetes so it is actually quite easy. You just can't do it with monoliths.
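A toy illustration of the bin-packing point, using first-fit (a simple heuristic; real schedulers like Kubernetes are far more sophisticated): each service's RAM request is placed into the first machine with room, instead of every service getting its own machine.

```python
def first_fit(requests, machine_ram):
    """Return how many machines first-fit packing needs."""
    machines = []  # free RAM remaining on each machine
    for ram in requests:
        for i, free in enumerate(machines):
            if ram <= free:          # fits on an existing machine
                machines[i] -= ram
                break
        else:                        # no machine had room: add one
            machines.append(machine_ram - ram)
    return len(machines)

# Six services with mixed RAM footprints (GiB) on 8 GiB machines:
needed = first_fit([4, 3, 2, 5, 1, 1], machine_ram=8)
```

Packing those six services takes 2 machines instead of the 6 a one-service-per-machine layout would use, which is the "fewer machines for the same set of services" claim above.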

wst_ · 8 years ago
I found out recently that people too often think about microservices in the context of the broader solution, as if it were one app, just scattered around. I made that mistake in the past, too. The longer I work with microservices, the clearer it is to me that teams implementing them should forget about the big product and just focus on the service, as if it were a product itself. Assume that anyone can use it, for any purpose they like, as long as they stick to the contract, and you'll be fine.

I tend to have two layers of design now. One is the big picture, which treats services anonymously -- just black boxes that respond to input. The goal here is to build the solution the way kids build things from building blocks.

The other layer depicts services as separate beings. They treat all their clients anonymously: they have a contract to fulfill, and whoever plays by the rules can be served. They should be treated as completely separate projects, with their own backlogs, release strategies, etc.

Now, if you had a product that utilized certain data, would you allow some anonymous guy from the internet to tap into it directly? No need to answer, I guess.

Edit: typo

sooheon · 8 years ago
IOW, good, clean function composition.
mettamage · 8 years ago
While this is a simplification, I often catch myself thinking: isn't programming just creating functions, and functions of functions, and so on all the way down? And at each layer we call them something different because of the context we're in.

Input --> Stuff happens --> Output

Again, it's a simplification, although to be fair, I sometimes don't see how it isn't -- other than the feeling that I'm ignoring context too much (e.g. underlying hardware, networks, or REST API endpoints).
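The "Input --> Stuff happens --> Output" view above can be written down literally as function composition -- a toy sketch, not tied to any framework:

```python
from functools import reduce

def compose(*fns):
    """compose(f, g, h)(x) == f(g(h(x)))"""
    return reduce(lambda f, g: lambda x: f(g(x)), fns)

# Each "layer" is just a function; the pipeline is their composition.
parse = lambda s: [int(x) for x in s.split(",")]   # input layer
double = lambda xs: [2 * x for x in xs]            # "stuff happens"
total = sum                                        # output layer

pipeline = compose(total, double, parse)
result = pipeline("1,2,3")
```

Whether the layers are called functions, modules, or services, the shape of the composition is the same.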

mikekchar · 8 years ago
Just for the record, I'm one of the people who thought that putting a CORBA ORB inside GNOME was a fantastic idea. We're all young once!

Microservices are just another way for us to do premature subsystem decomposition -- because we always think we can build components with stable APIs that will be small, clean and reusable. It's even more fun to put that subsystem into a different process because, who doesn't like a little extra latency in their processing? I jest, but it's not such a silly idea. By making sure everything is in other processes and using the most inefficient IPC system available (TCP/IP), you ensure that nobody is going to do the stupid things people tend to do with threads. The multi-processing aspect appeals to people because it helps them break the problem down into isolated chunks and reason about them.

The key here, though, is to realise that you almost never need multi-processing. The design challenge is actually the same whether you isolate your processing in different processes or not. However, it's much easier to refactor your code when you haven't put road blocks in your path first. If you are doing that, then it is easy to extract the functionality into a separate process if you need to (or even a thread if you happen to work in an OS that thinks that thread processing should be more efficient than process processing).

In short, don't practice "I must protect myself from the stupid programmers" programming and instead concentrate on writing good code with your coworkers.

CryoLogic · 8 years ago
The best use I've found for microservices is highly isolated, well-defined stateless functions that make a significant (read: compute-intensive) change to some data and drop it somewhere else, e.g. image compression.

Now you can use this microservice anywhere and just change a few params in how you call it and you have avatars, thumbnails, etc.

burlesona · 8 years ago
Yep, this has been my experience as well. When micro services work well they have a simple and well-defined API that is easy to keep backwards compatible as it evolves, and it tends to be something that evolves quite slowly compared to the primary application codebase.

For the posters who said "like a library": yeah, that's exactly the idea, but consider if you have an operation that can be done by a library but that has very different scaling characteristics from the rest of your system. E.g. you have one highly compute-intensive operation while the rest of your system is IO-bound. If you can split these apart, it's easier to deal with scaling.

napsterbr · 8 years ago
So basically a library? :)
philipkglass · 8 years ago
It's like a library that you can call regardless of language ecosystem.

I joined a company that had a large, old, and mature selection of services written in PHP. If I'd tried to rewrite that mature code in a different language I probably would have wasted a lot of time for little benefit. If I'd had to write new code in PHP just to access old code as libraries that would have been a problem too. But functionality was exposed over HTTP APIs that could be used from any language, any runtime.

crimsonalucard · 8 years ago
Well you don't want to call a processor intensive task with a library. A separate service for these types of tasks is a better architecture.

Although I'm not sure whether or not a task queue architecture with separate server workers executing code passed over from a central monolithic app server is still considered a "microservice" architecture.

Neeek · 8 years ago
Closer to command line tools in systems where you can just pipe output around in to whatever you want.
rtpg · 8 years ago
well a library + deploying it into its own space so that it doesn't bring down your main app due to compute time.

The operational part of microservices can end up being pretty important in these cases

zerokernel · 8 years ago
HTTP-as-an-ABI

jbreckmckye · 8 years ago
Sounds like a candidate for serverless.
ChicagoDave · 8 years ago
I've delivered two major applications (400k users, critical internal apps) using microservices in the cloud, reducing cost and increasing continuous delivery capabilities.

There are definitely special cases, but overall, after 33 years building software, the combination of domain-driven design, PaaS, microservices, and continuous delivery is the most productive paradigm I have ever seen.

virmundi · 8 years ago
Please go on. Can you provide details?
cube2222 · 8 years ago
Not the OP, but also having worked on multi-million-user apps, off the top of my head: zero-downtime deployments; small failure domains (if you make a bad update to a service, only related functionality suffers and the rest keeps working); frequent small deployments (like several times a day); easy and quick integration testing (because you only have to test the functionality of one service, not the whole system); and easier debugging, because if one functionality isn't working, it's easy to analyze only the logs of the service responsible for it (and optionally move to others later, having identified part of the cause). Also, every microservice is a new clean slate; you can quickly learn from your mistakes and try out new approaches. (Not diametrically different, but you have the ability to iterate more.)

Edit: another one, is that if you keep the microservices actually small and well described by an API, you can easily, quickly and safely heavily refactor/rewrite old services.

corpMaverick · 8 years ago
How big or small are your micro-services ? how many do you have ? How do you draw the boundaries ?
ChicagoDave · 8 years ago
I can only refer you to Eric Evans book (https://www.amazon.com/Domain-Driven-Design-Tackling-Complex...) and other domain driven design material.

Boundaries are by domain, and yes, that's not a simple thing to define. Sometimes domains have varying interfaces, which makes building microservices more complex, especially when trying to adhere to REST/Swagger standards (something I'm not overly fond of).

But keeping things as simple as possible is really the best approach.

All microservices should be small. When I see someone say "big", I'm guessing there are a lot of ad-hoc actions... those need to be broken down into their proper domains or relegated to a query service.