Readit News
ThalesX · 3 years ago
I completed my role as a CTO for a company several years ago. I chose a boring stack with plenty of developers available: MySQL, PHP with Laravel, and Angular 2. Since then, I've been a founding engineer at some startups using the latest, cutting-edge tech stacks. I'm proud to announce that the stack I managed as CTO is still running stable and smooth with minimal intervention, and it remains easy to modify. Meanwhile, the startups with flashy stacks and nearly impossible-to-debug architectures have gone out of business.

It amazes me how some companies still adopt the "build it and they will come" approach, and when confronted with harsh reality, they double down on building (and especially increasing architectural complexity). CEOs, if you can't attract customers with an idea or don't have a vertical that people are eager to pay for, cramming 1000 half-baked features into a microservice architecture and using the latest programming language / paradigm won't save your company!

goodrubyist · 3 years ago
Did those startups that went out of business do so because of their stacks/architecture, or are you confusing correlation with causation? And there is a good reason people shy away from PHP, and it has nothing to do with trying to be "flashy." There should be a name for this kind of fallacy.
ThalesX · 3 years ago
> Were those startups that went out of business did so because of their stacks/architecture, or are you confusing correlation with causation?

As far as I can tell, it wasn't because of the stacks / architecture, but a lack of product-market fit. The fact that the response to the lack of PMF was to double down on product features and tech (instead of a business pivot) was, in my opinion, what made the investor money go poof and the startup die. I've seen this happen in too many places (as an employee or consultant) to be a coincidence.

> And, there is a good reason people shy away from PHP, and it has nothing to do with trying to be "flashy." There should be a name for this kind of fallacy.

Different stacks for different use cases. If I can't get money with a shitty PHP product offering, it's probably best to figure out why instead of attributing it to my tech stack.

Tabular-Iceberg · 3 years ago
> And, there is a good reason people shy away from PHP

I agree, I also strongly dislike Angular and MySQL in addition to PHP. But I also think a much better indicator for success is discipline. Angular, PHP and MySQL, to their credit, aren’t crazy Turing tarpits, you can write good, successful software with them.

I’ve never seen a dysfunctional project where any of the inherent flaws of those technologies even registered on the same scale as the sheer malpractice of everyone working on the product. And this is also the problem with trendy and flashy stacks: developers not learning the stacks properly or using them as intended, probably due to commercial pressure to push prototypes to production instead of rebuilding properly after learning the stack. I think that should be the takeaway from the original comment: CTOs need to either budget for building it twice, or stick to a stack everyone knows inside and out.

JackMorgan · 3 years ago
It's called survivorship bias (1).

The only people pointing to PHP as the right choice that kept them from ruin are those who used it successfully. Therefore, they were building something that fit well with PHP's benefits and didn't expose too many of its weaknesses. But no one knows for sure exactly what about that project made it a good candidate for a PHP monolith, because we don't really have any science around software engineering, just hearsay and fashion.

Having rescued several 10+ person-year PHP projects from ruin, I can say none of them were on the edge because of the language itself. However, the people building those systems made an incredible mess in a short time. I think each language attracts a certain type of developer and comes with its own community baggage. The PHP community has some highly questionable best practices. But this is true of all programming-language-specific communities. They tend to be insular and have a lot of "winning the last war" advice for working around the shortcomings of the language's previous major version that have been fixed in the current one.

(1) https://en.wikipedia.org/wiki/Survivorship_bias

jaapbadlands · 3 years ago
The point wasn't that you should use PHP, it was to not jump on stack trends. There is a name for your kind of fallacy.
iLoveOncall · 3 years ago
> there is a good reason people shy away from PHP

Please share it. And don't rely on articles that are more than a decade and 3 or 4 versions of PHP old.

_fat_santa · 3 years ago
Personally I think the indicator of success is not necessarily X stack or Y stack; it's which stack you are most comfortable with. I think one of the problems with those startups you mentioned that used flashy stacks is that they are not only fighting a battle to build the product, they are also fighting a battle to learn their way around the new stack they just adopted, and learning all of its quirks.

I don't think one stack is necessarily better than another; it's all about how well the developers know the tech stack in question.

Cthulhu_ · 3 years ago
http://boringtechnology.club/ remains relevant, and I try to apply it wherever I go with... varying results.

One issue though is that it seems more difficult to find competent developers for boring stacks, because they're always looking for the next challenge, the next thing they consider interesting or what looks good on their profile (vs what is best for their employer).

maccard · 3 years ago
> it seems more difficult to find competent developers for boring stacks, because they're always looking for the next challenge, the next thing they consider interesting or what looks good on their profile (vs what is best for their employer).

The people who job hop every 18 months and have flipped from ML to web3 to LLMs in the past 4 years are not the people you want to hire. If someone has 5-6 years of experience writing Java code, they'll be fine if your tech stack is C#, and vice versa. There might be a bit of a learning curve pulling people from Python or Rails onto typed languages (or vice versa), but in my experience that's something that can be overcome easily enough. If you're willing to give someone a month to ramp up with the expectation they'll stay longer than 2 years, that seems like a nice tradeoff to make.

Volrath89 · 3 years ago
I'd love to work on a "boring" stack and just ship features and build stuff, but... companies are like a herd, and they all ask for the same microservices technologies when interviewing, no matter what they are building.

So if I stay with a boring stack, I'd lose competitive advantage and face the potential of being "un-hirable" in the not-so-distant future.

cryptos · 3 years ago
Maybe the word "boring" scares some people away, but it is a good contrast to exciting new technology anyway. "Mature" would probably be the more appropriate term. Many times the exciting stuff is the problem domain itself. But developers tend to be more excited about new technologies than about customer needs.
Nextgrid · 3 years ago
Maybe you just aren't paying enough money for a developer to actually stay put and be happy with their role, so the only ones that apply are those who merely use the role as a playground to polish up their resume at your expense?
theK · 3 years ago
I agree with you that choosing a boring stack and focusing on down-to-earth, understandable and pragmatic architectural principles gives you a good probability of getting a maintainable product.

I would like to interject, though, that stack and architecture do not weigh equally on these probabilities. My observation has been that tech organizations fail when they lose their grip on pragmatism, regardless of the stack.

That doesn't mean you should force the latest unproven JS framework on 200 engineers, but if you have the culture for it, one can easily succeed with React+JVM, Elm+Haskell, etc.

One interesting thing is also access to devs, as each tech stack will attract different groups of the developer community, which will in turn affect your chances of success.

o_m · 3 years ago
There is a middle ground to this. Sooner or later you have to upgrade. There might be a security upgrade, or the old system doesn't play nice with other systems. So you are forced to upgrade to something less boring, even if you are using the same tech. It also gets harder to hire developers. I doubt there are many devs who want to work on Angular 2 these days.
ThalesX · 3 years ago
The middle ground is that once the tech becomes the actual problem, it might be time to investigate possibilities. Until then, customers don't care, business people don't care and the bottom line doesn't care. There are not that many companies in the world where the tech stack is their competitive advantage.

> I doubt there is many devs that want to work on Angular 2 these days.

From my experience, a product that makes money and is able to pay people at market rates will find said people. Not everyone is an Emacs-wielding Linux wizard who refuses to touch tech out of idealistic concerns.

The problem, in my experience, is more often lack of PMF than tech stack (I've seen customers give insane leeway to actually valuable products).

yread · 3 years ago
I think it's because it's fun to work at a company that is tech-first (as opposed to one just using tech as a tool for growing the business), but it leads to doing things in complicated and risky ways "because we're a tech company".
ThalesX · 3 years ago
I guess it also depends on the person. I get off on providing customer value by addressing business needs. I scratch the other itch late at night with unspeakable horrors that never see the light of day.
jossclimb · 3 years ago
That sounds more like survivorship bias. Startups typically fail over things such as product-market fit or running out of cash, not how well the stack operates.
papito · 3 years ago
One way to run out of cash is to hire 50 expensive devs to build a microservices hairball with Kafkas and all the latest toys, as opposed to having 5 devs just knock it out, faster and better, with a single Python app and a Postgres database.

We used to build real-scale systems with a stack of this kind when we had fewer features and less CPU power.

iamleppert · 3 years ago
If you have a predictable workload (i.e. we ingest 100,000 videos a month with n size) you should be looking at it from that perspective -- how much compute you are going to need, and when. Serverless works well for predictably unpredictable, infrequent workloads that don't need to have perfect output ("good enough").

The big mistake I see people make is trying to be overly clever and predict future workload needs. It never works out like that. Design for your current workload now; if your architecture can handle 10x of that, great! Each time you scale by 10x you will likely need to redesign the system anyway, so you shouldn't pay that tax unless you absolutely have to.

There are a lot of limitations to serverless; the big one I experienced was the inability to control the host environment, and the limits on the slice of memory/CPU you get, such that you must take that into consideration when designing your atomic work units. Also, paying the distributed computing tax is real and occurs at both development time and runtime -- things like job tracking and monitoring are important when you have 10,000 processes running. You start to get into the territory of problems that basically never happen on a single machine or small cluster, but become real with thousands of disparate machines / processes.

goostavos · 3 years ago
>Design for your current workload

Please be my friend.

The bulk of my job these days is sitting in design reviews trying to convince people that just making up scenarios where you need to scale to 100x isn't actually engineering. It's exhausting self-indulgence. Nothing is easier than inventing scenarios where you'll need to "scale". It's borderline impossible to get software folks to just live in the world that exists in front of their eyes.

"Scale" should go back to its original meaning: change relative to some starting point. Slap a trend line on what you know. Do some estimation for what you don't. Design accordingly.
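A trend line plus a safety factor is enough to turn that into numbers. A minimal sketch in Python (all figures invented for illustration, using a plain least-squares fit rather than any particular forecasting library):

```python
# Rough capacity-planning sketch: fit a linear trend to observed peak
# traffic and extrapolate a year out, then add a modest safety factor.

def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Peak requests/sec observed over the last 6 months (hypothetical data).
months = [0, 1, 2, 3, 4, 5]
peak_rps = [120, 135, 149, 161, 178, 190]

a, b = linear_fit(months, peak_rps)
forecast_12mo = a + b * 12      # extrapolate the trend one year out
headroom = 2.0                  # safety factor -- 2x, not a made-up 100x
design_target = forecast_12mo * headroom
print(round(design_target))
```

The point of the sketch is the shape of the reasoning, not the numbers: a trend you can defend, a headroom factor you can justify, and a design target that follows from both.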

kamray23 · 3 years ago
Believe me, our startup offering a coffee marketplace for fans of TANO*C HARDCORE music who want to contribute to Azeri poverty elimination efforts is going to need to scale from hundreds of users to hundreds of millions in a matter of months and you need to design ahead for that.
Cthulhu_ · 3 years ago
This has been a problem in a few places I've worked at that decided to build "microservices" (read: simple apps that moved the complexity to a higher architectural level, i.e. by having every service talk to every other service over a REST API or event bus), not because it solved a problem they had, but because it MIGHT solve a problem they'd LIKE to have. Cargo cult, in a nutshell.

My current employer is going all-in on serverless because it solved a problem they had with performance. The problem wasn't solved by serverless, but by moving away from an old fashioned and unscalable solution. The real problem or bottleneck - a centralized SAP database - has not been solved yet. They would have achieved the same results if they rebuilt their API in a generic Java monolith.

Here's my prediction: when the crack team of consultants that powered through building a serverless architecture leaves (because they will, one because they're consultants and two because they get bored when the problem has been solved), they won't be able to find competent developers to take over and the whole thing will be rebuilt again in something they CAN find developers for. I mean it's just NodeJS, but the architecture is distributed and much harder to manage than in a simple monolithic app.

ShroudedNight · 3 years ago
> just making up scenarios where you need to scale to 100x isn't actually engineering.

Even at "peak", Amazon's concern seemed limited to about 5x daily TPS maximums unless one had extraordinary evidence.

The counter-balance to limiting resource ballooning to 5x scale is introducing Erlang-B modelling. Depending on how many 9s you require, you may need far more spare capacity than expected.

The 100x calculations are probably doubly wrong (both too large and too small), providing negative value false confidence.
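For reference, the Erlang-B blocking probability mentioned above has a short iterative recurrence, so the modelling is cheap to do. A sketch (the load and blocking targets in the example are made up):

```python
def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/c/c system (Erlang-B formula),
    computed via the standard numerically stable recurrence
    B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)), with B(E, 0) = 1."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

def servers_needed(offered_load, target_blocking):
    """Smallest server count whose blocking probability meets the target."""
    c = 1
    while erlang_b(offered_load, c) > target_blocking:
        c += 1
    return c

# Example: for an offered load of 100 erlangs and a 0.1% blocking target,
# the required capacity comes out well above the offered load itself.
print(servers_needed(100, 0.001))
```

This is the grandparent's point in miniature: the tighter the availability target, the more the required capacity exceeds the average load, but the answer comes from a model, not from "100x just in case".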

iamleppert · 3 years ago
With few exceptions, when the motivation for scale isn't driven by a real need in the real world, it's because someone wants to experiment with distributed systems and is looking for an excuse. I think it comes from a good place -- wanting to learn something new, or add a new skill to your toolbox.

However, most of the time the product or company will never even reach that scale, so the effort is wasted. If you're not able to close the loop and actually support a running system at scale, with a real workload (not simulated or ab stats), it actually does the opposite: you don't learn what really works and what doesn't. It's easy to convince yourself you have built a system that can handle such scale based on back-of-the-envelope calculations, synthetic testing, or cargo-culting.

Most of what actually happens with scale in the real world has to do with old & boring things like appropriate choice of data structures, cache design, load balancing, locality, etc. and has nothing to do with distributed computing, serverless, etc. which are just tools that have specific characteristics that might make them good choices or not.

jpgvm · 3 years ago
Yeah, unless you are designing the replacement for a system that has already reached its scalability limits, you shouldn't be worrying too much beyond not doing very silly things architecturally.

When you are designing replacements, though, you need to have an idea of what your scalability runway is before the next re-design will be possible. Sometimes that is just 2x, often it's 10x, occasionally it's 100x, but it's all situational.

acdha · 3 years ago
> The big mistake I see people make is trying to be overly clever and predict future workload needs. It never works out like that. Design for your current workload now; if your architecture can handle 10x of that, great! Each time you scale by 10x you will likely need to redesign the system anyway, so you shouldn't pay that tax unless you absolutely have to.

I’d also add that the problem you hit first is almost certainly going to be something you aren’t expecting. I’ve seen so many cases where people spent time on the cool, sexy problems and then years dealing with scaling problems in their algorithms or other services instead.

valleyjo · 3 years ago
“For every order of magnitude change everything breaks.” An oversimplified rule of thumb - but I think it applies here.
bhauer · 3 years ago
Obviously, the article is microservice apologia, but...

> They were able to re-use most of their working code by combining it into a single long running microservice that is horizontally scaled using ECS...

No, it was no longer a microservice; it became a plain service, as in SOA. It was no longer micro. That's the whole point.

They could have saved time and money had they just made the thing a plain service to begin with. This thing was not sufficiently complicated to warrant the complexity of starting with a bunch of microservices.

The article says many hot takes have missed the point, but I think what we're seeing here is an example of the two sides talking past one another since the author hasn't appreciated the opposition's arguments at all.

klabb3 · 3 years ago
Yes, and that’s not the only example of microservice apologia:

> They state in the blog that this was quick to build, which is the point. When you are exploring how to construct something, building a prototype in a few days or weeks is a good approach.

First, it’s a huge stretch to say it’s simpler to use microservices. Anything distributed has to deal with consistency, dropped messages, serialization, propagation latency, etc. If you choose to ignore those aspects, that doesn’t make it simpler; it just leaves a wrapped gift of complexity for your future coworkers who will have to maintain it.

Secondly, this wasn’t a case of building something exploratory for future unpredictable workloads. All the requirements were already available. This tells you an important story: Amazon engineers were not able to estimate upfront how many “serverless” resources were needed, or how this “microservice mesh” would perform. This isn’t surprising, because the more complex a system is, the harder it is to predict how it’ll behave under a given workload. And it doesn’t help that the microservice preachers have been actively discouraging developers from thinking about infrastructure.

I don’t have a horse in this race. I often hold off with judgment until I have seen the defending side speak. In this case, the defense only strengthens the cause for concern.

potamic · 3 years ago
What's the difference between a microservice and an SOA service?
m_mueller · 3 years ago
One thing to note is that the transition from smaller to larger services tends to be straightforward, while cutting a service into smaller ones can be tricky. Thus IMO there is some merit to keeping them small in the beginning until you can analyse them under production workloads.

That being said, in this case I think some very simple performance / cost modelling would have shown the issues with serverless from the beginning. I do find serverless architecture useful, but not for a case like this with heavy base load. Furthermore, data locality is an important aspect to consider in anything with strong latency or throughput requirements - serverless or not.
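That kind of modelling doesn't need to be fancy. A back-of-the-envelope sketch with invented prices (both rates below are illustrative, not a real rate card, and per-request fees and free tiers are ignored):

```python
# Steady base load: pay-per-invocation vs. reserved instances.
# All prices are hypothetical; substitute your provider's actual rates.

LAMBDA_GB_SECOND = 0.0000167   # $/GB-second of function compute (illustrative)
EC2_HOURLY       = 0.10        # $/hour for one instance (illustrative)

def monthly_serverless_cost(req_per_sec, mem_gb, dur_s):
    """Cost of handling a constant request rate with per-invocation billing."""
    seconds_per_month = 30 * 24 * 3600
    requests = req_per_sec * seconds_per_month
    return requests * mem_gb * dur_s * LAMBDA_GB_SECOND

def monthly_instance_cost(instances):
    """Cost of running a fixed fleet of always-on instances."""
    return instances * EC2_HOURLY * 24 * 30

# A constant 200 req/s of 100 ms, 1 GB invocations, vs. an assumed
# fleet of 4 instances sized to handle the same load:
serverless = monthly_serverless_cost(200, 1.0, 0.1)
servers    = monthly_instance_cost(4)
print(round(serverless), round(servers))
```

With a heavy, constant base load the per-invocation bill comes out several times the reserved-capacity bill in this toy example, which is exactly the situation where the modelling pays for itself; flip the workload to rare, bursty traffic and the comparison reverses.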

mirekrusin · 3 years ago
It's the other way around: start with a monolith (because it's easier to change things; it's just a single PR addressing all places at once) and then, possibly a few years later, whatever has crystallized and has clear boundaries - with little to no changes coming in, or changes contained within the boundary - can potentially be extracted.

Just look how over-represented RoR is/was as bootstrap tech in known, successful companies.

Microservices are not a yes/no choice - it's a slider. You may find the sweet spot at e.g. a 50/50 split like GitHub does, and have more or fewer microservices while keeping the core in a monolith, or as services under a single versioned monorepo.

Microservices are good for satellite services like system integration, pre/post-processing, gateways etc.

As a rule of thumb, whatever fits in a single (tech lead + team)'s "head" can be monolithic (a single monolith, or a set of services under a single versioned monorepo managed by that team). Their job is to provide stable APIs/UIs; beyond that, it can be treated as a black box by other teams.

This is the natural way things evolve (teams are created around naturally bounded concepts), and the suggestion is: don't break it by creating mismatches; keep it in harmony. If something creates measurable problems, slowly form a team around it with a new tech lead from the existing team and let it grow on its own - it'll extract itself into a separate subsystem by itself. It doesn't have to be done overnight.

It's astounding how many people use "scale" as a chupacabra to scare everybody in the meeting room without clearly defining what they mean by it. To make a decision about changing architecture to scale, you need to precisely define what "scale" means, have metrics, have benchmarks showing current limits, and have proof that it's a problem now or in the near future - and then focus on just those actual issues, if they even exist.

vaidhy · 3 years ago
I see a lot here about architecture, service design and complexity. One of the key missing pieces is Amazon culture. Promo-driven design is a real thing at Amazon. I am sure this design got someone promoted to principal engineer or higher. If you can't build complexity into the design, how can you prove your caliber and get promoted?
klabb3 · 3 years ago
Definitely a real phenomenon. Obscure overengineering also serves another purpose: if you want to build a mini-empire, people leave you alone because they don’t have time to fully grok your design, so they can’t provide meaningful feedback. In these companies the burden of proof falls on the critic of a design, not the designer. Simply saying “this looks too complex” is treated as “oh, they couldn’t understand this sophisticated piece of engineering”. This is a cultural issue. If your peers don’t grok it quickly, you should return to the drawing board, or at least provide evidence for the extraordinary circumstances that warrant all that complexity.
ownagefool · 3 years ago
The problem with statements like "this looks too complex" is that they lack substance, get used as a stick to beat people with, and often have alternative meanings.

To combat this, you need to justify the complaint.

In the case of Lambda, I think being tied into an ecosystem of things you can't run locally is a net loss, but Amazon has done a terrific job of making a bunch of people who aren't super comfortable with compute see just running a share-nothing process as the way to bridge the gap, as if PHP never existed.

The same thing happens with IT orgs that end up owning cloud. You get Azure, regardless of whether you want such a thing.

theK · 3 years ago
The article is more about bundling together lambda functions into horizontally scalable containers. Paraphrasing the author:

> I think the prime video team's presentation should have been called "Moving from lambda to container microservices"

There is nothing particularly bad about this move. Prototype something in a FaaS, then move it to container-based microservices when you know which features you want and how they perform.

Not doing much FaaS prototyping myself but I agree with the author that FaaS beats containers in time to production (even if the difference might be negligible in some cases)

disruptiveink · 3 years ago
Exactly. The article is stating that there is nothing wrong with the move Prime Video made: it was a "serverless to containers" refactoring that the Internet somehow misrepresented as "microservices to monolith", kicking up viral "even Amazon admits that microservices are overengineering!" meme discussions. It was obvious to anyone who read the original post that this was not the case, but most people were commenting on what they thought the (incorrect) title meant.

I am disappointed that the top comment and all of the discussion are once again not related to the article at hand in the slightest, and just focused on the easy "complex is bad, guys!" dunk.

amluto · 3 years ago
> they had some overly chatty calls between AWS lambda functions and S3. They were able to re-use most of their working code by combining it into a single long running microservice that is horizontally scaled using ECS, and which is invoked via a lambda function. This is only one of many microservices that make up the Prime Video application. The problem is that they called this refactoring a microservice to monolith transition, when it’s clearly a microservice refactoring step

If I understood the post at all, I disagree.

One can spend hundreds of HN comments discussing technology stacks, monoliths, etc, and this is important: it affects maintainability, developer hours, and money spent on orchestration. For some applications, almost the entire workload is orchestration, and this discussion makes sense.

But for this workload, actual work is being done, and it can be quantified and priced. So let’s get to the basics: an uncompressed 1080p video stream is something like 3 Gbps. It costs a certain amount of CPU to decompress. It costs a certain amount of CPU to analyze.

These have price tags. You can pay for that CPU at EC2 rates or on-prem rates or Lambda rates or whatever. You can calculate this! And you pay for that bandwidth. You can pay for 3 Gbps per stream using S3: you pay S3 and you pay EC2 (or whatever) because that uses 1/10th of the practical maximum EC2 <-> S3 bandwidth per instance. Or you pay for the fraction of main memory bandwidth used if you decode and analyze on the same EC2 instance (or Lambda function or whatever). Or you pay for the fraction of cache bandwidth used if you decode partial frames and analyze without ever sending to memory.

And you’ll find that S3 is not “chatty”. It’s catastrophic. In fact, any use of microservices here is likely catastrophic if not carefully bound to the same machine.
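The 3 Gbps figure is easy to sanity-check. Assuming 8-bit RGB at 60 fps (other pixel formats and frame rates shift the number, but not the order of magnitude):

```python
# Rough arithmetic behind "an uncompressed 1080p stream is ~3 Gbps".
# Assumes 3 bytes/pixel (8-bit RGB) at 60 fps; 4:2:0 subsampling or a
# lower frame rate would roughly halve or quarter this.

width, height = 1920, 1080
bytes_per_pixel = 3
fps = 60

bits_per_second = width * height * bytes_per_pixel * 8 * fps
gbps = bits_per_second / 1e9
print(round(gbps, 2))  # just under 3 Gbps per stream
```

Multiply that per stream and per analysis pass, and it becomes clear why shipping raw frames through S3 rather than keeping decode and analysis on the same box dominates the bill.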

jiggawatts · 3 years ago
This is called mechanical sympathy and most architects and developers do not have it.

Computer architectures encompass 13 orders of magnitude of performance! That's roughly the difference between something like a trivial function call processing data in L1 cache and a remote call out to something in another region.

People often make relatively "small" mistakes of 3 or 4 orders of magnitude, which is still crazy if you think about it, but that's considered to be a minor sin relative to making the architecture diagram look pretty.
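To put rough numbers on part of that span (the figures below are assumed typical orders of magnitude, not measurements of any particular system; latency alone covers about 8 of those orders, with throughput, clock rates and batching making up the rest):

```python
import math

# Illustrative latency figures, order of magnitude only.
latency_s = {
    "L1 cache hit":            1e-9,    # ~1 ns
    "main memory read":        1e-7,    # ~100 ns
    "SSD random read":         1e-4,    # ~100 us
    "same-region round trip":  5e-4,    # ~0.5 ms
    "cross-region round trip": 1.5e-1,  # ~150 ms
}

span = latency_s["cross-region round trip"] / latency_s["L1 cache hit"]
orders = math.log10(span)
print(round(orders, 1))  # latency alone spans roughly 8 orders of magnitude
```

A "small" 3-4 order-of-magnitude mistake in this table is the difference between a memory read and a network hop per data item, which is exactly the mistake a pretty architecture diagram hides.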

jayd16 · 3 years ago
So it didn't make sense to separate some functionality. So what? No one who advocates microservices would say you can cut a service along any arbitrary line and expect good results.
AndrewKemendo · 3 years ago
Look, the point is simple: AWS marketing says you can do everything, all the time, with microservices and their 'DevOps' infrastructure that "abstracts away" all the complex stuff (you know, engineering) that you cannot actually automate effectively while still having a long-running, robust system.

So it's humorous to see an Amazon team doing things correctly and holistically, in contrast to the sales and marketing bullshit that AWS and cloud evangelists spew.

Us nerds have a great way of missing the point and bikeshedding on the technicals (which is also great sometimes!)

namaria · 3 years ago
It's amazing how cloud companies managed to convince so many people in software engineering that software engineering wasn't a core competence in producing software.
atulatul · 3 years ago
I think the difference was simple vs easy. They made a few of the things very easy.
AndrewKemendo · 3 years ago
Yeah, cause engineers aren’t typically the ones doing the buying.
scarface74 · 3 years ago
> Look, the point is simple, AWS marketing says you can do everything all the time with microservices and their 'DevOps' infrastructure to "abstract away" all the complex you know - engineering - that you cannot automate effectively and have a long running robust system.

As someone who knows a little bit about what AWS recommends to customers first hand, I can absolutely guarantee you that AWS does not recommend serverless to everyone.

The vast majority of projects that consultants do, by revenue, at any cloud consulting company (including AWS’s own professional services arm) are regular old VMs.

wiether · 3 years ago
Thank you.

It seems that most of the takes against this article are coming from people that have opinions about AWS based on nothing.

Sure, AWS is selling Lambda/Serverless services. Sure, they are making articles/videos on microservices architecture. But they are also selling EC2 instances. They are also making articles/videos on monolithic architecture.

I have never come across any content from AWS claiming that some service/architecture/mindset is the best and every company should use it.

Anyone who went through the Well-Architected Framework would be completely aware of that.

But I guess it's easier to mock some folks over the Internet than to take the time to gain some knowledge and reflect.

mlhpdx · 3 years ago
I am so baffled by this whole discussion. Isn’t serverless (aka event-driven distributed systems made up of small components, with no reserved capacity) just one of many options? I write code for 8-bit controllers that I’d never run on Lambda (even though I’m a Lambda/Step Functions freak). I need 90s server software that can only run in a VM.

It’s a complicated world and your favorite tool/model just isn’t going to cut it everywhere. Move on maybe?

zwischenzug · 3 years ago
I still don't really get it. There are so many simple, easy to build, cheap to run three tier frameworks out there so why bother with all the hassle of porting at all? If your system gets super busy then you can repurpose the code with higher performance and well tuned DBs later. I spent 15 years doing that for some of the busiest systems in the world, but those started on a single cpu.
gadflyinyoureye · 3 years ago
At my client, we go serverless-first in a bastardized state: NestJS. The benefit is quick development at a low cost. We have a series of calculations; each reads from a common S3 bucket and writes back to it. This allows the teams to do their work with independent deployments.

Each one costs about $5 a month. That’s rather hard to get in the AWS EC2 world. They are also easier to manage: we don’t have to manage keys or certs, and there is no redeploy every 90 days like there is with EC2.

However, these are low-access, slowish apps, maybe 1,000 calls a day, and they can take a minute to do their thing. Seems like a nice match for lambdas.

The benefit of NestJS is that if we ever needed to move out of Lambda, we’d have a traditional Express app.

zwischenzug · 3 years ago
Useful insight, but whether it's $5 or $50 a month, compared to dev costs that's a rounding error pretty much everywhere.
jeremycarter · 3 years ago
If you write your NestJS application with nice boundaries (SOA or DDD), you can split it up as it scales. It's the same approach a lot of .NET guys take: write it modularly and scale out just the bits that need the throughput.
gregoriol · 3 years ago
The point is: managers and keywords