abiro · 6 years ago
PSA: porting an existing application one-to-one to serverless almost never goes as expected. Couple of points that stand out from the article:

1. Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.

2. Use managed services whenever possible. You should never handle a login event in Lambda, there is Cognito for that.

3. Think in events instead of REST actions. Think about which events have to hit your API, and what can be directly processed by managed services or handled by you at the edge. E.g. never upload an image through a Lambda function; instead upload it directly to S3 via a signed URL and then have S3 emit a change event to trigger downstream processing.

4. Use GraphQL to pool API requests from the front end.

5. Websockets are cheaper for high throughput APIs.

6. Make extensive use of caching. A request that can be served from cache should never hit Lambda.

7. Always factor in labor savings, especially devops.
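Point 3, sketched in Python: after a direct-to-S3 upload via a signed URL, S3 emits an ObjectCreated notification and a small function handles only the downstream step. A minimal sketch assuming the standard S3 notification event shape; the actual processing is a placeholder:

```python
def handler(event, context):
    """Process S3 ObjectCreated notifications emitted after a direct upload."""
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Downstream work (thumbnailing, virus scanning, metadata extraction)
        # would happen here; we just record what arrived.
        processed.append(f"s3://{bucket}/{key}")
    return processed
```

The image bytes never pass through the function, so it stays fast and cheap regardless of file size.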

foxtr0t · 6 years ago
So, to summarize, you should:

1. not use the programming language that works best for your problem, but the programming language that works best with your orchestration system

2. lock yourself into managed services wherever possible

3. choose your api design style based on your orchestration system instead of your application.

4. Use a specific frontend rpc library because why not.

...

I've hacked a few lambdas together but never dug deep, so I have very little experience, but these points seem somewhat ridiculous.

Maybe I'm behind the times but I always thought these sort of decisions should be made based on your use case.

EDIT: line breaks.

iamsb · 6 years ago
The way I read the above comment is: if you can live with the following limitations, then use Lambda/serverless and it works great. I have got to a point where for any internal systems used by internal users, Lambda is my de facto standard. Very low cost of operation and speed to market. For anything that is external facing I prefer not to use Lambda, especially if growth of usage is unpredictable.
Bombthecat · 6 years ago
You are not wrong. But it is all about saving money on labor. The rest are just the constraints of the system you use (aka requirements). It's like complaining about the need to use POSIX on Linux.
VvR-Ox · 6 years ago
That's so true and I'm happy people begin to realize that.

The worst for me is the vendor lock in, directly followed by the costs.

dlanouette · 6 years ago
I think the comment is exactly opposite of what you are suggesting.

The comment is saying that Lambda has limitations and works best when considering those limitations. If those limitations don't fit your use case, you shouldn't be using Lambdas - or, at least, don't expect it to be an optimal solution.

abiro · 6 years ago
Think about serverless as framework-as-a-service. It has a learning curve, but if you buy in, it is an amazing productivity boost.

(If Reddit’s video hosting being built and operated on a serverless stack by a single engineer won’t convince you, I don’t know what will.)

hallman76 · 6 years ago
> I've hacked a few lambdas together but never dug deep

Then why comment? You clearly don't understand the use case that Lambda fits.

I've had jobs that took 18 hours to run on single machine finish in 12 minutes on Lambda. I could run that process 4 times a month and still stay within AWS's free tier limits.

For the right workloads it is 100% worth realigning your code to fit the stack.
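The kind of fan-out behind that speedup looks roughly like this: split the job into chunks and invoke one worker Lambda per chunk asynchronously. A sketch, assuming boto3 is available and a hypothetical worker function named "batch-worker":

```python
import json

def chunk(items, size):
    """Split the job into fixed-size chunks, one per Lambda invocation."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def fan_out(items, size=100):
    import boto3  # assumed available where this runs; not needed for chunking
    client = boto3.client("lambda")
    for part in chunk(items, size):
        client.invoke(
            FunctionName="batch-worker",   # hypothetical worker function name
            InvocationType="Event",        # async: fire all chunks in parallel
            Payload=json.dumps({"items": part}),
        )
```

With InvocationType="Event" the caller doesn't wait, so hundreds of workers can run concurrently instead of one machine grinding through the whole job.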

frenchman99 · 6 years ago
I can't support point 7. enough. People often forget about the cost of labor.

We migrated our company webapp to Heroku last year. We pay about 8 times what a dedicated server would cost, even though a dedicated server would do the job just fine. And often times, people tell me "Heroku is so expensive, why don't you do it yourself? Why pay twice the price of AWS for a VM?"

But the Heroku servers are auto-patched, I get HA without any extra work, the firewall is setup automatically, I can scale up or down as needed for load testing, I get some metrics out of the box, easy access to addons with a single bill, I can upgrade my app language version as needed, I can combine multiple buildpacks to take care of all the components in our stack, build artifacts are cached the right way, it integrates with our CI tool, etc, etc.

If I had to do all of this by hand, I would spend hours, which would cost my company way more. In fact, I'd probably need to setup a Kubernetes cluster if I wanted similar flexibility. By that point, I'd probably be working full-time on devops.

rogem002 · 6 years ago
Once you factor in the learning time for AWS per developer, the cost is even higher.

At my previous company we had a project with an AWS deploy process that only two developers could confidently use. Teaching a new developer and keeping them up to date was a big time sink.

For comparison, we had a Rails app set up on Heroku that junior devs were happily deploying to on day one (plus we had Review Apps for each PR!)

dlanouette · 6 years ago
I'm curious: did you look into Google's App Engine? It seems to have a lot of the benefits that Heroku offers, but is much cheaper.

Granted, it does impose some limitations and therefore isn't right for all apps. But it does seem like it would work for a large percentage of web apps and REST APIs.

aflag · 6 years ago
The cost you're talking about is really hard to measure. Were they able to reduce team sizes and eliminate positions after the change? Did payroll go down at all?
guzik · 6 years ago
Same for us.

- Corrupted build? Rollback button to the rescue.

- SSL? They got you.

- Adding new apps in less than 1m?

and so on ...

zaro · 6 years ago
I also feel the same about point 7.

The big difference: we are migrating away from Heroku to Kubernetes for the same reason.

cameroncf · 6 years ago
> PSA: porting an existing application one-to-one to serverless almost never goes as expected.

Came here to post this and agree 100%. Moving to Serverless requires evaluating the entire stack, including the server side language choice, and how the client handles API calls.

Often a move to serverless is better accomplished gradually, in stages, than with the quick "lift and shift" that AWS likes to talk about so much. Sometimes you can simply plop your existing app down in Lambdas and it runs just fine, but this is the exception, not the rule.

staticassertion · 6 years ago
> The only valid options for performance sensitive functions are JS, Python and Go.

With custom runtimes that's not the case anymore. I write my lambdas in Rust.

Can't stress (7) enough, would also add 'morale' savings. It can be really stressful for developers to deal with gratuitous ops work.

joelthelion · 6 years ago
> Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.

Shouldn't this not be a problem if you're doing 10 million requests a day? If you have enough requests, your lambdas should stay hot most if not all the time.

sweeneyrod · 6 years ago
If the lambdas are always hot, what is the advantage over having a server? I thought the big selling point of serverless was not having to pay for long stretches of time where you don't have any requests.
abiro · 6 years ago
If you have 10m requests uniformly distributed, then yes it’s less of a problem, but that’s unlikely. (Even then lambda containers will be recycled multiple times throughout the day, so there is still a small penalty.)
CosmicShadow · 6 years ago
I built an Azure Function that runs for free and just pings my .NET MVC pages periodically so they are always hot on my cheap hosting.
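The keep-warm trick amounts to a scheduled ping. A minimal Python sketch of the pinger side; the schedule itself would be cron, a CloudWatch rule, or an Azure timer trigger, and the URLs are whatever pages you need kept hot:

```python
from urllib.request import urlopen

def keep_warm(urls, timeout=5):
    """Hit each page so the host never spins the app down; return statuses."""
    statuses = {}
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                statuses[url] = resp.status
        except OSError as exc:
            # Record the failure rather than crashing the whole warm-up run.
            statuses[url] = repr(exc)
    return statuses
```

Note the trade-off discussed elsewhere in the thread: once you do this, you've given up true scale-to-zero.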
mnutt · 6 years ago
Having used serverless a bit, I’ve run into many of the same issues and generally agree with the advice but depending on your use case it may not be worth contorting your architecture to fit serverless. Instead, I’d look at it as a set of criteria for if serverless would fit your use case.

At this point in time the only places I’d use lambda are for low-volume services with few dependencies where I don’t particularly care about p99 latency, or DevOps-related internal triggered jobs.

missosoup · 6 years ago
That's more steps than 'just use containers and ignore the serverless meme'.
abiro · 6 years ago
I don’t think anybody advocates for rewriting all existing projects as serverless. But if you’re starting a startup, going all in on serverless will let you deliver better products faster. If Paul Graham’s Beating the Averages were written today, the secret weapon would be serverless, not Lisp.
k__ · 6 years ago
The community is to blame for this.

If "serverless heros" are running around promoting Lambda, newcomers will use it without thinking twice...

xwdv · 6 years ago
In tech you either die a hero or live long enough to become the villain.
oliora · 6 years ago
You forget C++. It’s a great choice for Lambda due to startup times. Python startup time is actually terrible and should be avoided if the call rate is really high. Also, a Lambda instance is reusable: after spinning up, it will handle multiple requests (if they arrive often enough).
_ivvf · 6 years ago
I measured startup of the runtimes a long time ago, and back in the days of Node.js 0.10.x at least, Python 2's startup time was twice as fast as Node.js's, and Java's wasn't much worse than Node.js's. I don't know how .NET fares compared to Java, but I imagine it's about the same.

Furthermore, people like to compare runtime startup times, but this tells a very small portion of the story. For most applications, the dominant startup cost isn't the startup of the runtime itself, but the cost of loading the app code into the runtime. Your Node.js runtime has to load, parse, compile, and execute every single line of code used in your app, for instance, including all third-party dependencies.

Compare, for instance, the startup cost of a "hello world" Node.js function with one that includes the AWS SDK. At least as of six years ago, the Node.js AWS SDK wasn't optimized at all for startup, and it caused a huge (10x?) spike in startup time because it loaded the entire library.

I would argue that the only languages that are a really good fit for Lambda are ones that compile to native code, like GoLang, Rust, and C/C++. The cost to load code for these applications is a single mmap() call by the OS per binary and shared library, followed by the time to actually load the bytes from disk. It doesn't get much faster than that.

Once you've switched to native code, your next problem is that Lambda has to download your code zip file as part of startup. I don't know how good Lambda has gotten at speeding that part up.
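One way to see the code-loading cost described here for yourself: time a cold import. A small Python sketch (the same idea applies to requiring the AWS SDK in Node.js, where the spike is much larger):

```python
import importlib
import sys
import time

def time_cold_import(module_name):
    """Time a cold import; on Lambda this cost is paid on every cold start."""
    sys.modules.pop(module_name, None)   # evict so the import is genuinely cold
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start
```

Run it against your heaviest dependency rather than the bare runtime; that usually dominates the cold-start number.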

peterwwillis · 6 years ago
On "7) Always factor in labor savings, especially devops":

DevOps is not a synonym for "ops" or "sysadmin". It's not a position. DevOps is like Agile or Lean: it's a general method with lots of different pieces you use to improve the process of developing and supporting products among the different members of multiple teams. DevOps helps you save money, not the reverse.

You don't even need an "ops person" to do DevOps.

atroche · 6 years ago
The Rust runtime has a fast start time as well, FWIW.
zeawee · 6 years ago
Because Rust doesn't have runtime initialization.
chrshawkes · 6 years ago
GraphQL makes caching a real bitch.
ClumsyPilot · 6 years ago
It might do, but for some APIs caching doesn't even make sense.
fogetti · 6 years ago
I haven't thought about step 3 before, but makes sense. Maybe I should show this to the guy who used Google Cloud Functions to upload images in our previous project :)

I guess the reasoning would be that this way the actual time spent in serverless code is shorter and by proxy the service becomes cheaper?

abiro · 6 years ago
Saves time and money by writing and executing less code + S3 is optimized for this task, so it will always perform better than an ad hoc serverless function.
jroper · 6 years ago
Number 3, thinking in events instead of REST actions, can't be stressed enough. Of course, some things must be actions (another word for that is commands), and in those situations you need something that will turn a command into an event. This is one of the features of CloudState (https://cloudstate.io), which offers serverless event sourcing: you handle commands and output events that can be further processed downstream by other functions.
xchaotic · 6 years ago
As general rules these sound great at first sight, but they don't really address the main culprit from TFA: like for like, API Gateway costs a lot more to process the same n requests.
abiro · 6 years ago
Well, given the feature set of API Gateway compared to a Load Balancer, I think it should be expected that it costs more. But that's also beside the point, which is to use managed services to do the heavy lifting. E.g. if you need a pub/sub service for IoT, that shouldn't go through API Gateway and Lambda; there is a specific AWS service for that.
matchagaucho · 6 years ago
RE: #3. This still requires a Lambda to pre-sign the URL. No?

Granted, this approach is much lighter than uploading the image through a Lambda directly.

abiro · 6 years ago
If you use Cognito for identity management, then there isn't even a need for that. You can just assign users the appropriate IAM role and upload directly from the front end.
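For reference, the presigning step itself is tiny; whether it runs in a Lambda or is replaced by Cognito-scoped credentials, it looks roughly like this boto3 sketch (bucket, key, and expiry are illustrative):

```python
def upload_params(bucket, key, content_type=None):
    """Build the parameters that get baked into the signed URL."""
    params = {"Bucket": bucket, "Key": key}
    if content_type:
        params["ContentType"] = content_type
    return params

def make_upload_url(bucket, key, expires_in=300, content_type=None):
    """Return a short-lived URL the browser can PUT the file to directly."""
    import boto3  # assumed available; signing is a local operation
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "put_object",
        Params=upload_params(bucket, key, content_type),
        ExpiresIn=expires_in,
    )
```

The file bytes then go browser-to-S3; the backend only ever handles the small signing request.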
wodenokoto · 6 years ago
> Below is a report for one request, you can see we're using 3.50ms of compute time and being billed for 100ms, which seems like a big waste.

Doesn't sound like your point number 1 is valid at all, quite the opposite.

abacate · 6 years ago
> The only valid options for performance sensitive functions are JS, Python and Go.

I can think of a number of other languages that would probably easily surpass these, especially on latency.

dalanmiller · 6 years ago
> 4. Use GraphQL to pool API requests from the front end.

What does this look like in practice? Doesn't this increase response time for the initial requester?

abiro · 6 years ago
These are usually the read N items from a database type of queries that GraphQL makes trivial to batch together. Will barely increase response time, but will provide a better experience for users on bad connections.
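Concretely, pooling here means the front end ships one GraphQL document instead of several REST calls. A sketch with made-up schema and field names:

```python
# One query document covering what would otherwise be three REST endpoints.
POOLED_QUERY = """
query Dashboard($userId: ID!) {
  user(id: $userId) { name avatarUrl }
  notifications(userId: $userId, first: 10) { title }
  feed(userId: $userId, first: 20) { id }
}
"""

def graphql_payload(user_id):
    """Build the single POST body that replaces several REST round-trips."""
    return {"query": POOLED_QUERY, "variables": {"userId": user_id}}
```

One round-trip instead of three is exactly the win on bad connections the parent describes, at the cost of one slightly larger response.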


microcolonel · 6 years ago
> 1. Don’t use .NET, it has terrible startup time. Lambda is all about zero-cost horizontal scaling, but that doesn’t work if your runtime takes 100 ms+ to initialize. The only valid options for performance sensitive functions are JS, Python and Go.

I always sorta assumed that Amazon pre-initialized runtimes and processes and started the lambdas from essentially a core dump. Is there some reason they don't do this, aside from laziness and a desire to bill you for the time spent starting a JVM or CLR? Does anyone else do this?

claudiusd · 6 years ago
I did the same experiment as OP and ran into the same issues, but eventually realized that I was "doing serverless" wrong.

"Serverless" is not a replacement for cloud VMs/containers. Migrating your Rails/Express/Flask/.Net/whatever stack over to Lambda/API Gateway is not going to improve performance or costs.

You really have to architect your app from the ground-up for serverless by designing single-responsibility microservices that run in separate lambdas, building a heavy javascript front-end in your favorite framework (React/Ember/Amber/etc), and taking advantage of every service you can (Cognito, AppSync, S3, Cloudfront, API Gateway, etc) to eliminate the need for a web framework.

I have been experimenting with this approach lately and have been having some success with it, deploying relatively complex, reliable, scalable web services that I can support as a one-man show.

0xbadcafebee · 6 years ago
> You really have to architect your app from the ground-up for serverless by designing single-responsibility microservices that run in separate lambdas, building a heavy javascript front-end in your favorite framework (React/Ember/Amber/etc), and taking advantage of every service you can (Cognito, AppSync, S3, Cloudfront, API Gateway, etc) to eliminate the need for a web framework.

At least I don't have to learn that complex "systems admin" stuff.

ClumsyPilot · 6 years ago
I am, similarly, reading this list and wondering...
ehsankia · 6 years ago
Exactly, it's like telling someone running a restaurant that buying their bottled water from a convenience store is more expensive than buying it in bulk from Costco.

It's entirely missing the point. At the end of the day, you have to look at your specific usage pattern and pick the best option for you. Obviously, as with any other technology, anyone who forces a specific option in every possible situation is most likely wrong.

teacpde · 6 years ago
To eliminate the need for a web framework? I don’t understand the rationale; if I can get everything mentioned done with a good web framework, I will be more than happy to do that.
gchamonlive · 6 years ago
With your own server and Web framework, you do all the work in provisioning the machine, configuring services, installing dependencies, building deployment and integration pipelines and, worst of all, maintaining all that when updates are released / something breaks. It is also harder to scale.

A serverless solution that eliminates the web framework (and thus the stack it runs on) does most of that for you, at the expense of extra cost or infrastructure deployment complexity; but once it is done, scaling and maintenance are easier.

Illniyar · 6 years ago
Firebase now makes most of these painless. They've done a really good job. If you're starting from the ground up and can stomach using a Google product, Firebase is the easiest to work with by far.
redisman · 6 years ago
Do you have more to read about that? Sounds interesting, but I'm now confused as to what Firebase is/does.
rdsubhas · 6 years ago
This is how a conversation went with a colleague who was enthusiastic about serverless, and whose company was mostly on a Java/JVM stack:

Colleague: Lambda is awesome, we can scale down to zero and lower costs! We love it! We use cool tech!!

Me: What did you do about JVM warm up?

Colleague: We solved it by having a keepalive daemon which pings the service to keep it always warmed up.

... Me thinking: Uhh, but what about scale down to zero?

Me: What do you do about pricing when your service grows?

Colleague: We use it only for small services.

... Me thinking: Uhh, start small and STAY SMALL?

Me: How was performance?

Colleague: It was on average 100ms slower than a regular service, but it was OK since it was a small service anyway.

... Me thinking: Uhh, but what about services that _depend_ on this small service, which now have an additional 100ms to contend with?

Overall, I think his answers were self-explanatory. Lambda seems to be a fast prototyping tool. When your service grows, it's time to think about how to get out.

johnfactorial · 6 years ago
> Lambda seems to be a fast prototyping tool.

My thoughts EXACTLY. The great power in "serverless" architecture (i.e. AWS Lambda + AWS RDS + AWS Gateway) is how it empowers prototyping a new product.

Counterintuitively, it's future-proofing. You should know in advance that it's too slow & expensive. But you get to spin up a prototype backend very rapidly, pay only for what you're using while prototyping, and Lambda's inherent limitations force devs to build modularly, start simple, & stay focused on the product's main goals.

When the time comes to need scale, either at launch or even later when user count goes up, your "serverless" backend can be relatively easily replaced with servers. Then, just like that, in response to scale need your product's costs and response time go down instead of up.

It's a nice way to build a software product: rapid prototyping plus easy future cost decreases built-in.

0xDEFC0DE · 6 years ago
I don't understand the prototyping angle.

Can't you just do something on your local machine?

There's stuff like dotnet new for .NET, where I can just run that and have a skeleton project for a backend and start writing code immediately. I assume there are template generators for other languages as well.

k__ · 6 years ago
The problem is mainly that people think "Cool I can build everything with FaaS and it will be cheaper and scale well"

Which is wrong and can be attributed to bad serverless evangelism in the past.

Serverless is building your system with managed services and only dropping in a FaaS here and there when you need some special custom behavior.

See how far you can come with AppSync, Firestore or FaunaDB. Throw in Auth0 or Cognito, and then, when you hit a wall, make it work with FaaS.

jjeaff · 6 years ago
For me, the absolute best use case for serverless is really infrequent, small tasks.

For example, I have a few data scrapers written in JavaScript, but my regular stack is LAMP.

So I don't have any need to run a node server 24x7 just for those once a day tasks.

But I have even found myself not needing serverless for that, because everything is running in a Kubernetes cluster. So I can just set up a cron to run them, which launches the needed Node containers.

So I guess in effect, I am just using a sort of self-managed "serverless".

noobiemcfoob · 6 years ago
It's the same argument for Python over C development. Prototype in python and migrate portions to C as performance is needed. You'll often find that large portions of your codebase will never need to migrate out of the "prototype" stage.
CapmCrackaWaka · 6 years ago
> ... Me thinking: Uhh, start small and STAY SMALL?

This does happen. We have a serverless API forwarding service on Azure that was designed to simply format and forward calls from a vendor. We know the volume, there will not be any surprises, and it is immensely profitable over the old solution to the tune of thousands of dollars per day. Our use case is probably pretty uncommon, however.

k__ · 6 years ago
It's a telling sign that people only mean FaaS when they say "serverless"; they didn't understand serverless at all. And I see this as a failure on the serverless proponents' side.

The serverless proponents are selling their paradigm as simple solution, which leads many people to believe simple means FaaS.

Throwing Lambda at all backend problems is a setup for failure. Often, transfer and simple transformation of data can be done serverlessly without a Lambda, which cuts costs AND leads to better performance.

tybit · 6 years ago
The keep alive is still practically scale down to zero, you’re paying for 100ms every 5 minutes.

I’d be curious about how much memory/cpu was allocated in your experience and the OPs, there’s nothing magical about lambda to make it slow.

staticassertion · 6 years ago
Keeping all of your feedback to yourself sounds like a great way to maintain a bias.
ec109685 · 6 years ago
A keep-alive daemon doesn’t work during scale-up. If you go from 1 simultaneous request to 3, it will have to slowly spin up those 2 lambdas in response to a user request.
acdha · 6 years ago
It’s useful for services which fit its design: using an extremely heavy environment like Java will rarely be a good fit but for even Python/Node it works much better, without even considering things like Go/Rust.
tracer4201 · 6 years ago
Your “thoughts” are applying his solution to the wrong problem. “Start small and stay small” I’m not sure what that even means. Are you saying every service has to grow to some size or required amount of compute? LOL

The 100ms extra time is nothing. I mean - are you trying to solve at Google or Amazon scale?

I run simple Lambdas that read from some SNS topics, apply some transforms, add metadata to the message, and route it somewhere else. I get bursts of traffic at specific peak times. That’s the use case and it works well. The annoying part is CloudFormation templates, but that’s another topic.

modoc · 6 years ago
100ms of unneeded latency IS NOT nothing (except for some limited use cases). Anything user facing shouldn't be slower than it needs to be.
pavlov · 6 years ago
Something about the Lambda/FaaS/serverless hype reminds me of early 2000s enterprise Java, when everyone was trying to replace code with XML configuration.

It's obviously at a different point in the stack, but the promise is similar: "Just say what you want and it magically happens" — and indeed that's the case for FaaS when your problem is small enough. But XML-configured frameworks also offered that convenience for their "happy problem space". You could get things done quickly as long as you were within the guard rails, but as soon as you stepped outside, the difficulty exploded.

I'm not convinced AWS Lambda is all that different from a web framework, it's just on a higher level. Instead of threads responding to requests, you have these opaque execution instances that hopefully will be spun up in time. Instead of XML files, you have a dense forest of AWS-specific APIs that hold your configuration. That's how it looks from the outside anyway.

avip · 6 years ago
This is indeed a Pavlovic response :)

The promise of serverless is pretty simple, and pretty useful for the right use case - be it unpredictable load, or just very low load, or very frequent deployments, or pricing segmentation, or you don't have anyone as DevOps, and so on and so forth.

I don't recall anyone saying there's any magic involved. The premise is exactly the same as cloud compute: you (possibly, depends on ABC) don't need to provision and babysit a server to perform some action in response to an HTTP request (or, in the case of AWS Lambda, other triggers as well).

holografix · 6 years ago
Disclaimer: I work for Salesforce, Heroku’s parent organisation.

I have had so many conversations with devops managers and individual-contributor developers; the Lambda hype reached frothing levels at one point.

Contradictory requirements (scale down to zero, scale up infinitely with no cold starts, be cheap, no vendor lock-in) all seemed to be solved at the same time by Lambda.

Testability? Framework adoption? Stability? Industry skills? Proven architectures? These are some of the other question marks I never heard a good answer for.

scarface74 · 6 years ago
You’re always locked into your infrastructure. People don’t willy nilly change their infrastructure once they reach a certain size any more than companies get rid of their six figure Oracle infrastructure just because a bushy tailed developer used the “repository pattern” and avoided using Oracle specific syntax.

And the “lock-in” in lambda is over exaggerated. If you’re using lambda to respond to AWS events, you’re already locked in. If you are using it for APIs, just use one of the officially supported packages that let you add a few lines of code and deploy your standard C#/Web API, Javascript/Node Express, Python/Flask/Django... app as a lambda.

> Testability? Framework adoption? Stability? Industry Skills? Proven Architectures...? Are some of the other question marks I never heard a good answer for.

If you haven’t heard the “right answers” for those questions you haven’t been listening to the right people.

Lambdas are just as easy to test as your standard Controller action in your framework of choice.

x86_64Ubuntu · 6 years ago
Do you have any resources on testing a Lambda? When I was fooling around with it, the only thing I ran into was the AWS SAM CLI or whatever. The thing looked like an absolute nightmare to get up and running.
wolco · 6 years ago
Heroku is owned by Salesforce? You learn something every day.
anderspitman · 6 years ago
Yeah I actually didn't know that either. Interesting
sideral · 6 years ago
Salesforce has stock in many companies through Salesforce Ventures, including Optimizely, Twilio, Box, Dropbox and Stripe.


scoot · 6 years ago
/disclaimer/disclosure/

When you disclose something, it's a disclosure.

Angostura · 6 years ago
Unless it's a disclaimer, because the poster knows there is likely to be bias in their post, is aware of it, and doesn't want to fix it.
nailer · 6 years ago
> Testability?

Serverless is specifically a stateless paradigm, making testing easier than persistent paradigms.

> Framework adoption?

Generally we use our own frameworks - I do wish people knew there was more than serverless.com. AWS throw up https://arc.codes at re:Invent, which is what I'm using and I generally like it.

> Stability? Industry Skills? Proven Architectures...?

These are all excellent questions. GAE, the original serverless platform, was around 2010 (edit: looks like 2008 https://en.wikipedia.org/wiki/Serverless_computing). Serverless isn't much younger than say, node.js and Rust are. There are patterns (like sharding longer jobs, backgrounding and assuming async operations, keeping lambdas warm without being charged etc) that need more attention. Come ask me to speak at your conference!

LunaSea · 6 years ago
> Serverless is specifically a stateless paradigm, making testing easier than persistent paradigms.

No, because Lambdas are proprietary, which means you can't run them in CI or locally. Also, a Lambda becomes stateful if it pulls data from a database, S3, or anywhere else on AWS, which it almost always does.

> Serverless isn't much younger than say, node.js and Rust are.

AWS Lambda, which I consider to be the first widely used Lambda service, was released in April 2015, which is 6 years after the release of Node.js. Also, Node.js is way more popular and mature than Lambda solutions.

Overall Lambdas are only useful for small, infrequent tasks like calling a remote procedure every day.

Otherwise, things like scheduling, logging, resource usage, volume and cost make Lambdas a bad choice compared to traditional VPSs / EC2.

dullgiulio · 6 years ago
Just to be clear: lambdas are not stateless if you, for example, connect to a DB or use any other external service.

State could be somewhere else, but if you are not also "pure", you don't have any improvement over a normal service.

neuronic · 6 years ago
It's concerning how typical the hype machine is in IT. I believe Serverless has its place and value. So does Kubernetes or many other products that are often discussed on HN.

But let's be clear, we are talking about commercial products and there is a capital interest in selling these services to all of us devs and engineers.

So while use cases exists and benefits wait to be reaped, as a consultant I strongly feel that we should be MUCH more insistent in pointing out when a product does not make sense instead of jumping onto the hype train.

I am basically surrounded by "LETS TRANSFORM EVERYTHING TO KUBERNETES THIS WEEK!" exclamations, conferences are basically "DID YOU ALREADY HEAR ABOUT KUBERNETES?" and so on ...

It reminds me of Ruby on Rails, a mature and well-developed framework used by global tech firms (correct me if wrong: Airbnb, ~Stack Overflow~, Github) to handle parts of their backend in 2019. But for half a decade now even tiny companies have been screaming around about FancyHTTPTechThisYear (tm) because scale while reporting 1/500th of the traffic of some famous Rails users.

This is not engineering with objectives in mind, it's more akin to the gaming community yelling for a new console.

Hokusai · 6 years ago
> It's concerning how typical the hype machine is in IT.

Software engineering is still a young discipline. Probably half of the people have less than 4 years of experience. They learned from other employees that also had less than 4 years of experience. And that can be repeated for 10 generations of developers.

We are learning, and re-learning the same lessons again and again. Everything seems new and better and then we are surprised when it's not a silver bullet.

Software and cloud vendors do not help. They are the first ones to hype their new language, framework or platform. Technology lock-in is a goal for tech companies offering hardware or services.

> This is not engineering with objectives in mind

Curriculum driven development is a thing. And I cannot blame developers for it when the industry hires you and sets your salary based on it.

We need to be more mature and, as you suggest, think about our technical and business goals when choosing a technology instead of letting providers lock us in their latest tech.

neuronic · 6 years ago
> Everything seems new and better and then we are surprised when it's not a silver bullet.

We need to live and breathe a culture that makes even young developers aware that this mistake has been done over and over until a growing collective of developers has recognized the pattern behind this.

After all, in the analogue trades even apprentices are taught many "don'ts" right from day one. Software engineering should not be any different.

DougBTX · 6 years ago
> correct me if wrong

Stack Overflow is built on ASP.NET: https://stackoverflow.blog/2008/09/21/what-was-stack-overflo...

neuronic · 6 years ago
Thank you!
bsenftner · 6 years ago
It's a mindless horde, an entire industry of half experienced idiots and half burned out savant engineers conning and rescuing one another in a cluster fuck of confusion. For the old school greys, the cloud itself is a con, as running a server oneself is actually quite easy, and exponentially better in every respect than slices of someone else's oversold hardware.
jedberg · 6 years ago
They're doing over 100rps if they're doing 10M requests a day. That's not a good use case for Lambda. If you're going to be that heavily utilized it makes more sense to run your API on EC2 or ECS/Fargate/etc.

Lambda is a good use case for when you have lots of not-often-used APIs. Lambda is a great solution for an API that's called a few times a day. It's also great for when you're first starting out and don't know when or where you'll need to scale.

But once you get to 100rps for an API, it's time to move to a more static setup with autoscaling.

crypteasy · 6 years ago
I've always found Troy Hunt's tech stack of haveibeenpwned.com interesting. The API does 5M requests a day with Azure Functions and Cloudflare caching. Ultimately only costing him 2.6c per day.

https://www.troyhunt.com/serverless-to-the-max-doing-big-thi...

ec109685 · 6 years ago
It’s highly cacheable.
aserafini · 6 years ago
I think the problem is that moving from Lambda/FaaS to a container-centric alternative (ECS and friends) requires a complete re-architect of your stack. Whereas starting with a simple, single container solution and organically evolving that into container-orchestration is much simpler - because the fundamental building block hasn't changed. It's all just containers.

Personally I'd like to see the industry coalesce on "serverless" containers rather than FaaS which are organised around functions being the fundamental blocks. Please just run my Docker container(s) like a magic block box that is always available, scales as necessary and dies when no longer needed.

guiriduro · 6 years ago
Aren't there abstractions to reduce the impedance mismatch between serverless offerings, e.g. Serverless (javascript framework)[1], which should allow easier portability to self-hosted solutions - openfaas or openwhisk etc - including running in containers on more traditional infrastructure, which is cheaper for this particular scale and use-case?

Sure, they're still FaaS which seems to be the unit of deployment for the serverless movement. For (hidden) managed server container deployment, Fargate is the offered solution I believe.

[1] https://serverless.com

scarface74 · 6 years ago
I think the problem is that moving from Lambda/FaaS to a container-centric alternative (ECS and friends) requires a complete re-architect of your stack.

Not really. I converted a Node/Express API running in lambda using the proxy integration to a Docker/Fargate implementation in less than a couple of hours by following a Fargate tutorial. Most of that time was spent learning enough Docker to do it.

The only difference between the Docker implementation of the API and the lambda implementation was calling a different startup module.

There is nothing magical about any lambda, from the programming standpoint you just add one function that accepts a JSON request and a lambda context.

Converting it to a standalone service (outside of APIs) is usual a matter of wrapping your lambda in something that runs all of the time and routing whatever event you’re using to trigger to a queue.
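A sketch of that shape (names here are mine, not AWS's): the handler is just a function over a JSON event, and the standalone wrapper is a loop feeding it events from a queue standing in for SQS or similar:

```python
import json
import queue

def handler(event, context=None):
    # The only Lambda-specific surface: a function taking a JSON-like event.
    body = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps({"echo": body["msg"]})}

def run_standalone(event_queue):
    # Outside Lambda, wrap the same handler in something that runs all the
    # time; here an in-process queue stands in for SQS/Kafka/etc.
    results = []
    while True:
        try:
            event = event_queue.get_nowait()
        except queue.Empty:
            return results
        results.append(handler(event))
```

The handler body never changes between the two deployments; only the thing that feeds it events does.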

einaregilsson · 6 years ago
> requires a complete re-architect of your stack.

Not necessarily. The reason I decided to try this was exactly because I found a tutorial showing you could easily host a bog-standard ASP.NET web app on Lambda with the serverless framework. I had to add a small startup class, and one config file to our existing app and I was up and running.

mtrovo · 6 years ago
Agree with you, Lambda would make a lot more sense if it was kind of a scaffold for your application.

Say you only have 1k users and don't want to spend too much time on infrastructure, lambda is a perfect fit for it. Your application is taking off and now you have 100k: just click a button and migrate it to a Fargate/ECS. That would be the perfect world.

AFAIK the only framework that supports this kind of mindset is Zappa (https://www.zappa.io). I use it in production but never had to migrate out of AWS Lambda so I'm not sure about the pain points related to it.

joelthelion · 6 years ago
Is there any fundamental reason for this, apart from AWS' pricing model? It seems to me that ideally, serverless should scale from extremely small to very big without too many problems.
deif · 6 years ago
You're basically hiring an AWS devops position since you don't need to manage anything yourself. Great for the small startup but not so great for the already established enterprise that has some devops guys anyway.
danenania · 6 years ago
"It's also great for when you're first starting out and don't know when or where you'll need to scale."

To me this is probably the most significant benefit, and one that many folks in this discussion strangely seem to be ignoring.

If you launch a startup and it has some success, it's likely you'll run into scaling problems. This is a big, stressful distraction and a serious threat to your customers' confidence when reliability and uptime suffer. Avoiding all that so you can focus on your product and your business is worth paying a premium for.

Infrastructure costs aren't going to bankrupt you as a startup, but infrastructure that keeps falling over, requires constant fiddling, slows you down, and stresses you out just when you're starting to claw your way to early traction very well might.

johnnyfaehell · 6 years ago
I thought the point of Lambda wasn't for not so often used APIs but for APIs where you need instant autoscaling where you may need 100 servers in 2 minutes and only need them for 20 minutes.
ajhurliman · 6 years ago
I've had problems with that level of use due to cold starts. It was mitigated by pinging it every 10 seconds with a scheduled CloudWatch Events rule.
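The keep-warm pattern is small enough to sketch; the "warmup" event key below is an assumed convention between the scheduler and the handler, not an AWS field:

```python
def handler(event, context=None):
    # A scheduled ping sends a synthetic payload; returning early keeps the
    # container warm without running real work or touching downstream services.
    if event.get("warmup"):
        return {"warmed": True}
    return {"statusCode": 200, "body": "real work here"}
```

The cost is a trickle of no-op invocations, traded against fewer user-facing cold starts.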
tedk-42 · 6 years ago
I disagree with all the comments posted so far.

This should be a perfect use case for lambda, not "oh your API is receiving more than 10M req/day? Use Elastic Beanstalk instead with an EC2 instance and ELB". This kind of comment is just to abuse the free tier that AWS provides you with.

The whole idea of serverless is so you don't have to manage infrastructure. OS patching, mucking around with Docker and port forwarding rules are all removed once you go down the serverless route. Is it worth 8x the cost and a 15% reduction in performance? The article argues no. If anything, AWS should price API GW and Serverless more competitively.

It's nice to see, though, that the savings from the Firecracker implementation have been passed on to AWS customers.

If the article saw a 1.5-2 times increase in cost, the debate would be much more interesting.

I'm in the container camp because serverless isn't cheap and you can't have long running processes.

ordinaryperson · 6 years ago
Exactly. Like anything, right tool for the right job.

I was hosting my personal website on EC2 with an RDS instance and it cost around $20/month. But since the site gets little-to-no traffic and I got sick of EC2 upkeep, I switched S3 & Cloudfront: now my bill is $0.80/month.

Another example: at work we inherited an old EC2 instance that was set up solely to run cron jobs on other servers. Occasionally it would lock up or go down, for whatever reason, creating chaos for the servers that needed specific jobs run frequently.

We switched to AWS Lambda functions for those cron jobs and they've never failed once.
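A cron-style Lambda like those can be this small; the "time" field below is what a CloudWatch/EventBridge schedule (e.g. "rate(1 hour)") delivers, and the job body is a placeholder:

```python
import datetime

def cron_handler(event, context=None):
    # Scheduled events carry an ISO-8601 timestamp of when the rule fired.
    fired_at = datetime.datetime.strptime(event["time"], "%Y-%m-%dT%H:%M:%SZ")
    # ...do the actual periodic work here (placeholder)...
    return {"ran_at_hour": fired_at.hour, "job": "done"}
```

No host to keep alive, which is the whole win over the flaky cron box.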

If you're running a games website that gets pinged 24/7, don't use Serverless. But the notion that it's slow and overpriced is misguided if you're applying the wrong use case.

anteatersa · 6 years ago
I wonder if it's a generational thing, but it seems to me that people nowadays have forgotten that you can host more than one site or database per server. Failing that, shared hosting still offers amazing value if you don't need anything too weird, e.g. someone like Bluehost or Hostgator offers unlimited* sites + databases for about $5/month.
neuland · 6 years ago
I can also vouch for this. I run a low traffic website on S3 and CloudFront. And Lambda is perfect for handling the small amount of dynamic content or form processing that it requires. It costs cents a month to run and doesn't need any maintenance.
ramraj07 · 6 years ago
How did you replace a site that needed rds with s3? Was it a blog that you moved to Jekyll or something?
inimino · 6 years ago
Some tools aren't the right one for any jobs we actually have.
discordance · 6 years ago
>The whole idea of serverless is so you don't have to manage infrastructure

That's one of the ideas. Serverless shines for burst-y traffic where the traffic timing is unknown. If I had known static high loads I wouldn't use serverless.

orasis · 6 years ago
Serverless works great for my apps. My traffic is driven by hourly notifications so I get 100x more traffic on the hour than the rest of the hour. I could write a bunch of ML to figure out load scaling but that is code I don't want to focus on.
cuu508 · 6 years ago
Bursty or not, if the aggregated number of requests per day is in millions not hundreds, AWS API Gateway & Lambda is significantly more expensive than the more traditional options.
Jack000 · 6 years ago
isn't that just a limitation of the current incarnation of serverless systems? ie. serverless is only effective for burst traffic because it's expensive.

You would think that after you tailor the code to the platform and make it stateless, it would cost less for the cloud provider and not more.

tedk-42 · 6 years ago
Burst traffic with high writes? --> Use a stream. Burst traffic with high reads? --> Use a cache.

Large compute and memory intense stuff isn't suitable for serverless
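The read side of that heuristic, in miniature (an in-process TTL cache; in practice this would be CloudFront, ElastiCache, or similar, and the loader would be the expensive backend call):

```python
import time

_cache = {}

def cached(key, loader, ttl=60):
    # Serve burst reads from memory; only a miss or expiry hits the backend.
    now = time.time()
    hit = _cache.get(key)
    if hit is not None and now - hit[1] < ttl:
        return hit[0]
    value = loader()
    _cache[key] = (value, now)
    return value
```

Under a read burst, the backend sees one call per TTL window per key instead of one per request.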

ilimilku · 6 years ago
Exactly. This guy nails it. The serverless paradigm is a managed service, so you are paying extra for that management. Serverless also is built for automated connectivity within the AWS ecosystem, i.e. as a nerve center for all of AWS's nifty gizmos. This is why Amazon places such an emphasis on training certified solutions architects who know what works best for each situation.
sl1ck731 · 6 years ago
Amazon puts emphasis on "certifying" solutions architects to turn them into marketing mouthpieces and AWS proponents.

Source: Seen it happen, am certified.

hdra · 6 years ago
>The whole idea of serverless is so you don't have to manage infrastructure

Is this even achievable with Lambda though? Even with Lambda, you still have to configure your "infrastructure", just that instead of ELB and EC2, you now have to manage APIGW and Lambda, and any other "wiring" that you need to put in to do what you needed.

All in all, I can't really say Lambda is all that "easy" considering options like AWS EB/ECS can be set up relatively easily.

pbalau · 6 years ago
> you now have to manage APIGW and Lambda, and any other "wiring" that you need to put in to do what you needed.

You know those urls.py files in a Django install? Those are the equivalent of the ELB/APIGW thing. You don't manage those, you make them once.

Not sure where you personally draw the line between app development and infra work, but for me "request that comes on this path, calls this function", either via ELB -> backend or APIGW -> lambda, or you know, whatever you use for your django app -> a function defined in a view.py, is app, not infra.
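The same route table, sketched in code. The handler names are hypothetical; the httpMethod/path keys are what API Gateway's proxy integration puts in the event:

```python
# A route table is the same idea whether it lives in Django's urls.py,
# an ALB listener rule, or an API Gateway resource: path -> function.

def list_users(event):
    return {"statusCode": 200, "body": "[]"}

def get_status(event):
    return {"statusCode": 200, "body": "ok"}

ROUTES = {
    ("GET", "/users"): list_users,
    ("GET", "/status"): get_status,
}

def dispatch(event):
    handler = ROUTES.get((event["httpMethod"], event["path"]))
    if handler is None:
        return {"statusCode": 404, "body": "not found"}
    return handler(event)
```

Whether dispatch happens in your framework or in APIGW config, it's the same "this path calls this function" table, i.e. app, not infra.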

einaregilsson · 6 years ago
There are definitely still things you need to think about with Lambda, like how much memory you want to allocate etc.
lucasverra · 6 years ago
Agreed, let me add my 2cts :

>The whole idea of serverless is so you don't have to manage infrastructure.

...when you are validating your product/market (100 requests/day is a success here).

Not everyone on HN is a core dev, tech is getting democratised. So 1 week of time dealing with servers and accounts and infra is a week not asking the right questions.

nirvdrum · 6 years ago
It really depends what you're doing. I've been trying out serverless on-and-off for weeks now and still don't have a basic two tier application going the way I want. I'd have been substantially better off dropping $5/month on a DigitalOcean droplet and running PostgreSQL and Rails on it. DigitalOcean even has really helpful templates to start up basic servers.

This may be a specific criticism of AWS Amplify, but all I've been doing is trading off some arcane knowledge in open source software for arcane knowledge in some vendor-specific system. Most of my time has been spent searching for answers to Amazon-specific issues that haven't been solved in any of their SDKs. Or I wind up walking through their SDK code trying to figure out how to metaprogram my way around it.

Perhaps if I used the web UI to stand everything up, it would have gone a lot smoother. I'll admit I'm a sucker for infrastructure as code. But, as it stands, I can't see how this really saves anyone any time. It's still an incredibly manual process without much in way of help or support the moment you veer off the one path Amazon has decided you should be on.

(To make things more concrete, I still don't have Cognito working with optional TOTP 2FA. The withAuthenticator component apparently only renders the TOTP setup if 2FA is required. I've spent hours trying to work this one out, including spending time on the Amplify Gitter channel.)

hdra · 6 years ago
Counterpoint to that though: "Serverless" isn't all that much "easier" to setup compared to traditional server setup. You still need to know how to wireup your request gateway to the correct service, you need to write your app for a serverless setup, setup security groups, etc. None of that is particularly "accessible" for someone trying to get running ASAP.

I agree tech is getting democratised, but when talking about products that allows it, I'm thinking of products like Firebase or the recent darklang, not AWS Lambda. From all the Lambda usage that I've encountered so far, they are still very much in the domain of a "sysadmin".

james_s_tayler · 6 years ago
I think this is an important consideration that a lot of people miss. One of the trends of both serverless and containers it that compared to older models they are allowing smaller and smaller teams to do bigger and bigger things.

So, in certain respects it's an enabler. Context always matters.

wolco · 6 years ago
Learning how to setup everything will be too much for you. Use Hostgator and you can start marketing immediately.
EwanToo · 6 years ago
If they had used an ALB instead of API Gateway, the cost might well have been a 2x increase, especially given that would replace the ELB they're already using. The ALB option has its own caveats though...
einaregilsson · 6 years ago
Yeah, that would have been a much better fit. I just didn't know you could put an ALB in front. (Also, a "load balancer" in front of serverless..., and yes, I know load balancers do more than load balance.)

I mainly went with API Gateway since that's the default setup the serverless framework uses.

primitivesuave · 6 years ago
One frustrating caveat of the ALB approach is there is a hard 1 MB limit on both the request and response size, while APIGW has a hard 6 MB limit.
rhizome · 6 years ago
The whole idea of serverless is so you don't have to manage infrastructure

That's the selling point of AWS in general, but I wouldn't be surprised if (non-tiny) companies are still spending at least as much on AWS/etc. as they would on in-house infra. That is, they're paying for it anyway, and possibly at a premium.

skrebbel · 6 years ago
> I'm in the container camp

Can anyone explain to me why there's "camps" to this debate?

tedk-42 · 6 years ago
I'd say modern API deployments are either containerised or go the serverless route.

On one hand you have a docker image with your source code/static binary baked in. On the other, you have your source code/binary encapsulated in a serverless framework (with an event handler of sorts as your input).

The container doesn't lock you into a particular ecosystem, but hasn't solved how you deploy and run your container (Heroku, managed k8s, AWS ECS/EKS/Fargate). Serverless, on the other hand, will give you a runtime for your API and allow you to go live (assuming you've set up an account or whatever).

To migrate something that already works to serverless is a bad idea. Maybe if you're pretty comfortable with serverless frameworks and you like to hack a lot of projects up quickly, sure serverless might be a good fit. But why not instead start off with a container? At least you have the flexibility to run that on whatever you damn want - Even a t2.small instance with a docker daemon running.

inimino · 6 years ago
Because most people with strong opinions aren't experienced enough to justify them from direct experience. Instead we have religions, tribal affiliations, camps.
scarface74 · 6 years ago
Docker can also be serverless and gets rid of all the limitations of lambda - Fargate.
dragonelite · 6 years ago
I tried using google cloud run, with a rust web server. I think google only counts request processing time and rounds up to the nearest 100ms.

All i had to do was provide a container with a web service that reads a environment variable to listen to a specific port.

mywittyname · 6 years ago
Fargate is not a replacement for Lambda. While there's some capability overlap, each have their own respective niches.
pas · 6 years ago
There's already serverless where you don't have to do anything with OS/servers/infra: PaaS, like Heroku.

But no, that's not Amazon, so no one cares either way (pro/contra, cheap/expensive, fast/slow, easy/hard) :|

scarface74 · 6 years ago
Heroku is not Serverless in the modern definition of the word and doesn’t Heroku run on AWS?
dunk010 · 6 years ago
> I'm in the container camp because serverless isn't cheap and you can't have long running processes.

Do you think you'll have the same opinion in five years time? Or how about three years?

tedk-42 · 6 years ago
Containers will be around for at least another 5 years.

If you look at where AWS makes most of their money, it's selling EC2 hardware.

peterwwillis · 6 years ago
> The whole idea of serverless is so you don't have to manage infrastructure. OS patching, mucking around with Docker and and port fowarding rules are all removed once you go down the serverless route.

If this were the reason to use Serverless, it doesn't buy you much. Ports? You set that up once and you're done forever. OS patching? You already have to manage patches for your app and its libraries, so OS (really container) patching is just another part of your patching strategy.

The reasons to use Serverless is the same as everything else in the cloud: 1) fast scaling, 2) sometimes it is cheaper depending on X, and finally 3) you don't have to think about where to run it.

jrochkind1 · 6 years ago
> You already have to manage patches for your app and its libraries, so OS (really container) patching is just another part of your patching strategy.

You can consider it so theoretically, but for many teams, especially small teams, OS-level systems admin is a different skillset and non-trivial burden that they are glad to pay a premium for, which is why offerings like heroku are successful.

It may work that way for you, but it definitely does not for many many teams.

"You already have to walk to the store, so hiking the appalachian trail is just another part of your walking strategy" -- this is an exageration, the difference between walking to the store and hiking the trail is probably larger than adding OS-level maintenance, but it shows "you already have to do X so doing anything else that I can also call 'X' should be trivial" is a flawed form of analysis, it tells us nothing that you can call all of it "walking" or all of it "patching strategy".

coldtea · 6 years ago
>If this were the reason to use Serverless, it doesn't buy you much. Ports? You set that up once and you're done forever. OS patching? You already have to manage patches for your app and its libraries, so OS (really container) patching is just another part of your patching strategy.

This is "you'll always have to do X, so might as well do 10x" argument...