I remember vividly my intro to "serverless". I had been (ab)using JavaScript and Node.js for a while, breaking the internals of Express.js and messing around with many things. Someone told me that serverless was "like Node.js but without a server, you just write code for an endpoint" and it sounded cool. I just didn't need it at the moment, since I was happy with my servers (I'm the creator of npm's `server`).
I kept hearing about it in multiple places, and after many months/years I decided to dig deeper. I found out that "serverless" was just an instance of Express all along! My disappointment was immeasurable. I had thought that you really wrote a JS function, executed by "3rd-party servers" (ofc there's a server _somewhere_), or that in the worst case you wrote a function with the signature of an Express middleware and exported it, to then be run by their instance. But no, you actually had to set up the whole server, listen on a port and all; it was just more ephemeral than Heroku's already ephemeral servers.
It wasn't until the next generation that "true serverless" came, with Cloudflare Workers and the like, where you truly write a plain JS function that takes a Request and returns a Response, as I had expected almost a decade earlier.
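For anyone who hasn't seen that model, the shape is roughly this (a minimal sketch; the route and payload are made up, and Node 18+ provides Request/Response globally, so it also runs outside Workers):

```javascript
// A plain function from Request to Response -- the Workers-style model.
async function handleRequest(request) {
  const url = new URL(request.url);
  if (url.pathname === "/hello") {
    return new Response(JSON.stringify({ hello: "world" }), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("not found", { status: 404 });
}

// In Cloudflare Workers you would wire it up as:
//   export default { fetch: handleRequest };
// Locally (Node 18+) you can just call it:
handleRequest(new Request("http://example.com/hello")).then(async (res) => {
  console.log(res.status, await res.text()); // 200 {"hello":"world"}
});
```

No server setup, no port: the platform calls the function with a Request and ships back whatever Response it returns.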
And we are still waiting on decent frameworks for this way of writing serverless services. I have 2 projects running like this with a cobbled-together set of utils/helpers to make life easier. No routing layer or anything (that's the job of SST/CDK/SF/etc), but things like a handler wrapper to handle errors uniformly, and other tools I regularly need (permissions checking, payload validation, etc.).
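As a rough illustration of the handler-wrapper idea (every name here is hypothetical, not from any real framework): wrap the bare handler so thrown errors become uniform HTTP-shaped responses, and validation failures map to 400s.

```javascript
// Hypothetical wrapper: uniform error handling around a bare handler.
const withErrorHandling = (handler) => async (event) => {
  try {
    const result = await handler(event);
    return { statusCode: 200, body: JSON.stringify(result) };
  } catch (err) {
    // Errors tagged with a statusCode become proper HTTP errors; anything else is a 500.
    const statusCode = err.statusCode ?? 500;
    return { statusCode, body: JSON.stringify({ error: err.message }) };
  }
};

// A handler that only worries about its own logic:
const getUser = withErrorHandling(async (event) => {
  if (!event.userId) {
    throw Object.assign(new Error("userId is required"), { statusCode: 400 });
  }
  return { id: event.userId, name: "example" };
});

getUser({}).then((res) => console.log(res.statusCode)); // 400
```

Permissions checks and payload validation can be stacked the same way, as further wrappers around the inner handler.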
I know some of that work can be done at the API Gateway/Lambda layer before it hits my function, but I've yet to go down that rabbit hole; it always felt too limiting/rigid, and I can and do move way faster in JS (TS).
I feel like we are on the cusp of it getting so much nicer, and I have a blast working in this space. I absolutely love knowing my backend will “auto-scale” to whatever I need, all the way down to $0 and, even on the high end (at my scale), well under $10. My projects are for events (think festivals or “food week”) and so they are extremely bursty. They go months at a time with little to no traffic, so "serverless" and managed services that scale to $0 or near $0 are awesome for my needs.
"What if this stuff was actually as good as it first sounded? Want to make it?"
This sales pitch ignores Cloudflare Workers, which sound pretty similar to me, with both WebAssembly and key-value storage.
The distinguishing part of the article's option seems to be that WebAssembly functions are uploaded in an OCI container, which I don't think anyone else supports, and which I don't think was designed with this use case in mind?
> The distinguishing part of the article's option seems to be that WebAssembly functions are uploaded in an OCI container, which I don't think anyone else supports, and which I don't think was designed with this use case in mind?
OCI is just the storage and registry format, and is being used for all sorts of things (e.g. you can store OPA policies for conftest in it). I'm not familiar with Fermyon's internal workings, but my read of it was that they simply use the storage format, not that there's an intermediate container layer.
Edit: can confirm, there are no containers: https://news.ycombinator.com/item?id=36352869
Oh no, they didn't ignore them. They included them among the slow, locked-in, poor-developer-experience first-generation providers, but with a "more limited fashion" disclaimer.
I don't think they are going to run the WebAssembly in Docker. That would not change the status quo at all. Even now, AWS Lambda has a longer cold start when it executes a container than when it runs zipped code.
Google App Engine came out in 2008, several years before AWS Lambda. I'd also argue the developer experience (especially for its time) was pretty fantastic.
If you're wanting 'serverless' compute these days you'd probably deploy to something like Cloud Run – containers being the ultimate hedge against vendor lock-in.
I would love to use Google's cloud, but I just can't risk my email, map, and browser services being cut off because some AI determined that my application looks suspicious, with no human customer support to contact, as has been reported multiple times.
Actually I find their support to be pretty good. Granted I am at a large tech company who likely pays for premium support, but I can always get through to a human who knows what they're talking about. They've even helped with issues inside my app that were my fault.
I tried moving off Gmail, I really did. But all the EU-based alternatives just suck so much. So now I'm back to Gmail, but I pay for it (Google Workspace). This way I at least have a commercial relationship with Google which gives me various contractual rights, and access to phone support. I also have a contingency plan: I have my email address on my own domain, and I backup my mailbox once in a while, so that I can migrate off Gmail should I need to do that at some point.
I don't know when Heroku launched, but that's where GAE fits in the summary to me - it's 'serverless' in the 'not managing a server' sense, but there's some conflation (at least in the article, but generally too I think) with a 'function as a service' model, which is inherently a subset of serverless I suppose, and where Lambda sits.
Add a Dockerfile
Create a service and link to GitHub
Push to main
Done
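The container half of that flow is tiny. A sketch of the kind of Dockerfile involved (base image, file names, and entrypoint are all illustrative; Cloud Run injects a PORT variable that the app must listen on):

```dockerfile
# Illustrative Dockerfile for a small Node service on Cloud Run
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Cloud Run sets PORT at runtime; the app must listen on it
ENV PORT=8080
CMD ["node", "server.js"]
```

Because it's a plain OCI image, the same artifact runs anywhere else that takes containers, which is the lock-in hedge.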
Sure, there are all the usual risks of using Google services (might go away, might get locked out), but that container makes it reasonably quick to get back up and running.
Source: someone who runs two dozen Tomcat containers.
I see that there's some skepticism about running WebAssembly in containers and how that constitutes a next-gen serverless solution. It's important to note that the use of WebAssembly here is not just about the runtime environment but also about the features it brings. WebAssembly binaries can start up significantly faster than traditional VMs or containers. They also have a strong isolation model and security sandbox that allow running multiple tenants in the same supervisor, which can lead to reduced costs and better utilization of resources.
I know I am being sold something, but this is a very convincing post IMO. I enjoyed the historical perspective, and all the mentioned downsides of serverless up until now really resonate with my own experience.
Seems like these people understand their problem space very well. Good stuff!
Don't get me wrong, I bitch about them every day, but credit where credit is due :p
I've been dealing with serverless since 2018 and have a company with a 100% serverless and open-source product (webiny.com). I'm not sure I fully agree with your 4 issues, or that WebAssembly is the answer.
"Serverless functions are slow" -> not really, only if designed poorly
"The DX of serverless functions is sub-par" -> where's your proof? Again, you'll have a bad experience as a developer only if you don't know what you're doing. I see this mainly from people who approach building serverless applications with a container-like mindset, which leads them to bad design choices.
"Serverless functions come with vendor lock-in" -> I think most of us are past the idea that vendor lock-in is inherently a bad choice. The worse choice is picking a sub-optimal technology with lower performance, higher cost, and less reliability.
"Cost eventually gets in the way" -> Again, only if you don't know what you're doing and make bad design choices.
When it comes to WebAssembly, I don't see how it is a better choice of technology vs something like Node. In Node I have much wider support for the technology than with WASM (talk about vendor lock-in), a proven ecosystem of libraries, existing knowledge, and a much bigger talent pool to source from. As for the cold start issue you mention on your website, I can tell you first hand that cold starts are not really that big of a problem, not big enough that you would want to switch to a different technology, and there are many ways to mitigate them.
Just saying, I'm far from convinced that there is a benefit in switching. I would love to see more detailed benchmarks and examples I would be able to replicate than just statements in a blog post.
If you have Node apps running in Lambda and are happy with the architecture, cost, and operating model: great! You've done good work and/or are very lucky and there's no need for you to rip everything out and start over.
Heck, even if you're curious about WASM and want to try some experiments with (say) fast Rust crypto libraries or embedded database engines w/o the risk of flaky native code crashing your V8 runtime: again, you can just run WASM from a Node worker thread and keep cruisin'.
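To make that concrete: Node can run WASM with nothing but the built-in WebAssembly API. The bytes below are a tiny hand-assembled module exporting add(a, b), just to show the mechanics; real use would load a compiled .wasm file (e.g. via fs.readFileSync), possibly inside a worker thread so a crash stays contained.

```javascript
// A minimal WASM module, written out by hand: exports add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

WebAssembly.instantiate(bytes).then(({ instance }) => {
  console.log(instance.exports.add(2, 3)); // 5
});
```

The same instantiate call works regardless of whether the module was compiled from Rust, C, or anything else, which is what makes the worker-thread experiment cheap to try.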
Those of us who _don't_ have a huge investment in Node, or who have hard requirements around e.g. memory usage, cold start times, or even just plain old _cost_ (which can become a major factor when you consider the AWS lock-in) that Lambda doesn't meet, really benefit from another option.
Your good fortune in finding a stack that works well doesn't mean that folks who have different needs or constraints are dumb, ignorant, or lazy.
As an aside, I think you also might be underestimating the depth of experience and knowledge of the Fermyon crew when it comes to containers, cloud runtimes, and serverless development. This is substantially the same team that built Helm, and a lot of other Kubernetes and cloud-native ecosystem projects along the way.
This pivots from serverless application hosting to... a database?
I mean, just sell me on the WASM-in-the-cloud premise; that sounds awesome enough.
The k/v store angle is just confusing. If devs don't want to manage their dev environment, you can spin one up on any cloud. Serverless services especially, which are billed per API usage, are not going to break the bank if each developer has their own dev backend in the cloud.
>and then invoked the CGI program directly. There was no security sandbox, and CGI was definitely not safe for multi-tenancy
This isn't true. Linux is the security sandbox. Multi-tenancy is safe if you use a separate user for each site.
>Like CGI, PHP was never multi-tenant safe.
This isn't a problem with PHP. The following story, about the author's site on a shared host getting hacked, was a problem of shared hosts not caring about security.
I'd argue it is, since PHP's common runtimes expect you to cross user boundaries all the time:
- As an Apache2 module, PHP runs as the same user as the web server, so the web server needs write access across all tenants.
- FPM recommends running over a TCP socket, letting tenants freely access other tenants' PHP processes. Unix sockets can solve that issue with careful permissions, but the documentation barely mentions that use case.
Containers are the minimum security boundary.
>since PHP's common runtimes expect you to cross user boundaries all the time:
No, they don't. Just because a default installation is single-tenant doesn't mean shared hosts can't configure it to be multi-tenant. It is simple to set up FPM so that each site has its own pool that runs as its designated user and chroots.
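For what it's worth, a per-site pool along those lines is only a few lines of config. A sketch (all paths and names here are illustrative):

```ini
; /etc/php/8.2/fpm/pool.d/site-a.conf -- one pool per tenant
[site-a]
user = site-a
group = site-a
; private Unix socket, accessible only to the web server user
listen = /run/php/site-a.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
; confine the pool to the site's own directory
chroot = /srv/site-a
pm = ondemand
pm.max_children = 5
```

Each tenant's PHP then runs as its own Unix user behind its own socket, so one compromised site can't read or write another's files.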
>Containers are the minimum security boundary.
Whose security boundary is also the kernel, so it's not that much different.
"We have things like protected properties. We have abstract methods. We have all this stuff that your computer science teacher told you you should be using. I don't care about this crap at all." -Rasmus Lerdorf
"I really don't like programming. I built this tool to program less so that I could just reuse code." -Rasmus Lerdorf
"I was really, really bad at writing parsers. I still am really bad at writing parsers." -Rasmus Lerdorf
"I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "Yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I’ll just restart Apache every 10 requests." -Rasmus Lerdorf
"I don't know how to stop it, there was never any intent to write a programming language [...] I have absolutely no idea how to write a programming language, I just kept adding the next logical step on the way." -Rasmus Lerdorf
"For all the folks getting excited about my quotes. Here is another - Yes, I am a terrible coder, but I am probably still better than you :)" -Rasmus Lerdorf
"PHP is just a hammer. Nobody has ever gotten rich making hammers." -Rasmus Lerdorf
Ian Baker's PHP Hammer:
https://blog.codinghorror.com/the-php-singularity/
PHP: a fractal of bad design:
https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/
>I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.
>You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.
>You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.
>You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.
>And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.
>Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.
>That’s what’s wrong with PHP.
I mean, I get that you don't like PHP. Which is fine, we need people to keep the other communities humming as well.
IMO, PHP is better than any other language I have used, so I actually respect it.
The fact that the creator is open about his figuring it out along the way is pretty cool in my eyes - after all, he started before there was anything, [Alta Vista, Lycos, and a lot of friendly geeks trying to figure out how this was better than usenet?] and he made something which is very popular and really works very well.
Imagine what would happen if Vitalik was as honest as Rasmus ;)
My 2c.