I'm so glad Fly exists. Every other edge-focused thing out there I'm aware of is "serverless", which these days basically means they charge per request.
That's fine for a lot of use cases, but the unit economics of the per-request pricing model make it really hard to operate a predictably sustainable (i.e. profitable) business without also charging our customers on a usage basis, or enforcing limits on usage. Neither is ideal for maximizing engagement, since incentives are no longer aligned.
Fly just gives us compute at the edge for a predictable price per unit of actual compute resources as opposed to requests, and gives us freedom to serve as much traffic as we can min-max onto those resources, like we could with traditional cloud compute.
This provides much better unit economics for many kinds of applications, at the cost of having to manage the scaling ourselves. But because the option exists, we can make this tradeoff on a case-by-case basis, which is so much better than if all we had was "serverless" stuff at the edge and had to choose between just low latency across the globe vs good unit economics.
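The tradeoff described above can be sketched with a toy model. All prices and the per-VM capacity below are hypothetical illustration numbers, not Fly.io's or anyone's actual pricing:

```python
# Toy model of the per-request vs. fixed-compute tradeoff described above.
# Every number here is hypothetical, purely for illustration.

PER_REQUEST = 0.50 / 1_000_000   # serverless: $0.50 per million requests
VM_MONTHLY = 5.00                # fixed-price VM: $5/month regardless of load
VM_CAPACITY = 100_000_000        # requests/month one VM can absorb if min-maxed

def serverless_cost(requests):
    # cost scales linearly with usage, forever
    return requests * PER_REQUEST

def vm_cost(requests):
    # you pay per VM, however lightly or heavily you load it
    vms = -(-requests // VM_CAPACITY)  # ceiling division
    return vms * VM_MONTHLY

# Below the breakeven point serverless is cheaper; past it, fixed compute
# wins by a widening margin, which is the unit-economics argument above.
breakeven_requests = VM_MONTHLY / PER_REQUEST  # 10M requests/month here
```

The point is the shape of the curves, not the numbers: the serverless line keeps climbing with traffic, while the fixed-compute line steps up only when you run out of capacity to min-max onto.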
For a company that says it makes it super easy to deploy a container image, and that claims all you need to do is "Write your code, package it into a Docker image, deploy it to Fly's platform"[0], they sure have a dearth of documentation on how to deploy an existing Docker image.
I am not sure if I'm missing something or what, but here's where I looked:
* googled 'docker fly'; the top result is a blog post that references Docker but, as far as I can see, doesn't have instructions on deploying Docker images[1].
* their getting started guide[2], called a 'speed run', which has all kinds of CLI commands but doesn't actually show how I'd pick a Docker image.
* their quickstart docs[3], which outline how to deploy all manner of applications, except for, you guessed it, an existing docker image.
* scanned the menu of their docs, and didn't see anything.
I really want to like this service, as we have (at $CURJOB) an app packaged as a Docker image that it'd be awesome to set up to run on Fly.io, especially with the multi-region PostgreSQL.
What the heck am I missing? Can I just not read? Do I just need to install the CLI and all shall be made clear?
0: https://fly.io/docs/introduction/
1: https://fly.io/blog/docker-without-docker/
2: https://fly.io/docs/speedrun/
3: https://fly.io/docs/getting-started/
You know, you're not really missing anything, we just don't connect the dots very well.
Install CLI, run `fly launch` in a directory with a Dockerfile, and it should just work.
Most of our users don't start with a Docker image though, they start with Phoenix. What you're seeing is a little bit of indecisiveness in how we target the docs.
You're right, they should make that much more clear. I'd expect it to be on the left side menu, just like they have all the "Run a [language] app".
Based on your first link [0], I saw,
> You can run most applications with a Dockerfile using the flyctl command.
With that, I looked over the left-side menu and clicked `flyctl`[1], since it seems that's what you'd need to use to deploy an existing app with Docker. After that, I clicked on "Launch an App"[2], which shows help for the `flyctl launch` subcommand, including a `--dockerfile` parameter. I think that's how you would deploy an existing app with Docker?
[0] https://fly.io/docs/introduction/ [1] https://fly.io/docs/flyctl/ [2] https://fly.io/docs/flyctl/launch/
> Every other edge focused thing out there I'm aware of are "serverless" which these days basically means they charge per request.
Workers Unbound is $1 for ~6.6M requests (+ runtime at $0.0000015 per sec / $12.5 for 1M GB-sec). That's super cheap considering free egress, which brings me to...
> Fly just gives us compute at the edge for a predictable price per unit of actual compute resources as opposed to requests...
Fly.io may not meter requests anymore, but they continue to meter egress, even between your VMs in the same region [0]. The usage-based pricing model is everywhere, like it or not.
[0] https://community.fly.io/t/do-traffic-over-6pn-within-the-sa...
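For concreteness, the Workers Unbound prices quoted above pencil out like this. The prices are from the comment; the workload numbers are made up:

```python
# Back-of-envelope cost check using the Workers Unbound prices quoted above:
# $1 per ~6.6M requests, plus $12.50 per 1M GB-sec of runtime.
# The example workload below is hypothetical.
PRICE_PER_REQUEST = 1.00 / 6_600_000
PRICE_PER_GB_SEC = 12.50 / 1_000_000

def monthly_cost(requests, avg_runtime_ms, mem_gb):
    gb_sec = requests * (avg_runtime_ms / 1000.0) * mem_gb
    return requests * PRICE_PER_REQUEST + gb_sec * PRICE_PER_GB_SEC

# 50M requests/month at 50 ms average on 128 MB of memory:
cost = monthly_cost(50_000_000, 50, 0.128)  # about $11.58/month
```

At that volume the request component (~$7.58) dominates the duration component ($4.00), which is why "super cheap" and "per-request pricing" can both be true at once.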
We don't bill for internal Postgres bandwidth anymore. Bandwidth in general is a good point though. No one _actually_ offers unlimited bandwidth. We haven't figured out how to be transparent about bandwidth costs AND check that box for people yet, though.
This is true, but we aren't going to bill per request. We really can't, since we support arbitrary UDP and TCP services.
I don't really want to promise anything we haven't shipped yet. But my perfect cloud service (a) charges me when VMs are alive and (b) gives me the tools I need to create/remove/stop/start/suspend/resume VMs based on either network activity or metrics.
One flaw of Fly.io right now is that it's relatively expensive to run a side project in 10+ regions. Most apps benefit from 10+ regions, but $50/mo to try it out is prohibitive. We want to make this more accessible.
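The start/stop tooling described in (b) above amounts to a small control loop. The VM record shape and the idle threshold here are entirely hypothetical; this is not a real provider API, just a sketch of the logic such tools would implement:

```python
# Hedged sketch of scale-down/scale-up tooling as described above.
# The VM dict fields and the threshold are hypothetical assumptions.
import time

IDLE_SECONDS = 300  # suspend a VM after 5 minutes without traffic

def reconcile(vms, now=None):
    """Return (action, vm_id) pairs: suspend idle VMs, resume busy ones."""
    now = time.time() if now is None else now
    actions = []
    for vm in vms:
        idle_for = now - vm["last_request_at"]
        if vm["state"] == "running" and idle_for > IDLE_SECONDS:
            actions.append(("suspend", vm["id"]))
        elif vm["state"] == "suspended" and vm["pending_requests"] > 0:
            actions.append(("resume", vm["id"]))
    return actions
```

Run on a timer against live metrics, a loop like this gets you close to serverless economics while still paying only for VMs that are alive.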
> it's really hard to operate a predictably sustainable (i.e. profitable) business [... with other providers]. Fly just gives us compute at the edge for a predictable price per unit of actual compute resources as opposed to requests, and gives us freedom to serve as much traffic as we can min-max onto those resources, like we could with traditional cloud compute
The praise on the subject of predictability is interesting, given perennial concerns about uncapped vs capped usage charges (with any cloud provider), but esp. in light of past Fly-specific comments that "putting work into features specifically to minimize how much people spend seems like a good way to fail a company".
<https://news.ycombinator.com/item?id=24699221>
We ended up shipping a "cap your costs" feature, we just did it with prepaid credits. If you buy prepaid credits instead of adding a credit card to your account, we can't bill you more than you've prepaid. The downside is that you have to prepay $25 to use anything at all.
My comment could have been better. Our business model is predicated on making it cheap to use services and easy to incrementally scale up. We probably won't build features to cap usage of things directly, but it makes total sense to deactivate apps when credits run out.
The serverless pricing model doesn't have to be passed on to your customers.
If you know them well and their usage patterns, you can predict with confidence how much each customer will cost you. With granularity to the level of a feature or even a particular action.
This allows for extremely precise and safe unit economics planning. I couldn't be happier with this benefit from serverless.
In a server-based infra you have many fixed costs: servers themselves, unused capacity, and your time to maintain it, which is certainly expensive, since it competes with attention to the product or maybe sales and customer support.
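The per-feature cost prediction described above could look like the sketch below. Every per-action price is a hypothetical number; the point is that under per-request pricing each user action has a knowable marginal cost:

```python
# Sketch of per-customer cost prediction under usage-based pricing, as
# described above. Every per-action cost here is a made-up assumption.
COST_PER_ACTION = {
    "page_view":   0.0000004,  # one cheap function invocation
    "file_upload": 0.00002,    # invocation plus a storage write
    "report_run":  0.0005,     # a longer compute burst
}

def predicted_monthly_cost(expected_usage):
    """expected_usage maps action name -> expected monthly count."""
    return sum(COST_PER_ACTION[a] * n for a, n in expected_usage.items())

# Even a hypothetical heavy user is only a few cents of marginal cost:
heavy_user = {"page_view": 20_000, "file_upload": 300, "report_run": 50}
cost = predicted_monthly_cost(heavy_user)
```

Summing that over a forecast customer mix gives the "extremely precise and safe unit economics planning" the comment is describing.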
Kubernetes on DigitalOcean scales pretty well.. fixed monthly cost. If I run out of capacity I just add an extra node or switch to more powerful nodes. Kubernetes takes care of provisioning the nodes.
Google APIs OTOH... I went from $0 one month to $450 the next because of their stupid per-unit pricing and hidden API calls.
If your server administrator doubles as your salesperson, maybe, but for most companies these are different roles, and you need someone to manage your Fly.io account/Docker image or your AWS account either way. Sales, customer support, legal, and marketing shouldn't be affected. If you're a one-man show, that's a different conversation.
They’ve got Chris McCord there, who has already improved their Elixir deployment story. I tweeted a few weeks ago about improving the default Rails deployment experience to not require DB provisioning and configuration of env vars like SECRET_BASE_KEY and they said it would likely ship within the next 3 months.
My hot take is they’re setting themselves up to ride the server-side rendered HTML renaissance we are experiencing with LiveView and Hotwire. It will become much more important to deploy applications geographically closer to customers to lower latency, which Fly makes sane for the rest of us.
We're betting the same, and Fly.io doesn't know it yet, but part of our solution will eventually be made possible (much easier) thanks to their architecture. :)
But we aren't yet ready for that piece.
> Is some shady ISP colocation as safe as highly secured AWS and Azure data centers?
Fly isn't really edge in the sense that they are colocating with lots of ISPs like Netflix might for video at the edge.
Fly is edge in the sense that they have hosts in N regions and manage the anycast for you, so traffic seamlessly routes to those regions. They don't publish which data centers they operate in, only the regions (https://fly.io/docs/reference/regions/), but most of those regions line up with Equinix data centers, which are all physically secured professional facilities.
It’s a great question though and one they should have a document about.
> We've had a free tier since we launched ten years ago (in 2020).
I’m not a Fly.io customer (although more and more I am thinking I should be), but I eagerly read every new blog post because I’m so entertained by their tone. These people are clearly having fun at work.
This is a sort of content-free comment, but I'll say it anyways because it's been gnawing at me for months: there is a lot of content queued up for us to write about; just a freaking avalanche of stuff we've been working on. I keep looking for places to break it off and start writing about it, and the work keeps growing and foiling our attempts.
I also like the frankness and simplicity of their communication style.
No fancy buzzwords, to the point and speaks to things we all know are true but are typically not addressed or are wrapped up in spin. The first and last paragraphs are great examples.
The authenticity of it leaves me with a strong sense of trust and respect.
Yeah, that's the important part for me. The "fun" part about the sandwich app is nice, but the part that differentiates it from other content is stuff like:
> The free VMs themselves are just a bit of memory and some idling CPU, but the state is obnoxious for us to manage.
> But here's what happens if you give people freemium full access to a hosting platform: lots and lots of free VMs mining for cryptocurrencies.
If I had to put words to it, I would call it "transparency" and "treating their public as adults/with respect". I know that free storage is hard to find because disk space is harder than CPU and RAM for them to manage. I know that I need to put in my credit card because people will abuse the free tiers for crypto if they don't ask for one (sourcehut had the same problem with their free CI). I don't feel like they're trying to hide something from me, even if they still wrap it in some fun. This is the behavior I would expect from a good colleague.
Really enjoying watching Fly develop. Not sure if I have read the strategy right, but my read is that they are going for an edge 'OS' which is compatible with the way software is already built for the centralised deployment model, where other players are going with a strategy of inventing a new 'OS' and saying, pretty much, 'all this edge goodness is available, but you have to rewrite your software to run on our OS first'.
I'm so glad to be seeing someone do this, because for a while it was looking like nobody would - and as long as nobody is doing it, nobody would have to.
Now with Fly increasing in popularity, you have to expect that others, Cloudflare in particular, must be seriously looking at integrating more 'centralised deployment' tools like postgres into their edge platform too (if, to be fair, they didn't already have this on their roadmap), providing more options and competition.
> Even for our free services, we require a credit card number. We know that's the worst and it gives you heartburn. It's not because we plan to charge you.
> But here's what happens if you give people freemium full access to a hosting platform: lots and lots of free VMs mining for cryptocurrencies.
> We could tell you we want to prevent crypto mining because we care about the planet, and that would be true. We also have a capitalism nerve that hurts when people spend our money gambling. Your credit card number is the thin plastic line between us and chaos.
I don't really have another alternative to offer here, but I appreciate the transparency and honesty of saying this, regardless of whether they're right or not.
I wish more companies out there actually explained their reasoning behind decisions, instead of essentially just going like: "We're doing this because of undisclosed reasons, please accept that this is how things are going to be."
Of course, in many cases you can come up with a few feasible reasons for why companies make many of their decisions, but being given first hand context for these things feels nice!
It's not just miners; you also have people using your free or even cheap tiers for DDoS, port scanning, and SSH brute-forcing. These burn your IP addresses, so it's better to prevent it than to try to catch it later. A credit card check goes a long way as a barrier to entry.
As long as it is free, miners will keep trying to get around it, for instance by running tons of different free accounts, each with an obfuscated mining setup.
Even if you catch them all, just having them try will be a huge waste of resources.
Sure ... look for high CPU, and then anonymous miners cap their CPU usage. And a few legitimate users (and potential customers) get locked out for reasons they don't understand.
Repeat with every other trick you use to detect abuse.
To the Fly.io team here: other than the "daily storage-based snapshots of each of your provisioned volumes"[0], is there a plan to offer something similar to Heroku's WAL-based restore system [1], where you can (from the control panel) restore a db to a moment in time?
Also do you have any plans for a managed upgrading/patching of Postgres, again similar to Heroku?
For me personally, the fully managed nature of Postgres on Heroku is brilliant and what I would love to see on Fly.io. It seems that on Fly it's a little more hands-on, or am I missing something?
0: https://fly.io/docs/getting-started/multi-region-databases/#...
1: https://devcenter.heroku.com/articles/heroku-postgres-data-s...
We will be able to do a WAL backup / point in time restore feature when we ship object storage.
Managed upgrades are almost in. If you run `fly image show` it'll tell you if you need one, then `fly image update` will upgrade your Postgres. We don't do this automatically yet. It won't be difficult though.
I tried to deploy an application I currently have running on Digital Ocean App Platform (their Heroku competitor) also on fly.io. The app is using Python/Django and Postgres, ideal to test.
I must say the deployment experience was great, stuff just worked, brilliant.
One question in case someone tried that: How can you customize the postgres config? I want to enable pg_stat_statements, and one needs to add `shared_preload_libraries = 'pg_stat_statements'` to the config for this. One could also make this the default maybe? It is the default on most cloud providers (AWS RDS, Digital Ocean).
> Fly Postgres clusters are just regular Fly applications. If you need to customize Postgres in any way, you may fork this repo and deploy using normal Fly deployment procedures. You won't be able to use fly postgres commands with custom clusters. But it's a great way to experiment and potentially contribute back useful features!
https://github.com/fly-apps/postgres-ha
Fly.io was still very beta when my startup was launching. If we were creating our deployment now, Fly.io would be top of my list.
Would love to understand better how to deploy Rails with fly and still benefit from multi-region/failover/scaling
p.s. horrible nitpick, but I think it’s SECRET_KEY_BASE :)
We have a Gem to make it almost transparent: https://github.com/superfly/fly-ruby
Have you thought of hiring a technical writer? Think of it as a work-stealing algorithm if that helps :-)
2010?
Very earnest.
Brilliant! I have only had to use it once on Heroku, but that one time has me convinced that I couldn't live without it.