I dislike those black and white takes a lot. It's absolutely true that most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients.
That being said, the cloud does have a lot of advantages:
- You're getting a lot of services readily available. Need offsite backups? A few clicks. Managed database? A few clicks. Multiple AZs? Available in seconds.
- You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware) and everything is available right now [0]
- Peak-heavy loads can be a lot cheaper. Mostly irrelevant for your average compute load, but things are quite different if you need to train an LLM
- Many services are already certified according to all kinds of standards, which can be very useful depending on your customers
Also, engineering time and time in general can be expensive. If you are a solo entrepreneur or a slow-growth company, you have a lot of engineering time for basically free. But in a quick growth or prototyping phase, not to speak of venture funding, things can be quite different. Buying engineering time for >150€/hour can quickly offset a lot of savings [1].
Does this apply to most companies? No. Obviously not. But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
[0] Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
[1] Just to be fair, debugging cloud errors can be time consuming, too, and experienced AWS engineers will not be cheaper. But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.
You don't actually need any of those things until you no longer have a "project", but a business which will allow you to pay for the things you require.
You'd be amazed by how far you can get with a home linux box and cloudflare tunnels.
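For anyone who hasn't tried it, the tunnel side is a handful of commands; the tunnel name, hostname, port, and paths below are made up, not a specific setup:

```
cloudflared tunnel login                      # authorize cloudflared against your Cloudflare account
cloudflared tunnel create homebox             # creates the tunnel and writes a credentials JSON
cloudflared tunnel route dns homebox app.example.com

cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: homebox
credentials-file: /home/me/.cloudflared/<tunnel-id>.json   # path printed by "tunnel create"
ingress:
  - hostname: app.example.com
    service: http://localhost:8080            # whatever your home box serves
  - service: http_status:404                  # required catch-all rule
EOF

cloudflared tunnel run homebox                # or install it as a service
```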
On this site, I've seen these kinds of takes repeatedly over the past years, so I went ahead and built a little forum that consists of a single Rust binary and SQLite. The binary runs on a Mac Mini in my bedroom with Cloudflare tunnels. I get continuous backups with Litestream, and testing backups is as trivial as running `litestream restore` on my development machine and then running the binary.
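To make the Litestream part concrete, a minimal sketch of the replication config and the restore test described above - the bucket, paths, and endpoint are placeholders, and credentials come from environment variables:

```
cat > /etc/litestream.yml <<'EOF'
dbs:
  - path: /var/lib/forum/forum.db
    replicas:
      - type: s3                        # works with any S3-compatible store
        bucket: forum-backups
        path: forum.db
EOF

litestream replicate                    # runs alongside the app, streaming WAL changes offsite

# "testing backups" on a dev machine: pull the latest replica into a fresh file
litestream restore -o /tmp/forum.db s3://forum-backups/forum.db
```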
Despite some pages issuing up to 8 database queries, I haven't seen responses take more than about 4-5 ms to generate. Since I have 16 GB of RAM to spare, I just let SQLite mmap the whole database and store temp tables in RAM. I can further optimize the backend by e.g. replacing Tera with Askama and optimizing the SQL queries, but the easiest win for latency is to just run the binary in a VPS close to my users. However, the current setup works so well that I just see no point in changing what little "infrastructure" I've built. The other cool thing is the fact that the backend + litestream uses at most ~64 MB of RAM. Plenty of compute and RAM to spare.
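The SQLite side of that is a couple of pragmas. They are per-connection settings (only the journal mode persists), so the app issues them when it opens the database; the 16 GiB cap is illustrative:

```
sqlite3 forum.db <<'SQL'
PRAGMA journal_mode = WAL;        -- persistent; also what Litestream requires
PRAGMA mmap_size = 17179869184;   -- allow up to 16 GiB of memory-mapped I/O
PRAGMA temp_store = MEMORY;       -- temp tables and indexes live in RAM
SQL
```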
It's also neat being able to allocate a few cores on the same machine to run self-hosted GitHub actions, so you can have the same machine doing CI checks, rebuilding the binary, and restarting the service. Turns out the base model M4 is really fast at compiling code compared to just about every single cloud computer I've ever used at previous jobs.
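Registering a self-hosted runner on a box like that is mostly copy-paste from the repo's settings page; the runner version, owner/repo, and token here are placeholders:

```
mkdir actions-runner && cd actions-runner
curl -L -o runner.tar.gz \
  https://github.com/actions/runner/releases/download/v2.321.0/actions-runner-osx-arm64-2.321.0.tar.gz
tar xzf runner.tar.gz
./config.sh --url https://github.com/<owner>/<repo> --token <registration-token>
./svc.sh install && ./svc.sh start    # keep it running as a service
```

Workflows then target it with `runs-on: self-hosted`, so the same machine checks, builds, and restarts the binary.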
Exactly! I've been self hosting for about two years now, on a NAS with Cloudflare in front of it. I need the NAS anyway, and Cloudflare is free, so the marginal cost is zero. (And even if the CDN weren't free it probably wouldn't cost much.)
I had two projects reach the front page of HN last year, everything worked like a charm.
It's unlikely I'll ever go back to professional hosting, "cloud" or not.
1. For small stuff, AWS et al aren't that much more expensive than Hetzner, mostly in the same ballpark, maybe 2x in my experience.
2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.
I absolutely prefer self-hosting on root servers; it has always been my go-to approach for my own companies, big and small stuff. But for people who can't or don't want to mess with their infrastructure themselves, I do recommend the cloud route, even with all the current anti-hype.
> 2. What's easy to underestimate for _developers_ is that your self hosted setup is most likely harder to get third party support for. If you run software on AWS, you can hire someone familiar with AWS and as long as you're not doing anything too weird, they'll figure it out and modify it in no time.
If you're at an early/smaller stage you're not doing anything too fancy either way. Even self hosted, it will probably be easy enough to understand that you're just deploying a rails instance for example.
It only becomes trickier if you're handling a ton of traffic or applying a ton of optimizations, and end up in a state where a team of sysadmins would be needed while you're doing it alone and ad hoc. IMHO the important part is to realize when things will get complicated and move to a proper org or stack before you're stuck.
One way of solving for this is to just use K3s or even plain Docker. It is then just Kubernetes/containers, and you can hire a lot of people who understand that.
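For reference, that baseline really is tiny - the official k3s install script, or no orchestrator at all and a single container (the image name is hypothetical):

```
# single-node Kubernetes via k3s
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes

# or skip Kubernetes and just run a container
docker run -d --name app --restart unless-stopped -p 80:8080 ghcr.io/example/app:latest
```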
>most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients. That being said, the cloud does have a lot of advantages:
When did Linode and DO get dropped from being part of the cloud?
What used to separate a VPS from the cloud was on-demand resources with per-second billing, which DO and Linode, along with a lot of second-tier hosts, also offer. They are part of the cloud.
Scaling used to be an issue, because buying and installing your hardware, or sending it to a DC to be installed and ready, took too much time. Dedicated-server offerings weren't big enough at the time, and the highest core count in 2010 was an 8-core Xeon. Today we have EPYC Zen 6c at 256 cores and likely double the IPC. Scaling problems that used to require a rack of servers can now be handled by a single server, with everything fitting in RAM.
Managed database? PlanetScale or Neon.
A lot of the issues for medium-to-large projects that the "cloud" managed to solve are no longer issues in 2025, unless you are in the top 5-10% of projects that require that sort of flexibility.
> But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
Agreed.
These sorts of takedowns usually point to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr.
And making sure you're not making a security configuration mistake that will accidentally leak private data to the open internet because of a detail of AWS you were unaware of.
Figuring out how to do db backups _can_ also be fairly time consuming.
There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!)
I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though.
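As a Postgres example of the hand-rolled path: the dump is one cron'd command, and the step people skip - proving it restores - is only a few more. Database, path, and table names are made up:

```
# nightly, from cron or a systemd timer
pg_dump -Fc -f /backups/app-$(date +%F).dump app_db

# periodically restore into a scratch database and sanity-check it
createdb app_restore_test
pg_restore -d app_restore_test /backups/app-$(date +%F).dump
psql -d app_restore_test -c 'SELECT count(*) FROM users;'
dropdb app_restore_test
```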
> You're getting a lot of services readily available. Need offsite backups? A few clicks
I think it is a lot safer for backups to be with an entirely different provider. It protects you in case of account compromise, account closure, disputes.
If using cloud and you want to be safe, you should be multi-cloud. People have been saved from disaster by multi-cloud setups.
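One low-effort way to get that separation is an encrypted restic repository at a second, unrelated provider; the endpoint and bucket are placeholders, and the provider's access keys go in the usual AWS_* environment variables:

```
export RESTIC_REPOSITORY='s3:https://<other-provider-endpoint>/offsite-backups'
export RESTIC_PASSWORD='<long-random-passphrase>'

restic init                                  # once, creates the encrypted repo
restic backup /var/lib/app /etc              # nightly
restic forget --keep-daily 7 --keep-weekly 8 --prune
```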
> You're not paying up-front costs (vs. investing hundreds of dollars for buying server hardware)
Not true for VPSes or rented dedicated servers either.
> Peak-heavy loads can be a lot cheaper.
they have to be very spiky indeed though. LLMs might fit, but a lot of compute-heavy spiky loads do not. I saved a client money on video transcoding that only happened once per upload, and only over a month or two a year, by renting a dedi all year round rather than using the AWS transcoding service.
> Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
You have to do work to ensure things run across multiple availability zones (and preferably regions) anyway.
> But an RDS instance with solid backups-equivalent will usually not amortize quickly, if you need to pay someone to set it up.
You have more forced upgrades.
An unmanaged database will only need a lot of work if operating at large scale. If you are, then it's probably well worth employing a DBA anyway, as an AWS or similar managed DB is not going to do all the optimising and tuning a DBA will do.
Any serious business will (might?) have hundreds of TBs of data. I store that in our DC, with a second-DC backup, for about 1/10 of what it would cost in S3.
In my case we have a B2B SaaS where access patterns are occasional, revenue per customer is high, general server load is low. Cloud bills just don’t spike much. Labor is 100x the cost of our servers so saving a piddly amount of money on server costs while taking on even just a fraction of one technical employee’s worth of labor costs makes no sense.
I think compliance is one of the key advantages of cloud. When you go through SOC2 or ISO27001, you can just tick off entire categories of questions by saying 'we host on AWS/GCP/Azure'.
It's really shitty that we all need to pay this tax, but I've been just asked about whether our company has armed guards and redundant HVAC systems in our DC, and I wouldn't know how to do that apart from saying that 'our cloud provider has all of those'.
In my experience you still have to provide an awful lot of "evidence". I guess the advantage of AWS/GCP/Cloud is that they are so ubiquitous you could literally ask an LLM to generate fake evidence to speed up the process.
I don’t feel like anything really changed? Fairly certain the prices haven’t changed. It’s honestly been pleasantly stable. I figured I’d have to move after a few months, but we’re a few years into the acquisition and everything still works.
Akamai has some really good infrastructure, and an extremely competent global cdn and interconnects. I was skeptical when linode was acquired, but I value their top-tier peering and decent DDoS mitigation which is rolled into the cost.
> That being said, the cloud does have a lot of advantages:
Another advantage is that if you aim to provide a global service consumed throughout the world then cloud providers allow you to deploy your services in a multitude of locations in separate continents. This alone greatly improves performance. And you can do that with a couple of clicks.
It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.
AWS has a bunch of startup credits you can use, if you're smart.
But if you want free hosting, nothing beats just CloudFlare. They are literally free and even let you sign up anonymously with any email. They don't even require a credit card, unlike the other ones. You can use cloudflare workers and have a blazing fast site, web services, and they'll even take care of shooing away bots for you. If you prefer to host something on your own computer, well then use their cache and set up a cloudflare tunnel. I've done this for Telegram bots for example.
Anything else - just use APIs. Need inference? Get a bunch of Google credits, and load your stuff into Vertex or whatever. Want to take payments anonymously from around the world? Deploy a dapp. Pay nothing. Literally nothing!
LEVEL 2:
And if you want to get extra fancy, have people open their browser tabs and run your javascript software in there, earning your cryptocurrency. Now you've got access to tons of people willing to store chunks of files for you, run GPU inference, whatever.
Oh do you want to do distributed inference? Wasmcloud: https://wasmcloud.com/blog/2025-01-15-running-distributed-ml... ... but I'd recommend just paying Google for AI workloads
Want livestreaming that's peer to peer? We've got that too: https://github.com/Qbix/Media/blob/main/web/js/WebRTC.js
PS: For webrtc livestreaming, you can't get around having to pay for TURN servers, though.
LEVEL 3:
Want to have unstoppable decentralized apps that can even run servers? Then use pears (previously dat / hypercore). If you change your mindset, from server-based to peer to peer apps, then you can run hypercore in the browser, and optionally have people download it and run servers.
https://pears.com/news/building-apocalypse-proof-application...
>It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.
You can easily scale hard drive space independently of RAM by buying block storage separately and then mounting it on your Linode.
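Mechanically it's just another disk: attach the volume, make a filesystem, mount it. The device path is whatever Linode shows for the volume; the label here is made up:

```
mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_appdata
mkdir -p /mnt/appdata
mount /dev/disk/by-id/scsi-0Linode_Volume_appdata /mnt/appdata
echo '/dev/disk/by-id/scsi-0Linode_Volume_appdata /mnt/appdata ext4 defaults,noatime 0 2' >> /etc/fstab
```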
To me DO is a cloud. It is pricey (for performance) and convenient. It is possibly a wiser bet than AWS for a startup that wants to spend less developer (read expensive!) time on infra.
I mean there are many places that sell multi AZ, hourly billed VPS/Bare Metal/GPU at a fraction of the cost of AWS.
I would personally have an account at one of those places and back up to there with everything ready to spin up instances and failover if you lose your rack, and use them for any bursty loads.
This is all correct. I've been running my own servers for many years now, keeping things simple and saving a lot of money and time (setting anything up in AWS or Azure is horribly complicated and the UIs are terrible).
One thing to remember is that you do need to treat your servers as "cloud servers" in the sense that you should be able to re-generate your entire setup from your configuration at any time, given a bunch of IPs with freshly-installed base OSs. That means ansible or something similar.
If you insist on using cloud (virtual) servers, do yourself a favor and use DigitalOcean, it is simple and direct and will let you keep your sanity. I use DO as a third-tier disaster recovery scenario, with terraform for bringing up the cluster and the same ansible setup for setting everything up.
I am amused by the section about not making friends saying this :-) — most developer communities tend to develop a herd mentality, where something is either all the rage, or is "dead", and people are afraid to use their brains to experiment and make rational decisions.
Me, I'm rather happy that my competitors fight with AWS/Azure access rights management systems, pay a lot of money for hosting and network bandwith, and then waste time on Kubernetes because it's all the rage. I'll stick to my boring JVM-hosted Clojure monolith, deployed via ansible to a cluster of physical servers and live well off the revenue from my business.
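To make the earlier point about regenerating the entire setup from configuration concrete, a minimal sketch: one playbook, one command, fresh IPs in and a working cluster out. The module names are real Ansible builtins; the packages, template, and service name are placeholders:

```
cat > site.yml <<'EOF'
- hosts: web
  become: true
  tasks:
    - ansible.builtin.apt:
        name: [nginx, postgresql]
        update_cache: true
    - ansible.builtin.template:
        src: app.service.j2
        dest: /etc/systemd/system/app.service
    - ansible.builtin.systemd:
        name: app
        enabled: true
        state: started
        daemon_reload: true
EOF

ansible-playbook -i inventory.ini site.yml
```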
I was a guy that built server clusters during the early 00's, for my own and others' web and other projects. When AWS really took off, it was like a spend-all-your-money mania, and devs and companies treated my skills like dirt. I got a job writing facial recognition edge servers, with performance numbers many claim are impossible (25M face compares per second per core), and my employer found itself a leader in the industry. But customers could not wrap their heads around just a single box capable of our numbers (800M face compares per second, plus ingestion of 32 video streams), and to get sales the company ended up moving everything into AWS because customers did not trust anything else.
Without targeting you directly, you wrote: "800M face compares per second". I was creeped out when I read that. What is the real-world application for something like that? My guess: real-time, large-scale surveillance.
Do you have any blog posts or something you could share on how facial recognition works? Specifically what a "face compare" is. That sounds interesting!
> If you insist on using cloud (virtual) servers, do yourself a favor and use DigitalOcean, it is simple and direct and will let you keep your sanity. I use DO as a third-tier disaster recovery scenario, with terraform for bringing up the cluster and the same ansible setup for setting everything up.
This is the general direction in which society is going. There is no dialogue. You are either with us or against us.
Sad.
Pre-cloud pets vs cattle approach. We ran a few pets, but had DCs' worth of cattle thanks to PXE, TFTP and Ansible. No Terraform required, as there was no need to control the state of multiples of cloud cruft. Good times. Except when something would pack up in the middle of winter in the wee hours and it was a bollock-cracking motorbike ride to the DC to spit on the offending black box.
This is all true. But... But if you manage your own server, as the author advises, you need to figure out a lot of stuff and remember a lot of stuff.
Are ulimits set correctly?
Shall I turn on syn cookies or turn them off because of performance?
What are the things I should know but don't, and that ChatGPT has not told me about, because they go beyond some intro tutorial on how to run a VPS on DO and were never indexed by ChatGPT and the like?
Is all of my software on the server up to date? Is any library I use being exploited? Zero-day attacks are on me too, and so is blocking bots, etc. What if I do some update and it turns out that my Postgres version no longer works correctly? This is all my problem.
What if I need to send emails? These days doing this ourselves is a dark art by itself (IP/domain address warming up, checking if my domain has not ended on some spam list, etc.).
What if I need to follow some regulations, like European Union GDPR compliance? Have I done everything that is needed to store personal data as GDPR requires? Is my DB password stored in a compliant way, or will I face a fine of up to 10% of my revenue?
This is not the black/white situation the author tries to present, and those who use cloud services are not dullards buying some IT version of snake oil.
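For what it's worth, the first two questions above come down to a few lines of kernel and PAM configuration; the username and limits are illustrative:

```
# SYN cookies - already on by default on most modern distros
sysctl net.ipv4.tcp_syncookies                            # check current value
echo 'net.ipv4.tcp_syncookies = 1' > /etc/sysctl.d/99-net.conf
sysctl --system                                           # apply persistently

# file-descriptor limits for the service user
cat > /etc/security/limits.d/app.conf <<'EOF'
appuser soft nofile 65536
appuser hard nofile 65536
EOF
```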
Setting up an email server is the only thing I couldn't do with my own home-hosted setup, because you're at the mercy of your internet provider to give you a PTR record in their network, and lately many providers outright refuse to do it for "your own and their own safety" reasons. This thing alone could be the difference between deciding to host yourself or use a cloud service.
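Checking whether you even have a usable PTR record is a one-liner; the IP and hostname are examples:

```
dig -x 203.0.113.25 +short       # should print your mail host, e.g. mail.example.com.
dig +short mail.example.com A    # ...and that name should resolve back to 203.0.113.25
```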
>What if I need to send emails? These days doing this ourselves is a dark art by itself (IP/domain address warming up, checking if my domain has not ended on some spam list, etc.).
AFAIK, everyone sending automated emails just uses one of the paid services, like sendmail.
>What if I need to follow some regulations, like European Union GDPR compliance? Have I done everything what is needed to store personal data as GDPR requires? Is my DB password stored in a compliant way or I will face a fine up to 10% of my incomes.
What does this have to do with cloud vs non-cloud? You'll need to manage your data correctly either way.
All of this is true both for dedicated servers and cloud-hosted VMs.
This list looks like FUD, to be honest, trying to scare people. Yes, you should be scared of these things, but none of them are magically solved by hosting your stuff in AWS/Azure/Google or any other could provider du jour.
Yes, you will need to employ someone with basic system administration competence. That's a given.
Cloud infra is touted as obviating the need to hire system administrators, but that's a marketing fabrication. Trying to manage infrastructure without the necessary in-house skills is a recipe for disaster, whether it's in the cloud or on-prem.
I'm fully aware this is pedantic, but you can't save 10x. You can pay 1/10. You can save 90%. Your previous costs could have been 10x your current costs. But 10x is more by definition, not less. You can't save it.
In English, x or time(s) after a number marks a "unit" used by various verbs. A 10x increase. Increase by 10x. Go up 10x. Some of these verbs are negative like decrease or save. "Save 10x" is the same as "divide by 10". Four times less, 5 times smaller etc. are long attested.
No, x literally means multiply. It doesn't somehow also mean divide. They should use the percent sign; that's what it's for. "10x my costs" means 10 × my cost - it's literally an equation.
You can save 10x, but you need to have an x first. So if you've got formula A which costs 500 EUR, and formula B which costs 480, you're saving 20 EUR. If formula C costs 300, you could save 200, which is 10x what you would save using B. But just as people don't understand when to use 'fewer' or misuse 'literally' to mean 'figuratively' (which is rather the opposite meaning of the word they use); words don't mean anything any more, it's all about the feelings man.
People commonly use this expression in everyday conversation, such as, "you could save 10 times as much if you would just shop at Costco." So I agree with OP, their comment is correct but pedantic.
Cost of item = 10
First discounted cost of item = 9
=> First saving = 1
Second discounted cost of item = 6
=> Second saving = 4
Second saving is 4x first saving.
(Edit - formatting)
But that's 4x the savings compared to another saving. I suppose you've upped the pedantry and are technically correct, but that's a pretty narrow use case and not the one used in the article.
hmm.. if you reduce latency from one second to a hundred milliseconds, could you celebrate that you've made it 10x faster, or would you have the same quibble there too?
Edit: Thinking about this some more: You could say you are saving 9x [of the new cost], and it would be a correct statement. I believe the error is assuming the reference frame is the previous cost vs the new cost, but since it is not specified, it could be either.
> if you reduce latency from one second to a hundred milliseconds, could you celebrate that you've made it 10x faster
Yes you can, because speed has units of inverse time and latency has units of time. So it could be correct to say that cutting latency to 1/10 of its original value is equivalent to making it 10x the original speed - that's how inverses work.
Savings are not, to my knowledge, measured in units of inverse dollars.
Consider it as getting 10x the resources for the same price - that is, the resource-to-price ratio is 10x. Except you don't need 10x the resources so you choose to get 1x the resources for 0.1x the price instead.
I took care of IT for a startup hedge fund once. I was the quant's right-hand man, data engineer, visualization dashboard guy, everything. The quant needed to run a monolithic C++ program daily to chew through stock data and we decided a dual-Xeon server with 512 GB RAM would be great. OVH MG-512, for those curious.
Quant happy, boss happy, all good. Then the boss goes for lunch with someone and comes back slightly disturbed. We were not buzzword compliant. Apparently the other guy made him feel that he was using outdated tech by not being on AWS, using auto-scaling etc;
Here I am, from a background where my first language was 8086 assembly, and compactness was important to me. I remember thinking, "This whole thing could run on a powerful calculator, except for the RAM requirement".
It was a good lesson for me. Most CTOs know this bias and have unnecessarily huge and wasteful budgets but make sure they keep the business heads happy in the comfort that the firm is buzzword compliant. Efficiency and compactness are a marketing liability for IT heads!
I would think a quant would understand arithmetic.
Did you try crunching some of the numbers with him? I would hope a quant could also understand following the common wisdom can sometimes cost you more.
The author touches on it briefly, but I'd argue that the cloud is immensely helpful for building (and tearing down) an MVP or proving an early market for a new company using startup credits or free tiers offered by all vendors. Once a business model has been proven, individual components and the underlying infrastructure can be moved out of the cloud as soon as cost becomes a concern.
This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner, and developers must stay disciplined to keep components portable from day one, but you can get a lot of mileage out of free credits without burning dollars on any infrastructure. The biggest challenge becomes finding the time to perform these migrations among other competing priorities, such as new feature development, especially if you're growing fast.
Our startup is mostly built on Google Cloud, but I don't think our sales rep is very happy with how little we spend or that we're unwilling to "commit" to spending. The ability to move off of the cloud, or even just to another cloud, provides a lot of leverage in the negotiating seat.
Cloud vendors can also lead to an easier risk/SLA conversation for downstream customers. Depending on your business, enterprise users like to see SLAs and data privacy laws respected around the globe, and cloud providers make it easy to say "not my problem" if things are structured correctly.
It seems like nowadays people are less concerned with vendor lock-in than they were 15 years ago. One of the reasons to avoid lock-in is to be able to move when the price gouging gets just greedy enough that the move is worth the cost. One of the drawbacks of all these built-in services at AWS is the expense of trying to recreate the architecture elsewhere.
> This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner
Right. But none of the cloud providers encourage that mode of thinking, since they all have completely different frontends, APIs, different versions of the same services (load balancers, storage) etc. Even if you standardize on k8s, the implementation can be chalk and cheese between two cloud providers. The lock-in is way worse with cloud providers.
I'd be more interested to understand (from folk who were there) what the conditions were that made AWS et al such a runaway hit. What did folks gain, and have those conditions meaningfully changed in some way that makes it less of a slam dunk?
My recollection from working at a tech company in the early 2010s is that renting rack space and building servers was expensive and time consuming, estimating what the right hardware configuration would be for your business was tricky, and scaling different services independently was impossible. Also, multi-regional redundancy was rare (remember when Squarespace was manually carrying buckets of petrol for generators up many flights of stairs to keep servers online post-Sandy?[1]).
AWS fixed much of that. But maybe things have changed in ways that meaningfully change the calculus?
[1] https://www.squarespace.com/press-coverage/2012-11-1-after-s...
You're falling into the false dichotomy that always comes up with these topics: as if the choice is between the cloud and renting rack space while applying your own thermal paste on the CPUs.
In reality, for most people, renting dedicated servers is the goldilocks solution (not colocation with your own hardware).
You get an incredible amount of power for a very reasonable price, but you don't need to drive to a datacenter to swap out a faulty PSU, the on site engineers take care of that for you.
I ordered an extra server today from Hetzner. It was available 90 seconds afterwards. Using their installer I had Ubuntu 24.04 LTS up and running, and with some Ansible playbooks to finish configuration, all in all from the moment of ordering to fully operational was about 10 minutes tops. If I no longer need the server I just cancel it, the billing is per hour these days.
Bang for the buck is unmatched, and none of the endless layers of cloud abstraction getting in the way. A fixed price, predictable, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be.
I find it a blissful way to work.
I’d add this. Servers used to be viewed as pets; the system admins spent a lot time on snow flake configurations and managing each one. When we started standing up tens of servers to host the nodes of our app (early 2000s); the simple admin overhead was huge. One thing I have not seen mentioned here was how powerful ansible and similar tools were at simplifying server management. Iirc being able to provision and standup servers simply with known configurations was a huge win aws provided.
> all in all from the moment of ordering to fully operational was about 10 minutes tops.
I think this is an important point. It's quick.
When cloud got popular, doing what you did could take upwards of 3 months in an organisation, with some being closer to 8 months. The organisational bureaucracy meant that any asset purchase was a long procedure.
So, yeah, the choices were:
1. Wait 6 months to spend out of capex budget
Or
2. Use the opex budget and get something in 10m.
We are no longer in that phase, so cloud services make very little sense now, because you can still use the opex budget to get a VPS and have it going in minutes with automation.
True, but I think you're touching on something important regarding value. Value is different depending on the consumer: for you, you're willing and able to manage more of the infrastructure than someone who has a more narrow skillset.
Being able to move the responsibility for areas of the service on to the provider is what we're paying for, and for some, paying more money to offload more of the responsibility actually results in more value for the organization/consumer
AWS also made huge inroads in big companies because engineering teams could run their own account off of their budget and didn’t have to go through to IT to requisition servers, which was often red tape hell. In my experience it was just as much about internal politics as the technical benefits.
Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, e.g. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers.
The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc.
And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months.
Absolutely. At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card. With AWS, leadership suddenly accepted that Ops/Dev could provision what we thought was right. It isn’t logically compelling, but that’s why the cloud gained traction so quickly: it removed friction.
Computing power (compute, memory, storage) has increased 100x or more since 2005, but AWS prices are not proportionally cheaper. So where you were getting a reasonable value in ~2012, that value is no longer reasonable, and by an increasing margin.
In 2006 when the first EC2 instances showed up they were on par with an ok laptop and would take 24 months to pay enough in rent to cover the cost of hardware.
Today the smallest instance is a joke and the medium instances are the size of a 5 year old phone. It takes between 3 to 6 months to pay enough in rent to cover the cost of the hardware.
What was a great deal in 2006 is a terrible one today.
*raises hand* I ran a small SaaS business in the early 2000s, pre-AWS.
Renting dedicated servers was really expensive. To the extent that it was cheaper for us to buy a 1U server and host it in a local datacenter. Maintaining that was a real pain. Getting the train to London to replace a hard drive was so much fun. CDNs were "call for pricing". EC2 was a revelation when it launched. It let us expand as needed without buying servers or paying for rack space, and try experiments without shoving everything onto one server and fiddling with Apache configs in production. Lambda made things even easier (at the expense of needing new architectures).
The thing that has changed is that renting bare metal is orders of magnitude cheaper, and comparable in price to shared hosting in the bad old days.
> But maybe things have changed in ways that meaningfully change the calculus?
I'd argue that Docker has done that in a LOT of ways. The huge draw to AWS, from what I recall with my own experiences, was that it was cheaper than on-prem VMware licenses and hardware. So instead of virtualizing on proprietary hypervisors, firms outsourced their various technical and legal responsibilities to AWS. Now that Docker is more mature, largely open source, way less resource intensive, and can run on almost any compute hardware made in the last 15 years (or longer), the cost/benefit analysis starts to favor moving off AWS.
Also AWS used to give out free credits like free candy. I bet most of this is vendor lock in and a lot of institutional brain drain.
The free credits... what a WILD time! Just show up to a hackathon booth, ask nicely, and you'd get months/years worth of "startup level" credits. Nothing super powerful - basically the equivalent of a few quad core boxes in a broom closet. But still for "free".
> although actually many people on here are American so I guess for you aws is legally a person...
Corporate legal personhood is actually older than Christianity, and it being applied to businesses (which were late to the game of being allowed to be corporations) is still significantly older than the US (starting with the British East India Company), not a unique quirk of American law.
I don't think that's a hard and fast rule? I think et al is for named, specific entities of any kind. You might say "palm trees, evergreen trees, etc" but "General Sherman, Grand Oak, et al".
The problem it really solved was that your sysadmins were still operating by SSHing into the physical servers and running commands meticulously typed out in a release doc or stored on a local mediawiki instance, and acquiring new compute resources involved a battle with finance for the capex which would delay pretty much any project for weeks, while cloud vendors let engineers at many companies sidestep both processes.
Everything else was just reference material for how to sell it to your management.
TL;DR version - it's about money and business balance sheets, not about technology.
For businesses past a certain size, going to cloud is a decision ALWAYS made by business, not by technology.
From a business perspective, having a "relatively fixed" ongoing cost (which is an operational expense, i.e. OpEx), even if it is significantly higher than what it would cost to buy and build out internally (which is a capital expense, i.e. CapEx), makes financial planning, taxes and managing EBITDA much easier.
Note that no one on the business side really cares what the tech implications are as long as "tech still sorta runs mostly OK".
It also, via financial wizardry, makes tech cost "much less" on a quarter over quarter and year over year basis.
While I love a good cloud bashing, it's really not black and white. If you're really small, it probably doesn't matter much if you're using Hetzner or AWS, but co-location might be a bit too expensive. If you run an absolutely massive company, cloud vs. self-hosted comes down to whether or not you can build tooling as good as AWS, GCP or Azure, with all the billing infrastructure and reporting.
The issues are mostly in the SME segment, where it really depends on what your business is. Do you need a completely separate system for each customer? In that case, AWS is going to be easier and probably cheaper. Are you running a constant load 24/7? Then you should consider buying your own servers.
It's really hard to apply a blanket conclusion to all industries, in regards to cloud cost and whether or not it's worth it. My criticism in regards to opting for cloud is that people want all the benefits of the cloud, but not use any of the features, because that would lock them into e.g. AWS. If you're using AWS as a virtual machine platform only, there's always going to be cheaper (and maybe better) options.
If you have high-volume traffic depending on the time of month, e.g. finance around ultimo/primo, you might need to scale your performance to 5-10x your normal idle load.
If running on your own data center, or renting physical/virtual machines from ie Hetzner, you will pay for that capability overhead for 30.5 days per month, when in reality you only need it for 2-3 days.
With the cloud you can simply scale dynamically, and while you end up paying more for the capacity, you only pay when you use it, meaning you save money for most of the month.
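To put rough numbers on that, using only the figures above: provisioning 10x capacity for the whole month costs 10 capacity-months, while keeping 1x and bursting the extra 9x for 3 of 30.5 days costs about 1 + 9 × 3/30.5 ≈ 1.9 capacity-months - roughly a 5x difference at equal unit prices, which is the headroom the cloud's higher per-unit prices have to fit inside.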
> If running on your own data center, or renting physical/virtual machines from ie Hetzner, you will pay for that capability overhead for 30.5 days per month, when in reality you only need it for 2-3 days.
I keep seeing this take on here and it just shows most people don't actually know what you can do off the cloud. Hetzner allows you to rent servers by the hour, so you can just do that and only pay for the 2-3 days you need them.
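With Hetzner Cloud that's literally a create/delete pair around the busy days; the server name, type, and image below are illustrative:

```
hcloud server create --name monthend-1 --type ccx33 --image ubuntu-24.04 --ssh-key mykey
# ...run the month-end jobs for those 2-3 days...
hcloud server delete monthend-1       # hourly billing stops once the server is gone
```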
It's not actually that hard to get your own server racked up in a data centre; I have done it. Since it was only one box that I built and installed at home, I just shipped it and they racked it in the shared area, plugged in the power and network, and gave me the IP address. It was cheaper than renting from someone like Hetzner - about £15 a month at the time for 1A and 5TB a month of traffic at 1Gbps, plus a one-off install fee of £75.
At the time I did this, no one had good gaming CPUs in the cloud (they are still a bit rare, especially in VPS offerings), and I was hosting a gaming community and server. So I built a 1U with two machines in it and had a public and a private server, with RAID 1 drives on both and redundant power. Ran that as a gaming server for many years until it was obsolete. It wasn't difficult, and I think the machine was about £1200 in all, which for 2 computers running game servers wasn't too terrible.
I didn't do this because it was necessarily cheaper, I did it because I couldn't find a cloud server to rent with a high clockspeed CPU in it. I tested numerous cloud providers, sent emails asking for specs and after months of chasing it down I didn't feel like I had much choice. Turned out to be quite easy and over the years it saved a fortune.
We did this a lot in the early 2000's. At the time I worked for a company with offices in Bellevue and we put our own hardware in full sized racks at a datacenter in the komo4 building in Seattle.
Because of proximity it was easy to run over and service the systems physically if needed, and we also used modem based KVM systems if we really needed to reboot a locked up system quickly (not sure that ever actually happened!).
I'm sure customer-owned hardware placed in a datacenter rack is still a major business.
"Remote hands" is the DC term for exactly what it sounds like. You write a list of instructions and someone hired by the DC will go over to your rack and do the thing.
The problem isn't setup, it's maintaining it. It's not an easy job sometimes. I'm not trying to dissuade people from running their own servers, but it's something to consider.
> 1. For small stuff, AWS et al aren't that much more expensive than Hetzner, mostly in the same ballpark, maybe 2x in my experience.
2x is the same ballpark???
> When did Linode and DO get dropped from being part of the cloud?
I had someone on this site arguing that Cloudflare isn't a cloud provider...
Getting through AWS documentation can be fairly time consuming.
When does the cloud start making sense?
This is a hidden cost of self-hosting for many in b2b. It's not just convincing management, it's convincing your clients.
"Saved 10x" would imply there was an amount being saved that they multiplied.
Clarity of expression is a superpower
I don’t feel it’s pedantic at all.
Being pedantic about words means you think effective communication is somehow wrong. Be precise, don’t be pedantic.
What is saving? _Spending less_, that's all. Saving generates no income, it makes you go broke slower.
Independent of the price or the product, you can never save more than factor 1.0 (or 100%).
Wasn't there a guy on TV who wanted to make prices go down 1500%? Same BS, different flavor.
Looks about right if you ask me
Reading author's article:
> For me, that meant:
> RDS for the PostgreSQL database (my biggest monthly cost, in fact)
> EC2 for the web server (my 2nd biggest monthly cost)
> Elasticache for Redis
https://rameerez.com/how-i-exited-the-cloud/
Right. But none of the cloud providers encourage that mode of thinking, since they all have complete different frontends, API's, different versions of the same services (load balancers, storage) etc. Even if you standardize on k8s, the implementation can be chalk and cheese between two cloud providers. The lock in is way worse with cloud providers.
My recollection from working at a tech company in the early 2010s is that renting rack space and building servers was expensive and time-consuming, estimating the right hardware configuration for your business was tricky, and scaling different services independently was impossible. Multi-regional redundancy was also rare (remember when Squarespace was manually carrying buckets of petrol for generators up many flights of stairs to keep servers online post-Sandy? [1]).
AWS fixed much of that. But maybe things have changed in ways that meaningfully change the calculus?
[1] https://www.squarespace.com/press-coverage/2012-11-1-after-s...
Bang for the buck is unmatched, with none of the endless layers of cloud abstraction getting in the way. A fixed price, predictable, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be. I find it a blissful way to work.
I think this is an important point. It's quick.
When cloud got popular, doing what you did could take upwards of 3 months in an organisation, with some being closer to 8 months. The organisational bureaucracy meant that any asset purchase was a long procedure.
So, yeah, the choices were:
1. Wait 6 months to spend out of capex budget
Or
2. Use the opex budget and get something in 10 minutes.
We are no longer in that phase, so cloud services make very little sense now, because you can still use the opex budget to get a VPS and have it going in minutes with automation.
Back when AWS was starting, this would have taken 1-3 days.
Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, e.g. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers.
The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc.
And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months.
In 2006, when the first EC2 instances showed up, they were on par with an OK laptop, and it took 24 months of rent to cover the cost of the hardware.
Today the smallest instance is a joke and the medium instances are the size of a 5-year-old phone. It takes 3 to 6 months of rent to cover the cost of the hardware.
What was a great deal in 2006 is a terrible one today.
Renting dedicated servers was really expensive. To the extent that it was cheaper for us to buy a 1U server and host it in a local datacenter. Maintaining that was a real pain. Getting the train to London to replace a hard drive was so much fun. CDNs were "call for pricing". EC2 was a revelation when it launched. It let us expand as needed without buying servers or paying for rack space, and try experiments without shoving everything onto one server and fiddling with Apache configs in production. Lambda made things even easier (at the expense of needing new architectures).
The thing that has changed is that renting bare metal is orders of magnitude cheaper, and comparable in price to shared hosting in the bad old days.
I'd argue that Docker has done that in a LOT of ways. The huge draw to AWS, from what I recall with my own experiences, was that it was cheaper than on-prem VMware licenses and hardware. So instead of virtualizing on proprietary hypervisors, firms outsourced their various technical and legal responsibilities to AWS. Now that Docker is more mature, largely open source, way less resource intensive, and can run on almost any compute hardware made in the last 15 years (or longer), the cost/benefit analysis starts to favor moving off AWS.
Also, AWS used to give out free credits like candy. I bet most of this is vendor lock-in and a lot of institutional brain drain.
Second, egress data being very expensive while ingress is free has contributed to making them sticky gravity holes.
Edit: although, actually, many people on here are American, so I guess for you AWS is legally a person...
Et al. = et alii, "and others".
Etc. = et cetera, "and so on".
Either may or may not apply to people depending on context.
Corporate legal personhood is actually older than Christianity, and it being applied to businesses (which were late to the game of being allowed to be corporations) is still significantly older than the US (starting with the British East India Company), not a unique quirk of American law.
Everything else was just reference material for how to sell it to your management.
https://learn.microsoft.com/en-us/azure/cloud-adoption-frame...
TL;DR version - it's about money and business balance sheets, not about technology.
For businesses past a certain size, going to cloud is a decision ALWAYS made by business, not by technology.
From a business perspective, having a "relatively fixed" ongoing cost (an operational expense, i.e. OpEx), even if it is significantly higher than what it would cost to buy and build out internally (a capital expense, i.e. CapEx), makes financial planning, taxes, and managing EBITDA much easier.
Note that no one on the business side really cares what the tech implications are as long as "tech still sorta runs mostly OK".
It also, via financial wizardry, makes tech cost "much less" on a quarter over quarter and year over year basis.
The issues are mostly in the SME segment, and it really depends on what your business is. Do you need a completely separate system for each customer? In that case, AWS is going to be easier and probably cheaper. Are you running a constant load 24/7? Then you should consider buying your own servers.
It's really hard to apply a blanket conclusion to all industries with regard to cloud cost and whether or not it's worth it. My criticism of opting for the cloud is that people want all the benefits of the cloud but won't use any of the features, because that would lock them into e.g. AWS. If you're only using AWS as a virtual machine platform, there are always going to be cheaper (and maybe better) options.
If you run your own data center, or rent physical/virtual machines from e.g. Hetzner, you pay for that capacity overhead for all 30.5 days per month, when in reality you only need it for 2-3 days.
With the cloud you can simply scale dynamically, and while you end up paying more for the capacity, you only pay when you use it, meaning you save money for most of the month.
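Rough arithmetic with made-up prices (say on-demand cloud capacity at an equivalent $3/hour vs. an always-on dedicated box at $1/hour) for the 2-3 day peak case described above:

```latex
% Hypothetical prices: dedicated \$1/hour equivalent, on-demand \$3/hour.
\[
\text{dedicated, always on: } 1 \cdot 24 \cdot 30.5 = \$732/\text{month},
\qquad
\text{on-demand, 3 days: } 3 \cdot 24 \cdot 3 = \$216/\text{month}.
\]
```

So even at a hefty hourly premium, paying only for the peak window can come out ahead, which is the trade being described here.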
I keep seeing this take on here, and it just shows most people don't actually know what you can do off the cloud. Hetzner lets you rent servers by the hour, so you can just do that and only pay for the 2-3 days you need them.
Tricky networking though.
At the time I did this, no one had good gaming CPUs in the cloud (they're still fairly rare, especially in VPS offerings), and I was hosting a gaming community and server. So I built a 1U box with two machines in it and ran a public and a private server, with RAID 1 drives on both and redundant power. I ran that gaming server for many years until it was obsolete. It wasn't difficult, and I think the machine was about £1200 in all, which for 2 computers running game servers wasn't too terrible.
I didn't do this because it was necessarily cheaper; I did it because I couldn't find a cloud server to rent with a high-clockspeed CPU in it. I tested numerous cloud providers, sent emails asking for specs, and after months of chasing it down I didn't feel like I had much choice. It turned out to be quite easy, and over the years it saved a fortune.
Because of proximity it was easy to run over and service the systems physically if needed, and we also used modem-based KVM systems if we really needed to reboot a locked-up system quickly (not sure that ever actually happened!).
I'm sure customer-owned hardware placed in a datacenter rack is still a major business.