jwr · 3 years ago
I know it's not in fashion, but I will suggest that renting physical servers is a very good and under-appreciated compromise. As an example, 45€/month gets you a 6-core AMD with 64GB of RAM and NVMe SSDs at Hetzner. That's a lot of computing power!

Virtualized offerings perform significantly worse (see my 2019 experiments: https://jan.rychter.com/enblog/cloud-server-cpu-performance-...) and cost more. The difference is that you can "scale on demand", which I found not to be necessary, at least in my case. And if I do need to scale, I can still do that, it's just that getting new servers takes hours instead of seconds. Well, I don't need to scale in seconds.
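For anyone wanting to reproduce this kind of comparison, a minimal sketch of a single-core CPU benchmark (not the methodology from the linked experiments, just an illustration of the idea) might look like this:

```python
import time

def cpu_work(n: int) -> int:
    # Tight integer loop: deterministic, CPU-bound work whose
    # wall-clock time reflects how much CPU you actually get.
    total = 0
    for i in range(n):
        total += i * i % 7
    return total

def benchmark(n: int = 5_000_000) -> float:
    # Returns elapsed wall-clock seconds for a fixed amount of work.
    start = time.perf_counter()
    cpu_work(n)
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"elapsed: {benchmark():.3f}s")
```

Run the same script on a dedicated box and on a comparably priced VM; on oversubscribed virtualized hosts, hypervisor steal time tends to show up as longer and noisier wall-clock times for identical work.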

In my case, my entire monthly bill for the full production environment and a duplicate staging/standby environment is constant, simple, predictable, very low compared to what I'd need to pay AWS, and I still have a lot of performance headroom to grow.

One thing worth noting is that I treat physical servers just like virtual ones: everything is managed through ansible and I can recreate everything from scratch. In fact, I do use another "devcloud" environment at Digital Ocean, and that one is spun up using terraform, before being passed on to ansible that does the rest of the setup.

phpnode · 3 years ago
I really don't understand why the industry seems to have lost sight of this. It's really common to see super complicated, incredibly expensive, but highly scalable cloud deployments for problems that can be trivially solved with one or two dedicated servers. Even the mere suggestion of renting a dedicated server provokes scorn from devops teams. The overall difference in cost when taking into account all of the complexity, feature-lag and general ceremony must be at least 10x and maybe even closer to 100x. It's madness.
FlyingSnake · 3 years ago
The allure of our industry to mimic the FAANG leaders might be one of the reasons we're at this stage. Most products could safely run on simpler setups.

I suspect that VendorOps and complex tools like Kubernetes are favored by the complexity merchants who have arisen in the past decade. It looks great on a resume and gives tech leaders a false sense of achievement.

Meanwhile Stack Overflow, which is arguably more important than most startups, is chugging along on dedicated machines.[1]

1: https://stackoverflow.blog/2016/02/17/stack-overflow-the-arc...

macNchz · 3 years ago
The dedicated server deployments I worked on at smallish software companies 10+ years ago wound up being really annoying. I enjoy sysadmin type stuff and this idea has tempted me, but I think it’s a false economy in most cases.

The incremental cost of being on a cloud is totally worth it to me to have managed databases, automatic snapshots, hosted load balancers, plug and play block storage etc.

I want to worry about whether our product is good, not about middle of the night hardware failures, spinning up a new server and having it not work right because there was no incentive to use configuration management with a single box, having a single runaway task cause OOM or full disk and break everything instead of just that task’s VM, fear of restarting a machine that has been up 1000 days etc.

alex7734 · 3 years ago
In addition to what has already been said in the other comments, devs have an incentive to push cloud tech against the interests of their employers so they can put the experience on their resumes. Cloud tech is seen as more specialized and held in higher regard, so a dev experienced in it commands a higher salary.

Plus, who doesn't want to play with the newest, coolest toy on someone else's dime?

cameronh90 · 3 years ago
If you're ever trying to get big enterprise contracts, the kind of scalability, compliance and disaster recovery that providers like AWS/Azure enable are now just table stakes.

Sure, you can get something working on Hetzner but be prepared to answer a lot more questions.

swatcoder · 3 years ago
The industry lost sight of a lot of things.

We went through a growth boom, and like all of them before, it meant there were lots of inexperienced people being handed lots of money and urgent expectations. It’s a recipe for cargo culting and exploitative marketing.

But growth is slowing and money is getting more expensive, so we'll slow down and start to re-learn the old lessons with exciting new variations (like here: managing bare-metal scaling with containers and orchestration).

And the whole cycle will repeat in the next boom. That’s our industry for now.

esskay · 3 years ago
A lot of it is the buzz. 'Cloud hosting' is something everyone thinks they need, despite not grasping why, or that chances are they aren't getting cloud hosting at all, just a VPS.

A solid dedicated server is 99% of the time far more useful than a crippled VPS on shared hardware, but it obviously comes at an increased cost if you don't need all the resources it provides.

adrianmsmith · 3 years ago
I think it might be due to the incentives - the costs are borne by the company (thus irrelevant to the employee), however putting new cool technologies on their CV is very relevant to the employee.
onlyrealcuzzo · 3 years ago
> I really don't understand why the industry seems to have lost sight of this.

There's a disconnect between founders and everyone else.

Founders believe they're going to >10x every year.

Reality is that they're 90% likely to fail.

90% of the time - you're fine failing on whatever.

And in some fraction of the 10% of the time you succeed, you're still fine without the cloud - at least for several years of success, which is plenty of time to switch if ever necessary.

jwr · 3 years ago
I think one of the reasons is that people confuse physical servers with manual administration. As I said, I do not do manual administration. Nothing ever gets configured on any server by hand. All administration is through ansible.

I only have one ansible setup, and it can work both for virtualized servers and physical ones. No difference. The only difference is that virtualized servers need to be set up with terraform first, and physical ones need to be ordered first and their IPs entered into a configuration file (inventory).

Of course, I am also careful to avoid becoming dependent on many other cloud services. For example, I use VpnCloud (https://github.com/dswd/vpncloud) for communication between the servers. As a side benefit, this also gives me the flexibility to switch to any infrastructure provider at any time.

My main point was that while virtualized offerings do have their uses, there is a (huge) gap between a $10/month hobby VPS and a company with exploding-growth B2C business. Most new businesses actually fall into that gap: you do not expect hockey-stick exponential growth in a profitable B2B SaaS. That's where you should question the usual default choice of "use AWS". I care about my COGS and my margins, so I look at this choice very carefully.

criley2 · 3 years ago
I'll be the counter opinion for this. Let's say you're Sprocket Masters, and you have to run two front ends (a Sprocket Sales site and a Sprocket Repair site) and a backend connecting them. But ultimately your sprockets are made in your factory on-site, and the majority of your staff is related to the manufacture of sprockets.

You're not a software company, fundamentally you make and sell Sprockets.

The opinions here would be to hire a big eng/IT staff to "buy and maintain servers" (PaaS is bad), then likely "write a bunch of code yourself" (SaaS is bad), or whatever is currently popular here (in the last thread it was "pushing PHP over SFTP without Git, and you're silly if you need more" lol)

But I believe businesses should do One Thing Well and avoid trying to compete (doing it manually) for things outside of their core competency. In this case, I would definitely think Sprocket Masters should not attempt to manage their own hardware and should rely on a provider to handle scaling, security, uptime, compliance, and all the little details. I also think their software should be bog-standard with as little in-house as possible. They're not a software shop and should be writing as little code as possible.

Realistically, Sprocket Masters could run these sites with a rather small staff unless they decided to do it all manually, in which case they'd probably need a much larger staff.

codegeek · 3 years ago
Because cloud is the new "no one gets fired for using IBM". If you are using dedicated servers somewhere, you don't get the same reputational cover as, say, using AWS. Everyone knows AWS, Google Cloud, etc., and no one gets fired for selecting them.

I personally ran dedicated servers for our business in the earlier days, but as we expanded and scaled, it became a lot easier to go with the cloud providers to provision various services quickly, even though the costs went up. Not to mention it is a lot easier to tell a prospective customer that you use "cloud like AWS" than "oh, we rent these machines in a data center run by some company" (which can actually be better, but customers mostly won't get that). Audit, compliance, and the rest.

l5870uoo9y · 3 years ago
> trivially solved with one or two dedicated servers...

For many, even a lot less than that. I run a small side project[1] that went viral a few times (30K views in 24 hours or so), and it is running on a single-core CPU web server and a managed Postgres, likewise on a single CPU core. It hasn't even been close to full utilization.

1: https://aihelperbot.com/

tstrimple · 3 years ago
I feel like many people use the cloud "wrong". If you're just standing up VMs, then yeah, it's going to be slower and more expensive than renting a physical server. If you're just going to set up VMs and pretend you're still on-prem, you're better off not switching. But I've got projects set up on PaaS services that literally cost a few bucks a month, because it's only charging me for actual use and not idle time. I've got some small customers (tens of thousands of users) who I've helped move into the cloud who see the same thing. They have SPA websites backed by lambda functions and on-demand databases. If their website isn't being hit, it's not costing them anything at all. The only lock-in is the boilerplate code around the lambda activation, which could be switched fairly easily to target any Functions-as-a-Service platform.
jerf · 3 years ago
I am responsible for about 10-12 t3.medium-class servers (number includes redundant servers) and the services running on them. All of them are fairly overprovisioned for what they are, excepting that the monitoring system we use on them for some reason likes to consume about half-a-gig of RAM so I can't run on even smaller servers. I write my services in Go, so the CPU isn't sitting there spending 98% of its time chasing pointers and can use multiple CPUs as needed for its heavy-hitting tasks. It's a rough day for those servers when they pass 5% average CPU for the day.

Could I switch some of them to lambda functions? Or switch to ECS? Or switch to some other cloud service du jour? Maybe. But the amount of time I spent writing this comment is already about six month's worth of savings for such a switch. If it's any more difficult than "push button, receive 100% reliably-translated service", it's hard to justify.

Some of this is also because the cloud does provide some other services now that enable this sort of thing. I don't need to run a Kafka cluster for basic messaging, they all come with a message bus. I use that. I use the hosted DB options. I use S3 or equivalent, etc. I find what's left over is almost hardly worth the hassle of trying to jam myself into some other paradigm when I'm paying single-digit dollars a month to just run on an EC2 instance.

It is absolutely the case that not everyone or every project can do this. I'm not stuck to this as an end-goal. When it doesn't work I immediately take appropriate scaling action. I'm not suggesting that you go rearchitect anything based on this. I'm just saying, it's not an option to be despised. It has a lot of flexibility in it, and the billing is quite consistent (or, to put it another way, the fact that if I suddenly have 50x the traffic, my system starts choking and sputtering noticeably rather than simply charging me hundreds of dollars more is a feature to me rather than a bug), and you are generally not stretching yourself to contort into some paradigm that is convenient for some cloud service but may not be convenient for you.

znpy · 3 years ago
> Even the mere suggestion of renting a dedicated server provokes scorn from devops teams.

Have you ever had to manage one of those environments?

The thing is, if you want some basic more-than-one-person scalability and proper devops, then you have to overprovision by a significant factor (possibly voiding your savings).

You're inevitably going to end up with a bespoke solution, which means new joiners will have a harder time getting mastery of the system, and significant people leaving the company will take their intimate knowledge of your infrastructure with them. You're back to pets instead of cattle. Some servers are special; after a while, "automation" means a lot of glue shell scripts here and there, and an OS upgrade means either half the infra is down for a while or you don't do OS upgrades at all.

And in the fortunate case you need to scale up... You might find unpleasant surprises.

And don't ever get me started on the networking side. Unless you're renting your whole rack and placing your own networking hardware, you get what you get, which could be very poor in either functionality or performance... assuming you're not doing anything fancy.

mariusmg · 3 years ago
>I really don't understand why the industry seems to have lost sight of this.

Because the industry is full of people who are chasing trends and keywords and to which the most important thing is to add those keywords to the CVs.

dpweb · 3 years ago
It's big companies - their priorities are different. Saving a couple million dollars isn't worth the tradeoff of not being industry standard (AWS, Google, MS). They hire tech consulting companies who sell their product: large deployments.
Slartie · 3 years ago
"No one ever got fired for buying AWS"
Akronymus · 3 years ago
> but highly scalable

IME aiming for scalability is exceedingly wrong for most services/applications/whatever. And usually you pay so much overhead for the "scalable" solutions that you then need to scale to make up for it.

xnx · 3 years ago
devops people want things to be expensive and complicated to justify their salaries
avereveard · 3 years ago
If you need durability in the cloud, you just pick any object storage service from the big 3, and a DB with a multi-node deployment, automatic replication, and managed failover.

Getting the equivalent reliability with iron is a lot more expensive than renting "two dedicated servers". Now, you might be fine with one server and a backup solution, and that's fair. But a sysadmin to create all that, even on a short contract for the initial setup with no maintenance, is going to cost well beyond the cloud price difference, especially if there's a database in the mix and you care about that data.

graymatters · 3 years ago
There are several reasons for that:
- lack of knowledge of how to maintain your own servers on premises
- MASSIVE PR by a handful of top-tier cloud providers to instill the notion that cloud is the only solution, the best one, the most technologically advanced, and the most attractive skill-set-wise
- hockey-stick growth expectations of (past days) startups from investors, forming the mindset that it must be met from day one in deployment
- "no one got fired for buying IBM" manager/tech-leader mentality, where IBM is replaced with "cloud"
- difficulty obtaining true cost information for cloud vs. on-premises deployment. For example, a team of ~16 people already pays off the cost of a dedicated sysadmin to maintain on-premises servers
- etc.
barbazoo · 3 years ago
As a developer I appreciate the compartmentalization and being able to make truly isolated changes. This is something that's not trivial for lots of folks, me included. I guess if one has the expertise in house to manage a K8s cluster on a raw machine, that'd be an option, but I, personally, am not gonna touch that with a ten foot pole. So yeah, with a big and capable enough infrastructure team that complexity could be abstracted away, but in reality (where I've worked) I haven't seen that yet.
7952 · 3 years ago
It feels like artificial scarcity similar to what you see in high fixed cost monopolies. And that is exactly what most corporate IT departments are.
Spooky23 · 3 years ago
Growth hides sins. I ran a large self-funded enterprise service at a loss for 4-5 years, which is possible if you are growing 30%.

Today, cloud is similar - the time to market is quicker as there are fewer moving parts. When the economy tanks and growth slows, the beancounters come in and do their thing.

It happens every time.

slt2021 · 3 years ago
The funniest thing is that cloud engineering teams have no problem using EC2 hosts, but somehow don't want to rent dedicated servers/VPSs from regular datacenter companies.

This only makes AWS richer at the expense of companies and cloud teams

raducu · 3 years ago
Anybody tried mixing cloud + self hosted k8s on hetzner or is it too complicated to do?
sinenomine · 3 years ago
> Even the mere suggestion of renting a dedicated server provokes scorn from devops teams.

There is a method to the madness, here it is called "job-security-driven development".

jgalt212 · 3 years ago
> Even the mere suggestion of renting a dedicated server provokes scorn from devops teams.

Because it's a threat to their jobs.

itomato · 3 years ago
I just want to go back to leasing an always-available Opteron with optional remote hands.
rakoo · 3 years ago
I can use a $4 VPS for my own personal cloud. I will never pay $45 for that.

There's a whole band of people who have the technical chops to self-host, or to host little instances for their family/friends/association/hiking club. They sit in a small margin where you're OK spending a little extra because you want to do it properly, but can't justify paying that much or spending time on hard maintenance. A small VPS, with a shared Nextcloud or a small website, is all that's needed in many cases.

codazoda · 3 years ago
> host little instances for their family/friends/association/hiking club

For this I even use a little Raspberry Pi 400 in my bedroom.

https://joeldare.com/private-analtyics-and-my-raspberry-pi-4...

ThatMedicIsASpy · 3 years ago
I pay €6 for a root server (OVH/Kimsufi).

Atom 2c/4t, 4GB RAM, 1TB drive, 100Mbit.

A few years of uptime at this point.

PragmaticPulp · 3 years ago
> I can use a $4 VPS for my own personal cloud. I will never pay $45 for that.

Exactly. These sub-$10 VPS instances are great for small projects where you don't want to enter into long contracts or deal with managing your own servers.

If you're running an actual business where margins are razor-thin and you've got all the free time in the world to handle server issues if (when) they come up, those ~$50 dedicated servers could be interesting to explore.

But if you're running an actual business, even a $10,000/month AWS bill is cheaper than hiring another skilled developer to help manage your dedicated servers.

This is what's generally missed in discussions about cloud costs in places like HN: yes, cloud is expensive, but hiring even a single additional sysadmin/developer to help you manage custom infrastructure is incredibly expensive and much less flexible. That's why spending a hypothetical $5,000/month on a cloud-hosted solution that could, in theory, be custom-built on a $50/month server with enough time investment can still be a great deal. Engineers are expensive and time is limited.
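The tradeoff described here can be made concrete with some back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions (loaded rates and hours vary wildly), not figures from any provider:

```python
# Back-of-the-envelope: total cost = infrastructure bill + the loaded
# cost of engineer time spent running it. All numbers are assumptions.

def total_monthly_cost(infra_per_month: float,
                       hours_per_month: float,
                       loaded_hourly_rate: float) -> float:
    return infra_per_month + hours_per_month * loaded_hourly_rate

# Hypothetical: a $5,000/month managed-cloud setup needing ~2 hours of
# attention per month vs. a $50/month dedicated server needing ~60 hours
# of hands-on work, both at a $100/hour loaded engineering rate.
cloud = total_monthly_cost(5000, 2, 100)       # 5200.0
dedicated = total_monthly_cost(50, 60, 100)    # 6050.0
print(f"cloud: ${cloud:.0f}/mo, dedicated: ${dedicated:.0f}/mo")
```

Under these assumed numbers the "expensive" cloud setup comes out cheaper; shave a few dozen hands-on hours off the dedicated column and the ranking flips, which is exactly why this comparison is so contested.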

jayski · 3 years ago
100% agree.

AWS is very cost-efficient for other services (S3, SES, SQS, etc.), but virtual machines are not a good deal. You get less RAM and CPU, with the virtualization overhead, and pay a lot more money.

Especially for Postgres: if you run some tests with pgbench, you can really see the penalty you pay for virtualization.

Maybe the sysadmin skill of being able to build your own infrastructure is becoming a lost art, otherwise I can't explain why people are so in love with paying 5x for less performance.

Hetzner is cheap and reliable in Europe; if you're in North America, take a look at OVH, especially their cost-saving alternative called SoYouStart. You can get 4 cores/8 threads at 4.5GHz, 64GB RAM, and an NVMe drive for $65.

(I have no affiliation with OVH, I'm just a customer with almost 100 servers, and it's worked out great for me)

KyeRussell · 3 years ago
AWS’s value proposition is dead simple to understand if you’ve actually used it. You said it yourself. AWS gets you through the door with their managed services, meta components (like billing and IAM), and whatever else. Saying even in a tongue in cheek way that people are in love with paying more for less raw power isn’t giving people enough credit. I know I could get a better cost per hertz elsewhere, or whatever. That’s not the whole equation.
DGideas · 3 years ago
I'd be interested to know if there are any such providers in East Asia (Hong Kong, Taiwan, Japan, South Korea, etc.).
unity1001 · 3 years ago
> Hetzner is cheap and reliable in Europe

Hetzner cloud now has two US locations... Still no US dedicated servers though - those would kick real ass. Even if their current cloud offerings themselves are already ~30% of the price of the major cloud equivalents...

nevi-me · 3 years ago
I was upset last week when I saw how much our managed Postgres service cost us at work. $800 for the month, it's storing around 32GB of data, and is allocated 4 CPU cores.

Like you, I also run my services from a rented physical server. I used to use Versaweb, but their machines are too old. I didn't previously like Hetzner because I'd heard bad things about them interfering with what you're running. However, I moved to them in December when my Versaweb instance just died, probably SSD from old age. I'm now paying 50% of what I paid Versaweb, and I can run 6 such Postgres instances.

Then it makes one wonder whether it's worth paying $700 or $800 for a managed service with a fancy cloud UI, automatic upgrades and backups, etc. For a one-person show or small startup, I think not. Cheaper to use an available service and dump backups to S3 or something cheaper.
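As a rough sanity check on that tradeoff: taking the $800/month managed price and the six-instances-per-server figure from the comment, and assuming (hypothetically) around $400/month for the rented physical server, the per-database gap looks like this:

```python
# Per-database cost comparison. The $800 managed price and the
# 6-instances-per-server figure come from the comment above; the
# $400/month server price is an assumption for illustration.
managed_per_db = 800.0
server_monthly = 400.0          # assumed rental price of the physical box
instances_per_server = 6
self_hosted_per_db = server_monthly / instances_per_server
print(f"managed: ${managed_per_db:.0f}/db, "
      f"self-hosted: ~${self_hosted_per_db:.0f}/db")
```

Under those assumptions that's roughly an order of magnitude per database, before counting the value of the backups, upgrades, and on-call that the managed service bundles in.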

dagw · 3 years ago
> $800 for the month, it's storing around 32GB of data, and is allocated 4 CPU cores.

Company I used to work for happily paid company A four times what company B charged for the exact same service, just because company A was willing to send quarterly invoices in way that played nicely with our invoicing system. For companies, saving a few hundred bucks here and there often isn't worth the hassles of introducing any extra friction.

dachryn · 3 years ago
For a startup, it might actually make more sense. Do you want to pay an engineer to set up the backups and monthly update cycle? Set up the alert monitoring?

There is an implicit cost there. If it's only one or two of those things, just take the managed services.

If you start to scale, hire an administrator type of employee to save on this.

neilv · 3 years ago
Managed PostgreSQL seemed worthwhile for an early startup that couldn't afford to lose an hour of production data.

Otherwise, I'd have to hire/contract someone very experienced, or dedicate a solid month or more of my time (which was not available), just to be 100% sure we could always restore journaled PITR backups quickly.

I can save orders of magnitude on cloud costs other places, but credible managed PostgreSQL was a pretty easy call (even if the entry-level price tag was more than you'd expect).

klodolph · 3 years ago
I’ve come to the same conclusion. Whenever I’ve looked at managed database pricing, I’ve always wanted to just do the management myself and just rent the underlying computation and storage.

I think even for larger teams it may make sense to manage databases yourself, assuming you have the competence to do it well. There are so many things that can go wrong with managed services and they don’t hide the underlying implementation the way that things like block storage or object storage do.

quickthrower2 · 3 years ago
Why is postgres particularly overpriced in clouds?
shapefrog · 3 years ago
I went the opposite direction at Hetzner after the last round of price hikes. I now use multiple Hetzner Cloud instances for my personal projects, for 1/4 of the price (most of the time), or more when I'm messing with something in particular.

Peak performance is certainly worse, but I'm not too bothered if something takes longer to run anyway. You are certainly correct about having as much automation as possible in the provisioning of a server, something I did not do with a physical server.

MrGilbert · 3 years ago
This.

I used to have a root server for my pet projects, but honestly, that doesn't make sense. I'm not running a high traffic, compute intense SaaS on my machines. It's just a static website and some projects. I'm down to monthly costs of 24€, which includes a storage box of 1 TB to store all my data.

BirAdam · 3 years ago
I would actually say just invest in the hardware and count the asset depreciation on taxes. Further, “scaling” horizontally is rather easy if you properly separate functions into different servers. For example, a few really light machines running nginx (with fastcgi cache enabled, because yes) behind an haproxy machine, your PHP/Python/JS/Ruby machines behind your nginx machines, and your DB cluster with TiDB or something behind that. You’ve removed the overhead of the container systems and the overhead of the virtualization platform. You’re no longer sharing CPU time with anyone. You’re not experiencing as many context switches or interrupts. The cost is all upfront though. You will still pay for bandwidth and power, but over time your cost should be lower.

The main issue in any scenario involving real hardware is that you need staff who are competent in both hardware and Linux/UNIX systems. Many claim to be on their resumes and then cannot perform once on the job (in my experience anyway). In my opinion, one of the major reasons for the explosion of the cloud world was precisely the difficulty and financial cost of building such teams. Additionally, there is a somewhat natural (and necessary) friction between application developers and systems folks. The systems folks should always be pushing back and arguing for more security, more process, and fewer deployments. The dev team should always be arguing for more flexibility, more releases, and less process. Good management should then strike the middle path between the two. Unfortunately, incompetent managers have often just decided to get rid of systems people and move things into AWS land.

Finally, I would just note that cloud architecture is bad for the planet, as it requires over-provisioning by cloud providers and more compute power overall due to the many layers of abstraction. While any one project is responsible for little of this waste, the entire global cloud as an aggregate is very wasteful. This bothers me and obviously likely factors as an emotional bias in my views (so large amounts of salt for all of the above).

kkielhofner · 3 years ago
Generally agree, but in the US at least, from a financial and accounting standpoint, you can actually do better than the favored op-ex of cloud services with a Section 179[0] lease[1]. The caveat here, of course, is that you have to have income.

The argument could be made that you can develop a means to rent physical servers pre-income; then, when it makes sense, you can either use standard depreciation -or- Section 179 on outright purchases and/or Section 179 leases.

As an example, you can deploy an incredibly capable group of let's say four absolutely ridiculous completely over-provisioned $100k physical 1U machines in different colo facilities for redundancy. There are all kinds of tricks here for load balancing and failover with XYZ cloud service, DNS, anycast, whatever you want. You can go with various colo facilities that operate datacenters around the world, ship the hardware from the vendor to them, then provision them with Ansible or whatever you're into without ever seeing the facility or touching hardware.

So now you have redundant physical hardware that will absolutely run circles around most cloud providers (especially for I/O), fixed costs like all you can eat bandwidth (that doesn't have the 800% markup of cloud services, etc) - no more waiting for the inevitable $50k cloud bill or trying to track down (in a panic) what caused you to exceed your configured cloud budget in a day instead of a month. Oh btw, you're not locking yourself into the goofy proprietary APIs to provision and even utilize services other than virtual machines offered by $BIGCLOUD.

If you're doing any ML you can train on your own hardware or (or the occasional cloud) and run inference 24/7 with things like the NVIDIA A10. Continuous cloud rental for GPU instances is unbelievably expensive and the ROI on purchasing the hardware is typically in the range of a few months (or way ahead almost immediately with Section 179). As an example, I recently did a benchmark with the Nvidia A10 for a model we're serving and it can do over 700 inference requests/s in FP32 with under 10ms latency. With a single A10 per chassis across four healthy instances that's 2800 req/s (and could probably be tuned further).
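The capacity figure above is just multiplication, and the payback claim can be sketched the same way. Only the 700 req/s per A10 comes from the comment; the GPU price and cloud rental rate below are assumed round numbers for illustration:

```python
# Fleet throughput and hardware-payback sketch. The 700 req/s per-GPU
# figure is from the comment; the ~$3,000 A10 purchase price and
# ~$900/month cloud-GPU rental are assumptions, not quotes.

def fleet_throughput(per_gpu_rps: float, instances: int) -> float:
    # Aggregate request rate across identical single-GPU machines.
    return per_gpu_rps * instances

def payback_months(hw_cost: float, cloud_monthly: float) -> float:
    # Months of cloud rental needed to equal the purchase price.
    return hw_cost / cloud_monthly

print(fleet_throughput(700, 4))              # 2800.0
print(round(payback_months(3000, 900), 1))   # 3.3
```

With those assumed prices, the purchase pays for itself in a little over three months of 24/7 inference, consistent with the "few months" ROI described above.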

Then, if you get REALLY big, you can start getting cabinets and beyond. In terms of hardware failures, as mentioned, all I can say is that dual-PS, RAID-ed-out, etc. hardware is (in my experience) extremely reliable. Frankly, having had multiple full cabinets of hardware in the past, hardware failures were few and far between, and hardware vendors will include incredible SLAs for replacement. You notify them of the failure, they send a tech in < eight hours directly to the colo facility, and they replace the disk, PS, etc. with the flashing light.

My experience is one (good) FTE resource can easily manage this up to multiple cabinet scale. To your point, the current issue is many of these people have been snatched up by the big cloud providers and replaced (in the market) with resources that can navigate the borderline ridiculousness that is using dozens (if not more) products/services from $BIGCLOUD.

I've also found this configuration is actually MUCH more reliable than most $BIGCLOUD. No more wondering what's going on with a $BIGCLOUD outage that they won't even acknowledge (and that you have absolutely no control over). Coming from a background in telecom and healthcare it's completely wild to me how uptime has actually gotten much worse with cloud providers. Usually you can just tell customers "oh the internet is having problems today" because they'll probably be seeing headlines about it but for many applications that's just totally unacceptable - and we should expect better.

[0] - https://www.section179.org/section_179_deduction/

[1] = https://www.section179.org/section_179_leases/

bluehatbrit · 3 years ago
I do exactly this, using Hetzner as well. I was managing some side projects and self-hosting, and the bill just seemed to creep up because the VPSs were never powerful enough for the hosting. I started feeling the need to add more VPSs, and then I started shopping around. In the end I got a similar deal and specs. I can do anything I want with it now, and even with quite a few self-hosted services and projects I'm still running at only about 10-15% capacity.

If I want to spin up a new project or try out hosting something new it takes a couple minutes and I've got the scripts. Deployments are fast, maintenance is low, and I have far more for my money.

For anyone who's interested, this is the rough cut of what I'm using:

- Ansible to manage everything
- A tiny bit of Terraform for some DNS entries, which I may replace one day
- restic for backups, again controlled by Ansible
- Tailscale for VPN (I have some Pis running at home, nothing major, but Tailscale makes it easy and secure)
- docker-compose for pretty much everything else
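
A minimal sketch of how the Ansible + docker-compose pieces of a stack like that can fit together. The playbook name, paths, and module choice here are my assumptions for illustration, not the commenter's actual files; it assumes the `community.docker` collection is installed:

```yaml
# deploy.yml -- render a docker-compose template onto the host,
# then (re)create the stack
- name: Deploy a self-hosted service
  hosts: appserver
  become: true
  tasks:
    - name: Render compose file from template
      ansible.builtin.template:
        src: templates/service.compose.yml.j2
        dest: /opt/service/docker-compose.yml

    - name: Bring the stack up
      community.docker.docker_compose_v2:
        project_src: /opt/service
        state: present
```

Bumping a service version is then just editing the template and re-running `ansible-playbook deploy.yml`.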

jwr · 3 years ago
Your management tools are quite similar to what I use. Ansible, terraform for disaster recovery and development cloud systems, several approaches to backups (duplicity+gpg, rsync), ansible+docker for updating certs, VpnCloud for VPN, docker containers for various things.

Main app is Clojure, so I run a native JVM. Database is fully distributed, RethinkDB, now working on moving to FoundationDB.

The important thing is not to manage anything manually, e.g. treat physical servers just like any other cloud server. It shouldn't matter whether it's physical or virtualized.

cryptos · 3 years ago
In 2015 I worked on a project where really big servers (lots of RAM, fast SSDs) were needed for a large database. The client preferred AWS, but the monthly bill would have been something like 30K euros. So they went with Hetzner for a few hundred bucks a month ...
sakopov · 3 years ago
That sounds like managed (RDS) instance pricing.
limteary · 3 years ago
This is too expensive compared to VPS for lots of personal and hobby use cases or minor side business.

I've seen lots of less experienced people overpay for hetzner and similar when a $5-10 vps would've worked.

systems_glitch · 3 years ago
Don't discount your local colo, either. I pay $75/month for 2U, a gigabit Ethernet link to a router two hops from the metro SONET ring in Albany, NY, and 1 mbps 95th percentile bandwidth. I've got a router, a switch, and a 1U 32-core AMD Bulldozer box in there hosting VMs (it's past due for replacement but running fine).

Yes, you're supporting your own hardware at that point. No, it's not a huge headache.

dm · 3 years ago
And with that computing power it's easy to install qemu-kvm and virtualise your own servers which is more scalable (and easier to move when the hardware you're renting becomes redundant) than having one or two monolithic servers with every conceivable piece of software installed, conflicting dependencies, etc.

The biggest additional cost to this is renting more IPv4 addresses, which Hetzner charge handsomely for now that there are so few available.

marcosdumay · 3 years ago
Virtual servers scale down better.

Whatever you create will start with 0 users, and an entire real machine is complete overkill for the zero load you will get. You upgrade your VPS into a pair of real machines, then into a small rented cluster, and then into a datacenter (if somebody doesn't undercut that one first). All of those have predictable bills and top performance for their price.

BirAdam · 3 years ago
Modern server hardware can hit very low power states at idle just like modern desktops can. Most servers are also headless, so you’re not wasting power on a GPU. This means that you’re looking at a lower power bill if you’re not doing much. Likewise, you usually pay most for egress, and at low traffic that’s not much of an issue. Physical hardware can scale down exceedingly well. The main thing I would argue is that one own the hardware rather than rent.
zer0tonin · 3 years ago
I actually agree with you, it's just a little bit more expensive. An under-appreciated thing with dedicated servers is that they often come with very solid network bandwidth, which really helps for use cases like streaming audio/video.
hknmtt · 3 years ago
the cloud is a golden cage. companies, and people, got sold on comfort and ease of use while trapping themselves in a vendor lock-in environment that is the hotel california. when they realize the problem, they are too deep into the tech, and rewriting their codebase would be too complex or expensive, so they bite the expense and keep on doing what they are doing, constantly increasing their dependency and costs, never able to leave.

as you pointed out, bare metal is the way to go. it works the opposite of cloud - a bit more work at the beginning, but far less expense in the end.

dom96 · 3 years ago
In general I agree that physical servers are great, but I think it's important to note that for most people a $4/month VPS is more than enough. So actually 45€/month would be overkill in that case.
danjac · 3 years ago
Hetzner is excellent value of money, especially if you are based in Europe.
kouteiheika · 3 years ago
It can actually be an even better value if you are not based in Europe, because for certain countries they don't charge you VAT, so you effectively get everything ~20% cheaper.
JaggerJo · 3 years ago
I also self-host on a $4 box. Quite impressive how much performance you get for that kind of money.

Setting up and managing Postgres is a pain tho. Would be nice to have a simpler way of getting this all right.

robertlagrant · 3 years ago
My great hope is either fly.io or cloudflare providing managed postgres behind their super lightweight services and networking.
pepa65 · 3 years ago
That's what docker is for, making setup simpler.
lallysingh · 3 years ago
I think 5 reasons:

1. Forces config to be reproducible, as VMs will go down.

2. You can get heavy discounts on AWS that reduce the pain.

3. The other stuff you have access to atop the VMs that's cheaper/faster once your stuff is already in the cloud

4. Easier to have a documented system config (e.g. AWS docs) than train people/document what special stuff you have in-house. Especially useful in hiring new folks.

5. You don't need space or redundant power/internet/etc. on premises. Just enough to let people run their laptops.

Cthulhu_ · 3 years ago
I've been on a physical server for years, but the problem is wear & tear: eventually the hardware will fail for good. Unless you're willing to set up backups and a recovery plan for physical hardware failure, I'd stick with a VPS for now.

I used a VPS before that, but stopped and switched to a physical one because it was a better deal and we didn't run into CPU limit(ation)s.

jwr · 3 years ago
Hmm. But every server goes down eventually. What's the difference whether it's a VPS or a physical one? I don't assume anything about my physical servers.
sourcecodeplz · 3 years ago
But then again you have to monitor the health of the drives as opposed to a VPS.
bell-cot · 3 years ago
Ask in advance about who's responsible for hardware health monitoring, recommended tools for that, and how they handle hardware failures. Bonus - how they respond to such questions may separate the good providers of private rent-a-servers from the not-so-good ones.
k8sToGo · 3 years ago
Not if you build a cluster with disposable nodes.
SergeAx · 3 years ago
The problem with bare metal is the quite sophisticated contingency plan needed for physical failure. With VMs, when any one of them fails you just re-run your Terraform/Ansible scripts, restore a backup (if it was a stateful VM that failed) and voila, you are up and running again in minutes.
kuschku · 3 years ago
Why would it be any different with rented bare metal? I can just run my terraform script against any one and be up and running in less than 2 minutes.
komali2 · 3 years ago
For me I'm wondering about ISP overhead. When I rent a VPS I'm more confident in their uptime and that all the funky dns shit I'm less confident in will just work. I run a home server as a hobby and I had to do all sorts of weird negotiating with my isp to make sure I always have uptime. Then my shitty modem that the isp refuses to let me replace randomly resets my forwarded ports once every 6 months or so. Etc just weird shit like that.

I guess if I was investigating commercial options I'd have the "trunk" sorted at the office with a commercial isp solution, static IP, good IT hardware maybe, but from what I know at this exact moment if a client needed hosting I'd always go straight to renting a vps.

JSavageOne · 3 years ago
I'm not a devops guy but I had to manage our Ansible deployment at one of my first jobs and always despised it. Scripts that would work locally would often not work on the Ansible deployment, and it was hard to debug. The other engineer (don't think he had experience with it either prior to the job) also felt the same way.

I was more of a junior dev at the time so maybe I was an idiot, but I don't miss it at all. In theory I agree with what you're saying, but deploying a Dockerfile to something like Google Cloud Run is just a hell of a lot easier. Yea I'm paying more than what I would be managing my own VPS, but I think this is more than offset by the dev hours saved.

yencabulator · 3 years ago
On a cloud VM in a good cloud, using disaggregated storage:

- physical hardware has trouble, e.g. fan failure -> my VM gets live migrated to a different host, I don't notice or care

- physical hardware explodes -> my VM restarts on a different host, I might notice but I don't care

Disaster planning is a lot easier with VMs (even with pets not cattle).

kumarvvr · 3 years ago
This absolutely works, but this comes at a later stage, perhaps after an MVP has picked up ground.

For a beginner, the cheapest ones get the work done.

I am sure that as cloud computing evolves, these offerings will become more common.

There is another aspect of cloud computing. Medium to large corporates count cloud computing as single-digit percentages in their cost calculations. This means that managers and teams, when making decisions, often optimize for reliability and scalability (to be put on their presentations) rather than asking "is my setup costly or cheap?".

troupe · 3 years ago
Once you know you are going to use the machine for a while, buying two used servers (one for backup) and co-locating them somewhere breaks even pretty quickly.
Spooky23 · 3 years ago
Usually the issue is that on-prem is seen as legacy and gets legacy budget and talent. Bigger companies are going to stick to a couple of big providers who don’t sell/rent iron.

My employer adopted cloud as a business/financial play, not a religious one. We often land new builds in the cloud and migrate to a data center if appropriate later.

The apps on-prem cost about 40% less. Apps that are more cost effective in cloud stay there.

duxup · 3 years ago
I recall an HN user talking about their overly complicated cloud setup and … they found they could just do the job substantially faster on their local GPU.
closeparen · 3 years ago
That's a great option when you need servers in Europe. Hetzner offers cloud products only in its US datacenters, and as far as I know there are no dedicated server providers in the US with comparable pricing.

I think it's the case that AWS/GCP/Azure are not very cost-competitive offerings in Europe. What I'm not seeing is evidence of that for the US.

ska · 3 years ago
> and cost more.

For the same spec, sure. I think virtuals make sense at both ends - either dynamic scalability for large N is important, or you only actually need a small fraction of a physical box. Paying 45/mo for something that runs fine on 5/mo isn't sensible either, and a VPS gives you more flexibility by not ganging things together just to use up your server.

antisceptic · 3 years ago
How many nodes (droplets) do you spin up that you need Terraform? I do something similar but I use a single script to spin up the Digital Ocean side and then I complete the setup in Ansible (with an all-in-one master script, since the DO droplets are fetched with a handmade inventory plugin).
oauea · 3 years ago
Just keep in mind your disk can get zapped at any time, so keep backups and use raid if possible.
dspillett · 3 years ago
The whole VPS can go up in smoke fairly easily too, quite literally in some cases. There were a couple of small VPS providers hit by the fire that took out one of OVH's data centres who either had no backups or had the backups on other machines in the same building. Heck, many cheap VPS providers don't have a reliable backup system at all; some are honest about it (and tell users to manage their own backups) and some are less so. Also remember that a small VPS provider will have few staff because the margins are low, so if there is any manual intervention needed to restore services after a hardware failure, you might find spinning up a new VPS elsewhere, restoring your own backups¹, and switching over DNS is faster than the hosting provider's restore process. And their backups are often daily; with your own you may be able to efficiently manage much more frequent (hourly, or even perhaps near real-time if your apps are not write-intensive) snapshots. You aren't going to get a 1-hour max restore and 1-hour max data-loss guarantee for $4/month!

Keep backups in any case. Preferably on another provider or at least in a different physical location. And, of course, test them.

And if you are managing a good backup regime, and monitoring your data/app anyway, is monitoring drives a significant extra hardship?

-- [1] in fact if you automate the restore process to another location, which I do for a couple of my bits, then you can just hit that button and update DNS when complete, and maybe allocate a bit more RAM+cores (my test mirrors are smaller than the live VMs as they don't need to serve real use patterns).

ilrwbwrkhv · 3 years ago
This is what I do for all my companies. We have a $4 million profit this year.
klodolph · 3 years ago
Hijacking this—what are the main options here? I found pricing for Hetzner and OVH dedicated / bare metal instances easily enough, but I found it a little difficult to find information about other providers.
FpUser · 3 years ago
>"I know it's not in fashion, but I will suggest that renting physical servers is a very good and under-appreciated compromise. "

Exactly what I do for myself and my clients. Saves tons of dosh.

cpursley · 3 years ago
How much time is spent managing all of this monthly (deploys, etc)?
bluehatbrit · 3 years ago
Not the parent commenter but I do basically the same as them. It really depends how much you're hosting and how much you want the latest updates. If I didn't want to do any updates to the selfhosted stuff I could probably spend 0 time a month. Most of the stuff I selfhost is only available over my VPN anyway so security isn't a huge concern on those.

Even if I did want to update, it's just a case of pulling the latest version into the docker-compose template and re-running the ansible playbook. Obviously, if the upgrade requires more, then so be it, but work-wise it's no different from any other setup.

Probably the only thing I _need_ to do which I do manually is test my backups. But I have a script for each project which does it so I just SSH on, run the one-liner, check the result and it's done. I do that roughly once a month or so, but I also get emails if a backup fails.

So it can be no time at all. Usually it's probably 1-2 hours a month if I'm taking updates on a semi-regular basis. But that will scale with the more things you host and manage.

jwr · 3 years ago
I don't see a difference between physical and VPS servers here. I manage everything using automated tools anyway. Why would this take longer for physical?

In other words, the only difference is where the ansible inventory file comes from. Either it's a static list of IPs, or it comes from terraform.
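
For what it's worth, the static-inventory case is tiny. A hypothetical example (the IPs and group names are placeholders):

```ini
# inventory.ini -- the same playbooks apply whether these
# entries are physical boxes or cloud VMs
[production]
app1 ansible_host=192.0.2.10
app2 ansible_host=192.0.2.11

[staging]
staging1 ansible_host=192.0.2.20
```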

intelVISA · 3 years ago
What if, hypothetically, there was a tool that combined Terraform and Ansible and was native/performant rather than scripted?
nik736 · 3 years ago
That server does not come with ECC RAM!
coder543 · 3 years ago
The person at the top of the thread claimed €45/month, but the configuration they're describing actually seems to be €37/month.

If you want ECC RAM, that appears to be €60/month, and it also steps up to a more powerful 8-core CPU.

Regardless, if we're talking about a "full production environment and a duplicate staging/standby environment" (to quote the person you replied to), then €60/month * (2 or 3) is still dirt cheap compared to any startup's AWS bill that I've seen.

Use cases vary, but I tend to agree that AWS/GCP/Azure is not the answer to every problem.

For someone who can fit their application onto a $4 VPS, that's obviously going to be cheaper than anything bare metal, but the cloud scales up very expensively in many cases. Bare metal isn't the answer to every problem either, but a lot of people in the industry don't seem to appreciate when it can be the right answer.

TacticalCoder · 3 years ago
For 5 EUR / month you can also get a dedicated server (not a VPS) from OVH.

Sure, it's only an Atom N2800 with 4 GB of RAM / 1 TB SSD / 100 Mbit/s bandwidth (which is definitely the bottleneck, as I've got gigabit fiber to the home).

But it's 5 EUR / month for a dedicated server (and it's got free OVH DDoS protection too as they offer it on every single one of their servers).

I set up SSH login on these using FIDO/U2F security key only (no password, no software public/private keys: I only allow physical security key logins). I only allow SSH in from the CIDR blocks of the ISPs I know I'll only ever reasonably be login from and just DROP all other incoming traffic to the SSH port. This keeps the logs pristine.
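
A hypothetical sketch of that kind of lockdown. The CIDR blocks and paths are placeholders, not the actual config; it assumes OpenSSH 8.2+ (which added FIDO/U2F `sk-` key support), and note that option names vary slightly across OpenSSH versions (older releases call `PubkeyAcceptedAlgorithms` `PubkeyAcceptedKeyTypes`):

```text
# /etc/ssh/sshd_config fragment: allow only hardware-backed FIDO/U2F keys
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PubkeyAcceptedAlgorithms sk-ssh-ed25519@openssh.com,sk-ecdsa-sha2-nistp256@openssh.com

# /etc/nftables.conf fragment: accept SSH only from known ISP CIDR blocks,
# silently DROP everything else aimed at port 22
table inet filter {
  chain input {
    type filter hook input priority 0; policy accept;
    tcp dport 22 ip saddr { 203.0.113.0/24, 198.51.100.0/22 } accept
    tcp dport 22 drop
  }
}
```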

Nice little pet these are.

I'm not recommending these 5 EUR / month servers for production systems but they're quite capable compared to their price.

TheRoque · 3 years ago
I was going to recommend OVH too; they have such cheap offerings for VPSes as well (in the screenshot in the article, we see that the $4 VPS has 500MB RAM (probably Amazon Lightsail); for that same price you get 2GB RAM on an OVH VPS). I didn't see your 5€/month offer though.

I'd think they are fit for production, a few services use them, like Lichess (can see their stack here https://lichess.org/costs )

TacticalCoder · 3 years ago
Oh my bad: OVH is totally fit for production and they've got very good DDoS protection (which you can combine with CloudFlare and whatnots) but...

I just meant that I wouldn't recommend the 5 EUR / month dedicated servers for production: these do not have a big bandwidth and they don't have ECC RAM.

I edited my other post accordingly.

netule · 3 years ago
A bit off-topic, but thanks for linking that breakdown. I would never have thought that a chess website would cost $400k/y to operate.
nl · 3 years ago
This is a great breakdown.

Seeing their single greatest cost line-item being "site moderation" (more than developer salary!) is such a stark reminder of where the real problems running an internet service are.

znpy · 3 years ago
I have one of those.

They're pretty much worthless. That atom processor hasn't even got support for AES-NI, meaning that all crypto must be done without hardware acceleration.

Basically all traffic (https? ssh? sftp? vpn?) is heavy duty for that processor for the simple reason that it's encrypted.

I can literally see a cpu 100% busy because i'm downloading a file over sftp in another terminal.

Unless you're doing stuff in cleartext, which restricts the acceptable use cases by a lot.

Really, that kind of servers are really toys.

Also, in case of problems the datacenter people will just shut it off, replace it and re-provision it. I had the (single) disk fail on my kimsufi and the OVH people replaced it and rebooted my server to a blank state.

Ok fine, thank you and everything but where did my data go? Who's been handling my data on the PHYSICAL disk?

Really, those servers are barely toys.

Don't get me wrong though: great toys, but TOYS.

jcul · 3 years ago
Had a similar experience with online.net's dedibox.

I was a customer for years, and one day they said my server had died, with some excuse that there was no way to physically access the hard drive as it was in a rack with other servers, or something along those lines.

So what happens to my data? It can't be recovered but when is it going to be securely destroyed?

doix · 3 years ago
I've got something called "Kimsufi 2G" which I've been paying for since 2013. It's only got 2GB of RAM and a 500GB SSD, but it's been going strong since then. The same thing runs a small IRC and Mumble server for my friend group. I use it primarily for WireGuard and hosting random tiny projects.

Works really well, highly recommend it. I don't block traffic from anywhere since I travel a lot. It's interesting to open the nginx logs and see all the automated scans checking me out.

sva_ · 3 years ago
Oracle Cloud gives away an arm vps with 4 CPUs, 24gb ram, and 200gb disk for free.

https://www.oracle.com/cloud/free/#always-free

deltarholamda · 3 years ago
FWIW, if your Free Tier Oracle service seems to go "unused"--which isn't defined as far as I can tell, and the one I use as a DNS server was dinged for being "unused"--you run the risk of it being shut down and you have to restart it, unless you upgrade to a pay-as-you-go account.

This is a new thing as of last week, I think.

Also, Oracle Cloud's management interface is best described as "let's make this as complicated as AWS, but with more 'Enterprise-y' features, but somehow worse in every possible way."

That said, I can't fault Oracle for being Oracle. I knew what I was getting myself into. And their "Always Free" tier is still free. And it does work well enough for my purposes.

jackdh · 3 years ago
The problem with free tiers however is always free does not mean 'always' free.
saagarjha · 3 years ago
I’ve been unable to actually provision one of those, unfortunately.
ptman · 3 years ago
Oracle seems to come on top when comparing cheap/free small scale hosting: https://paul.totterman.name/posts/free-clouds/
antifa · 3 years ago
GCP is doing the 2GB ARM VM for free, but it has an end date.
WrtCdEvrydy · 3 years ago
> For 5 EUR / month you can also get a dedicated server (not a VPS) from OVH.

Where?

TacticalCoder · 3 years ago
OVH "ECO Kimsufi" offering. Now... You'll need to "refresh" regularly the page for the 4.99 EUR one (I rounded @ 5 EUR but it's 4.99) for they're in demand and often sold out. Or you write a script that monitors the page to see when they're available again.

I had to wait a few days to fetch my last one but I got it. Always do : )

On OVH's website: OVH cloud / Bare metal & VPS / ECO Dedicated Servers.

P.S: back in the days the "Kimsufi" (french for "qui me suffit", literally "which is enough (for me)") used to be part of OVH, then OVH spun Kimsufi out of OVH and then now... Kimsufi are back into OVH (the older Kimsufi servers are still accessed using the old Kimsufi credentials but the newer ones are using the OVH credentials again).

explodingcamera · 3 years ago
Check out their kimsufi offering
sourcecodeplz · 3 years ago
They are there but rarely in stock. Also they start from $6.10 now.

Deleted Comment

megous · 3 years ago
Nowhere aparently. At least not now.
curiousgal · 3 years ago
I was an OVH customer for 4 years and finally had enough and decided to switch. They kept lowering the specs and increasing their prices.
jonpalmisc · 3 years ago
Where on OVH’s site are you seeing servers that cheap? I can’t seem to find anything like that, but I’m also on mobile right now.
jacooper · 3 years ago
SSD? It's showing up as a 1TB SATA HDD.
quickthrower2 · 3 years ago
OVH? Make sure to get some geo redundancy going!
reyqn · 3 years ago
Well... Like anywhere else...
olabyne · 3 years ago
Or a dedicated Raspberry Pi 4 at Ikoula, which is not bad in terms of perf/price and GB-of-RAM/price.
r3trohack3r · 3 years ago
I’ve recently started deploying on Cloudflare workers.

They’re cheap and “infinitely scalable.” I originally picked them for my CRUD API because I didn’t want to have to worry about scaling. I’ve built/managed an internal serverless platform at FAANG and, after seeing inside the sausage factory, I just wanted to focus on product this time around.

But I’ve noticed something interesting/awesome about my change in searches while working on product. I no longer search for things like “securely configuring ssh,” “setting up a bastion,” “securing a Postgres deployment,” or “2022 NGinx SSL configuration” - an entire class of sysadmin and security problems just go away when picking workers with D1. I sleep better knowing my security and operations footprint is reduced and straightforward to reason about. I can use all those extra cycles to focus on building.

I can’t see the ROI of managing a full Linux stack on an R620 plugged into a server rack vs. Workers when you factor in the cost of engineering time to maintain the former.

I do think this is a new world though. AWS doesn’t compare. I’d pick my R620s plugged into a server rack over giving AWS my credit card any day. AWS architectures are easy to bloat and get expensive fast - both in engineering cost and bills.

impulser_ · 3 years ago
I'm scared to use Cloudflare products because, yes, they are cheap and good, but the company is burning money, is not profitable, and has a large amount of debt. They will have to raise those prices to the point where they're no longer cheap. Can you predict when and by how much they will increase those prices?

If you depend on them for everything and then they decide to make a big price increase to become profitable, will you be able to handle it? You are pretty much stuck paying the new price.

Yeah, other companies can increase their prices too, but most of the time profitable companies in cloud infrastructure will only raise prices if their expenses increase, and this is pretty predictable if you pay attention to costs. Like last year, it was pretty easy to predict a price increase coming because of inflation and supply chain issues.

awesomeMilou · 3 years ago
Imo, even if they tripled their pricing, they'd still be cheaper than any serverless product other cloud providers have to offer. Looking at their performance over the past 2 years, their losses are in no proportion to their revenue increase [0].

I'm nervous about them changing their pricing too, but just the fact that they're so much more transparent than AWS or GCE is a net plus for me, even with an increase in price.

[0]https://simplywall.st/stocks/us/software/nyse-net/cloudflare...

(ignore the contents and forecast of the article and just look at the graph)

Deleted Comment

matthewfcarlson · 3 years ago
I completely agree. Most of my personal projects are unlikely to ever go above 50 concurrent users, so I don't really benefit from the scaling part of cloud flare, but I recently switched to using cloud flare pages for all new personal projects and it's fantastic. The ease of use really makes my life all that much better.

Just buy a domain name and start deploying. Unlike other cloud providers (looking at you Azure/AWS) the time from push to deployment finished is under a minute. Azure could take 15-20 minutes and AWS still relied on zip file uploads for functions last I checked.

komali2 · 3 years ago
I recently was asked to port an app to cloudflare pages but found it can only basically handle static rendered content or server side rendered content if the toolset is compatible with their workers thing. Is that the case for you or do I have more to learn about cloudflare? Like I can't just drop Django onto it I assume?
wesleyyue · 3 years ago
cf workers look so promising, but their pricing makes websocket / persistent connections untenable. I know they are possible with durable objects, but wish they would have a full product story around actually building apps with live requirements with pricing that makes sense.
haolez · 3 years ago
What are you using to manage your schema? Do you use an ORM? Maybe something like PocketBase[0]?

[0]https://pocketbase.io/

r3trohack3r · 3 years ago
Nope. Just have a directory:

    ./sql/0001_create_users.sql
    ./sql/0002_create_sessions.sql
    ....
Each query for modifying the schema is idempotent and safe to be rerun.

Then I do:

   ls ./sql | xargs -I{} wrangler d1 execute <db> --file {}
Can put that in a script to make things easy. You use the same script to modify the db as you do to bootstrap a db from scratch.
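
The "idempotent and safe to rerun" property mostly comes down to guarded DDL. A hypothetical migration file in that style (D1 is SQLite-based, so SQLite syntax; the table is invented for illustration):

```sql
-- 0001_create_users.sql: running this twice is a no-op the second time
CREATE TABLE IF NOT EXISTS users (
    id INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE,
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_users_email ON users (email);
```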

matthewfcarlson · 3 years ago
I looked at pocketbase and other tools, but decided to keep it simple.

Like GP, I'm also using D1 (https://developers.cloudflare.com/d1), which is based on SQLite and still in early alpha. In combination with KV (https://developers.cloudflare.com/workers/learning/how-kv-wo...) it's trivial to have a great database layer with caching; using kysely (https://github.com/aidenwallis/kysely-d1) and trpc (https://trpc.io) you can have typing from DB to front end.

OJFord · 3 years ago
How's that 100MB limit on D1 going though? I realise support for 'larger' databases is coming, but it gives me the impression they don't intend it ever to be a main application database, for anything that's not small and with a fairly constant data requirement (not scaling with users adding content, say).
austhrow743 · 3 years ago
How's the developer experience of writing code for workers?
matthewfcarlson · 3 years ago
Pretty fantastic (not an employee of cloud flare or anything, just a happy customer). They have this concept of mini flare that you can host a tiny cloud flare on your dev box/CI pipeline so it makes it easy to run unit tests and the like.
comprev · 3 years ago
I have worked with several small clients to migrate away from AWS/Azure instances onto dedicated hardware from Hetzner or IBM "Bare Metal" hardware.

The question I ask first is: as a company, what is an acceptable downtime per year?

I give some napkin calculated figures for 95%, 99%, 99.9% and 99.99% to show how both cost and complexity can skyrocket when chasing 9s.
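
That napkin math is easy to reproduce. A quick sketch of the downtime budgets those availability targets imply (ignoring leap years):

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (95.0, 99.0, 99.9, 99.99):
    hours = allowed_downtime_minutes(target) / 60
    # 95% -> 438 h/yr, 99% -> 87.6 h/yr, 99.9% -> ~8.8 h/yr, 99.99% -> ~53 min/yr
    print(f"{target}% -> {hours:.1f} hours/year")
```

Each extra nine cuts the budget tenfold, which is why the cost and complexity curve is so steep.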

They soon realise that a pair of live/standby servers might be more than suitable for their business needs at that particular time (and for the foreseeable future).

There is an untapped market of clients moving _away_ from the cloud.

lbriner · 3 years ago
SLAs are overrated. An SLA mostly relates to "unplanned downtime", so if you often need to fix things, just schedule downtime, mess around with it, and bring it back up.

Also, we have seen both cloud and non-cloud hosts having significant downtime beyond their SLA, but they just put it down to affecting "a small subset of our customers" so they don't have to do anything.

It's a bit like my Student Loan guarantee, "Do the paperwork and we guarantee you will have the loan on time". The loan was not paid, "I thought you guaranteed it?" "We do but we made a mistake" "So what do I get because of the guarantee?" "Nothing". Cheers!

chrismorgan · 3 years ago
I have never seen a publicly-advertised SLA practically worth anything, by my reckoning—whether offered to all customers or an extra that you’d pay for. (Privately-arranged SLAs I can’t comment on. They could potentially have actually meaningful penalties.)

Vultr’s, as an example I’m familiar with, being a customer, but which I believe is pretty typical:

• 100% uptime, except for up to ten minutes’ downtime (per event) with 24 hours’ notice or if they decide there was a time-critical patch or update. (I have one VPS with Vultr, and got notice of possible outages—identified purely by searching “vultr service alert” in my email—8× in 2022, 3× in 2021, 14× in 2020, 9× in 2019. No idea how many of them led to actual outage.)

• They’ll give you credits according to a particular schedule, 24–144× as much as the outage time (capped at a month’s worth after a 7h outage, which is actually considerably better than most SLAs I’ve ever read). Never mind the fact that if you’re running business on this and actually depending on the SLA, you’re probably losing a lot more than what you’re going to get credited for.

• Onus of reporting outages and requesting credits is on you, by submitting a support ticket manually and explicitly requesting credit. So the vast majority of SLA breaches (>99.9%, I expect; I don’t care to speculate how many more nines could be added) will never actually be compensated. And determination of whether an eligible outage occurred is at their sole discretion, so that they could frankly get away with denying everything all the time if they wanted to.

Such SLAs basically just completely lack fangs. I suppose you’d want something along the lines of insurance instead of an SLA, if it all mattered to you.

znpy · 3 years ago
> They soon realise that a pair of live/standby servers might be more than suitable for their business needs at that particular time (and for the foreseeable future).

I've worked at one of those companies that have the live/standby model in place.

The problem is that switching load from live to standby when a problem occurs often requires manual intervention and a procedure.

The procedure must be tested from time to time, and adjusted according to changes.

Oh and the live and standby environments must be kept in sync...

kccoder · 3 years ago
My go-to setup, if I need more uptime than a single server running in Google Cloud with live migration grants, is a three-node Galera cluster with an A record pointing to each node, where each node also runs the application in addition to the database. You can do rolling updates without any downtime, and I've had setups like this go years without downtime. It isn't perfect, but it works very well and obviates having to worry about things like standby switchover.
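The DNS side of a setup like that is just one A record per node. A hypothetical zone fragment (names and addresses invented for illustration) might look like:

```
app.example.com.  300  IN  A  203.0.113.10
app.example.com.  300  IN  A  203.0.113.11
app.example.com.  300  IN  A  203.0.113.12
```

Resolvers hand back the records in rotating order, which gives rough load spreading; well-behaved clients will also retry another address if one node is unreachable, though that's no substitute for real health checks.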
8n4vidtmkvmk · 3 years ago
Isn't this just Kubernetes with 2 nodes? That's what I'm doing. It's like $100/mo.
bak3y · 3 years ago
I've always heard that adding another 9 adds another 3 zeros to the cost.
njovin · 3 years ago
IME many companies claim 99.99%+ uptime, but the penalties are trivial. If a 99.99% SLA is busted with an hour of downtime in a month but the penalty is a 5% bill credit, the company has just lost $500 on $10k of revenue, assuming that:

A) Customers actually chase the credit, which (again IME) many companies make very difficult

B) The downtime is very clearly complete downtime. I've seen instances where a mobile app is completely down (but the web product works) or a key API is down (but the core product works) or there are delays in processing data (but eventually things are coming through). All of these can cause downstream downtime to customers but may not be covered by a "downtime" SLA.
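The napkin math behind that example (a sketch; the $10k bill and 5% credit are the figures assumed above):

```python
# Downtime budget per 30-day month for a given uptime fraction,
# vs. the credit actually paid out when the SLA is blown.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def downtime_budget_minutes(sla: float) -> float:
    """Allowed downtime per 30-day month for a given uptime fraction."""
    return (1 - sla) * MINUTES_PER_MONTH

budget = downtime_budget_minutes(0.9999)
print(round(budget, 1))  # ~4.3 minutes/month for four nines

# One hour of downtime blows that budget by ~14x, yet the penalty is flat:
monthly_bill = 10_000
credit = monthly_bill * 0.05
print(credit)  # 500.0
```

So the provider misses its SLA by an order of magnitude and is out half a percent of revenue, and only if the customer files for it.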

VikingCoder · 3 years ago
We need nine 5s of uptime. 55.5555555% uptime AT LEAST.
zamadatix · 3 years ago
At one dollar for 99%, that's a billion dollars for 99.999%. Seems a bit extreme on each end for a three-nines difference.
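Taking the parent's rule of thumb literally (it's a quip about cost growth, not a real pricing model): each extra nine multiplies the cost by 1,000, and 99% to 99.999% is three extra nines:

```python
def cost_for_nines(base_cost: float, extra_nines: int) -> float:
    """'Another 9 adds another 3 zeros': cost grows 1,000x per nine."""
    return base_cost * 1000 ** extra_nines

print(cost_for_nines(1, 3))  # $1 at 99% -> $1,000,000,000 at 99.999%
```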
dmak · 3 years ago
I haven't done much SLA math in my work before. Could you share some of that napkin math? I'm just curious to see what it looks like.
newaccount74 · 3 years ago
I've been running my company website for years on $5 Linode. I used to host everything on there (downloads, update checking, crash reporting, licensing, a Postgres database for everything).

I've never had any performance issues. A $5 VPS is plenty for Apache, PHP, PostgreSQL, for a few thousand users a day.

I've started using multiple VPS, one for each service. Not for performance reasons, but for two things:

- isolation: if there's a problem with one service (e.g. logs filling up all disk space), it doesn't bring everything down at once

- maintainability: it's easier to upgrade services one by one than all at once

throwawaaarrgh · 3 years ago
Does anyone here remember developing applications on machines with a 25MHz CPU and 8MB of memory? That VPS probably has a 1GHz CPU and 1GB of memory.

How you develop an application depends completely on what you have available to you and what its use case is. If you don't have money, design it to be resource-efficient. If you do have money, design it to be a resource pig. If it needs to be high performance, design it to be very efficient. If it doesn't need to be high performance, just slap something together.

As a developer, you should know how to design highly efficient apps, and highly performant apps, and how to develop quick and dirty, and how to design for scalability, depending on the situation. It's like being a construction worker: you're going to work on very different kinds of buildings in your career, so learn different techniques when you can.

I highly recommend, for fun, trying to develop some apps inside a VM with very limited resources. It's pretty neat to discover what the bottlenecks are and how to get around them. You may even learn more about networking, file I/O, virtual memory allocation, CoW, threading, etc. (I wouldn't use a container to start, as there are hidden performance issues that may be a distraction.)
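One cheap approximation of that constrained VM, if you just want to watch your code hit a memory ceiling (a sketch; `resource.setrlimit` is POSIX-only, and `RLIMIT_AS` caps address space rather than resident memory):

```python
import resource

# Cap this process's address space at 1 GiB. Allocations beyond the cap
# raise MemoryError instead of quietly paging the whole machine.
GIB = 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (GIB, GIB))

def try_alloc(nbytes: int) -> bool:
    """Return True if an allocation of nbytes succeeds under the limit."""
    try:
        _ = bytearray(nbytes)
        return True
    except MemoryError:
        return False

print(try_alloc(16 * 1024 * 1024))  # True: 16 MiB fits comfortably
print(try_alloc(4 * GIB))           # False: blows past the cap
```

A proper VM (or cgroups) also constrains CPU and I/O, but even this much is enough to surface lazy allocation habits.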

pjc50 · 3 years ago
Yup. I think HN skews surprisingly middle-aged now, and therefore many of us remember that era. You can do a lot on a tiny server if you're efficient.
megous · 3 years ago
For a few years I did run my mail/web/... servers on a light virtualization host that billed by actual memory, disk and CPU use by minute. It's amazing what optimizations one finds to lower the resource consumption when it has direct effect on the price. At times I was running for $0.5/month :D
uvesten · 3 years ago
The article does not really answer the question in any meaningful way; it just tests a CRUD blogging server written in Go against a MongoDB database (both Dockerized…)

If you expect any comprehensive benchmarks or testing, save yourself the time.

detaro · 3 years ago
A major point with most cheap providers, though, is what you don't get: consistency in performance and reliability. (Although $4 is not the deepest bargain barrel yet, so it's not necessarily that bad.)
bob1029 · 3 years ago
I think the biggest thing that snipes a lot of technology teams is the notion that production can never, ever go down no matter what. Every byte must be synchronously replicated to 2+ cross-cloud regions, etc. Not a single customer can ever be impacted by a hacker, DDoS, or other attack.

Anyone in this industry is prone to these absolutist ideologies. I wasted a half-decade chasing perfection myself. In reality, there are very few real world systems that cannot go down. One example of a "cannot fail" I'd provide is debit & credit processing networks. The DoD operates most of the other examples.

The most skilled developer will look at a 100% uptime guarantee, laugh for a few moments, and then spin up an email to the customer in hopes of better understanding the nature of their business. We've been able to negotiate a substantially smaller operational footprint with all of our customers by being realistic with the nature and impact of failure.

If you can negotiate to operate your product on a single VM (ideally with the database being hosted on the same box), then you should absolutely do this and take the win. Even if you think you'll have to rewrite due to scale in the future, this will get you to the future.

Periodic, crash-consistent snapshots of block storage devices are a completely valid backup option. Many times it is perfectly OK to lose data. In most cases, you will need to reach a small compromise with the business owner, where you develop an actual product feature to compensate for failure modes. An example of this for us would be emailing important items to a special mailbox for recovery from a back-office perspective. The amount of time it took to develop this small product feature is not even 0.01% of the amount of time it would have taken to develop a multi-cloud, explosion-proof product.