zelon88 · 2 years ago
> PaaS is always going to offer less for the same price, but what surprised me was how stark the difference can be. On Render.com, a 4GB RAM and 2vCPU service costs $85/mo. The same spec costs $14/mo on Scaleway (ARM).

I notice a trend that the people who scoff at hardware specs are usually the same ones standing in line for 2 cores and 4GB of RAM at $50+/month. They'll laugh when you suggest utilizing an obsolete $50 computer with a decade-old CPU (that's just sitting in a closet), but are more than willing to spend $50/month on similarly performing hardware from a cloud vendor.

anotherhue · 2 years ago
Cue the downvotes, but I imagine you could sort those people into two buckets:

1. Those who love their Macs a little too much

2. Those who routinely build a PC.

i.e., whether or not you're familiar with the hardware market.

zelon88 · 2 years ago
I'm not gonna downvote you. But to your point about the hardware market: a mid-range server CPU of today will be about as fast as an entry-level CPU next year. In terms of actual performance, your old server can probably keep up with a newer but slightly lower-end server just fine. Obsolete !== useless.

I'm the kind of person who would rather take the 3-year-old server and recommission it for a lower-priority service than just do a 1:1 replacement. "Old" computers aren't as useless as they used to be. Computing power has advanced to the point where compute is arbitrary: for all intents and purposes, in most sectors, you can scale your capacity as high as your budget allows and you won't hit any performance barriers, ever. There will always be more compute. It is a commodity now.

My stance is this: sure, the new server is faster than the old one, but you know what's even faster? Dividing the existing workload between both of them.

jamil7 · 2 years ago
I think people know they’re paying a premium on hardware. The point of PaaS isn’t to get the best deal on specs; it’s for small teams to iterate quickly and focus on product. If they’re successful they’ll outgrow it and hire people to manage infra.
rapind · 2 years ago
3. Those who build their own hackintosh because they love OSX too much.
system2 · 2 years ago
It's not actually the hardware. It's the network reliability that makes cloud better.
wharvle · 2 years ago
I've seen real-world connectivity, latency, and bandwidth problems crop up with enough frequency to be a real problem on a major budget "cloud" provider. It looked like they'd badly cheaped out on their peering agreements.

Move the exact same workload to the industry's default but much more expensive choice, and the problems vanish entirely.

This is, unfortunately, one of those things that's really hard to judge about a hosting provider unless you have direct experience using them "at scale", as they say. Nobody puts that stuff on a sales page spec sheet or comparison grid.

Could I save money by hosting on real hardware at some popular, cheapish server-leasing place? Or colocating at one? Or hosting out of my own basement!? Maybe. Would it cause some users to consistently see dial-up speeds and dropped connections on gigabit Internet service because of either some quirk of routing, or bad peering agreements? Who knows!

imglorp · 2 years ago
Not just that, it's also all the legwork of managed hosting: all that OS configuration, patching, redundancy, testing, monitoring, etc. is someone else's job, and they are accountable. Plus other managed services like LB, DB, auth, etc. that you might not want to duct-tape together yourself and manage every day. That cloud bill could be cheaper than your time is worth.

Plus the flexibility to scale up / down as needed for load, transient testing, etc.

Cloud is definitely not for everyone but it makes sense for some.

GiorgioG · 2 years ago
Colocation exists. It's a solved problem. It's just not in-vogue right now.
1vuio0pswjnm7 · 2 years ago
But isn't it more than just the vendor's hardware that one is paying for? Namely, the vendor's internet connection. Apologies if I am missing something obvious.
_w1tm · 2 years ago
And for network redundancy / UPS / datacenter security / etc. Comparing a PaaS to some old PC running in a closet is completely missing the point.

If the thought of your service going down for a few hours because some random unplugged the power cable, or because your ADSL router crashed, doesn’t make the CFO lose sleep, then some Raspberry Pi is surely good enough. Just make sure you run it inside a safe if you store personal information.

davedx · 2 years ago
I've recently gone the other way. I was self-hosting everything on a DigitalOcean VPS, but keeping the OS maintained, along with the headaches of configuring Nginx, Let's Encrypt, Postgres and so on, became more annoying, not less, each time I wanted to make a new app, because every app was a bit different.

I'm now running my primary project on Fly.io and I'm pretty happy with it overall.

"No matter how small is the service, no matter how rarely you need it, you’d need to pay the full price."

On Fly.io I'm running an app server, a db server, and another app server with a 1GB volume for a Discord bot. Everything fits in the free plan.
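
For a sense of how little plumbing that involves, here's a minimal sketch of standing up a similar setup with flyctl. The app names, region, and sizes are placeholders, not the commenter's actual configuration:

    # Launch the main app from the Dockerfile in the current directory
    fly launch --name my-app --region ams

    # A small Postgres for the db server, then attach it to the app
    # (attach wires up DATABASE_URL automatically)
    fly postgres create --name my-db --region ams
    fly postgres attach my-db --app my-app

    # The Discord bot is its own app with a 1GB volume for persistence
    fly launch --name my-bot --region ams
    fly volumes create botdata --region ams --size 1 --app my-bot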

The thing about PaaS is you really have to do your research. It's not like VPS providers where all you really need to look at is how much compute and storage you get for a monthly price. PaaS have a lot more subtleties and yes, it happens that the startups behind them sometimes blow up or get bought out by huge public enterprise companies. VPS providers tend to be lower risk.

The tradeoff is worth it for me, but it really depends on your skillset, your priorities and so on. I can maintain a VPS, but I have very limited time, so I want to focus every spare hour I have on developing my product.

takinola · 2 years ago
Interestingly, I have a very different experience than you. I simply have a script that sets up the server. I keep updating the script to make it better each time I hit an edge case. At this point, it's pretty bullet-proof. Updates are automatic and so I can leave the server running for months without any intervention.
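
(The commenter doesn't share the script itself; a minimal sketch of the pattern on Debian/Ubuntu, assuming unattended-upgrades for the automatic-updates part, might look like the following. Packages and paths are illustrative.)

    #!/usr/bin/env bash
    set -euo pipefail

    # Idempotent by design: safe to re-run after fixing an edge case
    apt-get update
    apt-get install -y --no-install-recommends nginx postgresql unattended-upgrades

    # The "updates are automatic" part: security patches apply
    # themselves, so the box can run for months without intervention
    cat > /etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";
    EOF

    # Config is copied in rather than hand-edited, so the script stays
    # the single source of truth for the machine
    install -m 0644 ./conf/nginx.conf /etc/nginx/nginx.conf
    systemctl reload nginx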

Due to my love of shiny things, I keep wanting to find an excuse to move to a PaaS, but I can never find sufficient justification.

davedx · 2 years ago
My issue was that the Ubuntu version my VPS was on went out of LTS, so I had to do a full major OS upgrade, which I kept putting off…
xrobledo84 · 2 years ago
How do you make your updates automatic?
pdimitar · 2 years ago
Is it a normal shell script or do you use something else?
tstrimple · 2 years ago
> The thing about PaaS is you really have to do your research.

I find this true of pretty much all modern cloud development. So many people want to just pretend it’s another VPS alternative and are shocked when their misuse leads to high monthly bills. You need different patterns to take advantage of the strengths of PaaS cloud services and to avoid the weaknesses and gotchas. I PaaS all the things I can, and my maintenance and support efforts have never been lower. My services scale to zero and I only pay for actual utilization.

If you’re standing up a lot of VMs in the cloud and pretending it’s just another data center of course you’re going to have a bad time and waste money.

tronikel · 2 years ago
Have you tried dokku?
wharvle · 2 years ago
I've been amazed at the resilience and convenience of a handful of shell scripts calling "docker" commands on Debian for my server at home. (A sketch of one such script follows the list below.)

- Figured I'd need to screw with Systemd at some point. Nope, whatever Docker's doing restarts my services on a system restart, and auto-restarts if they break. I haven't had to lift a finger for any of that. My services are always just there, unless something really goes horribly wrong.

- Which directories I need to back up is documented in the shell scripts themselves. Very clear and easy.

- Moving those directories and my shell scripts to another server, potentially with a different distro, would be trivial. Rsync two directories (for convenience, I've put all the directories I mount into the containers under a single parent directory), shell in, run the scripts. Writing a meta-script to run all of them would be easy. On a VPS I could have everything that mattered on a network drive, which would make it even simpler: mount network drive, run script(s).

- Version updates are easy. I can switch between "use the latest" and "use this specific version until I say otherwise" at will. Rollbacks are trivial. If the services were public-facing I could automate a lot of this with maybe an hour of effort.

- Port mapping's covered by Docker. If these were public-facing it'd be pretty easy to add one extra container for SSL certs and termination (probably Caddy, because I'm lazy, though historically my answer for this at paying gigs has been haproxy). Like, truly, the degree to which I can interact with and configure this system entirely by using portable-everywhere docker commands & config is very high.
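
A minimal sketch of what one of these scripts can look like; the service, image tag, ports, and paths are made up for illustration, not this commenter's actual setup:

    #!/usr/bin/env bash
    set -euo pipefail

    # All persistent state lives under one directory, so backups and
    # migration are both just "rsync this path"
    DATA=/srv/appdata/vaultwarden
    mkdir -p "$DATA"

    # Re-running the script is the deploy/upgrade path: remove the old
    # container (state survives in $DATA) and start a fresh one
    docker rm -f vaultwarden 2>/dev/null || true

    # "--restart unless-stopped" is what brings the service back after
    # a reboot or crash without any systemd units. The pinned tag makes
    # rollbacks trivial; swap in :latest to track upstream instead.
    docker run -d \
        --name vaultwarden \
        --restart unless-stopped \
        -p 8080:80 \
        -v "$DATA":/data \
        vaultwarden/server:1.30.5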

I've been running servers (sometimes private, sometimes public) at home since like 2000, and this is easily my favorite approach I've used so far.

I've used stuff like Dokku at work. I dunno—it's another thing that can and does break. If you're just self-hosting a few services and aren't trying to coordinate the work of several developers, IMO it's simpler and not-slower to just use Docker directly.


_heimdall · 2 years ago
What's the common definition of self-hosting these days?

I've always considered self-hosting to mean I'm managing the hardware, but it's clear the author here sees it more as a self-managed OS and infrastructure.

It actually feels very similar to the whole MPA vs SPA debate in web development. Maybe I'm just getting old, but self-hosting and SPA have specific meanings I learned years ago, and those meanings seem to be getting redefined now rather than new names being coined.

layer8 · 2 years ago
Self-hosting doesn’t necessarily imply that it’s on-premises, or that you own the hardware. It means that you are fully managing the software side of things (everything that is “on the host”) and have full control over that.
paxys · 2 years ago
This thread is the first time I'm hearing this definition. Self hosting has always meant using your own hardware. Cloud/VPS/VMs/shared hosting or whatever else have never qualified.
senectus1 · 2 years ago
huh, that's not my definition.

Self-hosting is your hardware, in your house / on your property, with your software.

But times change, I guess. I do understand why that definition would change, but I feel we should maybe name it slightly differently.

PH95VuimJjqBqy · 2 years ago
that would imply renting a VPS isn't self-hosting, which I think is clearly incorrect.

If you wanted to communicate that you're dealing with hardware I would imagine you would say co-locating or talk about your datacenter.

_heimdall · 2 years ago
On-prem is a really specific term for onsite hardware, though I don't think it's as clearly accepted that renting a VPS is self-hosting.

If that is the meaning now, it's a more modern one; in my experience, a VPS was not part of the self-hosting concept before cloud providers were so common. At that time the options were your own hardware or a rented VPS, and they couldn't both be self-hosting. Today that's less clear, hence my question about the common meaning today.

rdoherty · 2 years ago
Good on the author, but using a Virtual Private Server skirts very close to not self-hosting. When I read 'self-hosting', I imagined buying/building a physical server and either putting it into a datacenter or running it in your home.

Lately I've been thinking of creating a bare-bones HTML website of my own and maybe I'll run it on a Raspberry Pi at home. I think that would qualify as 'self-hosting'.

layer8 · 2 years ago
It makes little difference who owns the hardware in the data center. The important thing is who selects, controls, and manages the software that runs on it. Therefore that’s the main dividing line.
thinkingkong · 2 years ago
Moving from a PaaS to a VPS is 95% the same amount of effort and energy as spinning up your own rust under your desk. Semantics matter, but holding a definition to "need to suffer a power supply failure to count" isn't really necessary.
arter4 · 2 years ago
Between bots and the Reddit/HN hug of death, if you ever do this, don't ever advertise your website (to avoid getting DoS'd) and put a firewall in front.
zzyzxd · 2 years ago
> Self-hosting has Become Easier

> Self-hosting has become more reliable

Docker and Kubernetes really are the two best things that have happened to me in my self-hosting journey. They made billion-dollar, enterprise-grade tech approachable in my homelab.

- I powered on a brand new mini PC, and 10 minutes later it showed up as a node in my cluster and started running some containers.

- Two servers died but I didn't notice until a month later, because everything just kept working.

- Some database file got corrupted but the cluster automatically fixed it from a replica on another node.

- I've almost completely forgotten how to manage certs with Let's Encrypt, because the system will never miss a renewal window unless the very last server in my lab goes down.
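
(The commenter doesn't name their Kubernetes distribution or cert tooling; assuming something like k3s plus cert-manager, a common homelab combination, the first and last bullets come down to roughly this:)

    # New mini PC joins an existing cluster as a worker node. The token
    # lives at /var/lib/rancher/k3s/server/node-token on the server.
    curl -sfL https://get.k3s.io | \
        K3S_URL=https://my-server:6443 K3S_TOKEN=<node-token> sh -

    # Installed once per cluster, cert-manager renews Let's Encrypt
    # certificates automatically, regardless of which nodes are alive.
    helm repo add jetstack https://charts.jetstack.io
    helm repo update
    helm install cert-manager jetstack/cert-manager \
        --namespace cert-manager --create-namespace \
        --set installCRDs=true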

jauntywundrkind · 2 years ago
These are great operational wins. Agreed very much that having autonomic (can fix itself) systems at your back is a massive game changer. De-crustifies the act of running things.

The other win is that there's a substantial cultural base behind this way of doing things. Folks have been self-hosting for ages, but everyone had their own boutique setup, done their own way. A couple of tools and techniques could be shared, but mostly everyone took blank-slate configs, built their own system up, and added their own monitoring and operational scripts.

https://github.com/onedr0p/home-ops is a set of Helm charts and other tools that is widely used, and there's a lot more like it. It's a huge build-out, using convention and a common platform to enable portable knowledge & sharing.

Self-hosting did not have intellectual scale-out at its back before Kubernetes came along. Docker and Ansible and others have been around, but there's never been remotely the success there is today in empowering users to set up & run complex services.

We really have clawed out of the server-hugging jungle & started building some villages. It's wonderful to see.

zzyzxd · 2 years ago
IMHO they are the same wins, because behind all of these, the real value is standardization: Kubernetes offers standard APIs, and all the vendors fall in line with their own implementations.

As a result, the software I run in my homelab for free is the same software battle-tested in all kinds of enterprise environments, from 5-person startups to planet-scale megacorps. There are paid engineers and companies out there making serious long-term commitments to it. This is truly amazing.

zilti · 2 years ago
Kubernetes is complete, utter overkill for almost everything.
sgarland · 2 years ago
Yes, but once you know how to use it, everything else is an annoying toy. Also, of course, it being the standard means there is no end of tooling designed to support it.

I can’t count the number of times I’ve been frustrated at my current job because I have to wade through layers of ECS bullshit to do something.

All that said, if you don’t need containers, turns out you can get a lot done with two servers behind HAProxy + keepalived, running stuff with systemd.
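
A hedged sketch of that last setup: keepalived floats a virtual IP between the two boxes, and HAProxy spreads traffic across the app servers. All addresses and the password below are placeholders:

    # On both servers; use "state BACKUP" and a lower priority on the second
    apt-get install -y haproxy keepalived

    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.168.1.50/24
        }
    }
    EOF

    cat >> /etc/haproxy/haproxy.cfg <<'EOF'
    frontend web
        bind :80
        default_backend apps
    backend apps
        server app1 192.168.1.11:8080 check
        server app2 192.168.1.12:8080 check
    EOF

    systemctl enable --now keepalived haproxy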

pcthrowaway · 2 years ago
I wonder what people's thoughts on KubeVela are (https://www.cncf.io/projects/kubevela/)

I've been pretty sour on Kubernetes, but this looks like it could bring some of the niceties of PaaS to k8s. I haven't looked into the deployment process yet though, perhaps that's where all the pain lies

zzyzxd · 2 years ago
You may enjoy learning and using different implementations for networking, storage, DNS, containerization...from time to time.

But I just learnt Kubernetes, and suddenly all those implementations became interchangeable; my knowledge is transferable between homelab and work, and even across companies and platforms.

mplewis · 2 years ago
Of course, because very few projects need every feature of Kubernetes. But most projects need three or four of them, and many projects need something from the long tail too.

I’ve been running my personal cluster happily on k8s for years.

paxys · 2 years ago
Add VPN to that list. It used to be a monumental pain to set up a home network, but now with something like WireGuard/Tailscale/ZeroTier/Nebula you can do it in a few clicks.
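
For the Tailscale flavour, "a few clicks" is barely an exaggeration on a stock Linux box (the subnet below is a placeholder):

    # Install the client and enroll the machine into your tailnet; this
    # prints an auth URL to approve in the browser
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up

    # Optionally advertise the home LAN so other tailnet devices can
    # reach everything behind this box
    sudo tailscale up --advertise-routes=192.168.1.0/24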
nemothekid · 2 years ago
I would call this divide serverless vs. servers, more so than self-hosting. I've recently gone the opposite direction. I used to think serverless was too expensive and unpredictable, but after almost a decade of Go and 2 years of Rust, I'm really coming to the opinion that the resources required to run services are completely warped.

2 cores and 4GB of RAM are the recommended specs to run the 2007 game Crysis. It's hard to imagine that moving JSON around (which is probably 95% of services) is more demanding than Crysis. Since I've started really paying attention to the services being deployed, I've found you can get a lot of mileage out of the free tier, especially now that providers support running native binaries.

BrandoElFollito · 2 years ago
I wonder what people host that requires such specs.

I host on a Skylake box about 7 years old, I think, with 12 GB of RAM. About 30 Docker containers running Home Assistant, Bitwarden, the *arr suite, Jellyfin, Minecraft... Nothing fancy, and I have heaps of free CPU and RAM.

I understand that one can easily load CPUs with compute-heavy processing, or RAM with video work, but for a generic self-hoster of typical services I am surprised by the typical setup people have (which is great for them, I am just curious).
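
(For anyone curious where their own box sits, a quick way to check headroom with standard tooling; nothing here is specific to this commenter's stack:)

    # One-shot CPU/memory snapshot for every running container
    docker stats --no-stream

    # Host-level view: free RAM, and load average vs. core count
    free -h
    uptime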

ozim · 2 years ago
IaaS also works better for me, but I would not call it self-hosting. A VPS is also cloud.

I get triggered by it because people in my company keep coming to me saying we should switch to the cloud, when we are already in the cloud; it's just IaaS.