"start self-hosting more of your personal services."
I would make the case that you should also self-host more as a small software/SaaS business, and that it is not quite the boogeyman a lot of cloud vendors want you to think it is.
Here is why. Most software projects/businesses don't require the scale and complexity for which you truly need the cloud vendors and their expertise. For example, you don't need Vercel or even Netlify to deploy Next.js or whatever static website you have. You can set up Nginx or Caddy (my favorite) on a simple Ubuntu VPS and boom: for the majority of projects, that will do.
90%+ of projects can be self-hosted with the following:
- A well-hardened VPS with good security controls. There are plenty of good articles online on how to do the most important things (disable root login, key-based SSH only, etc.); see the sketch after this list.
- A reverse proxy like Caddy (my favorite) or Nginx. Boom: static files and static websites can now be served. No need for a CDN unless you are talking about millions of requests per day. (A minimal Caddyfile also follows after this list.)
- Your backend/API run under something simple like supervisor or even native systemd (example unit at the end of this comment).
- The same reverse proxy can also forward requests to the backend and other services as needed. Not that hard.
- A self-hosted MySQL/Postgres database with the right security controls (e.g. listening only on localhost, not on a public port).
- Most importantly: backups of everything via a script and cron, tested periodically (sketch at the end of this comment).
- IF you really want to feel safe against DoS/DDoS, add Cloudflare in front of everything.
So you end up with:
Cloudflare/DNS => Reverse Proxy (Caddy/Nginx) => Your App.
- You want to deploy? A git pull should do it for most projects (PHP etc.). If you have to rebuild a binary, that is one more step, but still straightforward.
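To make the hardening bullet concrete, the SSH part is a handful of lines in sshd_config; a minimal sketch (not a complete checklist, restart sshd after editing):
  # /etc/ssh/sshd_config
  PermitRootLogin no
  PasswordAuthentication no
  PubkeyAuthentication yes
Add fail2ban on top if you want the brute-force log noise to drop.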
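And the Caddy config for the static-site + reverse-proxy bullets really is this small. A sketch with a made-up domain and paths (Caddy obtains the TLS certificate automatically):
  example.com {
      root * /var/www/example
      file_server
      # anything under /api goes to the backend on localhost
      handle /api/* {
          reverse_proxy 127.0.0.1:8080
      }
  }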
You don't need Docker or containers. They can help, but they are not needed for small or even mid-sized projects.
Yes, you can claim that a lot of these things are hard, and I would say they are not that hard. The majority of projects don't need web scale or whatever.
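For the backend bullet above, the native-systemd route is about ten lines. A hypothetical unit (user, paths and port are placeholders):
  # /etc/systemd/system/myapp.service
  [Unit]
  Description=My API
  After=network.target

  [Service]
  User=myapp
  WorkingDirectory=/srv/myapp
  ExecStart=/srv/myapp/bin/server --port 8080
  Restart=on-failure

  [Install]
  WantedBy=multi-user.target
Then systemctl enable --now myapp and point the reverse proxy at 127.0.0.1:8080.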
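And the backup bullet can be as boring as a nightly cron job. A sketch assuming Postgres and an off-site box you can rsync to (all names are placeholders); the part people skip is actually test-restoring it now and then:
  #!/bin/sh
  # /usr/local/bin/backup.sh -- cron it, e.g.: 30 3 * * * /usr/local/bin/backup.sh
  set -eu
  STAMP=$(date +%F)
  pg_dump -Fc mydb > /var/backups/mydb-$STAMP.dump
  tar czf /var/backups/uploads-$STAMP.tar.gz /srv/myapp/uploads
  rsync -a /var/backups/ backup@offsite.example.com:/backups/myapp/
  # keep two weeks of local copies
  find /var/backups -type f -mtime +14 -delete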
The main thing that gives me anxiety about this is the security surface area associated with "managing" a whole OS— kernel, userland, all of it. Like did I get the firewall configured correctly, am I staying on top of the latest CVEs, etc.
For that reason alone I'd be tempted to do GHA workflow -> build container image and push to private registry -> trivial k8s config that deploys that container with the proper ports exposed.
Run that on someone else's managed k8s setup (or Talos if I'm self hosting) and it's basically exactly as easy as having done it on my own VM but this way I'm only responsible for my application and its interface.
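For what it's worth, the "trivial k8s config" really can stay trivial for a single web app; a sketch of what I mean (name, image and port are placeholders, ingress/TLS left out):
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: myapp
  spec:
    replicas: 1
    selector:
      matchLabels: { app: myapp }
    template:
      metadata:
        labels: { app: myapp }
      spec:
        containers:
          - name: myapp
            image: registry.example.com/myapp:latest
            ports:
              - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: myapp
  spec:
    selector: { app: myapp }
    ports:
      - port: 80
        targetPort: 8080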
I left my VPS open to password logins for over 3 years, no security updates, no firewalls, no kernel updates, no apt upgrades; only fail2ban and I survived: https://oxal.org/blog/my-vps-security-mess/
Don't be me, but even if you royally mess up things won't be as bad as you think.
> Run that on someone else's managed k8s setup ... this way I'm only responsible for my application and its interface.
It's the eternal trade-off of security vs. convenience. The downside of this approach is that if there is a vulnerability, you will need to wait on someone else to get the fix out. Probably fine nearly always, but you are giving up some flexibility.
Another way to get a reasonable handle on the "managing a whole OS ..." complexity is to use some tools that make it easier for you, even if it's still "manually" done.
Personally, I like FreeBSD + ZFS-on-root, which gives you "boot environments"[1] and lets you do OS upgrades worry-free, since you can always roll back to the old working BE.
But also I'm just an old fart who runs stuff on bare metal in my basement and hasn't gotten into k8s, so YMMV (:
[1] eg: https://vermaden.wordpress.com/2021/02/23/upgrade-freebsd-wi... (though I do note that BEs can be accomplished without ZFS, just not quite as featureful. See: https://forums.freebsd.org/threads/ufs-boot-environments.796...)
I used DigitalOcean for hosting a WordPress blog.
It got attacked pretty regularly.
I would never host an open server from my own home network for sure.
This is the main value add I see in cloud deployments -> OS patching, security, trivial stuff I don't want to have to deal with on the regular, but it's super important.
You can mitigate a lot of security issues by not exposing your self-hosted stack to the Internet directly. Instead you can use a VPN to your home network.
An alternative is a front-end proxy on a box with a managed OS, like OpenWRT.
> The main thing that gives me anxiety about this is the security surface area associated with "managing" a whole OS— kernel, userland, all of it. Like did I get the firewall configured correctly, am I staying on top of the latest CVEs, etc.
I've had a VPS facing the Internet for over a decade. It's fine.
$ ls -l /etc/protocols
-rw-r--r-- 1 root root 2932 Dec 30 2013 /etc/protocols
I would worry more about security problems in whatever application you're running on the operating system than I would the operating system.
There are distros that keep up to date with CVEs for you; many of them can be set up to automatically update packages, restart affected processes, and reboot after Linux kernel upgrades. Once every few years you'll need to upgrade to the latest version of the distro, but that's usually pretty quick.
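On Debian/Ubuntu, for instance, the automatic part is roughly this (from the unattended-upgrades package; the reboot settings are optional):
  $ apt install unattended-upgrades

  # /etc/apt/apt.conf.d/20auto-upgrades
  APT::Periodic::Update-Package-Lists "1";
  APT::Periodic::Unattended-Upgrade "1";

  # /etc/apt/apt.conf.d/50unattended-upgrades (snippet)
  Unattended-Upgrade::Automatic-Reboot "true";
  Unattended-Upgrade::Automatic-Reboot-Time "04:00";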
> gives me anxiety about this is the security surface
I hate how dev-ops has adopted and deploys the fine-grained RBAC permissions on clouds. Every little damn thing is a ticket for a permissions request. Many times it's not even clear which permission sets are needed. It takes many iterations to wade through the various arbitrary permission gates that clouds have invented.
These orgs are pretending like they're operating a bank, in staging.
This gives me anxiety.
This is why I built https://canine.sh -- to make installing all that stuff a single step. I was the cofounder of a small SaaS that was blowing >$500k/year on our cloud stack.
Within the first few weeks, you'll realize you also need Sentry; otherwise, errors in production just become digging through logs. That's a +$40/mo cloud service.
Then you'll want something like Datadog, because someone somewhere is reporting that a page takes 10 seconds to load but you can't replicate it. +$300/mo cloud service.
Then, if you ever want to aggregate data into a dashboard to present to customers -- Looker / Tableau / Omni, +$20k/year.
Data warehouse + replication? +$150k/year.
This goes on and on and on. The holy grail is to be able to run ALL of these external services in your own infrastructure on a common platform with some level of maintainability.
Cloud Sentry -> Self Hosted Sentry
Datadog -> Self Hosted Prometheus / Grafana
Looker -> Self Hosted Metabase
Snowflake -> Self Hosted Clickhouse
ETL -> Self Hosted Airbyte
Most companies realize this eventually, and that's why they move to Kubernetes. I think it's also why indie hackers often can't quite understand why the "complexity" of Kubernetes is necessary, and why just having everything run on a single VPS isn't always enough.
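To give a sense of scale: the Datadog -> Prometheus/Grafana swap is two containers wherever you run them. A single-box docker-compose sketch for illustration (ports and volumes are placeholders, scrape config not shown):
  services:
    prometheus:
      image: prom/prometheus
      volumes:
        - ./prometheus.yml:/etc/prometheus/prometheus.yml
        - prom-data:/prometheus
      ports:
        - "9090:9090"
    grafana:
      image: grafana/grafana
      volumes:
        - grafana-data:/var/lib/grafana
      ports:
        - "3000:3000"
  volumes:
    prom-data:
    grafana-data: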
This assumes you're building a SaaS with customers, though. When I started my career it was common for companies to build their own apps for themselves, not for all companies to be split between SaaS builders and SaaS users.
I'm also in this space - https://disco.cloud/ - similarly to you, we offer an open source alternative to Heroku.
As you well know, there are a lot of players and options (which is great!), including DHH's Kamal, Flightcontrol, SST, and others. Some are k8s based - Porter and Northflank, yours. Others, not.
Two discussion points: one, I think it's completely fair for an indie hacker, or a small startup (Heroku's and our main customers - presumably yours too), to go with some ~Docker-based, git-push-compatible deployment solution and be completely content. We used to run servers with nginx and apache on them without k8s. Not that much has changed.
Two, I also think that some of the needs you describe could be considered outside of the scope of "infra": a database + replication, etc. from Crunchy Bridge, AWS RDS, Neon, etc. - of course.
But Tableau? And I'm not sure I get what you mean by $150k/year - how much replication are we talking about? :-)
If you start to see some success, you realize that while your happy path may work for 70% of real cases, it's not really optimal for converting most of them. Sentry helps a lot, you see session replay, you get excited.
You realize you can A/B test... but you need a tool for that...
Problem: things like OpenReplay will just crash and not restart themselves. With multi-container setups, some random part going down will silently stop your session collection; try to debug that? Good luck, it'll take at least half a day. And often you restore functionality only to have another random error take it down a couple of months later, or you realize the default configuration only keeps 500 MB of logs/recordings (what?), etc., etc.
You realize you are saving $40/month for a very big hassle, and worse, it may not work when you need it. You go back to Sentry etc.
Does Canine change that?
> Majority of projects don't need the web scale or whatever.
Truth. All the major cloud platform marketing is YAGNI but for infrastructure instead of libraries/code.
As someone who has worked in ops since starting as a sysadmin in the early 00s, it's been entertaining, to say the least, to watch everyone rediscover hosting your own stuff as if it's some new innovation that was never possible before. It's like that old "MongoDB is web scale" video (https://www.youtube.com/watch?v=b2F-DItXtZs).
Watching devs discover Docker was similarly entertaining back then, when those of us in ops had been using LXC and BSD jails, etc., to containerize code pre-DevOps.
Anyway, all that to say: go buy your graybeard sysadmins a coffee and let them help you. We would all be thrilled to bring stuff back on-prem or go back to self-hosting and running infra again, and we probably have a few tricks to teach you.
And there is an extra perk: Unlike cloud services, system skills and knowledge are portable. Once you learn how systemd or ufw or ssh works, you can apply it to any other system.
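ufw is a good example of how small and portable that knowledge is; the whole firewall story for a typical web box is something like (assuming only SSH and HTTP/S are exposed):
  $ ufw default deny incoming
  $ ufw default allow outgoing
  $ ufw allow OpenSSH
  $ ufw allow 80/tcp
  $ ufw allow 443/tcp
  $ ufw enable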
I’d even go as far as to say that the time/cost required to say learn the quirks of Docker and containers and layering builds is higher than what is needed to learn how to administer a website on a Debian server.
Well said. For me, "how to administer a website on a Debian server" is a must if you work in Web Dev because hosting a web app should not require you to depend on anyone else.
>I’d even go as far as to say that the time/cost required to say learn the quirks of Docker and containers and layering builds is higher than what is needed to learn how to administer a website on a Debian server.
But that is irrelevant, as Docker brings things to the table that a simple Debian server cannot, by design. One could argue that LXD is sufficient for these, but that is even more hassle than Docker.
Mostly agreed, and thanks for sharing your POV. One slight disagreement:
"No need for CDN etc unless you are talking about millions of requests per day."
A CDN isn't just for scale (offloading requests to origin), it's also for user-perceived latency. Speed is arguably the most important feature. Yes, beware premature optimization... but IMHO delivering static assets from the edge, close as possible to the user, is borderline table stakes and has been for at least a decade.
You're right, including your warning about premature optimization, but if the premise of the thread is starting from a VPS, user-perceived latency shouldn't be as wild as self-hosting in a basement or something, because odds are your VPS is on a beefy host with big links and good peering anyway. If anything, I'd use the CDN as one more layer between me and the world, but the premise also presupposed a well-hardened server. Personally, the DB and web host being together gave me itches, but all things secure and equal, it's a small risk.
For 20 years I ran a web dev company that hosted bespoke websites for theatre companies and restaurants. We ran FreeBSD, PostgreSQL, and nginx or H2O server, with sendmail.
Never an issue, and had fun doing it.
> No need for CDN etc unless you are talking about millions of requests per day.
Both caddy and nginx can handle 100s of millions of static requests per day on any off-the-shelf computer without breaking a sweat. You will run into network capacity issues long before you are bottlenecked by the web server software.
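Rough numbers: 100 million requests/day averages out to roughly 1,200 requests/second. At, say, 100 KB per response that is already on the order of 115 MB/s, i.e. most of a 1 Gbit/s link, while the web server itself is still mostly idle on modern hardware.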
A small business should only self-host if they are a hosting company. Everyone else should pay their local small business self hosting company to host for them.
This is not a job for the big guys. You want someone local who will take care of you, who comes out when a computer fails and ensures updates are applied. And by "come" I mean physically sending a human to you. This will cost some money, but you should be running your business, not trying to learn computers.
OK, that's your opinion; in my view a business should self-host if they want to maintain data sovereignty.
> everyone else should pay their local small business self hosting company to host for them.
That assumes all small businesses have at least one "local small business self hosting company" to choose from.
Exactly, but I would rather say that you don't need a CDN unless you have tens of thousands of requests per second and your user base is global; a single powerful machine can easily handle thousands, even tens of thousands, of requests per second.
Are you talking about the VPS just serving as a reverse proxy and running the server on-prem or at home? Or are you having a reverse proxy on some VPS with a static IP send connections to other VPSs on a cloud service? I've self-hosted toy apps at home this way, with a static IP VPS as a reverse proxy in the middle, and it is indeed easy. With Tailscale you don't even need a static IP at home. A gigabit connection and a $100 box can easily handle plenty of load for a small app or a static site. To me, the reason I would never put something like that into production even for a very small client is, even in a fairly wealthy city in America, the downtime caused by electrical outages and local internet outages would be unacceptable.
I wish I understood why some engineers feel the need to over-engineer the crap out of everything.
Is it because of wishful thinking? They think their blog is gonna eventually become so popular that they're handling thousands of requests per second, and they want to scale up NOW?
I just think about what web servers looked like at the turn of the millennium. We didn't have all these levels of abstraction. No containers. Not even VMs. And we did it on hardware so weak it would be considered utterly worthless by today's standards.
And yet...it worked. We did it.
Now we've got hardware that is literally over 1000 times faster. Faster clocks, more cache, higher IPC, and multiple cores. And I feel like half of the performance gains are being thrown away by adding needless abstractions and overhead.
FFS...how many websites were doing just fine with a simple LAMP stack?
I think this is a moderately good idea, if you are certain that you want to remain involved with the business operationally, forever.
It's still not ever a great idea (unless, maybe, this is what you do for a living for your customers), simply because it binds your time, which will absolutely be your scarcest asset if your business does anything.
I am speaking from my acute experience.
info.addr.tools shows [1]:
  MX 1 smtp.google.com.
  TXT "mailcoach-verification=a873d3f3-0f4f-4a04-a085-d53f70708e84"
  TXT "v=spf1 include:_spf.google.com ~all"
  TXT "google-site-verification=TTrl7IWxuGQBEqbNAz17GKZzS-utrW7SCZbgdo5tkk0"
This is not just a phrase, it is a DNS entry: using the most "evil" of Big Tech while speaking of digital sovereignty.
[1] https://info.addr.tools/enum.co
To be fair to enum, the services they sell are around k8s, an S3 equivalent, and devops. If they sold/promised self-hosted/sovereign email services, and then were "caught" using Gmail, that might be a different story.
Your point stands - they're not fully independent. And maybe the language in the OP's article could have been different... but the OP also specifically says "Oh no, I said the forbidden phrase: Self-hosted mail server. I was always told to never under any circumstances do that. But it's really not that deep."
They're aware of the issue, everyone is aware of the issue. It's an issue :-) But I get your point too.
I think it would be fair for them to use something like Proton or an enterprise Microsoft relay service. Actually, the MX part is only for inbound mail, which can be self-hosted without any issues; SPF, on the other hand (outbound verification), does need a relay at minimum.
Founder of enum here. That's a fair point, and a good catch.
Honestly, using Google Workspace for our internal email was a pragmatic choice early on to let us focus on building our core product. It's a classic startup trade-off, and one we're scheduled to fix in the coming weeks.
I want to be clear, though: our customer-facing platform and all its data are and always have been 100% sovereign. Our infrastructure is totally independent of Big Tech.
Thanks for holding us accountable!
> Our infrastructure is totally independent of Big Tech
That's wishful thinking. You cannot be truly independent of them; no one can. They control major BGP routes, major ASNs, big fiber cables, etc. It's just impossible.
That’s fine. But as R_Spaghetti has kindly pointed out, maybe you could try to convince your colleague to change the post to something more like “… digital sovereignty is still just a phrase …” and then possibly add “and we are working to change that” :) Just a thought. Of course, we are all free to say anything we want, do anything we want, and definitely write and post anything we want.
Yeah I really will give people a pass here. The state of email is one of the worst collective mistakes I think we've made.
You can literally be an expert in everything relevant - and your mail will still not get delivered just because you're not google/mailgun/etc.
I was trying to do a very simple email-to-self use case. I was sending mail from my VPS (a residential IP isn't even allowed at all), an IPv4 address I'd had for literally 2+ years, to exactly one recipient: my personal Gmail. I had it all set up: SPF, DKIM, TLS, etc. And I was STILL randomly getting emails sent straight to spam / showing up with the annoying ! icon (which grates on my sensibilities). After tremendous, tremendous pain in researching and debugging, I determined that my DKIM signatures and SPF were indeed perfect (I had been doubting myself until I realized I could just check what Gmail thought about SPF/DKIM/etc. It all passed). My only sin was not being in the in-crowd.
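For anyone curious, the records involved are just a few DNS entries; roughly this shape (domain, IP, selector and key are placeholders):
  example.com.                  TXT  "v=spf1 ip4:203.0.113.10 -all"
  mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key>"
  _dmarc.example.com.           TXT  "v=DMARC1; p=none; rua=mailto:postmaster@example.com"
All of which can be verifiably correct and you still end up in spam, which was exactly my point.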
Incredibly frustrating. The only winning move is not to play. I ended up just switching from emails-to-self to using a discord webhook to @ myself in my private discord server, so I get a push notification.
And this was just me, sending to myself! Low volume (0-2 emails per WEEK). Literally not even trying to actually send emails to other people.
damn, this guy don't fuck around. respect
> TXT "google-site-verification=TTrl7IWxuGQBEqbNAz17GKZzS-utrW7SCZbgdo5tkk0"
just to clarify, this part is not evil, it is just a compromise one makes to prevent Gmail from classifying outgoing email as spam (I think).
With self-hosting email, if the digital sovereignty aspect is more important to you than the privacy aspect...
What I do is use Gmail with a custom domain, self-host an email server, and use mbsync[1] to continuously download my emails from Gmail. Then I connect to that email server for reading my emails, but still use Gmail for sending.
It also means that Google can't lock me out of my emails, I still retain all my emails, and if I want to move providers, I simply change the DNS records of my domain. But I don't have any issues around mail delivery.
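In case it helps anyone, the mbsync side is one small config file; a sketch along these lines (account, address and paths are placeholders, and option names vary a bit between isync versions):
  # ~/.mbsyncrc
  IMAPAccount gmail
  Host imap.gmail.com
  User you@yourdomain.com
  PassCmd "cat ~/.gmail-app-password"
  SSLType IMAPS

  IMAPStore gmail-remote
  Account gmail

  MaildirStore local
  Path ~/Mail/
  Inbox ~/Mail/INBOX

  Channel gmail
  Far :gmail-remote:
  Near :local:
  Patterns *
  Create Near
  SyncState *
A cron entry like */15 * * * * mbsync gmail keeps the local copy fresh.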
I did all of those DNS shenanigans with SPF, DMARC and the other ones like 6 years ago.
I think I had problems with my emails like twice, with one Exchange server of some small recruitment company. I think it was misconfigured.
Ah, there was also some problem with Gmail at the beginning: they banned my domain because I was sending test emails to my own account there. I had to register my domain on their BS Postmaster Tools website and configure my DNS with some key.
Overall I had far more problems with automatic backups, services going down for no reason, dynamic IPs, etc. The email server just works.
Doing the sending myself wouldn't improve my digital sovereignty, which is my primary motivation.
The custom domain is all you need for complete e-mail sovereignty. As long as you have it, you can select between hundreds (thousands?) of providers, and take your business elsewhere at any time.
Not OP, but yes. For personal use, you don't have enough traffic to establish reputation, so you get constantly blocked regardless of DKIM/DMARC/SPF/rDNS. Receiving mail is reliable though, so you can do that yourself and outsource just sending to things like Amazon SES or SMTP relays.
Self-hosting is awesome. I have been doing it for about a year, since I quit my full-time SWE job and pursued SaaS. I am using Coolify on a $20/month Hetzner server to host a wide variety of applications: Postgres, Minio (the version before the community neutering) for S3, a Nuxt application, Next.js applications, Umami analytics, Open WebUI, and static sites. It was definitely a learning process, but now that I have everything set up, it really is just plug and play to get a new site/service up and running. I am not even using 1/4 of my server resources (because I don't have many users xd). It is great.
https://coolify.io/docs/
Defining "self-host" so narrowly as meaning that the software has to run on a server in your home closet ensures that it will always remain niche and insignificant. We should encourage anything that's not SaaS: open source non-subscription phone apps, plain old installable software that runs on Windows, cloud apps that can easily be run (and moved) between different hosts, etc.
Anything that prevents lock-in and gives control to the user is what we want.
We can’t have the word lose all meaning either. A cloud app that uses standard protocols and can be moved is still being run on a server you don’t own or control, by someone who could decide to change policies about data collection and privacy at any time. You can leave, but will you be able to migrate before the data is harvested? How would you ever know for sure?
The general definition (although it can be pretty loose) is that you need to control the computer/server your software is running on. If that is a VPS or a server in your basement really doesn't matter all that much in the end when talking about if something is self-hosted or not.
At the very least, it should include colocating your server with somebody else who has better power and connectivity. As long as you have root, it's your server.
Excellent topic, I can offer a perspective from my own experience.
The biggest benefit of running a homelab isn't cost savings or even data privacy, though those are great side effects. The primary benefit is the deep, practical knowledge you gain. It's one thing to read about Docker, networking, and Linux administration; it's another thing entirely to be the sole sysadmin for services your family actually uses. When the DNS stops working or a Docker container fails to restart after a power outage, you're the one who has to fix it. That's where the real learning happens.
However, there's a flip side that many articles don't emphasize enough: the transition from a fun "project" to a "production" service. The moment you start hosting something critical (like a password manager or a file-syncing service), you've implicitly signed up for a 24/7 on-call shift. You become responsible for backups, security patching, and uptime. It stops being a casual tinker-toy and becomes a responsibility.
This is the core trade-off: self-hosting is an incredibly rewarding way to learn and maintain control over your data, but it's not a free lunch. You're trading the monetary cost of SaaS for the time and mental overhead of being your own IT department. For many on HN, that's a trade worth making.
Self hosting is great and I'm thankful for all the many ways to run apps on your own infra.
The problem is backup and upgrades. I self host a lot of resources, but none I would depend on for critical data or for others to rely on. If I don't have an easy path to restore/upgrade the app, I'm not going to depend on it.
For most of the apps out there, backup/restore steps are minimal or nonexistent (compared to the one-liner to get up and running).
FWIW, Tailscale and Pangolin are godsends to easily and safely self-host from your home.
Every self-hosted app runs in Docker, where the backup solution is: back up the folders you mounted and the docker-compose.yml. To restore, put the folders back and run docker compose up again. I don't need every app to implement its own thing; that would be a waste of developer time.
+1 for the above... all my apps are under /app/appname/ (compose and data)... my backup is an rsync script that runs regularly... when I've migrated, I'll compose down, then rsync to the new server, then compose up... update DNS, etc.
It's been a pretty smooth process. No, it's not a multi-region k8s cluster with auto everything.. but you can go a long way with docker-compose files that are well worth it.
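The migration dance described above is basically four commands; a sketch with made-up paths and hostnames:
  $ docker compose down
  $ rsync -a /app/appname/ newserver:/app/appname/
  $ ssh newserver "cd /app/appname && docker compose up -d"
  # then point DNS at the new server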
Instead of Tailscale, I can highly recommend self-hosting netbird[1] - very active project, works great and the UI is awesome!
1. https://github.com/netbirdio/netbird
I would rather use headscale than netbird. Headscale is well established and very stable. Netbird has a lot of problems, and the fact that their issue list is hardly looked at by the devs is even more concerning.
How do you ship security patches?
How do you back up? And do you regularly test your backups?
I feel like upgrade instructions for some software can be extremely light, or require you to upgrade through each version, or worse.
I assume everything is running in Docker.
For containers: upgrading to new versions can be done hands-off with Watchtower, or manually.
For the host: you can run package updates regularly or enable unattended upgrades.
Backups can easily be done with cron + rclone. It is not magic.
I personally run everything inside Docker. Fewer things to worry about.
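For example, a nightly crontab entry along these lines (hypothetical: an rclone remote named "b2" already configured with rclone config):
  0 3 * * * rclone sync /app b2:myserver-backups/app --log-file /var/log/rclone-backup.log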
2. ?
3. ZFS duplicated pool, with snapshots rsynced to Hetzner cloud.
4. I don't really care about most of the upgrades because everything is behind a VPN.