Readit News
Posted by u/aosaigh a year ago
Ask HN: Small teams and solopreneurs, how are you hosting your apps?
I'm a solo dev working on both my own apps and a product as part of a small team. I'm interested to hear how others are running their apps.

- Self-hosted on a VPS?
- Managed infrastructure like AWS, Fly, Render, DigitalOcean App Platform, GCP?
- Full-stack platform like Vercel, Firebase, Supabase?

from-nibly · a year ago
Buy some used servers on eBay, at least 3 will do for redundancy, then get 2 internet connections. Set up another server to act as your OPNsense firewall/router and bond the two internet connections together. Make sure to buy a pair of UPSs for power backup. Install Kubernetes and Rook Ceph with about 12 hard drives (redundancy), then find a friend who can host another server for offsite backups. Don't forget the tape drive system for your second backup medium. Start deploying Gitea so you can host your code. Use Argo CD so you can have a build system. Deploy Dapr so you can host all of your microservices in a standard way. Deploy OTel and Alertmanager so you can hook it all into the only cloud service you need: OpsGenie (don't forget to whinge the whole time you use it).

That way you can write an article about how much cheaper self-hosting is compared to the cloud and get your next round of investment.

On a serious note: the next startup I do, I'm just going to use Supabase.

happytoexplain · a year ago
Only about 10% of this comes off as sarcastic in the context of HN.
flemhans · a year ago
Just one server and skip the redundancy. Then it's kinda manageable
j45 · a year ago
Self-hosting using FTTH is many orders of magnitude more stable than home-based internet used to be.

A lot of the setup described above can be run pretty easily with Proxmox, which provides a web-based GUI for setting up that redundancy (it can even run as a hybrid cloud with failover to the major cloud providers).

This approach can greatly simplify things and reduce dependency on any one cloud provider in particular.

TobbenTM · a year ago
Fully managed so I can spend my time on actually building features. In my case, AWS is my go-to cloud, and even with a couple of thousand users, Lambda for compute, DynamoDB for the database and SNS+SQS for eventing is costing me less than 5€ per month. Yes, there are risks with serverless if you get DDoSed or whatever, but it's a risk I'm fine with, and I can mitigate it with gateways in front if necessary. And Lambdas are not locking me into AWS since I'm running "full" ASP.NET apps in them, so hosting them on actual compute platforms is an easy switch.
romanhn · a year ago
Very similar tech stack here, with all the same motivations. Biggest differences are I'm using RDS for the database and have a slightly different approach for serverless compute. I started out with Lambda, but the cold start times were bugging me so I moved production hosting to App Runner, which is the next best thing (eliminates cold start without breaking the wallet). Kept staging environment on Lambda. Also using the "fat lambda" approach with ASP.NET apps, so no lock-in here. Spending about $35/mo total - RDS is the biggest contributor, then App Runner and other small things.

I also have an SSR app for the homepage, which required some dynamic functionality: a basic Node app hosted in a Cloudflare Worker essentially for free, hitting the App Runner-hosted API for data.
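A minimal TypeScript sketch of that Worker pattern might look like the following; the API URL, response shape, and markup are hypothetical placeholders, not the actual setup described above:

    // Cloudflare Worker (module syntax): render a small dynamic page by
    // fetching data from a separately hosted API (hypothetical endpoint).
    export default {
      async fetch(_request: Request): Promise<Response> {
        const res = await fetch('https://api.example.com/stats');
        const { userCount } = (await res.json()) as { userCount: number };

        // Render a tiny HTML page with the fetched data; a real app would
        // use a proper template or framework adapter instead.
        const html = `<!doctype html><html><body><h1>Hello</h1><p>${userCount} users and counting</p></body></html>`;

        return new Response(html, {
          headers: { 'content-type': 'text/html; charset=utf-8' },
        });
      },
    };

Deployed with Wrangler, something like this comfortably fits Workers' free tier at low traffic, which lines up with the "essentially for free" point above.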

aosaigh · a year ago
Interesting. Did you migrate to a serverless architecture or was your app serverless from the start?
TobbenTM · a year ago
It was serverless from the start, although there isn't a whole lot of serverless-specific code in it, apart from some event-handler bootstrapping for the SQS-triggered Lambdas. Everything else is just standard .NET that would look the same no matter where it's hosted. Huge fan of separating infrastructure concerns from the rest of the app so you don't lock yourself in so much.
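As a rough illustration of that separation (sketched in TypeScript rather than .NET, with made-up names), the serverless-specific part can stay as thin as a single handler that unwraps the SQS event and delegates to plain application code:

    // SQSEvent type comes from the @types/aws-lambda package.
    import type { SQSEvent } from 'aws-lambda';

    // Plain application code: no AWS types in here, so it would look the
    // same no matter where it's hosted. OrderService and its payload are
    // hypothetical examples.
    class OrderService {
      async handleOrderPlaced(order: { id: string; total: number }): Promise<void> {
        console.log(`processing order ${order.id} (total: ${order.total})`);
        // ...business logic...
      }
    }

    const service = new OrderService();

    // The only infrastructure-aware code: unwrap SQS records and delegate.
    export const handler = async (event: SQSEvent): Promise<void> => {
      for (const record of event.Records) {
        await service.handleOrderPlaced(JSON.parse(record.body));
      }
    };

Moving to a different host then mostly means rewriting those last few lines; the service class stays untouched.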
tmitchel2 · a year ago
Snap, I'm using CDK to set it all up too, which makes everything pretty easy to manage.
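For anyone who hasn't tried CDK, a TypeScript stack for roughly the kind of setup described upthread (a queue-triggered Lambda plus DynamoDB) can be quite small. The resource names, handler path, and sizing below are illustrative assumptions, not anyone's production config:

    import { Stack, StackProps, Duration } from 'aws-cdk-lib';
    import { Construct } from 'constructs';
    import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
    import * as lambda from 'aws-cdk-lib/aws-lambda';
    import * as sqs from 'aws-cdk-lib/aws-sqs';
    import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

    export class AppStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        // On-demand DynamoDB table: no capacity planning, near-zero idle cost.
        const table = new dynamodb.Table(this, 'AppTable', {
          partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
          sortKey: { name: 'sk', type: dynamodb.AttributeType.STRING },
          billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
        });

        // Work queue with a dead-letter queue for messages that keep failing.
        const dlq = new sqs.Queue(this, 'EventsDlq');
        const queue = new sqs.Queue(this, 'EventsQueue', {
          visibilityTimeout: Duration.seconds(60),
          deadLetterQueue: { queue: dlq, maxReceiveCount: 3 },
        });

        // Worker Lambda consuming the queue; the asset path is a placeholder.
        const worker = new lambda.Function(this, 'Worker', {
          runtime: lambda.Runtime.NODEJS_20_X,
          handler: 'index.handler',
          code: lambda.Code.fromAsset('dist/worker'),
          timeout: Duration.seconds(30),
        });
        worker.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));
        table.grantReadWriteData(worker);
      }
    }

Running `cdk diff` before `cdk deploy` then keeps the whole environment reviewable and reproducible from code.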
qeternity · a year ago
Rancher k8s + Helm charts for HA services (Redis Sentinel + Patroni Postgres). Nightly snapshot backups + streaming Postgres WAL segments to S3.

I’ve never understood the k8s hate. We have been running this for a few years and it is rock solid. We can bring the entire cluster up on any provider anywhere in the world in about an hour. The DR is great.

eitchugo · a year ago
Self-hosted on a VPS is the first option for me: it's cheap and you have total control over everything. It works in my case because I know my way around infrastructure and configuration, so a management panel isn't much of a gain.

You can sometimes even get those for free: https://free-for.dev/#/?id=major-cloud-providers

For example, I have some (really) free instances on Oracle Cloud where I host pages, experiments, and all sorts of other things.

ivorbuk · a year ago
S3 buckets for the SPAs, AWS Lambda, API Gateway, and EC2 for a MySQL instance. Some SQS to help with orchestration of the operational stuff. About 5k MAU, and the total cost barely breaks $40 a month.

Chose this because it's the stack I'm super familiar with.

aristofun · a year ago
Hetzner + Docker Swarm can take you a long way for a fraction of the cost of any of the alternatives.

And for a fraction of the mental burden and stress of the lightest k8s setup you can imagine.

quectophoton · a year ago
I did something like this for a while when I was messing around with my personal website, but with one node in Hetzner and the other in Linode. It was a really nice way of managing services without bringing in a full kitchen-sink factory factory.

The only thing I found lacking was an easy way to share volumes across nodes. There are plugins and such, but for me they weren't worth the effort.

I ended up just ditching Linode and putting everything on a single Hetzner VPS. I could do that because this was mostly experimentation and I didn't really need more than one server, but the volumes issue was what ultimately tipped the balance in favor of a single server.

aristofun · a year ago
How often can solopreneurs not get away with just 1 beefy machine? Or at least with just 1 VPS provider to share extra volumes?
aristofun · a year ago
I run 2 Rails websites with hundreds of real daily users on a single shared 2 CPU / 2 GB VPS, including 1 staging env, the Postgres DB, and memcached for caching.
felipemesquita · a year ago
We have our own servers (mostly consumer PC parts) and run LXD to slice them up into VMs/containers. We deploy to those using Kamal and expose the web server port through Cloudflare Tunnels. With simple scripts that periodically compress and upload the storage volumes and database dumps to Google Drive, most of our less critical apps cost about $0 to keep running. For the higher-criticality apps, we also deploy to Linux VMs using Kamal, but on a commercial service (Linode).
derfabianpeter · a year ago
Spoiler: shameless plug, but it might be helpful for some people.

I understand that most of the commenters, as well as OP, are probably looking to not spend too much time and money on their hosting infra. While that makes sense in the beginning, there may come a point in your successful journey where you want to hand over your operations duties to someone skilled and focus on building software, because your business starts depending on safe and well-maintained infrastructure.

If you're looking for something between Heroku and AWS (both in terms of pricing and scalability) but based on K8s, with direct access to skilled platform engineers and personal support, you might want to check out https://www.ayedo.de/cloud/