stonewall · 3 years ago
I self-host literally everything (email, calendar/contacts, VOIP, XMPP, you name it) from my basement with used 1U servers from eBay and a cable internet connection.

It was probably more hassle than most people would want to bother with to get it set up. But, with everything up and running, there's very little maintenance. I probably spend a few hours a month tinkering still, just because I enjoy it.

I use a stack of Proxmox VMs, FreeIPA for authn/authz, and Rocky Linux for all servers and workstations. My phone runs GrapheneOS with a Wireguard VPN back to the house. I don't expose anything to the public internet unless absolutely necessary.

I recently anonymized and Ansibilized my entire setup so that others might get some use out of it:

https://github.com/sacredheartsc/selfhosted

xyzzy123 · 3 years ago
I had fun doing this until I had kids.

I have a rack with 10GbE, a UPS, Kubernetes, a ZFS storage server, multiple VLANs, 4 UniFi APs with a locally hosted controller, and all sorts of self-hosted stuff.

My heart breaks slightly as I watch things slowly degrade and break down due to bit-rot and version creep. I now wish I had a Synology, a flat network, and cloud everything possible.

There are days when the kids can't watch a particular movie and I find out it's because a particular kube component failed (after an hour of root-causing) because I haven't touched it in 2 years. I then have regrets about my life choices. Sometimes the rack starts beeping while I'm working and I realise the UPS batteries are due for replacement because it's been 4 years. I silence the alarm and get back to the production issue at work, knowing it'll beep at me again in 30 days. I'll still be too busy to fix it. It doesn't help that in Australia the ambient temperature can reach 45 degrees C, pushing disks and CPUs to their limits.

Just sharing a different perspective...

tharkun__ · 3 years ago
Sounds like a bit of overkill too if you ask me. You can self-host most things that make sense to keep private without going all in on the fun stuff.

As in, k8s is cool to play with and understand and all but why would I bring that complexity to a simple home setup that can run on a single machine in a corner somewhere?

You don't have to go all the way to a Synology box and give up everything; there are simpler options short of "cloud everything". Of course, the more you strip things down, the more features you give up, but that can be beneficial in and of itself if you ask me.

Personally I went from being the "Linux from scratch" guy to running Ubuntu LTS. Natural progression and the kids can watch any of their movies at any time they want. Keep the hard drives rotated, do an LTS to LTS upgrade every few years and that's about it. Heck I've been running the exact same Postfix, fetchmail and IMAP setup for probably 20 years now and I don't even remember what all the options I set do any longer. I also don't need to though. It's just rock solid. All the other fun stuff has passed me by and I don't care. Don't get me wrong, it's still fun to play with stuff and we do use k8s at work and it's great. But it's just complete overkill for home.

8fingerlouie · 3 years ago
> I had fun doing this until I had kids.

As I keep telling people, self-hosting is fun as long as your user count is 1. When it grows beyond that, you suddenly have an SLA.

I self-hosted almost everything (e-mail is pointless from a privacy standpoint), and when we had kids I moved to a dual Synology setup with a single Proxmox server for running services. Fast forward some years, and electricity suddenly costs about an arm and a leg, so I had to do "something".

I completely stopped self hosting anything "publicly" available. Everything moved to the cloud including most file storage, using Cryptomator for privacy where applicable.

The server got reduced to a small ARM device with the prime task of synchronizing our cloud content locally and making backups of it, both remote and local. As a side bonus it also runs a Plex server off of a large USB hard drive. All redundancy has been removed, and my 10G network has been switched off, leaving only a single 16-port PoE switch for access points and cameras.

The Synology boxes now come online only a couple of times a week to take a snapshot of all shares and pull a copy from the ARM device, after which they power down again.

In the process I reduced my network rack's power consumption from just below 300W to 67W, and with electricity prices for the past year averaging around €0.6/kWh, that means I save around 2050 kWh/year, which adds up to €1225/year, or just over €100/month.

Subtract from those savings the €25/month I pay for cloud services and I still come out ahead. On top of that I literally have zero maintenance now. My home network is only accessible from the outside through a VPN. The only critical part is backups, but I use healthchecks.io to alert me if those fail.
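
The savings arithmetic above checks out; as a quick sanity check, using only the figures from the comment:

```python
# Sanity check of the savings figures above (300W -> 67W at €0.6/kWh).
old_w, new_w = 300, 67            # rack draw before and after, in watts
price_eur_per_kwh = 0.6
hours_per_year = 24 * 365

saved_kwh = (old_w - new_w) * hours_per_year / 1000
print(round(saved_kwh))                        # ~2041 kWh/year
print(round(saved_kwh * price_eur_per_kwh))    # ~1225 €/year, just over €100/month
```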

I still kept the network segregation, so everything "IoT" is on its own VLAN, as are the kids. The only major change was that the "adults" VLAN is now the management VLAN. I have no wired computers, so maintaining a management VLAN over WiFi was more trouble than I could be bothered with :)

Why are the kids on their own VLAN/WiFi? Because kids want to play games with their friends, something the normal guest network does not support. Kids also bring all sorts of devices with new and exciting exploits/viruses, and I didn't feel like doing the maintenance on that. So instead my kids have their very own VLAN with access to just printers, AirPlay devices and the Plex server.

Helmut10001 · 3 years ago
This is of course very context dependent and no critique whatsoever. I also have kids, and my self-hosting has become ever more important since: no YouTube commercials or auto-continuation for kids' videos thanks to Invidious; reduced costs from a lot of cancelled software plans (because everything runs on my rust); I can care better for my parents, e.g. helping with technology, monitoring, and burglars (my homelab sits at my parents' house, remotely connected via IPsec); data backup is solid and under my control (ZFS RAIDZ2, plus offsite backup with borgmatic & rsync); and most important, I have reduced my life dependencies and lock-in effects with worldwide companies.

Maintenance is 1-2 hours a month: Proxmox, various Docker containers nested in unprivileged LXC, everything automated (cronjobs, Watchtower, backups etc.). I also built a pretty big PV plant to save on energy costs (30 kWp). My main strategy was a "minimal" approach: going slowly, thinking carefully about _what I really need_, and preferring robustness over new features or software. I usually take 1-2 months of review before deciding to install any new software, most often longer. I am against the "all in one" mentality (e.g. I prefer custom bash scripts over third-party automation, and selectively install needed parts instead of the all-in-one alternatives, e.g. nextcloud/all-in-one).

stonewall · 3 years ago
Your perspective resonates with me! I have 3 kids under 6 years old, and I can definitely see this easily creeping up in my future.

My family situation is partly why I just went with plain old VMs and a Linux distro with a 10-year support cycle. It's easy to keep all the moving parts in my head, and I figure I can mostly coast for 10 years and then reevaluate.

Thanks for reminding me, I also need to replace my UPS battery...

hinkley · 3 years ago
For procrastination you have to set yourself up for success.

For instance, what if the alarm sent you the product page for the model of battery you need? You order them, silence the alarm, and when they show up you’re reminded you need to change them. Or if that’s a bad time, when the alarm goes off again.

I think we’ve only begun to work out how alarms are the wrong solution to the problem and what we need are prompts.

BLKNSLVR · 3 years ago
Also in Aus - I've got a not-quite-as-complex setup, but I do have it all in a purpose-built room in the shed which is fitted out with an old box air-conditioner[0] with a thermostat power controller to keep the room below a certain temperature, which should help to extend the working life of "all the shit in there". Damn it's nice visiting the "cool room" in summer, there isn't enough floor space to sleep in there though.

Also have kids, and they can be demanding when stuff ain't working.

Also second guess my life choices, but then again I also still love playing around with this stuff, knowing that I can maintain the full stack.

[0]: Replacing that old air-con with a (far) more modern small split system could possibly have paid for itself by now in power savings. I think I should look into that.

logifail · 3 years ago
> I watch things slowly degrade and break down due to bit-rot and version creep [..]

> There are days when the kids can't watch a particular movie and I find out it's because a particular kube component failed (after an hour of root-causing) because I haven't touched it in 2 years. I then have regrets about my life choices. [..]

> I now wish I had a synology, flat network and cloud everything possible

No snark intended, but this sounds as though you chose to include a lot of unnecessary complexity into your self-hosting, then discovered that there's almost always a cost to unnecessary complexity(?)

aliasxneo · 3 years ago
You're not alone :) The only thing I have left at this point is a rather complex network, mostly because it's a pain to undo at this point. Plex went away last year and I just "license" all the kids stuff through Google play now...
ryjo · 3 years ago
Incredible. The usual response to "should I host my own email" is "don't do it; you'll get hacked."

Three questions:

1. Have you heard of this complaint?

2. Do you use a home ISP connection, or a commercial ISP connection? A "home ISP connection" here usually comes with a dynamic IP address; you can't get your hands on a static address without paying a very large amount monthly or getting a commercial connection.

3. You say "I don't expose anything to the public internet unless absolutely necessary." Is your IP address, reachable via your domain name, one of those "necessary" items?

stonewall · 3 years ago
1. Yes, most people will tell you not to host your own email, because it's too complicated/difficult to get your mail delivered reliably.

A lot of this is FUD. Yes, email is a bit more difficult to get right than, say, hosting a web app behind Nginx. It's an old protocol, with many "features" bolted on years later to combat spam.

I'm not sure how email is easier to "hack," unless there is a zero day in Postfix or something. Back in the day, lots of script kiddies would find poorly configured mail servers that were happy to act as an open relay...maybe the stigma persists?

To deliver mail reliably, you need 4 things (in my experience):

- A static, public IP address with a good reputation (ie, not on any spam blacklists)

- A reverse DNS record that resolves back to your mail server's IP

- A domain SPF record that says that your mail server is allowed to deliver mail

- DKIM records and proper signing of outgoing messages (DMARC records help too)
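
To make the SPF bullet concrete, here is a deliberately minimal Python sketch of the `ip4:` check a receiving server performs. Real SPF evaluation (RFC 7208) also handles `include:`, `a`, `mx`, redirects and qualifiers; the record and addresses below are illustrative documentation IPs, not anything from this setup:

```python
import ipaddress

def spf_allows_ip4(spf_record: str, sender_ip: str) -> bool:
    """Check only the ip4: mechanisms of an SPF TXT record.
    Real SPF evaluation (RFC 7208) is far more involved."""
    ip = ipaddress.ip_address(sender_ip)
    for mech in spf_record.split():
        if mech.startswith("ip4:"):
            # strict=False tolerates host bits set in the published range
            if ip in ipaddress.ip_network(mech[len("ip4:"):], strict=False):
                return True
    return False

record = "v=spf1 ip4:203.0.113.0/24 -all"
print(spf_allows_ip4(record, "203.0.113.10"))  # True: inside the /24
print(spf_allows_ip4(record, "198.51.100.7"))  # False: would fail SPF
```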

2. I have a residential cable internet connection, but pay extra for static IPs. You can probably get by with a dynamic IP and some kind of dynamic DNS service, as long as you don't want to send email. You could still receive email locally if your MX record pointed to some kind of dynamic DNS name.

Note that some ISPs explicitly block outbound traffic on port 25 due to spammers. You might need to check with yours.

3. The only things I expose to the internet are Postfix (to send/receive emails), XMPP (to chat with others), and my web server. Everything else (calendar/contacts, IMAP, Syncthing, etc) stays behind my firewall, accessible only to internal hosts. I use wireguard on my Android phone to access these services seamlessly when I leave the house.

I've never bothered to conceal my IP address. For a while, I experimented with using Mullvad VPN for all my egress traffic. Unfortunately, I spent all day solving CAPTCHAs... it wasn't worth it (for me, anyway).

EDIT: I should add that I also have a "normie" email address at one of the usual providers that I use for really important things like bank accounts / utility providers. If I get hit by a bus, I don't want my (very nontechnical) wife to deal with sysadminning on top of my early death.

For all our personal communications though, we use my selfhosted email domain.

roxgib · 3 years ago
I have a $4/month VPS that comes with a static IP address. Any reason you shouldn't use that as a proxy to solve the dynamic IP problem?
girvo · 3 years ago
> 2. Do you use a home ISP connection, or a commercial ISP connection? A "home ISP connection" here usually comes with a dynamic IP address; you can't get your hands on a static address without paying a very large amount monthly or getting a commercial connection.

Weirdly, most of the ISPs I've had on the NBN here in Australia were happy to give me a static IPv4 address for free (and my current one will set you up with an IPv6 /56 block, but it's in beta apparently).

novok · 3 years ago
How much power does it take? I've realized that with some services it's cheaper to use a hosted version than to pay the electricity and hardware cost.
stonewall · 3 years ago
I almost certainly don't save any money considering electricity costs. I have a Dell R630 for compute and an R730xd that I use as a NAS. Then I have one switch for the rack and a PoE switch for the house. Probably 3-5 amps total?

If I started over, I would probably choose more efficient gear.

That said, I don't mind paying for the electricity too much. I enjoy the warm fuzzies of knowing my data lives under my roof.

digitallyfree · 3 years ago
I pull <100W idle with an HPE G8, a ThinkCentre Tiny, and enterprise routing/switching in my basement. All this is old hardware, and you can bring that number down with newer stuff. The idea is to size your equipment appropriately and not have a huge rack running just because you got the servers for free.

Also, while bandwidth costs less in the cloud, compute and storage are much cheaper if you host them locally. If you want a server to host your public website, do it in the cloud. If you want a file server for local use, the price and performance benefits quickly outweigh the power cost. There is also the additional factor of having the equipment/data 100% under my control, which is very important to me.

j45 · 3 years ago
For homelab or self-hosting, performance per watt is my favourite measure now.

Depending on your need (many apps just idle most of the time) a usff pc can make an excellent proxmox server.

Check out a Lenovo M920q, Dell OptiPlex 7060, or HP EliteDesk or ProDesk 800 series. They are easy enough to bump to 64 GB of RAM and stack up as you need. The 8700T is a desktop-grade CPU in a small shell and watt footprint, and it also has vPro and hyperthreading.

It’s not a rack server but it’s easy enough to add a Mac Studio/Mini soon enough for crunching.

I have spent too much time with full rack server gear, and using it can seem like a matter of preference before need. It's heavy, hungry, noisy, and my better half didn't like it when I brought the leftover data centre stuff home.

The USFF boxes are near silent and sip electricity.

hinkley · 3 years ago
One of the advantages of getting the family to go outside during the warm months is that more of your kWh for self-hosted equipment get burned in the winter, when they are offsetting some of your heating costs.
hinkley · 3 years ago
We need to work on a mostly turnkey solution for these things.

I still think another generation or two of raspi and friends and you can build a little cluster of them.

beaukin · 3 years ago
This GitHub share is pure gold. You’re amazing.
scruple · 3 years ago
Agreed. It's divine compared to the janky Ansible setups I've seen in the wild.
stonewall · 3 years ago
Thank you for the kind words :)
triyambakam · 3 years ago
Very inspiring and thank you for sharing. I run GrapheneOS too but I haven't set anything up like a Wireguard VPN. What is the rough idea of how that works?
stonewall · 3 years ago
I plug my cable modem into a server running the OPNsense firewall [0], which has a wireguard plugin.

I set up a wireguard VPN in OPNsense.

Then I downloaded the wireguard app in F-Droid, and pasted my credentials from the wireguard Android app into the wireguard configs on the firewall.

I set the VPN in GrapheneOS as "always on," so from my phone's perspective, it always has access to my internal network, even when on LTE. All my phone's internet traffic ends up going through my home internet connection as a result.

[0] https://opnsense.org/
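
For a concrete picture of what the phone side ends up with, here is an illustrative wg-quick style client config. Every key, address, and endpoint below is a placeholder (the OPNsense WireGuard plugin generates the real values), not something taken from the comment:

```ini
[Interface]
# Placeholders - generate your own keys and addressing
PrivateKey = <phone-private-key>
Address = 10.10.10.2/32
DNS = 10.10.10.1

[Peer]
PublicKey = <firewall-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0   # route all traffic through the home connection
PersistentKeepalive = 25       # keep NAT mappings alive on mobile networks
```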

j45 · 3 years ago
Try installing Algo VPN; it's pretty much a turnkey WireGuard installation, and there are lots of tutorials on YouTube.

I would advise against setting up wireguard manually.

zwilliamson · 3 years ago
Check out Tailscale for an easy-to-roll-out WireGuard-based solution that has a fair free tier
zamnos · 3 years ago
What do you do for backups? If your house gets destroyed in a natural disaster, will all your pictures persist?
stonewall · 3 years ago
I regularly back up to some external HDDs that I keep outside the home.

For pictures specifically, I recently discovered M-Disc [0], which are (allegedly) archival-quality, writable Blu-Ray discs. I'm considering burning an M-Disc of each year's pictures and storing them in jewel cases at a family member's house.

[0] https://www.mdisc.com/

MuffinFlavored · 3 years ago
> Email newsletter tools: Old or new, your pick

Am I wrong to think that most businesses/people pay for Mailchimp because getting your e-mail actually delivered into the inboxes of your target audience/customers is non-trivial? aka, you're going to end up in "spam" otherwise?

I find it hard to believe that you can "free-ly" send e-mail to, say, 100,000 e-mails and actually have it get delivered at a high rate? I would love to learn if I'm wrong though.

I think this article could've also talked about Datadog vs the Jaeger/ELK stack for tracing/logs.

dijit · 3 years ago
> I find it hard to believe that you can "free-ly" send e-mail to, say, 100,000 e-mails and actually have it get delivered at a high rate? I would love to learn if I'm wrong though.

You can do this, I have done this, but honestly it's annoyingly painful and you're always one bad ad campaign away from being nuked to death by people marking your emails as spam.

There are a lot of rules to follow, and even when you follow them, you need to start emailing at a low volume for each new sending IP until its reputation grows over time.
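
The "start low and grow" warm-up described above is often done as a roughly geometric ramp. The starting volume, growth factor, and target in this sketch are illustrative assumptions, not numbers from the comment:

```python
# Illustrative IP warm-up schedule: roughly double the daily send volume
# until the target is reached. All figures here are assumptions.
def warmup_schedule(start=50, factor=2, target=100_000):
    day, volume, plan = 1, start, []
    while volume < target:
        plan.append((day, volume))
        day, volume = day + 1, volume * factor
    plan.append((day, target))  # cap the final day at the target volume
    return plan

for day, vol in warmup_schedule():
    print(f"day {day}: send up to {vol}")
```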

Nextgrid · 3 years ago
To be fair, if people are marking your emails as spam frequently enough to get your IPs/domains blacklisted then it suggests the system is working as designed and you shouldn’t be sending whatever you’re sending to those people.
capableweb · 3 years ago
+ unsurprisingly, lots of hosting providers disable SMTP/block port 25/ban you if any email sending is being detected coming from your instances, legitimate or not, as the problem with hosting IPs that are sending spam is so annoying (and even illegal in some places).
Thaxll · 3 years ago
You can't just send 100k emails and get a good delivery rate; if you're a nobody, Gmail will never trust you.

You can follow all the rules they want (DKIM, SPF, etc.), and somehow it still won't be delivered, because you don't know exactly how they rate your IP.

Breza · 3 years ago
This is the correct answer. I work at a company that sends millions of emails every week from our self-hosted IP range. If you have a high quality list of recipients who actually want to hear from you and warm up your IPs gradually, you can be successful.

Deleted Comment

djbusby · 3 years ago
How does one even know that messages are being tagged as spam?
gwbrooks · 3 years ago
You can get high deliverability -- the keys, whether you're using your own servers or someone else's, come down to a clean list that won't generate complaints, and staying within the TOS of your mailserver host or third-party SMTP service.

Host your mail-creation/list-management/analytics stack yourself (I like Mautic and MailWizz but there are other options) and use a third party for SMTP services. Amazon SES charges $1 per 10,000 emails; other services are slightly more expensive but it's all still very affordable.
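
At the advertised SES rate, the cost arithmetic is simple enough to sanity-check; this little sketch just applies the $1 per 10,000 figure from the comment to a few volumes:

```python
# Back-of-envelope cost at SES's advertised $1 per 10,000 emails.
def ses_cost_usd(emails: int) -> float:
    return emails / 10_000

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9} emails -> ${ses_cost_usd(n):.2f}")
```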

tedivm · 3 years ago
I'm not sure why you're getting the downvotes, but this is the way for people who want some level of self-hosting. I finally gave up hosting my own mail server about two years ago - I had been self-hosting email since 2005, but it reached the point where delivery to the big companies was extremely difficult. If someone wants to host their own software but actually have their emails delivered, they really do need a third-party SMTP service that specializes in deliverability or has a big company behind it.
locustous · 3 years ago
I've had really poor deliverability from SES. Our emails went straight to spam on many providers. Just trying to do email verification on new signups.
luckylion · 3 years ago
That's also why the phishing campaigns now use Amazon SES (and Amazon happily lets them, as long as they pay, it seems): their email will get delivered.
jd3 · 3 years ago
> I find it hard to believe that you can "free-ly" send e-mail to, say, 100,000 e-mails and actually have it get delivered at a high rate? I would love to learn if I'm wrong though.

The company I work for has an outbox feature which supports this and it was a non-trivial problem to solve. We ended up using sparkpost with a bunch of modifications to isolate potentially bad actors (i.e., clients who pay for our software but send what is basically spam) to an individual sending pool. We also have CSMs that handle this and help to coach clients to not send spam.

https://support.sparkpost.com/docs/deliverability

galdor · 3 years ago
You go with Mailchimp (or equivalent) for newsletters because they give you the subscription form, handle email verification, unsubscriptions, GDPR mentions everywhere, provide useful stats and notifications, segmentation and targeting… Getting email delivered is indeed really hard, especially if you send thousands of emails, but building all these other features is insanely time consuming. The cost of Mailchimp is negligible in comparison.

Same reason why companies use Sendgrid for marketing campaigns.

SoftTalker · 3 years ago
Anything I get from Mailchimp or similar services is auto-flagged as spam by rule.
j45 · 3 years ago
A dedicated IP address can be warmed up to deliver email well enough but it can take some time.

Mail server software like MDaemon can quickly handle the heavy lifting of improving deliverability. It's a small price to pay for the deliverability. I'm just a former user of it.

It’s ok to use an external email provider for outgoing email delivery.

ESPs (email service providers) are handy because they can separate outgoing transactional emails from marketing ones to ensure deliverability.

samstave · 3 years ago
The biggest aspect that used to be used in spam detection (from a network-layer perspective, not a content-reading one) was source IP blocks.

Many people dont realize that spam was the original source for social networking...

I can't type up all the history I know quickly, but Friendster (who 'invented the social graph'), Hi5, Tagged, and MySpace were all started as an overlay to email-harvesting mechanisms to feed --> spam....

They needed to create high value email-lists of valid emails.

Asking for such outright was stupid, as most people rejected it.

Then they figured out that the best social-engineering (in the 'hacker' sense) mechanism was to add a service (chat and share with your friends; give us your email and their emails so we can connect you by sending them invites, etc.): have people validate their personal email, offer a novel e-'service' to 'connect' with friends within some context, and have users pre-validate the email list through their invites and contacts... then parlay the MLM structure into better, more validated email lists.

Then you sell the lists on the black market to spammers looking to avoid a high bounce rate, since they're based on real emails.

Then they started nefariously stealing your contacts with auto-opt-in agreements and such....

Then, as the battle between spam and socially-interesting services ramped up, the spam companies (such as Postini, which was bought by Google) became the spam filters (selling their services to BigCorps) and began to realize that filtering on the sending IPs was a good measure for determining spam (along with rate-limiting and other aspects), such that spammers were getting blocked based on delivery IP blocks.

This set-off a market incentive for spammers to buy up swaths of IPv4 blocks so they could swap out IPs...

Then there were many ranges, sources, traceroutes etc. used to determine senders and ID them as spammers....

So the spammers invented VPN/tunneling delivery routes such that they could send from a central pool of machines via a number of global relays, and be delivered to the endpoints from a variety of global IP blocks.

There was a market for IPv4 blocks all over the world and spammers were spending big bucks on all aspects, from paying for the IP blocks, relationships with ISP/VPN/etc tech....

All while attempting to provide what was a thin layer of utility service to the user to keep what was effectively continued access to the growing address books of their users and keep them engaged on the platform such that they could keep knowing if existing or new contacts were valid.

There were even back-room deals between spammers/tech/isp etc to allow access.

So, the "social networks" we know know of were birthed literally upon spam.

-

Have you ever wondered why, as soon as TikTok came out, all of a sudden a fuck-ton of spam was hitting your Gmail inbox (previously Postini)? <-- Because TikTok was eating the revenue lunch.

Zuck literally stated that the entire revenue model for FB was "senator, we sell ads"

In an interview with Google, they asked "what kind of company do you think Google is?" "Well, most people think you're a search engine, but you're actually an advertisement-correlation engine."

In an interview with Twitter (don't forget about the infamous AT&T room 641A?) - what do you think Twitter is: "Twitter is a global sentiment-monitoring engine" (this was ~2006 or '08? I can't recall).

--

Source: I know these founders and many of the original devops members from the above companies, and other more scary outcomes from the above statements.

And here we are today with the advanced learning all built upon "consumption" ad algos

bsnnkv · 3 years ago
My experience has been that after having become comfortable with Nix, self-hosting is the path of least resistance for the majority of "mainstream" tooling where you can pick between paying for a SaaS or self-hosting. So nice to not have to deal with Docker containers to deploy (most things) anymore.

I see a lot of people suggesting hosting on a VPS, but I feel that a Hetzner Auction box is often much better bang-for-buck and serves as a nice remote dev/build box for projects that need that extra oomph when you aren't working from a capable desktop or laptop.

[1]: This was the article that finally opened my eyes to the power of Nix for self-hosting https://arne.me/blog/plex-on-nixos, and it is such a huge upgrade from the previous Docker setup I was running[2]

[2]: https://github.com/madslundt/docker-cloud-media-scripts

PuffinBlue · 3 years ago
That was a great write up on nix. Nix isn't something I know much about at all so thanks for the link.

What surprised me the most was learning that rclone can mount object storage locally! That's very interesting to learn :-)

simongray · 3 years ago
One of the main reasons I use Docker is being able to run the exact same Dockerfiles locally and in prod, with virtualisation taken care of automatically on e.g. Mac.

Is Nix a viable alternative to that?

ParetoOptimal · 3 years ago
> Is Nix a viable alternative to that?

Yes, it even provides stronger reproducibility guarantees: what you build locally and what's on prod are exactly the same.

You can also build a docker container from the Nix expression, see:

https://nix.dev/tutorials/building-and-running-docker-images

If you are interested, I recommend then also checking out https://zero-to-nix.com/

bsnnkv · 3 years ago
Again, just my experiences as a long-time DevOps person: you can build Dockerfiles on two different machines and get two entirely different results (i.e. success vs failure), and especially on macOS, Docker performance is quite poor, even more so when mounting directories from the host.

Nix on the other hand will produce the same result every time wherever I run it. This alone for me is enough reason to prefer it over Docker the majority of the time.

seqizz · 3 years ago
Oh, I was looking for the thread of the cult :) /s

As a fellow follower, I can also recommend SNM[0] if anyone wants to self-host their e-mail. It works with zero maintenance, except for upgrades, which require a few lines to change.

[0]: https://gitlab.com/simple-nixos-mailserver/nixos-mailserver

einhverfr · 3 years ago
At PGConf India, one of the keynotes addressed exactly this topic. The largest stock broker firm in India had made the decision to self-host everything. The CTO made a number of points that I think are missed in this discussion and article, namely:

1. You may think you are a software company, but HR, accounting, etc. are just as critical to your operations as the customer product. Therefore there isn't really the distinction between core business and non-core business that people like to imagine, and

2. By self-hosting you ensure you learn the technology and can therefore respond to problems yourself. In an environment where businesses are increasingly on the hook for defects in their services to the end user, that's a good thing.

Obviously hiring knowledgeable people is probably the bottleneck but it is still a cost saver and it is important to create an organizational culture where people can learn the technology on the job.

sgt · 3 years ago
Agreed fully. You might risk slightly more downtime, but the overall benefit of owning it yourself is well worth it long term.
einhverfr · 3 years ago
Over time, if you come to understand the technology, you can fix things a managed service cannot, so you might actually risk less downtime if you prioritize that.

At least that's my experience based on fighting weird bugs on managed database services.

efields · 3 years ago
This is me, a principal frontend engineer and "player-coach" team lead of a few engineers at a pharmaceutical company. We do as much as we can for the various digital properties, short of intense design (leaders want a willing agency they can torment).
InnerGargoyle · 3 years ago
can you link it?
einhverfr · 3 years ago
https://pgconf.in/conferences/pgconfin2023/program/proposals...

Maybe check back in a week and see if they have the video up.

linsomniac · 3 years ago
Self-hosting is a big operations problem, with few tools to automate it.

Long ago, I had an associate tell me that he was having some success with setting up Wordpress sites for local political organizations. I said to him: "Oh, that's really neat! What are you doing to ensure that the sites stay up to date with security patches?" His response was completely unrelated to my question, which I figured was my answer and was why there are so many hacked sites out there.

Anything I deploy needs to have an upgrade plan. Ideally, something that provides a package (either on distro or a repo the package provides), so "apt update" will resolve it. Docker can be a good way as well, Sentry does a pretty good job at this.

KronisLV · 3 years ago
> Anything I deploy needs to have an upgrade plan. Ideally, something that provides a package (either on distro or a repo the package provides), so "apt update" will resolve it. Docker can be a good way as well, Sentry does a pretty good job at this.

I wrote an article called "Never update anything": https://blog.kronis.dev/articles/never-update-anything which in truth argued that while updates are necessary, they're also going to break things... a lot. And there isn't always going to be an upgrade path either (e.g. using AngularJS or Clusterpoint).

In my experience, even containers break things surprisingly often: everything from GitLab, Nextcloud, and OpenProject to regular server updates, like a Debian update breaking GRUB, or another Debian install automatically launching exim4, which prevented my own mail server from working.

Perhaps that's because of how we build and package software: we don't separate the runtime from the data enough (e.g. persistent directories), or we make too many assumptions about the environment...

Regardless, I can understand why some don't even update working-but-insecure software: because of the risk of turning it into secure-but-not-working software.

XCSme · 3 years ago
I agree. I've run many WordPress sites, UXWizz dashboards, and even my own tools written in Node.js, and they never once broke by themselves in 10+ years. The only time there's an issue is when I decide to update or change something. In general, software that has once worked will always work. If I open my PSP (PlayStation Portable) from 2005, it still works as well as it did on day one: all the games run the same, and the boot time and interface are faster than most consoles today. Why does it still work? Because it once worked and nothing changed.
linsomniac · 3 years ago
Agreed, updates are probably at some point going to break things, and you're going to have to spend time fixing them, maybe even recovering from backups... As I said, an operations problem.

Leaving things without upgrades is also a problem as well though, due to security problems.

dicknuckle · 3 years ago
Anecdote: I also had a container app fail, from a trusted provider that I assumed would have plenty of testing in place, but maybe it was too synthetic.

Sonarr from LinuxServer.io briefly changed to a dev build and boned the DB before switching back to a stable build.

x0x0 · 3 years ago
The entire discussion on the link obscures the fact that SaaS companies are providing a real service. Even if you don't want the product to be updated, staying abreast of security patches, external API changes, OS changes, client changes, browser changes, etc. is real work. Self-hosting requires the person hosting to do all the keep-the-lights-on (KTLO) work themselves.
chillfox · 3 years ago
I have run a lot of WordPress sites; it's easy for a skilled admin to run one securely for a long time with barely any effort, and unfortunately it's also easy for a user to make it insecure.
linsomniac · 3 years ago
Sure, and that was what I floated with that guy, but the impression I got from him is that he installed it and moved on. So, as you say, a skilled admin can do it, given some ongoing attention. Which is exactly what I'm talking about WRT operations.
triyambakam · 3 years ago
> which honestly kind of upset me a lot

I've seen this language more and more frequently: minimized (kind of) + maximized (a lot) qualifiers. No real insight, just interesting.

scubbo · 3 years ago
In my idiom, at least, "kind of" is not solely a diminisher, but can also be an approximator. To say something "kind of upset me" _could_ mean "it upset me, but not a great deal", or it could mean "it had an effect on me which is complicated and difficult to concisely describe, but which can be approximately described as 'upset'". In that reading, this isn't a contradiction at all: "which honestly had an extremely large effect on me which was similar to, but not entirely the same as, being upset".
rhaway84773 · 3 years ago
I don’t think the “kind of” here is serving to minimize the “upset ness”. I think it’s describing the fact that the person wasn’t really “upset”, but some other emotion which they can’t express, which was kind of like being upset, but not exactly the same.
eointierney · 3 years ago
As a modifier it's kind of a mollifier

Edit: just looked it up and Wikipedia has a definition I didn't know :)

https://en.m.wikipedia.org/wiki/Mollifier

However, in colloquial usage 'round these parts, to mollify means to soften or make gentle

https://www.etymonline.com/word/mollify#etymonline_v_17411

powersnail · 3 years ago
To my non-native speaker ear, “a lot” indicates the strength of the emotion (“very upset”), while “kind of” is a defensive wording indicating a lack of objectivity or certainty (“not saying it’s objectively annoying, but it does upset me”). It shows up a lot, in my experience, when people are talking about something anecdotal or subjective.
zeroonetwothree · 3 years ago
I think "kind of" is being used as a modal marker here, similar to how "like" is used. In particular, this could have been said as "which honestly like upset me", but because of some negative backlash against excessive use of "like" as a marker, people have switched to using other words.

This particular use of the word is trying to "soften the blow" of the discomfort that the listener (in this case, reader) would feel at the un-modified phrase. So if you just said "which honestly upset me a lot" that might seem like an extreme reaction for just a price increase of some service (it's not as if someone is dying), so the "kind of" is added to signal to the listener/reader that the speaker acknowledges that this is perhaps too-strong language for the situation.

Jedd · 3 years ago
In terms of annoying-once-you-notice-it idioms, should readers assume dishonesty on all other statements made by people that prefix only some small subset of their claims with 'Honestly ...' and 'To be honest ...'?
zeroonetwothree · 3 years ago
It's typically used for statements that might otherwise be interpreted non-literally or hyperbolically. Of course the risk is that over time it becomes so commonplace that it starts to be used itself to connote hyperbole, much like how "literally" has come to mean "figuratively" in many uses. But c'est la vie.
cal85 · 3 years ago
No, because they mean "honest" in the sense of "frank/unguarded", not "truthful".
_dain_ · 3 years ago
no, it's just carelessness
bitsinthesky · 3 years ago
Nice catch. I've been using this construction and was oblivious to its contradiction until now :) I might start seeing how far I can stretch it to make it obvious how silly it is. "Which honestly did not at all upset me a ridiculous amount." Sounds unhinged.
jeppester · 3 years ago
This is definitely a thing, and I worry that I'm guilty of it myself.

I don't know if I should thank you for this insight or if you just cursed me.

creativenolo · 3 years ago
This. I've seen a lot of people using this on its own more and more frequently too.
fabianhjr · 3 years ago
It's better to design, implement, and use local-first software: https://www.inkandswitch.com/local-first/
__MatrixMan__ · 3 years ago
I'm developing such an app. I'm excited to get to the network connectivity part so I can see how much I've saved by making the client smart.

I think I'm going to be able to get away with just running the server for 36 minutes a day (three minutes every hour). The client will know to sync data during those time windows. 1hr of latency is fine for a lot of things if the client is smart about what it caches.
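A client-side sketch of that window logic (the function names and the 3-minute figure are illustrative, not from an actual implementation):

```python
from datetime import datetime, timedelta

# Hypothetical schedule: the server is only up for the first
# few minutes of each hour; clients sync inside that window.
SYNC_WINDOW_MINUTES = 3

def in_sync_window(now: datetime) -> bool:
    # True while the server is expected to be listening.
    return now.minute < SYNC_WINDOW_MINUTES

def next_sync_window(now: datetime) -> datetime:
    # If we're inside a window, sync immediately;
    # otherwise wait for the top of the next hour.
    if in_sync_window(now):
        return now
    return now.replace(minute=0, second=0, microsecond=0) + timedelta(hours=1)
```

A smart client would queue writes locally and flush them at `next_sync_window`, which is what makes the hour of latency tolerable.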

triyambakam · 3 years ago
What is the app?
triyambakam · 3 years ago
Very cool, and interesting that Martin Kleppmann of DDIA is an author. I am glad to come across this - I was brainstorming such a manifesto, now I can use this as a resource.

One local-first move I recently made was migrating from ynab.com to my own LibreOffice Calc spreadsheets. It took a few days to figure out all the formulas, but now I have even more control over how I track my budget.

grvdrm · 3 years ago
Do you integrate your bank accounts/etc via API or do you pull the data into the sheet manually on some periodic basis? Second question - are you using your own categories or do you rely on bank/card?

Asking as I’m sort of in the middle of the two. I keep a mostly complete spreadsheet of my expenses but that doesn’t account for things that I purchase regularly like groceries/Amazon/etc. Trying Copilot for a year right now as well.

r1cka · 3 years ago
Care to share your Libre Calc spreadsheet formulas?
triyambakam · 3 years ago
bordercases - I can't reply to your comment, it's dead.
bordercases · 3 years ago
Interested in sharing?
flakeoil · 3 years ago
If you have a family and your family uses all this self-hosted stuff (backup, file storage, email, etc.), what happens when you die or have a serious health issue? Do you think your spouse or kids can get the data out themselves? No one will have a clue where the backups are, how the emails are stored, where those pics are, etc. It's even harder if you've implemented some clever encryption, 2-factor auth, and so on.

We all think it's simple. It's just a matter of copying the files from "that" folder on the server to your own machine or a USB disk, but in practice it's not so easy for most people. Even for someone competent, it can be difficult to sort through the mess if it is not well documented.

I say this as someone who hosts the backup and all the pics on a NAS.

romwell · 3 years ago
> Do you think your spouse or kids can get the data out themselves? No-one will have a clue where the backups are, how the emails are stored, where those pics are etc.

As opposed to what, your spouse having no idea which cloud services you used for file storage, what login credentials you had, and having no ability to access those accounts?

> Even for someone competent it can be difficult to sort through the mess if it is not well documented.

The most difficult part is sorting through the mess, however it is documented.

It's been three years since my father's passing, and I have yet to open his laptop. I can't.

I don't need any credentials to access the box of old family photos to get them scanned, and yet it's still an item on the to-do list.