(I do not think it was AI.)
The most egregious, of course, is ISPs rewriting TTLs (or resolvers that just ignore them). But there are other implementation issues too, like caching things that shouldn't be cached, or caching them incorrectly. I've seen resolvers that cache a CNAME and the A record it resolves to with the TTL of the CNAME (which is wrong; each record has its own TTL).
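To make the correct behaviour concrete, here's a minimal sketch of a hypothetical resolver cache (names, TTLs, and the `Cache` class are all illustrative, not any real resolver's code): each record in a CNAME chain is stored with its own TTL, so the A record can expire while the CNAME is still fresh.

```python
import time

class Cache:
    def __init__(self):
        self._store = {}  # (name, rtype) -> (value, expires_at)

    def put(self, name, rtype, value, ttl, now=None):
        now = time.time() if now is None else now
        self._store[(name, rtype)] = (value, now + ttl)

    def get(self, name, rtype, now=None):
        now = time.time() if now is None else now
        entry = self._store.get((name, rtype))
        if entry is None or entry[1] <= now:
            return None  # missing or expired
        return entry[0]

# Suppose the authoritative answer was:
#   www.example.com.  300  IN CNAME  lb.example.net.
#   lb.example.net.    30  IN A      192.0.2.1
cache = Cache()
cache.put("www.example.com", "CNAME", "lb.example.net", ttl=300, now=0)
cache.put("lb.example.net", "A", "192.0.2.1", ttl=30, now=0)

# 60 seconds later the A record has expired even though the CNAME is
# still fresh -- a correct resolver re-queries the A record here. The
# broken resolvers described above would keep serving the stale A for
# the full 300 seconds of the CNAME's TTL.
assert cache.get("www.example.com", "CNAME", now=60) == "lb.example.net"
assert cache.get("lb.example.net", "A", now=60) is None
```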
I'm also very concerned about the "WHY DNS MATTERS FOR SYSTEM DESIGN" section. While everything there is correct enough, it doesn't dive into the implications of each point and how things go wrong.
For example, using DNS for round-robin load balancing is an awful idea in practice, because a big resolver like Comcast's will cache one IP of three, and all of a sudden 60% of your traffic is going to one IP. There's a similar issue with regional IPs. There are so many ways for the wrong IP to get into a cache.
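A quick sketch of how that skew happens. The IPs, resolver sizes, and request counts below are all made up for illustration, but the shape is realistic: one huge ISP resolver has cached a single A record out of the round-robin set, and every client behind it hits the same backend.

```python
backends = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

# (requests per minute from clients behind this resolver, cached IP).
# Each recursive resolver cached ONE answer from the round-robin set.
resolvers = [
    (600, "192.0.2.1"),  # big ISP resolver, pinned to one backend
    (150, "192.0.2.2"),
    (150, "192.0.2.3"),
    (100, "192.0.2.1"),  # smaller resolver that cached the same IP
]

traffic = {ip: 0 for ip in backends}
for requests, ip in resolvers:
    traffic[ip] += requests

# The "round robin" ends up wildly unbalanced:
# 700 of 1000 requests hit one box, 150 each hit the other two.
```

The authoritative server dutifully rotated its answers; the cache topology decided the actual traffic split anyway.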
There is a reason we say "it's always DNS".
you mean DNSSEC, right? RIGHT?
1 - Cloud – This minimises cap-ex, hiring, and risk, while largely maximising operational cost (it's expensive) and cost variability (usage-based).
2 - Managed Private Cloud – What we do. Still minimal-to-no cap-ex, hiring, or risk, and a medium-sized operational cost (around 50% cheaper than AWS et al). We rent or colocate bare metal, manage it for you, handle software deployments, deploy only open source, etc. Only really makes sense above €$5k/month spend.
3 - Rented Bare Metal – Let someone else handle the hardware financing for you. Still minimal cap-ex, but with greater hiring/skilling needs and risk. Around 90% cheaper than AWS et al (plus time).
4 - Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills, scale, cap-ex, and if you plan to run the servers for at least 3-5 years.
A good provider for option 3 is someone like Hetzner. Their internal ROI on server hardware seems to be around the 3-year mark, after which I assume it is either still running with a client or goes into their server auction system.
Options 3 & 4 generally become more appealing either at scale, or when infrastructure is part of the core business. Option 1 is great for startups who want to spend very little initially, but then grow very quickly. Option 2 is pretty good for SMEs with baseline load, regular-sized business growth, and maybe an overworked DevOps team!
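As a back-of-envelope sketch of the four options: all the numbers below are made up purely for illustration, using only the rough ratios from above (option 2 ~50% cheaper than cloud, option 3 ~90% cheaper); the hardware price, amortisation period, and colo fee for option 4 are my assumptions, not quotes.

```python
cloud = 10_000                    # option 1: assumed monthly cloud bill, EUR

managed_private = cloud * 0.50    # option 2: ~50% cheaper than cloud
rented_bare_metal = cloud * 0.10  # option 3: ~90% cheaper (plus your time)

# Option 4: amortise an assumed 25k hardware purchase over 5 years,
# plus an assumed 400/month colocation fee. Cheapest on paper, but only
# if you have the skills and actually run the boxes that long.
colocated = 25_000 / (5 * 12) + 400   # ~817/month

assert colocated < rented_bare_metal < managed_private < cloud
```

The ordering is the point, not the exact figures: the cheaper the option, the more cap-ex, skill, and commitment it demands.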
[0] https://lithus.eu, adam@
> Buy and colocate the hardware yourself – Certainly the cheapest option if you have the skills
Back then this type of "skill" was abundant. You could easily get sysadmin contractors who would drive down to the data center (probably rented facilities in a building that belonged to a bank or an insurance company) to swap out disks that had died for some reason. Such a person was full stack in the sense that they covered backups, networking, and firewalls, and knew how to source hardware.
The argument was that this was too expensive and the cloud was better. So hundreds of thousands of SMEs embraced the cloud — most of them never needed Google-type scale, but got sucked into the "recurring revenue" grift that is SaaS.
If you opposed this mentality you were basically saying "we as a company will never scale this much" which was at best "toxic" and at worst "career-ending".
The thing is, these ancient skills still exist. And most orgs simply do not need AWS-type scale. European orgs would do well to revisit these basic ideas. And Hetzner or Lithus would be a much more natural (and honest) fit for these companies.
''As a personal note, I do not like this decision. To me LFS is about learning how a system works. Understanding the boot process is a big part of that. systemd is about 1678 "C" files plus many data files. System V is "22" C files plus about 50 short bash scripts and data files. Yes, systemd provides a lot of capabilities, but we will be losing some things I consider important.
However, the decision needs to be made.''
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, Linux plus systemd, or as I've recently taken to calling it, Linux/systemd.
Linux is not an operating system unto itself, but rather another free component of a fully functioning systemd system made useful by the systemd corelibs, systemd daemons, and vital systemd components comprising a full OS as defined by Poettering.