Famous to the point of being a cliché, the Titanic was thought to be unsinkable, and I would have a similarly hard time convincing the engineers behind the ship's design to believe otherwise.
The level of confidence you're displaying in predicting the unforeseeable is something you may want to take a deeper look at.
I understand that most of those leetcode corporations don't care much about resilience, and are likely even incapable of producing highly reliable systems, which may give you a false impression that reliability is an unachievable fantasy. But it's not: it's something we have plenty of research on and can do really well today if needed. We are not in the Titanic era anymore.
I have high confidence in these things (not in "predicting the unforeseeable") because I've done them myself. My edge infrastructure has had maybe half an hour of downtime total over many years, almost a decade now.
Speaking of things that don't make sense... if it's unforeseeable, one will have a difficult time adequately preparing for it.
> If you design for resilience, you get more resilience and you build confidence as you see the evidence how the system works in real world.
You simply can't foresee or eliminate all risk. This is referred to as "the turkey problem." It's not my idea, but one I certainly subscribe to.
https://www.convexresearch.com.br/en/insights/the-turkey-pro...
What if a political event impacts you, for instance? A pandemic? A storm taking out a major data center? A weird Linux kernel edge case that only happens beyond a certain point in time? That only sounds ridiculous because it hasn't happened, but weird things like that happen all the time. There are so many unseen possibilities.
I understand that might sound unreasonable or facetious or like I'm expanding the scope.
The point is, the more confident you are that you've built something with no SPOF, the more exposed you are to the risk of the one that probably does exist.
I remember when I first deployed a DNS-routed system, it was too reactive, constantly jumping between servers: the monitoring was too sensitive, it didn't wait for servers to stabilize before returning them to the mix, and the exponential backoff took servers out for far too long. But even with all that, it still avoided outages caused by data center failures and connectivity problems.
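For what it's worth, here is a minimal sketch of the kind of health-check bookkeeping described above, with the stabilization window and capped backoff that address those early mistakes. All names and thresholds are hypothetical, not taken from the actual system:

```python
import time

# A sketch only; thresholds are made up and depend on your traffic and SLOs.
STABLE_CHECKS = 3     # consecutive passes before a server rejoins the pool
BACKOFF_BASE = 10     # removal time after the first failure, in seconds
BACKOFF_CAP = 300     # cap so backoff never parks a server for too long

class ServerHealth:
    def __init__(self, address):
        self.address = address
        self.healthy = True
        self.consecutive_passes = 0
        self.failures = 0
        self.removed_until = 0.0

    def record_probe(self, passed, now):
        if passed:
            self.consecutive_passes += 1
            # Wait for the server to stabilize instead of re-adding it
            # on the first good probe (the "too reactive" mistake).
            if self.consecutive_passes >= STABLE_CHECKS:
                self.healthy = True
                self.failures = 0
        else:
            self.consecutive_passes = 0
            self.failures += 1
            self.healthy = False
            # Capped exponential backoff, so a flapping server isn't
            # taken out of the mix for far too long.
            delay = min(BACKOFF_BASE * 2 ** (self.failures - 1), BACKOFF_CAP)
            self.removed_until = now + delay

def servers_to_publish(servers, now):
    """Addresses the DNS layer should currently hand out."""
    pool = [s.address for s in servers if s.healthy and now >= s.removed_until]
    # Never publish an empty answer; serving everything is better than
    # letting our own monitoring take the whole site down.
    return pool or [s.address for s in servers]

if __name__ == "__main__":
    servers = [ServerHealth("192.0.2.1"), ServerHealth("192.0.2.2")]
    servers[0].record_probe(False, time.time())
    print(servers_to_publish(servers, time.time()))  # ['192.0.2.2']
```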
Taking accountability and having backup plans are extremely important, but you simply can't remove every last shred of dependence. You eventually have to accept that there are things that are out of your control and may take you by surprise despite best efforts.
Other than that, it's your choice whether to make your infrastructure dependent on a bunch of unreliable centralized SPOFs from big corporations, or to build highly available infrastructure relying on servers from many different providers, running your own DNS servers with DNS routing, failover, and so on. You will definitely beat Cloudflare's availability this way many times over.
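As a rough illustration of what "your own DNS servers with DNS routing" can look like, here is a toy authoritative responder built on the third-party dnslib package. The addresses, TTL, and the POOL health flags are all made up; in practice they'd be fed by a probe loop like the one sketched above:

```python
# Toy authoritative DNS responder that hands out only healthy addresses.
# Requires the third-party dnslib package (pip install dnslib).
from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

# Servers at different hosting providers; these health flags would be
# maintained by a health-check loop, not hard-coded like this.
POOL = {
    "198.51.100.10": True,   # provider A
    "203.0.113.20": True,    # provider B
    "192.0.2.30": False,     # provider C, currently failing probes
}

class FailoverResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        if request.q.qtype == QTYPE.A:
            healthy = [ip for ip, ok in POOL.items() if ok]
            # Short TTL so clients re-resolve quickly after a failover.
            for ip in healthy or list(POOL):
                reply.add_answer(RR(request.q.qname, QTYPE.A,
                                    rdata=A(ip), ttl=30))
        return reply

if __name__ == "__main__":
    DNSServer(FailoverResolver(), port=5353, address="0.0.0.0").start()
```

Run a copy of this at two or more providers and list each as an NS record for the zone, so the routing layer itself has no single point of failure; the short TTL trades some extra resolver traffic for faster failover.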
That is, you have to be discontented with things something like 99% of the time. That discontent compels you to improve them, but the price is being less than happy most of the time.
The names of once-trusted news companies have stayed the same, but that's about the only thing about them that has.
I believe the tipping point was smartphones, and I find it very ironic that Steve Jobs showed off the iPhone's ability to load up The New York Times in its reveal keynote in 2007.
This was the exact topic of my first substack piece on Monday if anyone is interested. https://benlumen.substack.com/p/thank-god-i-never-went-into-...
I did feel a little vindicated reading Bari Weiss' NYT resignation letter the next day saying that "Twitter has become its ultimate editor".
Objective journalism was never a thing. That's why Manufacturing Consent happened, along with all the works from Edward Bernays all the way to Noam Chomsky.
What journalism did have before, though, was more consistency in worldview: mass media was very centralized and pushed much more uniform propaganda, with nothing to oppose it.
But, in reality, it was probably a mix of all 3 and some more.
When a 3rd party has access to your keys, their responsibilities to you are spelled out in your contract with them. That's true for CDNs as well as hosting companies.
For most websites today, if someone can intercept traffic somewhere close to the server, they don't even need the keys: they can fake responses to pass CA domain validation, issue valid certificates with their own keys, and MITM as if there were no encryption.
And cold-boot attacks by hosting provider staff, dumping memory to find keys, aren't that realistic a threat, just as putting servers into a locked cage on someone else's property isn't much of a protection.