In one recent case (the slop article about adenosine signalling) a commenter posted a link to the original paper that the slop was engagement-farming off. I found that comment very helpful.
Remember people, 10,000 CPU hours of fuzzing can save you 5ms of borrow checking!
(I’m joking, I’m joking; Zig and Rust are both great languages, fuzzing does more than just borrow checking, and I do think TigerBeetle’s choices make sense. I just couldn’t help noticing the irony of those two sentences.)
Edit: ugh... if you rely on GH Actions for workflows, though, actions/checkout@v4 is also currently hitting the git issues, so no dice there either.
I assume most organizations, both small and large, just host on whichever provider they know or whichever costs them the least. If you have the budget, maybe you deploy to multiple providers for redundancy, but that increases cost and complexity.
Who’s going to bother with colo given the cost and complexity? Who’s going to run a server from their office given ISP restrictions and downtime fears?
What is the realistic antidote here?