I'm not necessarily defending the implementation, just pointing out another way in which time is irreducibly ambiguous and cursed.
For context, my current project is a monolith web app where the services are part of the monolith and are called with try/catch. I can see how microservices might offer faster, more independent, less risky recovery, but I don’t quite understand the fault tolerance gain.
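A minimal sketch of the pattern I mean, with all names hypothetical: the "service" is just an in-process function, and recovery is a try/catch at the call site.

```typescript
// Hypothetical in-process "service": may throw on bad input or downstream errors.
interface Invoice {
  id: string;
  total: number;
}

async function billingService(orderId: string): Promise<Invoice> {
  if (!orderId) {
    throw new Error("missing order id");
  }
  return { id: orderId, total: 42 };
}

// Caller inside the same monolith: recovery is a try/catch and a fallback path.
async function handleCheckout(orderId: string): Promise<void> {
  try {
    const invoice = await billingService(orderId);
    console.log(`billed ${invoice.id} for ${invoice.total}`);
  } catch (err) {
    // The fault is contained at the call site, but the service still shares the
    // caller's process, memory, and deploy cycle.
    console.error("billing failed, falling back to manual invoicing", err);
  }
}

handleCheckout("order-123");
```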
I have tens of thousands of ads in the collection, and cataloging them all by hand would take me many lifetimes, so I've been using AI to extract and catalog the metadata. I can get through about 100 ads/day this way.
One of my favorite ads, a computer from 1968 that "answers riddles": https://adretro.com/ads/1968-digi-comp-digi-comp-1-table-top...
When I first saw the scene, I thought: "That number of servers is not remotely enough to pull off something like this."
When I think of the scene now: "That number of servers can do far more than the scene portrays."
I mean, most of the tech presented in the series is close to standard operating procedure on mundane equipment now.
Scary.
For me, it’s a question of when, not if, this happens in real life.