They also implement child-specific locks, such as limiting how long kids can play a game and restricting play to specific hours (not during the night).
It'd be interesting to try and quantify both columns - the good and the bad.
Ideally, we would go back to being able to do some things 'fast' and hopefully do a bit better at avoiding the bad things.
The difference now, I guess, is that we eventually learned and (mostly) everything is built without issue, sometimes at the cost of time. But some countries are going through their own growing pains right now (with the tradeoff being money/people/shortcuts).
I learned Python more than 10 years ago, but later chose Rails to be my first web framework to learn, as I also wanted to learn more about Ruby, hence the question.
I don't think people realize the size of these compute units.
When the AI bubble pops is when you're likely to be able to realistically run good local models. I imagine some of these $100k servers going for $3k on eBay in 10 years, and a lot of electricians being asked to install new 240V connectors in makeshift server rooms or garages.
>A typical 1U or 2U server can accommodate 2-4 H100 PCIe GPUs, depending on the chassis design.
>In a 42U rack with 20x 2U servers (allowing space for switches and PDU), you could fit approximately 40-80 H100 PCIe GPUs.
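A quick back-of-the-envelope sketch of the rack math quoted above. The rack size, reserved units for switches/PDU, and per-server GPU counts are assumptions taken from the quote, not measured figures:

```python
# Rough rack-capacity arithmetic matching the quoted figures (assumptions, not specs).
RACK_UNITS = 42
RESERVED_UNITS = 2          # assumed space for top-of-rack switch + PDU
SERVER_UNITS = 2            # 2U chassis per server
GPUS_PER_SERVER = (2, 4)    # low / high H100 PCIe count per 2U server

servers_per_rack = (RACK_UNITS - RESERVED_UNITS) // SERVER_UNITS
gpu_low, gpu_high = (servers_per_rack * n for n in GPUS_PER_SERVER)

print(f"{servers_per_rack} servers/rack -> {gpu_low}-{gpu_high} H100 PCIe GPUs")
# 20 servers/rack -> 40-80 H100 PCIe GPUs
```

That reproduces the ~40-80 GPUs per rack figure, though real deployments also have to budget for power and cooling limits, which usually bind before the physical rack space does.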