I was responsible (working with consultants who did this for a living) for working with the project manager and the architect/GC on all of the datacenters (back when companies put datacenters in their buildings), IDFs, and MDFs. The MDF in particular was complex, as it combined the floor's IDF, the building's MDF/telco connections, punchdowns, and a massive Nortel Option 51C set of cabinets. We carefully laid out the room, measuring the minimum possible clearance for cable techs to get in between the racks. Everything in the room was designed down to the 1/4".
I showed up (mostly at random) with a tape measure during construction - the internal walls were up - and they were off by almost 14", which would have made the interior almost unusable for its original purpose. They had to tear down the framing and pull everything out - thankfully before any electrical, racks, or HVAC had been put in place.
Having something like this would have greatly reduced that possibility. Bet they end up on every site (if they aren't already).
Nobody at commercial volume pays list to AWS - everyone gets a discount.
UV doesn't change any of that for me - it just wraps virtualenv and pip, and downloads dependencies (much, much) more quickly. The conversion was immediate and required zero changes.
UV is a pip/virtualenv wrapper. And it's a phenomenal wrapper - it absolutely changed everything about how I do development - but under the hood it's still just virtualenv + pip; nothing changed there.
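As a rough, from-memory sketch of what that looks like day to day (the exact invocations are illustrative; this is uv's pip-compatible interface):

    uv venv                              # roughly: python -m venv .venv
    uv pip install -r requirements.txt   # roughly: pip install -r requirements.txt

Same mental model as before, just faster.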
Can you expand on the pain you've experienced?
Regarding "things that need to be deployed" - internally all our repos have standardized on direnv (and in some really advanced environments, nix + direnv, but direnv alone does the trick 90% of the time) - so you just "cd <somedir>", direnv executes your virtualenv and you are good to go. UV takes care of the pip work.
That has eliminated 100% of our use of virtualenvwrapper and direct calls to pip. I'd love to hear a use case where that doesn't work for you - we haven't tripped across one recently.
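A minimal sketch of that .envrc, assuming direnv's stock "layout python" helper and/or a uv-managed .venv in the project root (paths and details are illustrative, not our exact setup):

    # .envrc (illustrative) - direnv runs this when you cd into the directory
    if [ -d .venv ]; then
        # reuse the uv-created virtualenv at .venv
        export VIRTUAL_ENV="$PWD/.venv"
        PATH_add "$VIRTUAL_ENV/bin"
    else
        # fall back to direnv's built-in python layout
        layout python3
    fi

From there, running uv inside the directory picks up the active virtualenv automatically.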
What's (still) super impressive is the 48 drives. Looking around, the common "storage" nodes in a rack these days seem to be 24x 24TB CMR HDDs + 2x 7.68TB NVMe SSDs (and a 960GB boot disk) - I don't know if anyone really uses 48-drive systems commonly (outside of edge cases like Backblaze and friends).
| We were still running on the older Intel Xeon E5 processors, ...
| Moving to the more modern Xeon Scalable processors showed major performance gains for our server application
But I was unable to find any mention in the article of which processors they were actually comparing in their before/after.
This is what you get when you have an educator who is completely dedicated to a single topic and surpasses all expectations of education.