https://www.spectrumsourcing.com/spectrum-news-feed/industry...
> Going forward, customers must order the full server system to obtain the motherboard.
Not sure how it was in the '90s; if it was harder then, it was probably because case designs were much worse. But PC building is not at its easiest today either, and was probably easier in the mid-2000s and 2010s (though, of course, it's still fun!):
- Graphics cards and CPUs are more power hungry; GPU power connectors, for example, now carry a real fire risk
- Graphics cards are also heavier, so physical strain and mounting location/orientation matter; some even ship with a "card holder" (a little pillar that supports the card's weight)
- There's now "RAM training" (which can make the first boot look like a failure), and in general, compatibility between a RAM kit's rated speed and the CPU seems less guaranteed
- RAM also seems a bit more sensitive to being seated perfectly in its slots now
- Storage drives now need to be screwed onto the motherboard (sometimes in hard-to-reach places, like under a huge CPU cooler) and may need heat sinks
- The number of PCIe lanes feels more limiting than it used to: multiple storage drives and the GPU fight for bandwidth on the motherboard, with caveats like "if you put an NVMe drive here and here, then that gets disabled...". Devices seem to have outgrown what even top-end consumer CPUs offer
Regarding your last point: that's just market segmentation. There are plenty of lanes on server CPUs. Remember Linus' rant about Intel's refusal to offer ECC on consumer CPUs?
Sounds like a lot, but I was paying almost the same before: 220€ for power at home, 110€ for a dedicated Hetzner server, and 95€ for a secondary internet connection (so as not to interfere with the main uplink my partner and I use for home office).
Not having to deal with the extra heat, noise, and used-up space at home has been worth it as well.
Initial setup is a handful of commands against Vault's CLI; from there, with CI in place, client certs are renewed automatically and services are restarted/reloaded as needed. Works flawlessly.
I should maybe write a (small) blog post explaining how it works.
My only beef is with this:
> If you're running a large domain, you'll get a bunch of these reports. If you're running a small one, you might be able to handle it yourself.
Even with a "small" domain, you're looking at basically another part-time job analyzing these reports. It's not fun, and you grow tired of it very quickly. Sure, if you're running one website and that's all you do, it might make sense. But for a web firm like mine (serving small businesses), there's no way I can set this up for clients without charging an extra fee, and most aren't willing to pay me to spend several hours each week analyzing DMARC reports.
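For what it's worth, the bulk of that grunt work is triaging the aggregate (rua) XML reports. A rough Python sketch of the first pass, flagging sources that failed both DKIM and SPF; the sample uses a simplified record structure (real reports carry more fields and arrive gzipped):

```python
import xml.etree.ElementTree as ET

# Toy aggregate report with two records (documentation IPs, made-up counts).
SAMPLE = """<feedback>
  <record><row>
    <source_ip>203.0.113.7</source_ip>
    <count>12</count>
    <policy_evaluated><dkim>fail</dkim><spf>fail</spf></policy_evaluated>
  </row></record>
  <record><row>
    <source_ip>198.51.100.2</source_ip>
    <count>40</count>
    <policy_evaluated><dkim>pass</dkim><spf>pass</spf></policy_evaluated>
  </row></record>
</feedback>"""

def failing_sources(report_xml: str) -> dict[str, int]:
    """Map source IPs that failed both DKIM and SPF to their message counts."""
    failures: dict[str, int] = {}
    for row in ET.fromstring(report_xml).iter("row"):
        pol = row.find("policy_evaluated")
        if pol.findtext("dkim") == "fail" and pol.findtext("spf") == "fail":
            ip = row.findtext("source_ip")
            failures[ip] = failures.get(ip, 0) + int(row.findtext("count"))
    return failures

print(failing_sources(SAMPLE))  # {'203.0.113.7': 12}
```

Even automated like this, someone still has to decide whether a flagged IP is a forwarder, a misconfigured service, or actual spoofing, which is where the hours go.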
Vault initialization and configuration are more or less manual (just a bunch of commands; I keep them in my notes). From there, I use an Ansible role based on the hashi_vault module [1], run by a Jenkins job every night, which logs into each target system, renews certs if needed, and reloads services.
It has been working very well for about a year now. Of course, a little more technical context is needed: my CA certificate needs to be present on every system interacting with it, and my CI needs to be able to log into each target system (SSH keypair + sudo user). This ties into the rest of my infrastructure, which is managed with Terraform and Ansible.
I might write up a small blog post about this if I find the time.
[1] https://docs.ansible.com/ansible/latest/collections/communit...