So you have to modify every potential client before the constraint is actually enforced, which makes it effectively worthless: there is no way to roll it out in any meaningful sense.
This is where I get rankled.
In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.
Perhaps Intermediate CA is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the Root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”
The thinking is that this would allow more certificates to be issued internally and easily, but without the extra management layer of a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and help improve general security hygiene, instead of letting everything serve HTTPS with self-signed certificate warnings or letting every device talk to the internet for an HTTP-01 challenge.
If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.
The trust is always in the root itself.
It's not an active directory / LDAP / tree type mechanism where you can say I trust things at this node level and below.
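Strictly speaking, X.509 does have a subtree-style mechanism: the `nameConstraints` extension lets a root limit an intermediate to a DNS subtree. The catch, as noted above, is that it only helps if clients actually enforce the extension. A minimal OpenSSL sketch (throwaway keys, and `.internal.example` is a placeholder domain):

```shell
# Throwaway root CA (for illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 30 -subj "/CN=Example Root"

# CSR for the internal issuing CA
openssl req -new -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Internal Issuing CA"

# Extensions: it's a CA, but constrained to names under .internal.example
cat > int.ext <<'EOF'
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.internal.example
EOF

# Root signs the constrained intermediate
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 30 -extfile int.ext -out int.crt

# The constraint is baked into the intermediate cert itself
openssl x509 -in int.crt -noout -text | grep -A3 "Name Constraints"
```

A conforming client validating a leaf for, say, `nas.internal.example` would accept it, while a leaf for `google.com` issued by this intermediate must be rejected — but only if the validator implements the extension, which is exactly the rollout problem.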
This is the only way anything will ever change. GitHub is _easily_ the most unreliable SaaS product. There's not a week in which we aren't affected by an outage. Their reputation is mud.
This is a great headline and very impressive. However, it’s also somewhat puzzling to see the company spend so much investment money to build a small prototype plane that doesn’t resemble a commercial airliner in any way, break the sound barrier six times, retire it, and then conclude they’re on their way to delivering commercial supersonic passenger planes in five years.
Boom Aero is one of those companies I want to see succeed, but everything I read about them tickles my vaporware senses. Showing off a one-off prototype that doesn’t resemble the final product in any way (other than speed) is a classic sign of a company spending money to appeal to investors.
Retiring the plane after only a few flights is also a puzzling move. Wouldn’t they be making changes and collecting data as much as possible on their one prototype?
What are they using, then?
Now, with Copilot, I'd be surprised if they weren't profitable.
How would this work practically? If a single client is overflowing the edge router queues, you're kind of screwed already. Even if you dropped all packets from that client, you would still need to process the packets to figure out which client they belong to before dropping them.
I guess you could do some shuffle sharding, where each client is assigned to a small set of IP prefixes, and when a client misbehaves you withdraw those prefixes via BGP, essentially black-holing the routes for that client. If the sharding is done right, only the problem client loses all connectivity: other clients sharing one of the withdrawn prefixes still have their other prefixes.
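The assignment step can be sketched in a few lines. This is a hypothetical illustration (8 example prefixes from TEST-NET-2, shard size 2, hash-based assignment), not any provider's actual scheme:

```python
import hashlib
from itertools import combinations

# Hypothetical pool: 8 announced prefixes, each client gets a 2-prefix shard.
# C(8, 2) = 28 distinct shards, so few client pairs fully overlap.
PREFIXES = [f"198.51.100.{i * 16}/28" for i in range(8)]
SHARD_SIZE = 2

def shard_for(client_id: str) -> list[str]:
    """Deterministically map a client to one of the 28 prefix pairs."""
    shards = list(combinations(PREFIXES, SHARD_SIZE))
    h = int(hashlib.sha256(client_id.encode()).hexdigest(), 16)
    return list(shards[h % len(shards)])

# If "client-a" misbehaves, withdraw only its two prefixes via BGP.
# A client sharing one of those prefixes keeps its other prefix, so a
# total outage hits only clients whose shard is an exact match.
victim_shard = set(shard_for("client-a"))
fully_affected = [c for c in ("client-b", "client-c", "client-d")
                  if set(shard_for(c)) == victim_shard]
print(victim_shard)
print(fully_affected)
```

With more prefixes the combinatorics improve quickly (e.g. C(16, 3) = 560 shards), which is the usual argument for shuffle sharding: blast radius shrinks without per-client prefixes.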