Since a couple of iOS versions ago, you've been able to toggle color filters on and off from Shortcuts and get similar behaviour, but it isn't perfect and sometimes glitches. I do this so my Photos app and a few others are in color, but the rest are in grayscale.
In TLS 1.3, the version in the record header is 3.1 (the one used by TLS 1.0), and later the client version is 3.3 (the one used by TLS 1.2). Neither is correct; they should be 3.4, or 4.0, or something incrementally larger than 3.1 and 3.3.
This number basically corresponds to the SSL 3.x branch that TLS descended from. There's a good website which visually explains this:
https://tls13.xargs.org/#client-hello/annotated
As for whether someone is correct for calling out TLS 1.x as SSL 3.(x+1), IDK how much it really matters. Maybe they're correct in some nerdy way, like I could have called Solaris 3 SunOS 6, and maybe there were some artifacts in the OS to justify my feelings about that. It's certainly more proper to call things by their marketing name, but it's also interesting to note how they behave on the wire.
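The version fields above are easy to see in the raw bytes. Here's a minimal Python sketch (field values taken from the annotated ClientHello on that site; the `tls_version` helper is just illustrative) that labels each two-byte version field the "SSL 3.x" way:

```python
# The three "version" fields visible on the wire in a TLS 1.3 ClientHello.
# The 5-byte record header carries 0x0301, the ClientHello's legacy_version
# carries 0x0303, and the real version (0x0304) hides in the
# supported_versions extension (type 0x002b).

record_header = bytes.fromhex("16 03 01 00 f8".replace(" ", ""))
# type=0x16 (handshake), record version=0x0301 ("TLS 1.0")

legacy_version = bytes.fromhex("0303")  # ClientHello.legacy_version ("TLS 1.2")

supported_versions_ext = bytes.fromhex("002b 0003 02 0304".replace(" ", ""))
# ext type=0x002b, ext len=3, list len=2, version=0x0304 (TLS 1.3)

def tls_version(b: bytes) -> str:
    """Illustrative helper: render a 2-byte version field as SSL 3.minor."""
    major, minor = b[0], b[1]
    return f"SSL 3.{minor}" if major == 3 else f"{major}.{minor}"

print(tls_version(record_header[1:3]))           # SSL 3.1 (i.e. TLS 1.0)
print(tls_version(legacy_version))               # SSL 3.3 (i.e. TLS 1.2)
print(tls_version(supported_versions_ext[-2:]))  # SSL 3.4 (i.e. TLS 1.3)
```

So on the wire, TLS 1.3 really does announce itself as "3.4", which is the whole SSL 3.(x+1) joke.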
When the bad guys want to DDoS the personal blog website, they don't go and figure out the correct amount they need to send to fill up the pipe/tube that directly connects to the personal blog website, they just throw roughly one metric f-ton at it. This fills up the pipes/tubes before the personal blog website too, and has the effect of disrupting all the other pipes/tubes downstream.
The result is that your hosting provider is pissed because their infrastructure just got pummeled, or if you're hosting it on your home/business ISP, they're pissed too. In both cases they probably want to fire you as a customer now.
The fault was two different clients with divergent goal states:
- one ("old") DNS Enactor experienced unusually high delays and needed to retry its update on several of the DNS endpoints
- the DNS Planner continued to run and produced many newer generations of plans [Ed: this is key: it's producing "plans" of desired state; a plan does not include a complete transaction like a log or chain with previous state + mutations]
- one of the other ("new") DNS Enactors then began applying one of the newer plans
- then ("new") invoked the plan clean-up process, which identifies plans that are significantly older than the one it just applied and deletes them [Ed: the key race is implied here. The "old" Enactor is reading _current state_, which was the output of "new", and applying its desired "old" state on top. The discrepancy is because apparently Planner and Enactor aren't working with a chain/vector clock/serialized change set numbers/etc]
- At the same time the first ("old") Enactor ... applied its much older plan to the regional DDB endpoint, overwriting the newer plan. [Ed: and here is where "old" Enactor creates the valid ChangeRRSets call, replacing "new" with "old"]
- The check that was made at the start of the plan application process, which ensures that the plan is newer than the previously applied plan, was stale by this time [Ed: Whoops!]
- The second Enactor’s clean-up process then deleted this older plan because it was many generations older than the plan it had just applied.
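The core bug in the steps above is a classic check-then-act race: the "is my plan newer?" check and the write are separate steps, so the check can go stale in between. Here's a minimal Python sketch; all the names (`Endpoint`, `apply_plan`, the generation numbers) are hypothetical, not AWS's actual code:

```python
# Hypothetical sketch of the check-then-act race: the freshness check and
# the write are not atomic, so a delayed "old" Enactor can clobber a newer
# plan that landed in the gap.
import threading
import time

class Endpoint:
    def __init__(self):
        self.applied_generation = 0
        self.lock = threading.Lock()  # guards each step, but not the gap between them

def apply_plan(endpoint, generation, delay=0.0):
    # 1. Check: is my plan newer than the previously applied plan?
    with endpoint.lock:
        ok = generation > endpoint.applied_generation
    if not ok:
        return
    # 2. ...delay here (the incident's "unusually high delays")...
    time.sleep(delay)
    # 3. Act: write, trusting the now-stale check.
    with endpoint.lock:
        endpoint.applied_generation = generation

ep = Endpoint()
old = threading.Thread(target=apply_plan, args=(ep, 5, 0.2))   # old Enactor, slow
new = threading.Thread(target=apply_plan, args=(ep, 100, 0.0)) # new Enactor, fast
old.start(); new.start(); old.join(); new.join()
print(ep.applied_generation)  # 5 — the stale plan overwrote generation 100
```

Both Enactors pass their check against generation 0; "new" writes 100 immediately, then "old" wakes up and writes 5 on top. After that, clean-up deletes the plans it thinks are obsolete and you're stuck.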
Ironically Route 53 does have strong transactions for API changes _and_ serializes them _and_ has closed-loop observers to validate change sets globally on every dataplane host. So do other AWS services. And there are even some internal primitives for building replication or change set chains like this. But it's also a PITA, takes a bunch of work, and when it _does_ fail you end up with global deadlock and customers who are really grumpy that they don't see their DNS changes going into effect.
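One cheap way to close this particular race, short of full serialized change-set chains, is to make the "newer than" check and the write a single atomic step (a conditional write / compare-and-swap). A hedged sketch, again with hypothetical names and not anyone's actual implementation:

```python
# Sketch: fold the freshness check and the write into one atomic operation,
# so a stale Enactor's comparison can never go stale between check and act.
import threading

class Endpoint:
    def __init__(self):
        self.applied_generation = 0
        self.lock = threading.Lock()

    def apply_if_newer(self, generation):
        # Check and write under the same lock: a late, older plan is
        # rejected instead of silently overwriting a newer one.
        with self.lock:
            if generation <= self.applied_generation:
                return False
            self.applied_generation = generation
            return True

ep = Endpoint()
print(ep.apply_if_newer(100))  # True  — new Enactor applies generation 100
print(ep.apply_if_newer(5))    # False — old Enactor's late write is rejected
print(ep.applied_generation)   # 100
```

In a distributed setting you'd want the equivalent conditional write at the datastore (e.g. a conditional put keyed on the generation number) rather than an in-process lock, but the shape of the fix is the same.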