In principle, you could imagine the server packing up all the external resources the browser will definitely ask for and sending them along with the original page. But I'm not sure how much re-engineering that would require.
I'm personally pretty skeptical that the first round of PQC algorithms is free of classically exploitable holes, and I have seen no evidence yet that anyone is close to developing a computer of any kind (quantum or classical) capable of breaking 16k RSA or ECC on P-521. The problem I personally have is that the lattice-based algorithms are a hair too mathematically clever for my taste.
The standard line is store-now-decrypt-later, though, and I think it's a legitimate one if you have information that will need to stay secret for 10-20 years. Few people actually have that kind of information.
I was under the impression that this was the majority opinion. Is there any serious party that doesn't advocate hybrid schemes, where you need to break both well-worn ECC and PQC to get anywhere?
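For illustration: the hybrid constructions being rolled out boil down to deriving the session key from both shared secrets, so an attacker has to break both primitives. A minimal sketch with stand-in secrets and a made-up label – not a real handshake:

```python
import hashlib
import hmac
import os

def kdf(label: bytes, ikm: bytes) -> bytes:
    """HKDF-extract-style step: mix the input keying material under a label."""
    return hmac.new(label, ikm, hashlib.sha256).digest()

ss_ecdh = os.urandom(32)  # stand-in for an X25519 shared secret
ss_pqc = os.urandom(32)   # stand-in for an ML-KEM shared secret

# Concatenate-then-KDF: knowing either input alone tells you nothing
# about the session key, so both ECC and the PQC KEM must fall.
session_key = kdf(b"hybrid-demo", ss_ecdh + ss_pqc)
print(session_key.hex())
```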
> The standard line is store-now-decrypt-later, though, and I think it's a legitimate one if you have information that will need to stay secret for 10-20 years. Few people actually have that kind of information.
The stronger argument, in my opinion, is that some industries move glacially slowly. If we don't start pushing now, they won't be anywhere near ready when (or if) quantum computing attacks become feasible. Take industrial automation: implementing strong authentication and integrity protection, versatile authorization, and reasonable encryption into what would elsewhere be called IoT is only now becoming a trend. The state of the art is still "put everything inside a VPN and we're good". These devices usually have an expected operational lifetime of at least a decade, often more than one.
To also give the most prominent counterargument: quantum computing threats are far from my greatest concern in these areas. The most important contribution to "quantum readiness"[1] is simply making it feasible to update these devices at all once they are installed at the customer.
[1] Marketing is its own kind of hell. Some circles have begun to use "cyber" interchangeably with "IT security" – not "cyber security", mind you, just "cyber".
(And my highest preference would be for vendors to be forced to publish both server and client code as free software if they stop offering their service at reasonable prices. Not only for games, but for all services and connected devices. Getting political support for such regulation is, of course, extremely hard.)
I know this might look like a valid approach at first glance, but... it looks stupid to anyone who knows how git or the GitHub API works. GitHub's remote reflog is not GC'd immediately: you can get commit hashes from the events history via the API, and then fetch the corresponding commits from the reflog.
You need to know both how git works and how GitHub's API works. I would say I have a pretty good understanding of how (local) git works internally, but I was deeply surprised by GitHub's brute-forceable short commit IDs and the existence of a public log of all reflog activity [1].
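To show how little is needed for the first step: a sketch against a public repo, using the documented events endpoint (whose PushEvent payloads expose commit SHAs) and the commits endpoint. OWNER/REPO are placeholders, and error handling is omitted:

```python
import json
import urllib.request

def gh_get(path: str):
    """Fetch a GitHub REST API path and decode the JSON response."""
    req = urllib.request.Request(
        "https://api.github.com" + path,
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# PushEvents in the public event stream carry the pushed commit SHAs.
events = gh_get("/repos/OWNER/REPO/events")
shas = [c["sha"]
        for e in events if e["type"] == "PushEvent"
        for c in e["payload"].get("commits", [])]

# A commit found this way can still be fetched by hash even if the
# branch that contained it has since been deleted (dangling, un-GC'd).
if shas:
    commit = gh_get("/repos/OWNER/REPO/commits/" + shas[0])
    print(commit["commit"]["message"])
```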
When the article said "You might think you’re protected by needing to know the commit hash. You’re not. The hash is discoverable. More on that later.", I was not able to deduce what would come later. Meanwhile, data access by hash seemed like a non-issue to me – how would you compute the hash without having the data in the first place? Checking that a certain file exists in a private branch might be an information disclosure, but is not usually problematic.
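The resolution, for anyone equally puzzled: you don't compute the hash, you enumerate it. Git accepts abbreviated hashes, and the article's point is that GitHub resolves very short prefixes, so the search space collapses. The enumeration itself is trivial (actually probing a host with these prefixes is the part the article describes and is omitted here):

```python
from itertools import product

# Every 4-hex-character commit-hash prefix: only 16**4 = 65536 candidates.
prefixes = ["".join(p) for p in product("0123456789abcdef", repeat=4)]
print(len(prefixes))  # 65536 – a trivially brute-forceable space
```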
And in any case, GitHub has grown so far beyond its roots as a simple git host that implicit expectations change as well. If I self-host my git repository, my mental model is very close to git internals. If I use GitHub's web interface to click together a repository with complex access rights, I assume they have mechanisms in place to thoroughly enforce those access rights. I mean, GitHub organizations are not a git concept.
The kind of person who cares and reads about "the Xth century" can also trivially understand the date range involved.
The kind of person who can't tell that the 18th century is the 1700s and the 21st century is the 2000s would get little good out of reading history anyway, unless they first get the basics of counting, calendars, and so on down.
Second, your implicit premise is likely wrong. Different people have different talents and different challenges. Concrete example: in German we say eight-and-fifty for 58. Thus 32798 becomes two-and-thirty-thousand-seven-hundred-eight-and-ninety, where you constantly switch between higher- and lower-valued digits. There are many people, me included, who not infrequently produce "Zahlendreher" – transposed digits – because of that when writing such numbers down from hearing alone, e.g. 32789.

But then, there are also people for whom this is so much of a non-issue that when they dictate telephone numbers they read them in groups of two: 0172 346578 becomes zero-one-seven-two four-and-thirty five-and-sixty eight-and-seventy. For me this is hell, because when I listen to these numbers I have to actively switch them around in my head. Yet others don't even think about it and value the useful grouping it provides.

My current thesis is that the difference lies between auditory and visual perception. When they hear four-and-thirty they see 34 in their head, whereas I parse the auditory information purely auditorily.
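To make the switching concrete, here is a toy spelling of the two-digit groups (teens like "sechzehn" are irregular and ignored, as is the "ein"/"eins" distinction); note that the unit is spoken before the tens:

```python
UNITS = ["null", "ein", "zwei", "drei", "vier",
         "fünf", "sechs", "sieben", "acht", "neun"]
TENS = ["", "zehn", "zwanzig", "dreißig", "vierzig",
        "fünfzig", "sechzig", "siebzig", "achtzig", "neunzig"]

def german_pair(n: int) -> str:
    """Spell 20-99 German-style: the unit comes first, then the tens."""
    tens, unit = divmod(n, 10)
    if unit == 0:
        return TENS[tens]
    return UNITS[unit] + "und" + TENS[tens]  # 58 -> "achtundfünfzig"

# The phone-number grouping from above, 0172 346578 read as 34 65 78:
print([german_pair(n) for n in (34, 65, 78)])
# ['vierunddreißig', 'fünfundsechzig', 'achtundsiebzig']
```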
What I want you to take from my example is that these issues might not be training problems alone. I have used German number spelling since birth and have worked in a number-intensive field, and yet I continue to have these challenges. While I have not been deeply into history, I suspect that my trouble with "the Xth century" versus "the X-hundreds" might persist, or persist for a long time, even if I got more involved in the field.
When a monopoly uses its status in an attempt to gain another monopoly, that's a problem, and governments eventually strike this behavior down.

Sometimes it takes time, because you'd rather not go on an ideological power trip and break something that's useful to the country/world.
> Yes, free markets and monopolies are not incompatible.
How did you get from "efficient markets" to "free markets"? The first could be accepted as inherently valuable, while the latter clearly is not if this kind of freedom degrades to: "Sure, you can start your business, it's a free country. You will certainly fail, though, because there are monopolies already in place who hold all the power in the market."
Also, monopolies are regularly used to squeeze exorbitant shares of the added value from the other market participants – see e.g. Apple's App Store cut. Accepting that as "efficient" would be a really unusual use of the term with regard to markets.