IMO the most "unhinged" CPUs for AVX-512 are early batches of Alder Lake, the only CPU family with nearly full coverage of all existing AVX-512 subsets.
It's a shame that Intel seemed to really not want people to use it, given they started disabling the ability to use it in future microcode, and fused it off in later parts.
> It's a shame that Intel seemed to really not want people to use it
AVX-512 was never part of the specification for those CPUs. It was never advertised as a feature or selling point. You had to disable the E cores to enable AVX-512, assuming your motherboard even supported it.
Alder Lake AVX-512 has reached mythical status, but I think the number of people angry about it is far higher than the number of people who ever could have taken advantage of it and benefitted from it. For general purpose workloads, having the E cores enabled (and therefore AVX-512 disabled) was faster. You had to have an extremely specific workload that didn't scale well with additional cores and also had hot loops that benefitted from AVX-512, which was not very common.
So you're right: They never wanted people to use it. It wasn't advertised and wasn't usable without sacrificing all of the E cores and doing a lot of manual configuration work. I suspect they didn't want people using it because they never validated it. AVX-512 mode increased the voltages, which would impact things like failure rate and warranty returns. They probably meant to turn it off but forgot in the first versions.
I think I have two of these sitting in a box, one prototype with avx512 and one retail without. Is it worth me breaking these out for ML experiments and such?
And, as noted in the article, that's an instruction which only works on two desktop CPU architectures (Tiger Lake and Zen 5), including one where it's arguably slower than not using it (Tiger Lake).
Meaning... this entire effort was for something that's faster on only a single kind of CPU (Zen 5).
This article is honestly one of the best I've read in a long time. It's esoteric and the result is 99.5% pointless objectively, but in reality it's incredibly useful and a wonderful guide to low-level x86 optimization end to end. The sections on cache alignment and uiCA + analysis notes are a perfect illustration of "how it's done."
What a post. It must have taken a week just to write it, never mind the amount of time it took to actually come up with all this stuff and overcome all the obstacles mentioned. What dedication to improving the performance of phrase search.
Fascinating blog post. Having said that, it may seem like nitpicking, but I have to take issue with the point about recursion, which is often far too easily blamed for inefficiency.
The blog post mentions it as one of the reasons for the inefficiency of the conventional algorithm.
A glance at the algorithm shows that the recursion in question is a tail call. This means that any overhead can be readily eliminated using a technique known for nearly fifty years already.
Steele, Guy Lewis (1977). "Debunking the "expensive procedure call" myth or, procedure call implementations considered harmful or, LAMBDA: The Ultimate GOTO". Proceedings of the 1977 annual conference on - ACM '77.
The main problem with tail call optimization is that it's unreliable; small apparently unrelated changes elsewhere in the function, a difference in the compiler command line flags, or a different compiler version, could all make a tail call become a non-tail call. Some languages have proposed explicit markers to force a call to be a tail call (and generate a compilation error if it can't), but I don't think these proposals have been adopted yet.
Not specifically address space layout randomization in the way it's usually implemented; ASLR as applied in most modern production OSes randomizes each stack's base address, but each stack is still laid out and used in a normal way. There are some research projects towards actual stack layout randomization (involving stack rearrangement via static analysis, randomly sized stack padding frames, and other techniques) which would also definitely blow up cache, but none that are mainstream in a production system as far as I know.
However, for the naive case where the recursion is a full-blown function call, without some kind of optimization, other security mitigations than ASLR will significantly affect the efficiency of recursion by adding function call overhead (and possible cache side effects) - for example, the stack cookie will still be verified and control-flow guard checks and the shadow/return stack will still be in play, if present.
Isn't it true that if an algorithm can be tail-call optimized, it can be rewritten to not use recursion at all? (And conversely, if something can't be rewritten without recursion, it can't be tail-call optimized?)
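It is. A tail call carries no pending work after it returns, so the standard transformation replaces the call with parameter reassignment and a jump back to the top of the function. A minimal sketch in Python (which, notably, does not perform this optimization itself):

```python
def sum_rec(n, acc=0):
    # tail-recursive form: the recursive call's result is returned unchanged,
    # so nothing on the stack is needed after the call
    if n == 0:
        return acc
    return sum_rec(n - 1, acc + n)

def sum_loop(n, acc=0):
    # the mechanical rewrite: the tail call becomes parameter
    # reassignment plus another loop iteration
    while n != 0:
        n, acc = n - 1, acc + n
    return acc
```

The converse also holds in the sense that non-tail recursion (where work remains after the call) can only be made iterative by introducing an explicit stack.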
> Why are you merging up to one rare token at the beginning or at the end? Let’s consider that someone searched for C_0 R_1 C_2 C_3. If we don’t do this merge, we would end up searching for C_0, R_1, C_2 C_3, and this is bad. As established, intersecting common tokens is a problem, so it’s way better to search C_0 R_1, C_2 C_3. I learned this the hard way…
But since R_1 C_2 C_3 is in the index as well, instead of searching for C_0 R_1, C_2 C_3 with a distance of 2, you can instead search for C_0 R_1, R_1 C_2 C_3 with a distance of 1 (overlapping), which hopefully means that the lists to intersect are smaller.
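The intersection being described can be sketched with a two-pointer merge over sorted position lists — a toy model, not the article's actual implementation: given the postings for the two merged phrases, keep positions where the second phrase starts exactly `dist` tokens after the first.

```python
def positions_within(left, right, dist):
    """Return positions p in `left` such that p + dist occurs in `right`.
    Both inputs are sorted token-position lists; O(len(left) + len(right))."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        want = left[i] + dist
        if right[j] < want:
            j += 1          # advance the right list until it catches up
        elif right[j] > want:
            i += 1          # no match for left[i]; try the next position
        else:
            out.append(left[i])
            i += 1
            j += 1
    return out
```

Under this model, the overlapping trick is just `positions_within(postings["C_0 R_1"], postings["R_1 C_2 C_3"], 1)` instead of distance 2 against the larger "C_2 C_3" list.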
From my 1980s 8-bit CPU perspective, the instruction is unhinged based solely on the number of letters. Compared to LDA, STA, RTS, that's not an assembler mnemonic, it's a novel. :-)
I think I actually need that instruction and have a use case for it, and it does something with a matrix transpose so I might finally find a real world useful demonstration of a matrix operation I can cite to people who don't know what those mean.
"This is an instruction that doesn't exist in any computer right now, so why should I put it in a machine, if it's supposed to be realistic? Well, it's because it's ahead of time."
It has a fixed polynomial, so it's not really that useful for anything but AES.
The only case where I've had use for GF(2^8) inverses is FEC algorithms (Forney's algorithm), and then you need some kind of weird polynomial. But those needs are rarely in the hot path, and the FEC algorithms are way outdated.
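For readers unfamiliar with what the affine variant actually computes: per input byte, it takes an 8x8 bit matrix and a constant, and produces matrix * byte + constant over GF(2). A scalar model of the per-byte operation (ignoring the instruction's actual bit- and lane-ordering conventions, which are an assumption glossed over here):

```python
def gf2_affine_byte(rows, x, c=0):
    # rows[i] is an 8-bit mask: the GF(2) dot product of rows[i] and x
    # gives output bit i; XORing constant c makes the map affine
    y = 0
    for i, row in enumerate(rows):
        bit = bin(row & x).count("1") & 1  # popcount parity = GF(2) dot product
        y |= bit << i
    return y ^ c
```

With the identity matrix (row i = `1 << i`) and c = 0, the map is a no-op, which is a handy sanity check.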
Sometimes I read through the intrinsics guide just to play the game of spotting instructions defined primarily because certain cryptologic agencies asked for them.
Also, the fact that it's deprecated by Intel.
Regardless of how valid the excuse is, for such an obvious and old optimization, it’s very poorly supported.
vs
"Galois Field 2^8 affine transform on quad binary words" (GF2P8AFFINEQB)
The compression factor isn't quite the same on character count, but it's still abbreviated. :)
https://www.youtube.com/watch?v=r_pPF5npnio&t=3300 (55:00)
"This is an instruction that doesn't exist in any computer right now, so why should I put it in a machine, if it's supposed to be realistic? Well, it's because it's ahead of time."
> The inverted index will look something like this:
He isn't wrong. It is indeed "something like".
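For anyone wanting something concrete behind the "something like": a toy inverted index maps each token to a postings list of (document, position) pairs — the shape that positional phrase search builds on. A minimal sketch, not the article's actual structure:

```python
from collections import defaultdict

def build_index(docs):
    # token -> list of (doc_id, position), in document order,
    # so each postings list comes out already sorted
    index = defaultdict(list)
    for doc_id, text in enumerate(docs):
        for pos, tok in enumerate(text.split()):
            index[tok].append((doc_id, pos))
    return index
```

Phrase search then intersects postings lists while checking that positions are adjacent within the same document.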