Today's find: You can get a floating-point approximation of 1/x that's accurate to 3 bits with a single integer subtraction instruction.
float fast_reciprocal(float x)
{
    unsigned i = *(unsigned *) &x;  /* reinterpret the float's bits as an unsigned int */
    i = 0x7effffffU - i;            /* the magic subtraction */
    return *(float *) &i;           /* reinterpret the result back as a float */
}
The magic number 0x7effffff accomplishes two things:
1) The exponent is calculated as 253 - e, which effectively negates the exponent and subtracts 1.
2) The mantissa is approximated as a 1st-order polynomial in the interval [1, 2).
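Not from the post, but a quick way to check the "3 bits" claim over a range of inputs is a sketch like the one below (it uses memcpy for the bit cast to stay well-defined; more on that further down):

#include <math.h>
#include <stdio.h>
#include <string.h>

static float fast_reciprocal_approx(float x)
{
    unsigned i;
    memcpy(&i, &x, sizeof i);   /* bit-cast float -> unsigned */
    i = 0x7effffffU - i;        /* the magic subtraction */
    memcpy(&x, &i, sizeof x);   /* bit-cast back to float */
    return x;
}

int main(void)
{
    float worst = 0.0f;
    for (float x = 0.001f; x < 1000.0f; x *= 1.01f) {
        float err = fabsf(fast_reciprocal_approx(x) * x - 1.0f);  /* relative error vs. 1/x */
        if (err > worst) worst = err;
    }
    /* ~3 bits of accuracy corresponds to a relative error of roughly 2^-3 = 0.125 */
    printf("worst relative error: %g\n", worst);
    return 0;
}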
Interesting, but perhaps not very useful (as most CPUs have more accurate reciprocal approximations these days).
One can do better still - 0x7EF311BC is a near-optimal value, at least for inputs in the range [0.001 .. 1000].
The simple explanation here is:
The post’s number 0x7EFFFFFF results in an approximation that is always equal to or greater than 1/x. The value 0x7EEEEBB3 is better, but it’s less than 1/x around 2/3rds of the time. My number 0x7EF311BC appears to be as well balanced as you can get, half the time greater and half the time less than 1/x.
To find this number, I have a Jupyter notebook that plots the maximum absolute value of relative error over a range of inputs, for a range of magic constants. Once it's set up, it's pretty easy to manually binary search and find the minimum. The plot of max error looks like a big "V". (Edit: the plot of mean error looks like a big "U" near the minimum.)
The optimal number does depend on the input range, and using a different range or allowing all finite floats will change where the optimal magic value is. The optimal magic number will also change if you add one or more Newton iterations, like in that github snippet (and also seen in the ‘quake trick’ code).
PPS maybe 0x7EF0F7D0 is a pretty good candidate for minimizing the average relative error…?
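Not the notebook itself, but a rough C sketch of the same search idea: sweep candidate magic constants, measure the worst relative error over a logarithmic sweep of the assumed input range [0.001, 1000], and keep the best. The scan range and step below are guesses; a finer sweep would be needed to reproduce the exact constants quoted above.

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Approximate 1/x using a candidate magic constant. */
static float recip_with_magic(float x, unsigned magic)
{
    unsigned i;
    memcpy(&i, &x, sizeof i);
    i = magic - i;
    memcpy(&x, &i, sizeof x);
    return x;
}

/* Maximum relative error over a logarithmic sweep of [0.001, 1000]. */
static float max_rel_error(unsigned magic)
{
    float worst = 0.0f;
    for (float x = 0.001f; x < 1000.0f; x *= 1.001f) {
        float err = fabsf(recip_with_magic(x, magic) * x - 1.0f);
        if (err > worst) worst = err;
    }
    return worst;
}

int main(void)
{
    unsigned best = 0;
    float best_err = INFINITY;
    /* Coarse scan around the region of interest; shrink the step to refine. */
    for (unsigned magic = 0x7EE00000U; magic <= 0x7F000000U; magic += 0x400U) {
        float err = max_rel_error(magic);
        if (err < best_err) { best_err = err; best = magic; }
    }
    printf("best magic 0x%08X, max relative error %g\n", best, best_err);
    return 0;
}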
It might be nice to start sharing modern/safe versions of this snippet & the Quake thing. Is using memcpy the only option that is safe in both C and C++? That always felt really awkward to me.
[1] https://tttapa.github.io/Pages/Programming/Cpp/Practices/typ...
Apart from memcpy(), the 'allowed' methods include unions in C (writing to one member and reading from another), and bit_cast<T>() and std::start_lifetime_as<T>() in C++.
[0] https://godbolt.org/z/dxMMfazoq
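For reference, a sketch (mine, not from the linked page) of how the snippet looks with the two C-friendly options; in C++20 the same operation is spelled std::bit_cast<unsigned>(x):

#include <string.h>

/* Option 1: memcpy -- well-defined in both C and C++. */
float fast_reciprocal_memcpy(float x)
{
    unsigned i;
    memcpy(&i, &x, sizeof i);
    i = 0x7effffffU - i;
    memcpy(&x, &i, sizeof x);
    return x;
}

/* Option 2: union type punning -- fine in C; see the C++ caveat below. */
float fast_reciprocal_union(float x)
{
    union { float f; unsigned u; } pun;
    pun.f = x;
    pun.u = 0x7effffffU - pun.u;
    return pun.f;
}

With optimizations on, compilers typically turn both into the same couple of instructions as the pointer-cast original.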
The first is to allocate the memory using char a[sizeof(float)]. In C, char pointers may alias anything, so then you can do pointer conversions that would normally be undefined behavior and it should work. The other option is to use the non-standard __attribute__((__may_alias__)) on the pointer.
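A sketch of the second option (a GCC/Clang extension, not standard C), using the documented may_alias typedef pattern:

/* A 32-bit unsigned type that is allowed to alias any other type,
   much like accesses through char pointers. */
typedef unsigned __attribute__((__may_alias__)) aliasing_u32;

float fast_reciprocal_may_alias(float x)
{
    aliasing_u32 *ip = (aliasing_u32 *) &x;
    *ip = 0x7effffffU - *ip;
    return x;
}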
By the way, using union types for this is allowed in C (since C99 TC3, reading a member other than the one last written reinterprets the bytes), but it is technically undefined behavior in the C++ standard; GCC and Clang decided to make it defined there as an implementation choice. Other compilers might not.
(I am not a C or C++ expert.)
The big problem, apart from "memcpy is a bit tedious to write", is that it is impossible to guarantee absence of UB in a large program. You will get inlined functions from random headers and suddenly someone somewhere is "dereferencing" a pointer by storing it as a reference, and then your null pointer checks are deleted. Or an integer overflows and your "i>=0" array bounds assert is deleted. I have seen that happen several times and each time the various bits of code all look innocent enough until you sit down and thoroughly read each function and the functions it calls and the functions they call. So we compile with the worst UB turned off (null is a valid address, integers can overflow, pointers can alias, enums can store any integer) and honestly, we cannot see a measurable difference in performance.
UB as it is currently implemented is simply an enormous footgun. It would be a lot more useful if there were a profiler that could find the parts of the code that would benefit from UB optimisations. Then we could focus on making those parts UB-safe, add asserts for the debug/testing builds, and turn on more optimisations for them. I am quite certain nobody can write a fully UB-free program. For example, did you know that multiplying two "unsigned short" variables can be UB? Can you guarantee that across all the template instantiations for custom vector types you've written you never multiply unsigned shorts? I'll leave it as an exercise to the reader as to why that is.
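For the curious, a sketch of the failure case (my example; it assumes the common case of a 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned short a = 0xFFFF, b = 0xFFFF;
    /* The usual arithmetic conversions promote both operands to signed int,
       so a * b computes 65535 * 65535 = 4294836225, which does not fit in a
       32-bit int: signed overflow, hence undefined behavior. */
    unsigned bad = a * b;
    /* Forcing the multiplication into unsigned arithmetic is well-defined. */
    unsigned safe = (unsigned) a * b;
    printf("%u %u\n", bad, safe);
    return 0;
}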
Outside of freestanding/kernel scenarios, null treatment shouldn't affect anything, and it can similarly benefit a decent amount of code by removing what is unquestionably just dead code. There's the question of pointer addition resulting in/having an argument of null being UB, which is rather bad for offsetof shenanigans or misusing pointers as arbitrary data, but that's a question of provenance, not anything about null.
Global integer overflow UB is probably the most questionable thing in that list, and I'd quite prefer having separate wrap-on-overflow and UB-on-overflow types (allowing having unsigned UB-on-overflow too). That said, even having had multiple cases of code that could hit integer overflow and thus UB, I don't recall any of them actually resulting in the compiler breaking things (granted, I know not to write a+1<a & similar for signed ints), whereas I have had a case where the compiler "fixed" the code, turning an `a-b < 0` into the more correct (at least in the scenario in question) `a<b`!
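A concrete instance (mine, not the commenter's) of why the rewritten form is the more correct one:

/* With a = INT_MIN and b = 1, the intended question "is a - b negative?"
   has the answer yes, and a < b agrees.  If a - b were computed with
   wrapping, INT_MIN - 1 would wrap to INT_MAX and the naive comparison
   would answer no.  Because signed overflow is UB, the compiler is free
   to fold a - b < 0 into a < b. */
int is_less_naive(int a, int b) { return a - b < 0; }  /* UB when a - b overflows */
int is_less(int a, int b)       { return a < b; }      /* always well-defined */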
I do think that it would make sense to have an easy uniform way to choose some specific behavior for most kinds of UB.
They don't understand that one of the biggest barriers to developers writing and adopting more C in their projects is the random jankiness that you get from the compilers. Instead they make C this elite thing for the few people who have read every single line of C code they themselves compiled and ran on their Gentoo installation. Stuff like having no bounds checks is almost entirely irrelevant outside of compute kernels. It doesn't get you much performance, because the branches are perfectly predictable. It merely reduces code size.
There is also the problem that the C development culture is utterly backwards compared even to the semiconductor industry. If you want to have these ultra optimized release builds, then your development builds must scream when they encounter UB and it also means that no C program or library without an extensive test suite should be allowed onto any Linux distribution's package repository. Suddenly the cost of C programming appears to be unaffordably high.
Some things that could mess with you:
* Floating-point endianness is not the same as integer endianness.
* Floating-point alignment requirements differ from integer alignment requirements.
* The compiler is configured to use something other than 32-bit binary32 IEEE 754 for the type "float".
* The computer does not use two's complement arithmetic for integers.
In practice, these are not real problems.
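If you want the build to fail loudly rather than misbehave when one of these assumptions breaks, a few compile-time checks help. This is only a sketch using C11 _Static_assert; the IEEE 754 check relies on the optional __STDC_IEC_559__ macro, which not every conforming compiler defines, and a float-vs-integer endianness mismatch needs a compiler-specific check (e.g. GCC's __FLOAT_WORD_ORDER__).

_Static_assert(sizeof(float) == sizeof(unsigned),
               "float and unsigned must be the same size");
_Static_assert((-1 & 3) == 3, "two's complement integers assumed");
#ifndef __STDC_IEC_559__
#error "IEEE 754 binary32 'float' assumed"
#endif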
You could use intrinsics for the bit casting & so on and it would be well-defined to the extent that those intrinsics are.
(I understand some people consider SIMD intrinsics in general to be UB at language level, in part because they let you switch between floats & ints in a hardware-specific way.)
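For example, with SSE intrinsics (x86-specific, and only a sketch), the float/int reinterpretation goes through the cast intrinsics, which are defined as pure bit reinterpretations, rather than through pointers:

#include <emmintrin.h>  /* SSE2 */

float fast_reciprocal_sse(float x)
{
    __m128  xf = _mm_set_ss(x);         /* put x in the low lane */
    __m128i xi = _mm_castps_si128(xf);  /* reinterpret the bits as integers */
    xi = _mm_sub_epi32(_mm_set1_epi32(0x7effffff), xi);  /* the magic subtraction */
    return _mm_cvtss_f32(_mm_castsi128_ps(xi));  /* back to float, extract lane 0 */
}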
I did this a few days ago in the approximate division thread: https://news.ycombinator.com/item?id=42481612#42489596
In some cases, it can be faster than hardware division.
[1]: https://en.wikipedia.org/wiki/Fast_inverse_square_root
More detail on https://en.wikipedia.org/wiki/Fast_inverse_square_root#Alias... .
IEEE 754 floats are a very well-designed binary format, and the fact that these approximations are possible is part of this design; indeed, the first known instance of this trick is for a=1/2 by W. Kahan, the main designer of IEEE 754.
If you want full precision, you need to do three Newton-Raphson iterations after the initial approximation. One iteration is y = y * (2 - x*y), which roughly doubles the number of correct bits.
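A sketch of the refined version (mine, not the commenter's; it again uses memcpy for the bit cast):

#include <string.h>

float fast_reciprocal_refined(float x)
{
    unsigned i;
    float y;
    memcpy(&i, &x, sizeof i);
    i = 0x7effffffU - i;       /* initial ~3-bit approximation of 1/x */
    memcpy(&y, &i, sizeof y);
    /* Each Newton-Raphson step y = y*(2 - x*y) roughly doubles the number
       of correct bits, so three steps reach full float precision. */
    y = y * (2.0f - x * y);
    y = y * (2.0f - x * y);
    y = y * (2.0f - x * y);
    return y;
}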
Breaking the magic constant 0x7effffff into its two parts:
1. 7e: the most significant bit has to be zero to preserve the sign of the overall number.
2. ffffff: due to the Taylor series 1/(1 + X) = 1 - X + ..., negating the mantissa gives (approximately) the multiplicative inverse.
An IEEE float has a sign bit (thus 2's complement does not work on the float itself), but the mantissa and the exponent individually work like 2's complement, for different reasons. The exponent has a biased range due to being chopped in half, whereas the mantissa has a biased range due to its definition and constant offset. The exponent's 1 offset (7e instead of 7f) is a bit more difficult to see at first -- the devil is in the details, but 2's complement is the basic intuition.
It's in Go, but ports easily to other languages.
See also: https://stackoverflow.com/questions/32042673/optimized-low-a...