arraypad · 3 years ago
I was surprised by the observation that Python rounds towards even numbers, so round(0.5) = 0, but round(1.5) = 2.

This is indeed clearly documented [1], I guess I never looked closely enough. I found some discussion on the dev-python list [2] which shows at least I'm not the only one surprised by this!

[1] https://docs.python.org/3/library/functions.html#round

[2] https://groups.google.com/g/dev-python/c/VNf8TABiB9k
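For reference, a quick demonstration of the round-half-to-even behaviour described above:

```python
# Python 3's round() sends halfway cases to the nearest even integer
# ("banker's rounding"), as documented in [1].
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2
print(round(3.5))  # 4
```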

cm2187 · 3 years ago
That's good to know and kind of dangerous. I have seen banking regulators getting upset in the past because of roundings not following the usual convention ("banker's rounding"). You will have a hard time explaining that it's unexpected behaviour of some programming language rather than the management of the bank trying to be cute with the numbers.
masklinn · 3 years ago
> I have seen banking regulators getting upset in the past because of roundings not following the usual convention ("banker's rounding").

That… is exactly the behaviour python implements, and what GP was surprised by…

It is also, incidentally, the IEEE 754 recommendation.

Python 2 used to round away from 0.

rubicks · 3 years ago
I have seen banking regulators getting upset because they (effectively) assumed infinite precision until the "final rounding" (their verbiage). This was at a major financial services provider with exactly zero numerical stability analysts.

So, consider the source of the complaint.

masklinn · 3 years ago
The biggest annoyance with P3’s round is that you can’t specify a rounding mode.

Well, that and it returns an integer when you omit the ndigits argument, but that's less of an issue since the language also switched to "true division".
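A small check of the return types (note that round(x, 0) actually stays a float; only omitting ndigits yields an int):

```python
# round() returns an int when ndigits is omitted,
# but a float when ndigits is given, even as 0.
print(type(round(3.7)))     # <class 'int'>
print(type(round(3.7, 0)))  # <class 'float'>
print(round(3.7), round(3.7, 0))  # 4 4.0
```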

dec0dedab0de · 3 years ago
You can specify a rounding mode if you use decimals instead of floats
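For instance, with the decimal module the rounding mode is an explicit choice (a minimal sketch):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

half = Decimal("0.5")
three_halves = Decimal("1.5")

# Round half away from zero, like Python 2's round():
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_UP))    # 1
# Round half to even ("banker's rounding"), Python 3's default:
print(half.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 0
print(three_halves.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))  # 2
```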
fwsgonzo · 3 years ago
A fun fact about integer division is that on x86 dividing by zero is CPU exception 0. Except, it's not just for dividing by zero, it's whenever a division is impossible to perform.

So, for example what happens if you divide -1 by -INT_MIN? As you probably know, Abs(INT_MIN) is larger than INT_MAX, and so it is not possible to perform -1 / INT_MIN, as well as -1LL / INT64_MIN. It will crash your program, your emulator/sandbox and your server. So, be careful!

im3w1l · 3 years ago
This had me scratching my head for a while. Shouldn't -1 / INT_MIN just be 0? Indeed it should. INT_MIN / -1 is what causes the issue. I also tested the latter just to be sure, and on my system, performing that division led to a result of INT_MIN rather than an exception.

Edit: Actually, further tests with INT_MIN / -1 did lead to a crash. Maybe the first result was due to constant folding or something.

Sharlin · 3 years ago
Like all signed overflow, it's undefined behavior in C and C++, so what exactly happens is up to the gods of optimizing compilers.
garaetjjte · 3 years ago
In case divisor is always positive, you can use these variants:

    int divF(int a, int b) { return a / b - (a % b < 0); }
    int divC(int a, int b) { return a / b + (a % b > 0); }
They should be faster as they avoid branching: https://godbolt.org/z/qrraj8s6j

fay59 · 3 years ago
> We could stop right here but this suffers from overflow limitations if translated into C.

FWIW, the final version also suffers from integer overflow limitations. If the difference between a and INT_MIN/INT_MAX (depending on whether you floor or ceil) is <= b/2, you will have integer overflow.

jepler · 3 years ago
I thought the same might be true, and a theorem prover agrees with us: https://gist.github.com/jepler/eda8401d1b613c44376dc8148fcf9...
ux · 3 years ago
Thank you for pointing this out. I added a note about this.
Const-me · 3 years ago
> transforming a float based algorithm to integers in order to make it bit-exact

Modern CPUs, and most programming languages, guarantee bit-exactness for floats most of the time, because they implement IEEE 754.

Relatively easy to break (approximate instructions like rsqrtps, FMA instructions, compiler switches like -ffast-math or /fp:fast, MXCSR rounding modes) but I think fixing bugs is probably easier than refactoring everything from FP into fixed-point.

BTW, multiplication/division/rounding instructions are much faster for floats. I think a fixed-point version will only be a performance win when the integers are 8 or 16 bits wide, and the algorithm is vectorizable.

ux · 3 years ago
Floating point operations are not deterministic in other situations: https://randomascii.wordpress.com/2013/07/16/floating-point-...

This can be terribly hard to manage in tests if you don't have a threshold infrastructure in place (which is hard and sometimes impossible to setup if you don't want to preserve huge references).
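Such a threshold check is typically a relative/absolute tolerance comparison rather than exact equality; in Python, for example (a minimal sketch):

```python
import math

# Exact equality fails for values that are only approximately equal.
computed = 0.1 + 0.2
reference = 0.3
print(computed == reference)  # False

# A tolerance-based comparison passes instead.
print(math.isclose(computed, reference, rel_tol=1e-9))  # True
```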

Also, and this is a bit off topic: even if I remained in the float domain for my use case, I would have to re-implement the math library routines anyway (in my case cbrt and pow) because they are not implemented the same way in all libcs, making them again a victim of non-determinism.

So far I observed better performance with integer arithmetic anyway, but that's not the reason that motivated me.

Also note that there are other situations where you may want to keep integers: typically in the kernel, or on architectures where floating point is not optimal or not even available.

Const-me · 3 years ago
> Floating point operations are not deterministic in other situations

That article was written in 2013. A decade has passed, and in the modern world these situations have become rather rare.

x87 FPU is deprecated and not used by 64-bit programs. Return values are passed in the XMM0 register. Temporary values are either FP32 or FP64. Modern C++ compilers evaluate FP expressions from left to right, and they don't use FMA unless explicitly allowed with the -ffast-math or /fp:fast compiler switch. Programmers rarely use estimate instructions because modern CPUs complete precise division and square root in 10-15 cycles, so there is little profit from the faster approximations.

The only variable thing which remains is MXCSR register in the thread state.

> I would have to re-implement the lib math routines anyway (in my case cbrt and pow)

Or you can copy-paste them from OpenBSD. The license is permissive, they are implemented in C without assembly, and the code quality is pretty good.

OneOneOneOne · 3 years ago
Thanks to the author for describing everything so clearly!
Yaa101 · 3 years ago
I wonder if there is a use case for this instead of fixed-point math, which is more precise than floating-point math.
andrepd · 3 years ago
As always, Hacker's Delight is a wonderful book for all integer arithmetic tricks and operations.