Virgil has a family of completely well-defined (i.e. no UB) fixed-size integer types, with some hard-fought rules that I eventually got around to documenting here: https://github.com/titzer/virgil/blob/master/doc/tutorial/Fi...
One of the key things is that values are never silently truncated or otherwise changed (other than the 2's-complement wrap-around that is built into arithmetic); only promotions happen. The only sane semantics for over-shifts (shifts larger than the size of the type) is to shift the bits out, like a window.
The upshot of all that is that Virgil has a pretty sane semantics for fixed-size integers, IMHO. Particularly comparisons. Comparing signed and unsigned ints is not UB and cannot silently give the wrong answer; integers are always compared on the same number line. It takes at most one extra comparison to achieve this. AFAICT this is what's achieved by the std::cmp* functions referenced in the article.
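To make the "at most one extra comparison" point concrete, here is a minimal C++ sketch (my own illustration, not Virgil's implementation); C++20's std::cmp_less does essentially the same thing:

    #include <cstdio>

    // Minimal sketch (not Virgil's implementation): compare a signed int and
    // an unsigned int on the same number line, using at most one extra
    // comparison (the sign check).
    bool same_line_less(int s, unsigned u) {
        if (s < 0) return true;               // a negative value is below every unsigned value
        return static_cast<unsigned>(s) < u;  // both non-negative: safe to compare as unsigned
    }

    int main() {
        printf("%d\n", same_line_less(-1, 1u) ? 1 : 0);  // 1: -1 really is less than 1
        printf("%d\n", (-1 < 1u) ? 1 : 0);               // 0: -1 is converted to a huge unsigned value
    }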
Looking at horror stories like this, along with "solutions" that amount to slapping more standard functions on top of the existing piles, I'm glad that I do my personal programming in languages which 1) have meaningful numeric towers and automatically do promotion/demotion as appropriate, and 2) don't implicitly convert signed integers into unsigned integers, erroring out instead.
With each standard revision, C++ becomes more and more horrible, even as it strives to become a better and more useful language.
Lisp and Erlang. The latter doesn't have a numeric tower but has automatic bigint conversion and handling. The former is eager to error upon signed/unsigned type violations once you specify them.
I'm fully aware of the pitfalls of C++ (and C), but I keep coming back to them, as all the alternatives have failed to provide the same speed, performance, and small size.
Every language has its dark corners; none is perfect, and the same goes for C and C++. That C and C++ have been popular for decades is a fact; like it or not, the data does not lie.
As a person working on big C++ telco codebases for a living, I'm fully aware that C++ is solving real problems and that -Weverything makes it somewhat possible to work with the language. At the same time, I'm appalled by the negative consequences of the "let's fix problems with the old and broken comparison operator by introducing a new, better comparison operator, since we need to maintain almost complete backwards compatibility" approach that C++ takes here.
I'm interested in solving real problems, but C++ deserves trash talking for this reason alone.
What's the alternative? Change the semantics of the comparison operator so that code looks cleaner? That means that when you read a comparison in source you have to know how it's going to be compiled in order to tell if the code is correct or not.
The usual arithmetic conversions linked from the article [0] are a bit different from the ones I know.
In particular, if int can represent all values of both operands, both of them will be converted to int.
Thus comparing unsigned short and short will usually (if short is narrower than int) do the right thing. C++Insights (which I didn't know about; it's the real pearl of this article) agrees with me [1].
[0] https://en.cppreference.com/w/cpp/language/operator_arithmet...
[1] https://cppinsights.io/lnk?code=I2luY2x1ZGUgPGNzdGRpbz4KCmlu...
The relevant part of your first link seems to be right at the beginning:
> If the operand passed to an arithmetic operator is integral or unscoped enumeration type, then before any other action (but after lvalue-to-rvalue conversion, if applicable), the operand undergoes integral promotion.
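A small illustration of that difference (my own snippet; it assumes short is narrower than int, as on mainstream platforms):

    #include <cstdio>

    int main() {
        short s = -1;
        unsigned short us = 1;
        // Both operands are narrower than int, so integral promotion turns both
        // into int before the comparison: (-1 < 1) is true.
        printf("%d\n", (s < us) ? 1 : 0);   // prints 1

        int i = -1;
        unsigned int ui = 1;
        // No promotion can help here: the usual arithmetic conversions convert
        // the int to unsigned int, so -1 wraps to UINT_MAX and the test is false.
        printf("%d\n", (i < ui) ? 1 : 0);   // prints 0
    }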
If there are already std functions that are "safe" for integral comparison, why not replace the implementation of the < and > operators with those? Is it because that would violate the existing specs? If so, shouldn't the specs be considered bad and get revised?
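For reference, the functions in question are C++20's std::cmp_* family in <utility>. A quick sketch of the difference they make:

    // compile with -std=c++20
    #include <cstdio>
    #include <utility>   // std::cmp_less and friends (C++20)

    int main() {
        int i = -1;
        unsigned int u = 1;
        printf("%d\n", (i < u) ? 1 : 0);               // 0: -1 is converted to unsigned first
        printf("%d\n", std::cmp_less(i, u) ? 1 : 0);   // 1: compares the mathematical values
    }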
I thought C and C++ were advertised as languages that don't do a lot of hand-holding and require programmers to know what they are doing, and many aspects of them, such as memory management, really reflect that. So why is there implicit integral type conversion? Why not require the programmer to always convert explicitly, like Rust does?
Forcing the programmer to convert integer types explicitly doesn't really help much. It makes it obvious that a conversion is happening, but that's it. People will simply add the required explicit casts without thinking about possible truncation or sign changes. UBSan has a very useful -fsanitize=implicit-conversion flag which can detect when truncation or sign changes occur, but this stops working when you make the cast explicit. So in practice, implicit casts actually allow more errors to be detected, especially in connection with fuzzing. Languages like Go or Rust would really need two types of casts to detect unexpected truncation.
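A sketch of that difference in practice with Clang (the file name and values here are made up for illustration):

    // clang++ -fsanitize=implicit-conversion -g conv.cpp && ./a.out
    #include <cstdint>
    #include <cstdio>

    int main(int argc, char**) {
        int big = 100000 * argc;   // runtime value, so the conversion isn't folded away
        uint8_t a = big;           // implicit truncation: the sanitizer reports this at runtime
        uint8_t b = (uint8_t)big;  // explicit cast: silent, the sanitizer does not instrument it
        printf("%d %d\n", a, b);
    }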
C++20 IS addressing many complaints and fixing design choices made in the past.
As for the article: it casually mentions at the end using -Werror -Wall -Wextra, with which the signed/unsigned comparisons will fail to compile. Most serious projects use these flags anyway.
I wonder if clang-tidy will also catch this?
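For illustration (my own snippet, not the article's), the classic pattern those flags reject:

    // fails to build with: g++ -Wall -Wextra -Werror loop.cpp (same with clang++)
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        for (int i = 0; i < v.size(); ++i)   // -Wsign-compare: int vs std::vector<int>::size_type;
            printf("%d\n", v[i]);            // -Werror turns that warning into a hard error
    }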
Most serious projects are opting in to being unable to compile on future compilers whenever new warnings are added?
> If the operand passed to an arithmetic operator is integral or unscoped enumeration type, then before any other action (but after lvalue-to-rvalue conversion, if applicable), the operand undergoes integral promotion.
Clicking "integral promotion" leads here: https://en.cppreference.com/w/cpp/language/implicit_conversi...