That weird feeling when you realise that the people you hang out with form such a narrow niche that something considered common knowledge among you is being described as "buried deep within the C standard".
What's noteworthy is that the compiler isn't required to generate a warning if the array is too small. That's just GCC being generous with its help. The official stance is that it's simply undefined behaviour to pass a pointer to an object which is too small (yes, only to pass, even if you don't access it).
https://godbolt.org/z/z9EEcrYT6
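For instance (a sketch; which GCC versions warn, and under which flags, varies):

#include <string.h>

void fill(unsigned char buf[static 16]) { memset(buf, 0, 16); }

int main(void) {
    unsigned char small[8];
    fill(small);   /* undefined behaviour: 'small' has only 8 elements; GCC may warn, but no diagnostic is required */
    return 0;
}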
And probably never will, because C++ compatibility with C, beyond what was done initially, aims to stay as close as possible but not at the expense of better alternatives that the language already offers.
Thus std::array, std::span, std::string, std::string_view, std::vector, with hardened options turned on.
For the static thing, the right way in C++ is to use a template parameter,
template<typename T, int size>
int foo(T (&ary)[size]) {
    return size;
}

-- https://godbolt.org/z/MhccKWocE

If you want to get fancy, you might make use of concepts, or constexpr to validate size at compile time.
Not surprising and not a "wart". C and C++ have diverged since the mid-90s and are two very different languages now. E.g. trying to build C code with a C++ compiler really hasn't made much sense for about 20 years now.
A lot of "modern" C features (i.e. those added after ca. 1995) are unknown to C++ devs. I would have expected at least the Linux kernel devs to know their language, though ;)
Pointer to array is not only type-safe, it is also objectively correct and should always have been the syntax used when passing in the address of a known, fixed-size array. This is all an artifact of C automatically decaying arrays to pointers in argument lists, when an array argument should always have meant passing an array by value; then this syntax would have been the only way to pass in the address of an array and we would not have these warts. Automatic decaying is truly one of the worst actual design mistakes of the language (i.e. an error even when it was designed, not the failure to adopt new innovations).
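A minimal sketch of the difference (identifiers are mine):

void take_whole(int (*a)[4]) { (*a)[0] = 1; }   /* pointer to array: the length 4 is part of the type */
void take_decayed(int *a)    { a[0] = 1; }      /* decayed pointer: the length is gone */

int main(void) {
    int good[4], bad[5];
    take_whole(&good);       /* ok: int (*)[4] matches exactly */
    /* take_whole(&bad); */  /* rejected: int (*)[5] is a different, incompatible type */
    take_decayed(bad);       /* compiles fine; the callee can't know the size */
    return 0;
}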
This guy is doing something else completely. In his words:
> In my testing, it's between 1.2x and 4x slower than Yolo-C. It uses between 2x and 3x more memory. Others have observed higher overheads in certain tests (I've heard of some things being 8x slower). How much this matters depends on your perspective. Imagine running your desktop environment on a 4x slower computer with 3x less memory. You've probably done exactly this and you probably survived the experience. So the catch is: Fil-C is for folks who want the security benefits badly enough.
(from https://news.ycombinator.com/item?id=46090332)
And the reason why C has array-pointer decay is that it made arrays work more or less like they did in B (which had to do it, since it literally didn't have any type other than the machine word).
There are perhaps only 3 numbers: 0, 1, and lots. A fair argument might be made that 2 also exists, but for anything higher, you need to think about your abstraction.
I’ve always thought it’s good practice for a system to declare its limits upfront. That feels more honest than promising “infinity” but then failing to scale in practice. Prematurely designing for infinity can also cause over-engineering, like using quicksort on an array of four elements.
Scale isn’t a binary choice between “off” and “infinity.” It’s a continuum we navigate with small, deliberate, and often painful steps, not a single, massive, upfront investment.
That said, I agree the zero-one-infinity rule is a valuable guideline for abstraction, though less so for implementation.
Funny thing about that n[static M] array-checking syntax: it was considered bad even in 1999, when it was included:
"There was a unanimous vote that the feature is ugly, and a good consensus that its incorporation into the standard at the 11th hour was an unfortunate decision." - Raymond Mak (Canada C Working Group), https://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_205.htm
It wasn't considered bad, it was considered ugly, and in the given context that is a major difference. The alternative proposed in that post is, to me, even uglier, so I would have agreed with the option that received the most support: leave it as it was.
It was always considered bad not (just) because it's ugly, but because it hides potential problems and adds no safety at all: a `[static N]` parameter tells the compiler that the parameter will never be NULL, but the function can still be called with a NULL pointer anyway.
That is the current state of both gcc and clang: they will both happily, without warnings, pass a NULL pointer to a function with a `[static N]` parameter, and then REMOVE ANY NULL CHECK from the function, because the argument can't possibly be NULL according to the function signature, so the check is obviously redundant.
See the example in [1]: note that in the assembly of `f1` the NULL check is removed, while it's present in the "unsafe" `f2`, making it actually safer.
Also note that gcc will at least tell you that the check in `f1()` is "useless" (yet no warning about `g()` calling it with a pointer that could be NULL), while clang sees nothing wrong at all.
[1] https://godbolt.org/z/ba6rxc8W5
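The shape of [1] is roughly the following (a sketch from the description above; the exact bodies in the godbolt link may differ):

int f1(int p[static 1]) {
    if (p == 0) return -1;   /* gcc calls this check "useless" and both compilers may remove it: p is assumed non-NULL */
    return *p;
}

int f2(int *p) {
    if (p == 0) return -1;   /* kept: nothing in the signature promises p is non-NULL */
    return *p;
}

int g(int *q) {
    return f1(q);            /* accepted without any warning, even though q may well be NULL */
}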
That is all well. But thing_t is an array type which still decays to pointer.
It looks as if thing_t can be passed by value, but since it is an array, it sneakily isn't passed by value:
void catch_with_net(thing_t thing); // thing's type is actually "unsigned char *"
// ...
unsigned char x[42];
catch_with_net(x); // pointer to first element passed; type checks
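To make the sneakiness visible, here is a sketch (the typedef itself is my assumption, inferred from the example):

#include <stdio.h>

typedef unsigned char thing_t[42];   /* assumed definition, inferred from the discussion */

void catch_with_net(thing_t thing) {
    /* thing has decayed to unsigned char *, so this prints the pointer size (e.g. 8), not 42 */
    printf("%zu\n", sizeof thing);
}

int main(void) {
    unsigned char x[42];
    catch_with_net(x);
    return 0;
}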
https://news.ycombinator.com/item?id=45735877
We're talking about a lack of fat pointers here, and switching to GC and having a 4x slower computer experience is not required for that.
It is not limited to compile-time constants. Doesn't work in clang, sadly.
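For example (a sketch; identifiers are mine), the bound can be another parameter:

#include <stddef.h>

/* C99: the bound in [static n] may be a runtime value, not just a constant */
double sum(size_t n, const double a[static n]) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}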
Clang does not support this syntax.
For reference: https://digitalmars.com/articles/C-biggest-mistake.html
You could just declare
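typedef unsigned char thing_t[42];   /* reconstruction; the original snippet is missing, and the element type and size are inferred from the thing_t example above */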
and pass those around to get roughly the same effect.
The problem is that they are attractive for reducing repeated declarations:
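typedef unsigned char thing_t[42];    /* reconstruction; the original snippet is missing */
void catch_with_net(thing_t thing);   /* one typedef spares repeating the length in every declaration */
void tag_and_release(thing_t thing);  /* hypothetical second declaration reusing the same type */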
Or are you just referring to the function where one defines it as apparently 'pass by value'?