There are a lot of features thrown into this language that don't seem worth the learning costs they incur. What are the problems you're really trying to fix? Focus on the things that are really important and impactful, and solve them; don't waste time on quirky features that just make the syntax more alien to C programmers.
* `if` / `case` / `choose` improvements look fine, though not that important.
* Exception handling semantics aren't defined.
* `with` is pointless and adds gratuitous complexity to the language.
* `fallthrough` / `fallthru` / `break` / `continue` are all just aliases for `goto`. It's not obvious to me that we really need them.
* Returnable tuples look very nice.
* Alternative declaration syntax looks like a nightmare. If we were redesigning C from the ground up, a different declaration syntax might be better, but mixing two syntaxes is a terrible, terrible idea.
* References. Why? They only add confusion.
* Can't make head or tail of what `zero_t` and `one_t` are about, or why they would be useful.
* Units (call with backquote): gratuitous syntax, unnecessary and confusing.
* Exponentiation operator: gratuitous and unnecessary.
Yeah, Ping, I agree. It reads like they missed the key lesson of C — in Dennis Ritchie's words, "A language that doesn't have everything is actually easier to program in than some that do." And some of the things they've added vitiate some of C's key advantages — exceptions complicate the control flow, constructors and destructors introduce the execution of hidden code (which can fail), and even static overloading makes it easy to make errors about what operations will be invoked by expressions like "x + y".
An interesting exercise might be to figure out how to do the Golang feature set, or some useful subset of it, in a C-compatible or mostly-C-compatible syntax.
I do like the returnable tuples, though, and the parametric polymorphism is pretty nice.
> Can't make head or tail of what `zero_t` and `one_t` are about, or why they would be useful.
I suspect it's the same problem C++ has/had with bools (C++11 fixed it; see the safe bool idiom [0]). Basically, treating a type as an integer (arithmetic object) and a boolean (logical object) at the same time is problematic (especially for a "system" type meant for extending implicit system behavior), because then I can do `if(BoolObject < 70)` when I only meant for `if(BoolObject)` to work (where "BoolObject" is some object evaluating to a bool, and by evaluating I mean coercing/casting).
Here it looks like they approached it by making 0/1 (effectively C's false/true) different types and relying on their simpler, more powerful type system (e.g. because they don't have to worry about C++'s insane object system). Not a terrible idea if they were otherwise actually sticking to their goal of "evolving" C (most of their features, like exceptions, are radical departures from the language). C++11 solved it by clarifying how implicit explicit casting [sic] of rvalues works with certain keywords (and I strongly doubt anyone can say that was the simpler way of solving the problem).
I bike-shedded in this thread about exponentiation. But taking a step back the bigger issue is there are so many poorly justified features thrown in.
I don't have the feeling that the authors appreciate the appeal of C as a simple language that maps closely to hardware features.
This is a big random collection of extensions that piqued some implementor's fancy. There is seemingly no effort at narrowing down to the cleanest or most important ideas. It totally kills the clean, simple aesthetics of the base C language.
discrim = b² - 4ac; // Standard notation
float discrim = pow(b, 2) - 4*a*c; // C
float discrim = b \ 2 - 4*a*c; // C∀
I would argue that these are presented here in descending order of readability.
Also its typing rules are really complicated; apply it to two integers and magically you are thrown into the floating-point world where you can never be completely certain of anything, but if you use an unsigned exponent then you stay safely in integer-land.
It looks like most people here are so eager to jump on the "this feature is good, this sucks, overall I'm not impressed" bandwagon (with the typically unwarranted strong opinions that programmers always have when it comes to this) that they didn't bother to explore the rest of the website in more detail. Go to the "people" page and you'll see that it's a language implemented by professors, PhDs and master's students from the Programming Languages group at Waterloo[0][1]. Scroll down and you'll see that a number of these features came from students' master's theses:
Alumni
Ph.D.
Glen Ditchfield, 1992
Thesis title: Contextual Polymorphism
Masters
Thierry Delisle, 2018.
Thesis title: Concurrency in C∀.
Rob Schluntz, 2017.
Thesis title: Resource Management and Tuples in C∀.
Rodolfo Gabriel Esteves, 2004.
Thesis title: Cforall, a Study in Evolutionary Design in Programming Languages.
Richard Bilson, 2003
Thesis title: Implementing Overloading and Polymorphism in Cforall
David W. Till, 1989
Thesis title: Tuples In Imperative Programming Languages.
USRA
Andrew Beach, Spring 2017.
Line numbering, Exception handling, Virtuals
So basically, it's a research language, more-or-less developed one student at a time.
Agree 100%. Improvements to C would be things like removing "undefined behavior", not adding more syntax sugar. If anything, C's grammar is already too bloated. (I'm looking at you, function pointer type declarations inside anonymous unions inside parameter definition lists.)
> Improvements to C would be things like removing "undefined behavior"
This nonsense again. I don't get this "undefined behavior" cliche. It seems to have become fashionable for some people to parrot it like a mantra as a form of signaling. Undefined behavior just refers to something that is not covered by the international standard, and therefore shouldn't be relied on, though an implementation may offer implementation-specific behavior instead.
Can you explain why exceptions and operator overloading are "idiotic" things? Are you from the Go school of boilerplate-error-checking-code design, or something?
Exceptions, because in embedded contexts they may not always be a good idea (and C targets such contexts). Overloading, because it is too easy to abuse, and as such it gets abused a lot by those who do not know better. The rest of us are then stuck decoding what the hell "operator +" means when applied to a "serial port driver" object.
> 3) is missing a few real improvements (closures, although it is not clear whether the "nested routines" can be returned)
Ah, I wish Blocks[0] had made it into the C language as a standard†... Although you can use them with clang already:
$ clang -fblocks blocks-test.c # Mac OS X
$ clang -fblocks blocks-test.c -lBlocksRuntime # Linux
Since closures are a poor man's objects, I had some fun with them to fake object-orientedness[1].
† or at least that the copyright dispute between Apple and the FSF for integration into GCC would have been resolved (copyright transferred to the FSF being required in spite of a compatible license).
Constructs like closures come at a cost. Function call abstraction and locality means hardware cannot easily prefetch, instruction cache misses, data cache misses, memory copying, basically, a lot of the slowness you see in dynamic languages. The point of C is to map as close to hardware as possible, so unless these constructs are free, better off without them and sticking to what CPUs can actually run at full speed.
Closures are logical abstractions and cost nothing, since they are logical. Naive runtime implementations of closures can of course be a bit slower than native functions, but so can everything else.
the suggested syntax is ridiculous. What is this punctuation soup?
void ?{}( S & s, int asize ) with( s ) { // constructor operator
void ^?{}( S & s ) with( s ) { // destructor operator
^x{}; ^y{}; // explicit calls to de-initialize
This has been tried many times before, and eventually all these attempts die a lonely death. Why use extensions anyway? If one desired the luxury of modern scripting languages, switch to C++, Rust, Go or one of the other alternatives the article mentions.
Because regardless of how some of us might dislike C and its security-related issues, the truth is that no one is ever going to rewrite UNIX systems in another language, nor the embedded systems where even C++ has issues gaining market share.
So if someone finally manages to produce a safer C variant that wins the hearts of UNIX kernel and embedded devs, it is a win for all, even those that don't care about C in their daily work.
Until that happens, the lower layer of all IoT devices and cloud machines will be kept in C, and not all of them will be getting security updates.
I do not question the usefulness of C; I use it in my daily work. What I am saying is that most C developers who use the language day-in, day-out know quite well what they are doing, and don't need yet another non-standard way of writing code. Safety is a good point, but the initiative doesn't even mention the word, and there is no reason to assume the C-for-All extension targets safety at all.
The trouble with a "safer C variant" is that it must remove features, or at least more heavily constrain programs to a safer subset of the language. This makes it not backwards-compatible.
I think the only successful "subset of C" is MISRA.
What makes you think that a safer C variant would win the hearts of UNIX kernel and embedded devs any more than C++ (which started as just a C variant)?
Which means that the minimal requirement to win over kernel and embedded devs is to integrate well with the rest of the C ecosystem, including the myriad of C compilers, and to be really well suited for low-level work. This excludes pretty much all ideas but meta-languages that produce C code. It might even be necessary to promote the language not as a new language but as a meta-preprocessor for C, to avoid alienating developers. But realistically this is neither feasible nor necessary. There are much more feasible ideas for improving safety than forcing half the world to learn a lot of new things and change.
> Because regardless how some of us might dislike C and its security related issues, the truth is that no one is ever going to rewrite UNIX systems in other language
> So if one finally manages to get a safer C variant that finally wins the hearts of UNIX kernels and embedded devs
A safer variant wouldn't be C. What makes C great for OS development is that it is just a step above assembly, and you as a developer are given a tremendous amount of power to do good and evil. C#/Java are programming languages with training wheels, and that's great for application development. But for the low-level coding required for OSes, network stacks, databases, etc., you really have to take the training wheels off.
I suppose you can try to make the C type system more stringent, but then it wouldn't be C. And considering they are aiming for backwards compatibility with existing C and its immense code infrastructure, they will have to keep the "flaws" in C∀.
Time would be better spent making the libraries/kernel/etc sturdier but if they can pull it off and win the hearts and minds of OS developers, then so be it.
Also, people have been trying to sideline C for decades. Each attempt has only reinforced C's standing and reminded us why C is so essential for OS development. Anyone remember the ill-fated attempt by Sun with their JVM centered JavaOS?
* switch, if, choose and case extensions look good.
* I can see the justification for labelled break/continue, but looks pretty hairy. Might discourage rethinking and refactoring to something simpler.
* I'm wary of exceptions.
* I don't like the 'with' clauses.
* Weird to add syntax just for mutexes, but they integrate concurrency/coroutines later, so maybe it makes sense.
* Tuples are generally useful, but C11's unnamed structs are generally good enough, i.e. instead of [int, char] you can return `struct { int x0; char x1; }` or something.
* New declaration syntax is welcome, but the old syntax probably isn't going away, so I'm not sure it's a good idea.
* Constructors/destructors are good. Syntax looks weird though.
* Overloading is very welcome.
* Not sure about operators, but they have their uses.
* Polymorphism is welcome, though it looks a bit cumbersome, and it should come with a monomorphisation guarantee for C.
* Traits seem like too much for a C-like language. I can see the uses, and the compiler can optimize this well, but they're probably too powerful.
* Coroutines are cool.
* Streams look interesting, but the overloading of | will probably be confusing.
I'm more or less in agreement, but I just thought it was worth adding that the tuples could actually have a lot of merit; I think I'd like to see them (though I'm not sure the syntax is perfect parsing-wise; it might be smart to prefix them, like `tuple [int, char]` or something).
It seems like anonymous structs fill the void, but a big problem with anonymous structs is that their types are never equal to any other, even if all the members are exactly the same. That means that if you declare a function as returning `struct { int x0; char x1; }` directly, it's mostly unusable, because it's impossible to declare a variable with the same type as the return type. Obviously, the fix is to declare the `struct` ahead of time in a header file somewhere and then just use that type name, but that gets annoying really fast when you end up with a lot of them. Tuples would let you achieve the same thing with a less verbose syntax, and they would be considered the same type even without forward declaring them.
> So that means that if you declare the function as returning `struct { int x0; char x1; }` directly, it's actually mostly unusable because it's impossible to actually declare a variable with the same type as the return type.
Are you sure about that? I remember playing with this last year and structural equality seemed to work when returning structures from functions. I was using clang, so it could conceivably have been an extension... (edit: some online C compilers do indeed return an error in this case)
If that's the case, then just make anonymous structs employ structural type equality and you have better tuples.
GNU C is probably my favorite extension of C. There's a lot of good stuff in there. The vector extensions make it really easy to write platform agnostic SIMD code.
Please don't use GNU C, or any other non-standardized version of C. A huge part of the reason C is so widespread is that it's a well-defined standard implemented by many compilers for many platforms. GNU C is defined by its implementation, which is awful.
> Please don't use GNU C, or any other non-standardized version of C.
Everything standard was once non-standard; if no one uses it, it will never be standardised and we will be left with a poor status quo. For instance, there wouldn't be int8_t, etc. if people hadn't been using non-standard macros beforehand. Likewise for atomics, threads, etc.
I disagree with this. A lot of GNU C works on both GCC and Clang, which covers most platforms out there.
Those extensions are useful and allow better portability across architectures. E.g. the SIMD extensions are much better than writing two implementations with NEON and SSE intrinsics.
Please do use GNU C if you're going to use C. The viral nature of the Linux kernel has forced GNU C to be an important de facto standard. Take advantage of that!
Why not? If I don't care about portability (say, because I'm writing software meant to run only on Linux, since it uses Linux-specific libraries or system calls) and I know gcc is the standard there, why shouldn't I use the extensions if they can simplify my code?
I love using nested functions when I have to write state-machine code. It's a hell of a lot better than the old-school way of using macros to do the same thing.
Considering how hard it is to write truly exception-safe C++ and considering how major C++ code bases don't allow exceptions, adding exceptions to C does not seem like a good idea.
I've always liked the idea of djb's boringcc[0], except with different sets of definitions for undefined behavior, based on how people are currently using C. This would allow people to "upgrade" their existing code bases into boringcc. Within a single invocation of the compiler, though, you couldn't use more than one set of defined undefined behaviors.
I would love a gcc optimization level, like -Og which only applies optimizations that don't interfere with debugging information, where all undefined behavior is specified.
Does anyone know if undefined behavior is specified in CompCert? Or does CompCert simply not allow you to write programs with undefined behavior?
Whether exceptions are good or bad depends on what error handling strategy your product needs. For some software, it's better to try and recover no matter what. For others, complete failure is preferable to operating with invalid state. Exceptions can be a blessing or a curse depending on what you need. Having them in your toolbox is certainly an advantage over having no choice.
>Having them in your toolbox is certainly an advantage over having no choice
I disagree. Dependencies, or coworkers, will use them despite your decision not to use them. When a dependency does use them, chances are the documentation is poor or non-existent.
"Recover no matter what" doesn't require exceptions. A common C idiom is to call a function like f(input, &err), where err points to memory where f can write error diagnostic info. Clunky, but I like how it makes the "exceptions" somewhat self-documented in the function signature.
> Considering how hard it is to write truly exception-safe C++
Is "writing truly exception-safe" code really necessary? For me, the biggest benefit of exceptions is that I can have some code throw from anywhere, display a nice error pop-up to the user that tells me what went wrong, and revert whatever was happening during the processing of the current event, since the "top-level" event loop is wrapped in a try-catch block. Often enough, the user can then just resume whatever they were working on.
Without continued development of the language, C will be unable to cope with the needs of modern programming problems and programmers; as a result, it will fade into disuse.
C11 is pretty nice! C99 is too. One might think that "almost once a decade" is kind of slow for updates, but M$ have enough trouble keeping up with the current schedule. Of course TFA describes a possible direction for C2x, but they could have a more charitable attitude...
That's one thing I'd like C to do. It was an adequate language in 1970, but in 2018, we have a few better approaches that have a chance to turn into viable alternatives to C.
Of course, I don't see C falling into disuse any time soon. The amount of critical code written in C is enormous, without a way in sight to reasonably replace. So keeping C in a good shape is important, whatever shortcomings the language may have.
Herb Sutter doesn't like C. That may be the reason that the compiler group at Microsoft don't put any effort into updating the C compiler to support newer features.
They do put a great deal of work into the C++ compiler, and seem to be doing a way, way, better job than they were in the late 90s.
That may be the other reason they don't update the C compiler with new features.
[0] https://en.wikibooks.org/wiki/More_C%2B%2B_Idioms/Safe_bool
[0] https://plg.uwaterloo.ca/~cforall/people
[1] https://plg.uwaterloo.ca/
1) has some downright idiotic things (exceptions, operator overloading)
2) has a few reasonable, but mostly inconsequential things (declaration inside if, case ranges)
3) is missing a few real improvements (closures, although it is not clear whether the "nested routines" can be returned)
Could you please provide a code snippet of this kind? Hard for me to visualize otherwise. Thanks.
[0]: https://en.wikipedia.org/wiki/Blocks_%28C_language_extension...
[1]: https://github.com/lloeki/cblocks-clobj/blob/master/main.c#L...
Although it's more along the lines of Plan9 - a unix-like system that ignores the bits of POSIX that really suck.
Nor will they in this.
https://gcc.gnu.org/onlinedocs/gcc/C-Extensions.html
Also, porting C is not that hard and does not require you to touch internals that much.
[0]: https://groups.google.com/forum/m/#!msg/boring-crypto/48qa1k...
If you want your connections cleanly terminated, your temporary files removed, and your database transactions invalidated, yes.
C compatibility is kept to the extent of ANSI C++ requirements.
ANSI C++14 requires C99 library compatibility and ANSI C++17 was updated for C11 library compatibility.
http://open-std.org/JTC1/SC22/WG21/docs/papers/2016/p0063r2....