Knowing C has made me a better and much more versatile programmer than I would be without it. For instance, a while back I had to diagnose a JVM core dumping sporadically, and I was able to use my knowledge of C to try to recover valid return addresses from a corrupted stack, figure out there was a signal handler block on the stack, then further unwind from there to find out the whole story of what likely caused the crash.
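To make that concrete, here is a toy version of the idea of recovering return addresses by hand. This is not the commenter's actual procedure, just a minimal sketch assuming x86-64 frames with frame pointers kept (compile with -O0 or -fno-omit-frame-pointer); a real unwinder validates every candidate address against the loaded code segments.

```c
#include <stdint.h>
#include <stdio.h>

/* One x86-64 stack frame as laid out when frame pointers are kept:
 * at the frame pointer sits the caller's saved frame pointer, and
 * right above it the return address pushed by `call`. */
struct frame {
    struct frame *prev;
    void         *ret;
};

static void walk_stack(void) {
    struct frame *fp = __builtin_frame_address(0);   /* GCC/Clang builtin */
    for (int depth = 0; fp != NULL && depth < 16; depth++) {
        printf("#%d  return address %p\n", depth, fp->ret);
        /* Sanity check, as a real unwinder would: caller frames live at
         * higher addresses, so anything else means the chain is broken. */
        if ((uintptr_t)fp->prev <= (uintptr_t)fp)
            break;
        fp = fp->prev;
    }
}

static void inner(void) { walk_stack(); }
static void outer(void) { inner(); }

int main(void) { outer(); return 0; }
```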
Knowing C also helps me write more performant code. Fast Java code ends up looking like C code written in Java. For instance, Netty 4 implemented its own custom memory allocator to avoid heap pressure, Cassandra maintains its own manual memory pools to improve performance, and VoltDB ended up implementing much of its database in C++. I've been able to speed up significant chunks of Java code by putting my "C" hat on.
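In C terms, the pooling those Java projects do boils down to an arena: grab one big block up front, hand out pieces with a pointer bump, and release everything at once. A minimal sketch (the names and sizes are mine, not anything from Netty or Cassandra):

```c
#include <stddef.h>
#include <stdlib.h>

/* A toy arena/pool: one big malloc up front, allocation is a pointer bump,
 * and "free" is resetting the whole arena at once. No per-object heap
 * traffic on the hot path, which is the effect those Java pools are after. */
typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} arena_t;

static int arena_init(arena_t *a, size_t cap) {
    a->base = malloc(cap);
    a->used = 0;
    a->cap  = cap;
    return a->base ? 0 : -1;
}

static void *arena_alloc(arena_t *a, size_t n) {
    n = (n + 15) & ~(size_t)15;              /* keep 16-byte alignment */
    if (n > a->cap - a->used) return NULL;   /* out of arena space */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_reset(arena_t *a)   { a->used = 0; }   /* release everything at once */
static void arena_destroy(arena_t *a) { free(a->base); }

int main(void) {
    arena_t a;
    if (arena_init(&a, 1 << 20) != 0) return 1;          /* 1 MiB up front */
    double *samples = arena_alloc(&a, 1024 * sizeof *samples);
    if (samples) samples[0] = 42.0;                      /* ... per-request work ... */
    arena_reset(&a);                                     /* one reset instead of many frees */
    arena_destroy(&a);
    return 0;
}
```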
I would recommend every college student learn C, and learn it from the perspective of an "abstract machine" language, how your C code will interact with the OS and underlying hardware, and what assembly a compiler may generate. I would consider learning C for pedagogical purposes to be much more important than C++.
> I would recommend every college student learn C, and learn it from the perspective of an "abstract machine" language, how your C code will interact with the OS and underlying hardware, and what assembly a compiler may generate.
Do note that these are two very different things. The C abstract machine as defined by the standard is sometimes so different from the actual machines you're going to run your code on, that you get these fine undefined behaviour things that everybody's on about.
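A concrete example of that gap: in the abstract machine, signed overflow simply doesn't happen, so an optimizing compiler may reason accordingly, even though the hardware underneath wraps. A small illustration (the exact behaviour depends on compiler and flags):

```c
#include <limits.h>
#include <stdio.h>

/* In the abstract machine, signed overflow is undefined, so a compiler is
 * allowed to assume `x + 1 > x` always holds for signed x and fold this
 * function to `return 1;`. On the actual hardware, INT_MAX + 1 wraps to
 * INT_MIN, so a naive reading says the call below should return 0. */
int always_true(int x) {
    return x + 1 > x;
}

int main(void) {
    /* Typically prints 1 when built with optimization, and may print 0
     * without it -- same source, two answers, both "correct". */
    printf("%d\n", always_true(INT_MAX));
    return 0;
}
```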
Please learn both, indeed, and stress the differences; then highlight the advantages of C. C is a dangerous but powerful tool, and the necessity of warning students about its pitfalls should not inhibit learning C.
C is sharp, yes. It's also a universal interface. Almost every language has bindings for C, and C has bindings for almost everything.
I feel the same. Even when debugging JavaScript or C# performance problems I often have insights from a C perspective that people who have only learned Java or JavaScript don't have. I haven't learned much assembly but I think some level of it would also be a very good thing.
Once whole operating systems are written in Rust maybe Rust will be the most important language but until then it's C. C++ doesn't really count. I don't think you can do any serious C++ without knowing C really well.
I was with you until the part about C++. I spent very little of my career writing C (mostly embedded hobby work) but still managed to pick up all the important bits (memory layout, CPU caches, allocation costs, processor pipelines and the like) from C++.
Really, it wasn't too long ago that the STL just wasn't good enough for high performance code and we rolled our own containers/pooling/everything. Modern C++ is very much a different beast these days.
> "For instance, a while back I had to diagnose a JVM core dumping sporadically, and I was able to use my knowledge of C to try to recover valid return addresses from a corrupted stack, figure out there was a signal handler block on the stack, then further unwind from there to find out the whole story of what likely caused the crash."
Okay, but is that knowledge coming from coding in C, or is it coming from debugging low-level code? I don't know what tool(s) you used to do that debugging, but if I assume for a second that you were using GDB, wasn't the main benefit of your past C experience (in this case) the exposure to tools like GDB, rather than your C knowledge being key in enabling you to dissect that JVM issue?
For teaching purposes in a university I don't know if rust would be better or not. Keep in mind that the goal isn't necessarily to learn C the language; it's to learn enough of the core language to explore things like:
* How the code is executed on modern hardware and operating systems.
* How various language features might be compiled to assembly.
* Memory layouts for things like stack frames, how a simple heap allocator might work, array layouts, layouts for other structures like linked lists, etc.
* The relative cost of various operations and how to write code that is cache friendly (see the sketch after this list).
* How debuggers generally work and how to debug code.
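On the cache-friendliness bullet, the classic teaching exercise is summing a 2D array in row-major versus column-major order: identical arithmetic, very different memory traffic on typical hardware. A sketch (names and sizes are illustrative):

```c
#include <stdio.h>

#define N 2048

static float grid[N][N];   /* stored row-major: grid[i][j] and grid[i][j+1] are adjacent */

/* Walks memory sequentially, so each cache line it pulls in is used fully. */
static double sum_rows(void) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += grid[i][j];
    return s;
}

/* Same result, but strides N*sizeof(float) bytes per step, touching a new
 * cache line on almost every access; timing the two loops usually shows
 * this one running several times slower. */
static double sum_cols(void) {
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += grid[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_rows(), sum_cols());   /* identical sums, different costs */
    return 0;
}
```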
Could these things be taught equally well or better in an undergraduate setting using Rust? Honestly I don't know; I know very little about Rust. I can say that I think C++ is a worse language for this purpose because of the language's complexity and because of features that aren't well suited for the above purposes.
I have heard, for instance, that it's actually very difficult in Rust to write a linked list without dropping to unsafe code. I would consider this a bad thing for the above purposes.
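For reference, the C version of that exercise is a few lines of raw pointer surgery, which is exactly the kind of aliasing Rust's borrow checker is strict about (a doubly linked list is the harder case there). A minimal sketch with illustrative names:

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int          value;
    struct node *next;   /* NULL marks the end of the list */
};

/* Push a value on the front; the new node points at the old head. */
static struct node *push(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (!n) return head;              /* allocation failure: leave list unchanged */
    n->value = value;
    n->next  = head;
    return n;
}

static void free_list(struct node *head) {
    while (head) {
        struct node *next = head->next;   /* must read next before freeing */
        free(head);
        head = next;
    }
}

int main(void) {
    struct node *list = NULL;
    for (int i = 0; i < 5; i++)
        list = push(list, i);
    for (struct node *n = list; n; n = n->next)
        printf("%d ", n->value);
    printf("\n");
    free_list(list);
    return 0;
}
```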
My guess is that for this author's purposes, Rust is in the same place as C++. It can do the low-level things, but you tend to spend more of your time working with higher level libraries that hide the nuts-and-bolts details like allocation.
If you want to "think like a computer" then learn assembly, not C. I'm not joking.
In my experience young devs using C get very confused about wtf pointers are really about and what's the deal with the heap, stack, etc. Whereas they understand the concepts better in assembly despite the verbosity.
I'm not arguing for a deep understanding of assembly, but if the goal is to "think like a computer" then at least a basic understanding is incredibly helpful. Or at least less confusing than C.
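That intuition is easy to show: a pointer is just an address sitting in a register, and a dereference is a load. Assuming x86-64 and an optimizing compiler, the mapping looks roughly like this (exact output varies by compiler and flags):

```c
/* pointers.c -- compile with: cc -O2 -S pointers.c  and read pointers.s */

int load(int *p) {
    return *p;               /* "read 4 bytes at the address held in p" */
}
/* Typical x86-64 output (Intel syntax), give or take:
 *   load:
 *       mov eax, DWORD PTR [rdi]   ; p arrives in rdi; dereference = load
 *       ret
 */

int *advance(int *p) {
    return p + 3;            /* pointer arithmetic scales by sizeof(int) */
}
/* Roughly:
 *   advance:
 *       lea rax, [rdi+12]          ; +3 ints = +12 bytes
 *       ret
 */
```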
Learn ARM or some other RISC architecture and not x86/amd64 if you want to learn assembly. x86 and its forever backwards compatible Segmented Memory/CISC architecture makes asm programming painful (not to mention I have learned at least three different syntaxes for x86 asm: (1) MASM/BASM/TASM/NASM, (2) GNU AS (gas), (3) Amsterdam Compiler Kit -- the last two are meant for compiler backends and not human programmers).
I actually cut my teeth on TI's TMS34010 asm. Simple RISC (lots of registers), flat address space, and bit addressable. I loved it.
Garrch, when I hear bit-addressable I get a twitch, since I once had to spend a night debugging a RISC system that was failing because a class of RAM didn't work with the bit-addressable instructions. The system was huge, the error cascaded strangely, and the information was hidden somewhere in the 2000-page datasheet.
When I did my undergrad, Ada was the primary language of instruction for first year, with x86 assembler taught as a subject in the second semester.
The idea being to get us thinking about programming correctness from the beginning, then to teach how the machine worked once we had a good foundation.
By second year we were just expected to know C, and coming from assembler, things like pointers were something I never had a problem understanding.
Start high to get a good grounding, go low to understand how everything works, then return high to write code that has a good grounding in how it will be executed on the machine.
If that's the only goal. The OP listed four.
I'll give a shit about #1 when someone invents a time machine and prevents Java from existing.
#2 is a great reason to learn Latin but the "influence of C" is identical to the influence of Fortran or Pascal or Algol and of equivalent magnitude to (what evolved into) Common Lisp or Scheme.
And #4 is true eventually but it only matters when the software is broken.
To be honest, of the four listed reasons I only found the last to be compelling. I did, however, think the third point was worth commenting on, as I don't think C is a good way to achieve that goal.
I partly agree with you. But it's a fact that you can't just start learning assembly if you have never learned C in the first place. So, young devs, please start learning C first.
This is false. I have learned the basics of assembly without knowing C first. My experience before learning assembly was Java, Python and some Objective-C (without understanding pointers). If your point is that you need to learn programming first, then maybe that is true, though I doubt it; I just can't refute it from my own experience.
Here was my roadmap:
(1) I first learned assembly by learning computer architecture from Tanenbaum's book -- Structured Computer Organization -- and the related course at the Vrije Universiteit Amsterdam. This taught me a toy architecture (Mic-1), but it gave me a rough idea of how assembly worked.
(2) Then, later, I took a course in binary and malware analysis, and all we were required to do was read x86 assembly in IDA and interact with it via GDB, using the command layout asm, which gives a view of the register values and upcoming instructions.
And once I have a debugger available and understand it well enough, I can learn quite well since I can make little predictions and check if they are right or wrong, and so on.
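That predict-and-check loop needs almost no setup. Something like the following, using standard GDB commands (break, run, layout asm, stepi, info registers, x), is enough to start testing guesses against reality; the file name and values are just placeholders:

```c
/* guess.c -- compile with: cc -g -O0 guess.c -o guess
 * Then, in GDB:
 *   gdb ./guess
 *   (gdb) break main        # stop at main
 *   (gdb) run
 *   (gdb) layout asm        # TUI view of the upcoming instructions
 *   (gdb) stepi             # execute one instruction; predict its effect first
 *   (gdb) info registers    # ...then check whether you were right
 *   (gdb) x/4xb &x          # inspect the bytes of x directly
 */
int main(void) {
    int x = 0x11223344;
    int y = x >> 8;          /* predict the register contents after this */
    return y & 0xff;         /* and the process exit status (51 here) */
}
```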
In the 1980s, most software on 8-bit computers was written in assembly: compilers for high-level languages like C or Pascal were available, but they 1) used up too much of the (at most) 64K available and 2) generated terrible code.
C is tightly coupled and co-evolved with the Von Neumann architecture. If you understand C you can better understand that architecture, but it's far from the only one. Beyond the world of single core CPUs, systems rarely hyper-specialize to the point that the C/Von Neumann system has (focusing all energies on ALU throughput). And the larger (and more distributed) systems we build, the less they resemble Von Neumann machines.
So while it's realistic to embrace C for many tasks, it's wrong to convince yourself that "the rest follows" from C.
The biggest concrete types of Harvard architecture devices I know of are some older small microcontroller architectures (e.g. the PIC10 and PIC12 families) and purpose-built DSPs (e.g. Analog Devices SHARC and TI C6X families). I'm pretty sure that some GPU shaders are also Harvard architecture, but I've heard that mainstream vendors have moved to a von Neumann model.
I second this, I want to know some recommendations too. Are there university courses out there that teach non-Von Neumann architectures? And what are the application areas? I could Google myself but my experience with HN is that if someone knows a good course, then it is a good course. While with Google, not so much.
What architectures are (1) not incredibly niche, (2) have practical hardware in the wild, and (3) are so divergent from Von Neumann as to dramatically change the principles applicable to C?
I was thinking bigger than a single CPU. For example: 2 computers (even if they're both Von Neumann machines) together comprise a non-Von Neumann system. Some single-device examples are pretty common, like GPUs and NICs.
But what I really had in mind are systems that communicate by passing messages. Distributed systems certainly have an "architecture", but it spans many machines; and communication occurs via messages (RDMA being an exception).
Even modern CPUs contain non-Von Neumann features like multiple cores, pipelines, and out-of-order execution, so the line gets blurry. To a large extent modern CPUs enable C-style programming with a lot of contrivances to hide the fact that they're not quite Von Neumann anymore. Dealing with the different architecture becomes the compiler's job.
"Thinking in C" hinges on the idea that the size of memory is several orders of magnitude larger than the number of cores, and that you can only modify these words one at a time.
Of course FPGAs aren't really CPUs to be programmed with software; they're another level of abstraction down. But they're pretty common, and hardware description languages are vastly different from C. The inherent massive parallelism of FPGAs and the resulting combinatorial-by-default (sequential only when explicitly declared) languages require a very different way of thinking.
I can still point to the exact point in my college career that took me from knowing how to program in Java or Python to understanding how things like garbage collection, threads, and other abstractions actually worked. It was an elective I took on graphics programming, and it was all in C and OpenGL.
Being without GC, I learned the trade offs. Obviously, being able to explicitly allocate and free memory was a big deal. But at the same time, I learned to appreciate the complexity and pitfalls of GC.
Threads are great in Java (as are their parallels in other languages): this is the work to do, put the results here, and everything is cleaned up. Meanwhile, just trying to get the results back from a forked process was CRAZY. Implementing and managing my own shared memory to pass messages back and forth...wtfbbq.
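For anyone who hasn't done that dance, the C version of "put the results here" for a forked process looks roughly like this. A minimal sketch using an anonymous shared mapping, with error handling mostly omitted and the workload purely illustrative:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Memory visible to both parent and child: the part a Java executor
     * hands you for free. */
    long *result = mmap(NULL, sizeof *result, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (result == MAP_FAILED) return 1;

    pid_t pid = fork();
    if (pid == 0) {                       /* child: do the work */
        long sum = 0;
        for (int i = 1; i <= 100; i++)
            sum += i;
        *result = sum;                    /* write into the shared page */
        _exit(0);
    }

    waitpid(pid, NULL, 0);                /* parent: wait, then read */
    printf("child computed %ld\n", *result);   /* 5050 */
    munmap(result, sizeof *result);
    return 0;
}
```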
"Think like a computer," not really. More like, "understand what all goes into a thread, an async call, or a garbage collector. Gain more understanding into the corner cases and exactly how much this thing is doing for you. So maybe setting something to null or pre-allocating objects to reduce heap work isn't that big a deal."
C taught me to be patient with higher level languages, to better understand what they were doing for me, to reason about how they may be doing these things wrong, and to think critically about how to coax these features into behaving the way I want them to.
Really any lower level language can teach these things. The point here is that C is a nearly universal language that can teach you to reason about these things.
I think learning C is a fine thing to do, when the alternative is to learn nothing at all, as knowing C is a quite useful skill. However, a more substantive question should be, "should you learn C next, or R?", "should you learn C next, or neural networks?", or "should you learn C next, or Bayesian statistics?" It may be that C is more urgent and important to learn than any of those, but if you find that people are not learning it, it is not necessarily the case that they don't see the usefulness of it. They may just have a finite amount of time, and learning Mandarin Chinese ranks higher for them, not least so they can better understand how natural language processing would work if you're involved with non-European languages.
Also, the blog throws in "and C++" in parentheses. C++ is another beast altogether -- it diverged from C a long time ago -- read Scott Meyers's "Effective Modern C++" if you think C and C++ bear any resemblance.
I feel like this article makes two strong arguments in favour of C:
1. You need to know C because C is popular, and
2. You need to know C because C is a lowish-level language.
I can't very well argue with the first point -- indeed, nearly everything fundamental is still written in C or the C-like subset of C++ -- but I take issue with the second.
I do think you should learn a lowish-level language, but there are good reasons to make that language something like Ada instead; something high-level enough to have concurrency primitives yet low-level enough to force you to consider the expected ranges of your integers and whether or not you want logical conjunctions to be short-circuiting. Something high-level enough to have RAII-managed generic types, yet low-level enough to let you manipulate the bit layout of your records. High-level enough to have exceptions and a good module system for programming in the large, yet low-level enough to have built-in fixed-point types and access to machine code operations.
Unless you target C specifically for its popularity, there are other options out there, filling the same gap.
> Ada instead; something high-level enough to have concurrency primitives yet low-level enough to force you to consider the expected ranges of your integers
In practice Ada programmers, because they actually have a proper method of expressing integer ranges, are going to be much more considerate of such things.
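A small illustration of the difference, assuming a typical target: C narrows and wraps silently, where an Ada range type would reject the value at compile time or raise Constraint_Error at run time. The names here are illustrative:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t day = 40;            /* nonsense for a day of the month; C accepts it silently */
    day += 250;                  /* unsigned arithmetic wraps modulo 256 */
    printf("day = %u\n", day);   /* prints 34 -- quietly wrong, no diagnostic */

    /* In Ada, `type Day is range 1 .. 31;` makes both the initial 40 and
     * the wrapped result errors (at compile time or via Constraint_Error). */
    return 0;
}
```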
Your argument is compelling and crystallized similar thoughts I’ve had. If I ran a CS department I’d follow your recommendations. Practically speaking, though, another argument against Ada (or whatever) is the drastically reduced set of reading options.
C++ and Rust address most of the issues presented in the article (with the exception of being able to read the source code of existing software).
My very controversial opinion is that C is obsolete for new programming projects. For example, everything that C can do, C++ can do and better/safer/faster. Rust is also getting there (if not there already). C is not Pareto optimal anymore.
C++ is really overrated, I wish less software was written in it. I'd wager software written in C++ is, on the whole, more buggy than software written in C. Some of the most successful software projects in the world are written in C and occupy the same niche as failing C++ competitors. Top 4 kernels are written in C: NT, Mach, Linux, BSD. C++ kernels like Haiku's kernel are either buggy footnotes or a pipe dream (Fuchsia). Git leads comfortably ahead of Mercurial and far ahead of the (numerous) C++ alternatives: can you even name one without looking it up?
Rust (and collectively all languages that claim to replace C, or that just have a significant cult following insisting that everything be rewritten in them) are fine languages on their own merits, but I can't see them replacing C any time soon. The main problem with these is that they were designed by people who don't like C. People who do like C would probably love a "safer" alternative to it and similar to it, but languages like Rust just aren't appealing for the same use-cases. The only language that kind of works for this is Go, which was written by people who, on the whole, do like C.
> Some of the most successful software projects in the world are written in C and occupy the same niche as failing C++ competitors.
You can also find examples for the other way around, too: Web browsers are all written in (mostly) C++ for example.
Games are another example. John Carmack also admitted that switching from C to C++ was the right thing to do for Doom 3:
"Today, I do firmly believe that C++ is the right language for large, multi-developer projects with critical performance requirements, and Tech 5 is a lot better off for the Doom 3 experience." http://fabiensanglard.net/doom3/interviews.php
> Top 4 kernels are written in C: NT, Mach, Linux, BSD.
Parts of NT are written in C++ btw.
I have an issue with the idea of "teaching new practitioners X while telling them that X is not to be used seriously", for so many reasons.
1) It is doing a disservice to the students to tell them that you won't use much of what you are made to learn.
2) A nontrivial number of people will attempt to use it seriously anyway, sometimes with disastrous results.
3) It usually indicates teacher laziness and/or disinterest in finding alternative ways to teach.
4) Surprisingly often, when you are not supposed to use X, it is actually useless to learn X. (That is NOT the case in this situation -- there are reasons to learn C -- but the students can't know that unless told so.)
5) There are almost always better alternatives. I have never been unable to find a Y which fulfills the same criteria as X, yet in addition is also usable for serious things. It's just that sometimes you have to look a little harder for it.
In other words, I think it is a teacher's categorical responsibility to find a means to teach what they want in a way that is practically applicable by the students right away. And failing to do so should be taken as a strong hint that what they want to teach may not be the thing they should teach.
I wouldn't say this if I didn't firmly believe that you can teach most things in a way that grants the student immediate practical applications. And I don't say this in a political way -- of course everyone should have the liberty to teach whatever useless thing they want in whatever shitty manner they can come up with. I'm viewing it more as requirements on any teacher who wants to call themselves good, or tell themselves they are doing their students a service and improving mankind.
Maybe it depends on the project. Writing a web API in C almost certainly isn't Pareto-optimal. For a Linux/Raspberry Pi driver for your hobby project that needs to interact with other low-level Linux-based software, however, C is probably the most efficient fit.
Rust (and maybe Go) do seem like very good long-term replacements for C, but imho it's too early to say that. Until there's a "full stack" of software (OS to user-space/desktop), you still kinda need to know C if you want to be able to understand how your whole computer works.
I don't see Go as any sort of replacement for C. The fact that it's garbage-collected means that it's really not suitable for use in embedded or kernel programming.
AFAICT, Rust was created to be a safer C, and Go was created to be a faster Python; completely separate domains.
I'd recommend Ada over C for embedded hobby applications; mostly because it will reduce the time you spend debugging by detecting more errors up front and forcing you to be very clear with what you are trying to say. Less frustration and more secure (matters now with internet of things).
But also because it makes it easier to deal with asynchronous actions and has facilities for higher level programming should there be room for it.