I've fully updated the article with new benchmarks.
A reader pointed out that the GCC 16 Docker container I originally used was built with internal compiler assertions enabled, skewing the data and unfairly penalizing GCC.
I've re-measured everything on a proper release build (Fedora 44), and the compile times are ~50% faster across the board.
The article now reflects the accurate numbers, and I've added an appendix showing the exact cost of the debug assertions. I sincerely apologize for the oversight.
"I can recompile the entire thing from scratch in ~4.3s. That’s around ~900 TUs, including external dependencies, tests, and examples"
I've updated the article to say "translation unit" the first time "TU" is introduced. The data was also incorrect due to an oversight on my part, and it's now much more accurate and ~50% faster across the board.
> Language features like templates are not the issue – the Standard Library is.
What sins does the STL commit that make it slow, if templates themselves are not slow, and what kind of template code doesn't bloat compile times? In my experience, C++ libraries are usually an order of magnitude or more slower to compile than equivalent C ones, and I always chalked it up to the language.
libstdc++'s <print> is very heavy, reflection or not. AFAIK there is no inherent reason for it to be that heavy; fmtlib compiles faster.
<meta> is another question, it depends on string_view, vector, and possibly other parts. Maybe it's possible to make it leaner with more selective internal deps.
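For what it's worth, the header cost is easy to probe in isolation. A minimal sketch, assuming a reflection-capable GCC (the flags here are illustrative, not from the article):

    // probe.cpp -- time what a single heavy include costs by itself:
    //   time g++ -std=c++26 -fsyntax-only probe.cpp
    // Swap <print> for <format>, <meta>, or fmt/core.h to compare.
    #include <print>

    int main() { std::println("hi"); }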
I don't know the exact details, but I have heard (on C++ Weekly, I believe) that it offers some advantages when linking code compiled with different compiler versions. That said, I normally avoid it and use fmtlib to avoid the extra compile time, so it isn't clear to me that it's a win. Header-only libraries are great on small projects, but on large codebases with thousands of files, they really hit you.
It also bloats binary size if you statically link libc++, because of localization, regardless of whether you care about it. That isn't true of fmtlib, because it doesn't support localization. stringstream has the same problem, which is one of many reasons embedded has stuck with printf.
I am very worried by feature creep in libc++ and libstdc++ and the harm that this inflicts on the wider C++ ecosystem. Transitive inclusion of large parts of the STL, and entangling of STL with core language features are both extremely bad. This should IMO be topic #1 of the committee but is barely even noticed. The mantra "It's okay, modules will save us" is naive and will not work.
I was somewhat horrified to discover that the STL ended up with a special role in the language spec. (IIRC, one of the tie-ins is initializer lists.)
IMHO it's far wiser to leave the standard library as something that isn't needed by the core language, and where users can (at least in principle) provide their own alternative implementation without needing compiler hacks.
I.e., those details are inherent in the definition of "library" in the C/C++ world.
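For a concrete instance of that entanglement, braced-init-list deduction is specified directly in terms of a library type, so the core language cannot function without it:

    #include <initializer_list>

    // The language rule for auto + braced-init-list names a library type:
    // decltype(xs) is std::initializer_list<int>. Without the header (or
    // compiler magic making the type visible), this line does not compile.
    auto xs = {1, 2, 3};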
Which languages have a standard library that contains nothing requiring intimate integration with the language implementation itself? I'm not aware of any, even limited-standard-library C included.
> I was somewhat horrified to discover that the STL ended up with a special role in the language spec. (IIRC, one of the tie-ins is initializer lists.)
The language specification is already larger than several classical tomes of fiction. A reader could choose to tuck in with the C++ spec, or with War and Peace.
And given how much of the language's spec is "The behavior is undefined when combining these two features," it's not really a tome that is safely ignored.
At this point, I cannot recommend C++ for any new project unless some external factor, such as safety certification, forces my hand. And certification only "solves" the problem by adding yet more pages of things a developer must not do, even though the language syntactically supports them, compiles them, and generates garbage output.
As of 2026, C has eclipsed C++ in popularity on the TIOBE index. Anecdotally, roboticists I've chatted with have told me they prefer to write core functionality as C modules and then weld them together into high-level behavior with a terser scripting DSL, rather than trying to write the whole thing in C++ and hoping there's no undefined behavior hidden in the cracks that multiple layers of linters, sanitizers, and auto-certifiers have missed.
Yuck. I’ve already noticed compilation times increasing from C++17 to C++20, and this feature makes it much worse. I guess I’ll need to audit any reflection usage in third-party dependencies.
Please check the article again -- I made a mistake in the original measurements (the Docker image I used had GCC compiled in debug mode) and now the (correct) times are ~50% faster across the board.
Not free, still need to audit, but much better than before. Sorry.
Update: I had originally used a Docker image with a version of GCC built with assertions enabled, skewing the results. I am sorry for that.
This is a pretty flippant response to a rather insightful point by someone who isn't exactly a newbie to the language. They understand very well the implications of move being nondestructive and the point they're making stands nevertheless.
In 30 years of using C++ this is the first time I've ever come across "translation unit" being abbreviated to TU and it took a bit of effort to figure out what the author was trying to say. Not sure why they felt the need to abbreviate this when they explain PCH for instance, which is a far more commonly used term.
Thought I'd add the context here to help anyone else out.
It's super common terminology for people around those spaces; they probably didn't even think about whether they should abbreviate it.
Including <print> is still very heavy, but not as bad as before (from ~840 ms to ~508 ms).
std::type_info? std::is_trivially_...?
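Those indeed can't be written in portable C++; the library trait has to bottom out in a compiler intrinsic. A sketch of what implementations do (the builtin name below is GCC/Clang's; an assumption, not something from this thread):

    #include <type_traits>

    // No portable C++ can inspect the triviality of an arbitrary T,
    // so a standard-library-style trait defers to a compiler builtin.
    template <class T>
    struct my_is_trivially_copyable
        : std::bool_constant<__is_trivially_copyable(T)> {};

    static_assert(my_is_trivially_copyable<int>::value);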
I first created the module via:
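Presumably along these lines, with GCC 15+, which ships the std module source in libstdc++ (the exact invocation is an assumption):

    g++ -std=c++26 -fmodules -fsearch-include-path bits/std.cc -c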
And then benchmarked a file whose only "include" was import std; and nothing else.
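A sketch of such a probe (the actual test body isn't shown; assume one of the reflection examples below):

    // bench.cpp -- no #includes at all; the only dependency is the import.
    //   g++ -std=c++26 -fmodules bench.cpp -c
    import std;

    int main() {
        // reflection test body (basic struct / AoS -> SoA) elided
    }

These are the results: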
- Basic struct reflection: 352.8 ms
- Barry's AoS -> SoA example: 1.077 s
Compare that with PCH:
- Basic struct reflection: 208.7 ms
- Barry's AoS -> SoA example: 1.261 s
So PCH actually wins for just <meta>, and modules are not that much better than PCH for the larger example. Very disappointing.
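For reference, a GCC precompiled-header baseline of that kind is typically set up like this (a sketch; the commenter's exact headers and flags are assumptions):

    // pch.hpp -- precompile the heavy headers once:
    //   g++ -std=c++26 -x c++-header pch.hpp     (emits pch.hpp.gch)
    // then compile each benchmarked TU against it:
    //   g++ -std=c++26 -include pch.hpp bench.cpp -c
    #include <meta>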
Modules are actually not that bad with a proper version of GCC. The checking assertions absolutely crippled C++23 module performance in the original run:
Modules (Basic 1 type): from 352.8 ms to 279.5 ms (-73.3 ms)
Modules (AoS Original): from 1,077.0 ms to 605.7 ms (-471.3 ms, ~43% faster)
Please check the updated article for the new data.
This talk just points out that unique_ptr is not one of them, due to the side effect of C++ move semantics being non-destructive.
People who do not understand that should honestly stop using it as an argument in favor of "there are no zero-cost abstractions".
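Concretely, the non-destructive-move cost being referred to:

    #include <memory>
    #include <utility>

    void sink(std::unique_ptr<int> p);

    void demo() {
        auto p = std::make_unique<int>(42);
        // A C++ move is non-destructive: the move constructor must null
        // out p, and p's destructor still runs (and tests the pointer)
        // at end of scope. A destructive move could elide both. And since
        // unique_ptr has a non-trivial destructor, the Itanium ABI passes
        // it in memory rather than in a register, unlike a raw pointer.
        sink(std::move(p));
    }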