The performance observation is real but the two approaches are not equivalent, and the article doesn't mention what you're actually trading away, which is the part that matters.
The C++11 thread-safety guarantee on static initialization is explicitly scoped to block-local statics. That's not an implementation detail; that's the guarantee.
The __cxa_guard_acquire/release machinery in the assembly is the standard fulfilling that contract. Move to a private static data member and you're outside that guarantee entirely. You've quietly handed that responsibility back to yourself.
Then there's the static initialization order fiasco, which is the whole reason the Meyers singleton with a local static became canonical. A block-local static initializes on first use: lazily, deterministically, thread-safely. A static data member initializes at startup, in an order that is undefined across translation units. If anything in a different TU touches Instance() during its own static initialization, you're in UB territory. The article doesn't mention this.
Real-world singleton designs also need deferred/configuration-driven initialization, optional instantiation, state recycling, and controlled teardown. A block-local static keeps those doors open. A static data member initializes unconditionally at startup: you've lost lazy-init, you've lost the option to not initialize at all, and configuration-based instantiation becomes awkward by design.
Honestly, if you're bottlenecking on singleton access, that's design smell worth addressing, not the guard variable.
> Honestly, if you're bottlenecking on singleton access, that's design smell worth addressing, not the guard variable.
There's a large group of engineers who are totally unaware of Amdahl's law, and they are consequently obsessed with the performance implications of what are usually the least important parts of the codebase.
I learned that being in the opposite group became (or maybe always has been) somewhat unpopular, because it breaks many of the myths that we have been taught for years, and on top of which many people have built their careers. This article may or may not be an example of that. I am not reading too much into it, but profiling and identifying the actual bottlenecks seems like a scarce skill nowadays.
You essentially leveled up past a point that a surprising number of people get stuck on.
I feel like the mindset you are describing is kind of an intermediate, senior level. Sadly, a lot of programmers can get stuck there for their whole career. Even worse when they get promoted to staff/principal level and start spreading dogma.
I 100 percent agree. If you can't show me a real world performance difference you are just spinning your wheels and wasting time.
On the flip side, it’s easy to get a bit stuck down the road by the mere fact that you have a singleton. Maybe you have amazing performance and very carefully managed safety, but you still have a single object that is inherently shared by all users in the same process, and it’s very very easy to end up regretting the semantic results. Been there, done that.
agreed. Strong emphasis on "profiling and identifying the actual bottleneck". Every benchmark will show a nested stack of performance offenders, but a solid interpretation requires a much deeper understanding of systems in general. My biggest aha moment, years ago, was when I realized that removing the function I was trying to optimize would still produce a benchmark output showing top offenders. Without going into too many details, that minor perspective shift ended up paying dividends, as it helped me rebuild my perspective on what benchmarks actually tell us.
The fact that he calls the generated code good/bad without discussing the semantic differences suggests that the original author doesn't really know what he is talking about. That seems problematic to me, as he is selling a C++ online course.
I haven't written C++ in a long time, but isn't the issue here that the initialization order of globals in different translation units is unspecified? Lazy initialization avoids that problem at very modest cost.
I liked using singletons back in the day, but now I simply make a struct with static members which serves the same purpose with less verbose code. Initialization order doesn't matter if you add one explicit (and also static) init function, or a lazy initialization check.
> A bit like how java people insisted on making naive getFoo() and setFoo() to pretend that was different from making foo public
But it's absolutely different and sometimes it really matters.
I primarily work with C# which has the "property" member type which is essentially a first-class language feature for having a get and set method for a field on a type. What's nice about C# properties is that you don't have to manually create the backing field and implement the logic to get/set it, but you still have the option to do it at a later time if you want.
When you compile C# code (I expect Java is essentially the same) which accesses the member of another class, the generated IL/bytecode is different depending on whether you're accessing a field, a property, or a method.
This means that if you later find it would be useful to intercept gets or updates to a field and add some additional logic for some reason (e.g. you want to now do lazy initialization), if you naively change the field to a method/property (even with the same name), existing code compiled against your original class will now fail at runtime with something like a "member not found" exception. Consumers of your library will be forced to recompile their code against your latest version for things to work again.
By having getters and setters, you have the option of changing things without breaking existing consumers of your code. For certain libraries or platforms, this is the practical difference between being stuck with certain (now undesirable) behaviour forever or trivially being able to change it.
Focusing on micro-"optimizations" like this one does absolutely nothing for performance (how many times are you actually calling Instance() per frame?) and skips over the absolutely mandatory PROFILE BEFORE YOU OPTIMIZE rule.
If a coworker asked me to review this CL, my comment would be "Why are you wasting both my time and yours?"
> If a coworker asked me to review this CL, my comment would be "Why are you wasting both my time and yours?"
If a coworker submitted a patch to existing code, I'd be right there with you. If they submitted new code, and it just so happened to be using this more optimal strategy, I wouldn't blink twice before accepting it.
Among C++ programmers, "best performance" isn't about actual optimization; it's a brag with about the same semantic value it has on Broadway, and so it's not measurable. The woman who wins a Tony for "Best Performance in a Musical" isn't measurably faster or smaller; she's just "best" according to some panel. Did audiences like it? Did the musical make money? Doesn't matter, she won a "Best Performance" Tony.
Only those that have C++ tattoos, with their whole career bound to mastering a single language and selling that knowledge.
The rest of us use it as a tool required to integrate with existing products, language runtimes, and SDKs written in C++, which most likely won't get replaced anytime soon.
Getting into the weeds of what a compiler does with your code is fun.
People have been doing micro optimisations since computers became a thing, you benefit from them every day without realising - and seemingly not appreciating - it.
This is about constant vs dynamic initialization, not trivial vs nontrivial default construction. To be fair, the article doesn't claim this, but that's the comparison being made.
The standard allows compilers to optimize dynamic initialization away, but AFAIK there are ABI implications to doing that, so compilers tend not to.
If you absolutely want to guarantee that a global is constant initialized, use "constinit" on the variable declarations too. It can also have some positive codegen effects on declarations of thread_locals.
Honestly the guard overhead is a non-issue in practice: it's one atomic check after first init. The real problem with the static data member approach is initialization order across translation units. If singleton A touches singleton B during startup, you get fun segfaults that only show up in release builds with a different link order.
I ended up using std::call_once for those cases. More boilerplate but at least you're not debugging init order at 2am.
"it's one atomic check after first init" And that's slow :P [0] If you don't need to access it from multiple threads, cutting that out can mean a huge difference in a hot path.
Came here to say the same thing. Static is OK as long as the object has no dependencies, but as soon as it does, you're asking for trouble. Second the call_once approach. Another approach is an explicit initialization-order system that ensures dependencies are set up in the right order, but that's more complex and only works for binaries you control.
Many times taking a few extra ms, or God forbid 1s, is more than acceptable when there are humans in the loop.
One of the reasons I hate constructors and destructors.
Explicit init()/deinit() functions are much better.
In my view, the article is not about optimizing, but about understanding how things work under the hood. Which is interesting for some.
https://compiler-explorer.com/z/Tsbz7nd44
[0] https://stackoverflow.com/questions/51846894/what-is-the-per...