... Except the fact that printf/scanf use variadics, and the only reason it stopped being a constant source of crashes is that compilers started recognizing it and validating format strings / complaining when you pass a non-literal string as a format.
<format> is instead 100% typesafe. If you pass the wrong stuff it won't compile, and as {fmt} shows you can even validate formats at compile time using just `constexpr` and no compiler support.
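For instance, a minimal sketch (assuming a C++20-or-later standard library with <format>): a mismatched argument fails at compile time instead of crashing at runtime.

```
#include <format>
#include <string>

int main() {
    std::string who = "world";
    // Argument types are checked against the format string at compile time.
    std::string s = std::format("Hello, {}! {} + {} = {}", who, 1, 2, 1 + 2);

    // This would be a compile-time error, not a runtime crash:
    // std::format("{:d}", who); // 'd' is not a valid spec for a std::string argument
}
```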
I mean, the function for straight printing is puts; I don't know why people keep using the much more complicated printf in cases where no formatting is involved.
Edit: OK, I guess puts includes a newline, so you'd need to use fputs if you don't want that (although this example includes one). Still, both of those are much less complicated than printf!
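A quick sketch of the three options for plain strings:

```
#include <cstdio>

int main() {
    puts("Hello, world");            // appends a newline for you
    fputs("Hello, world\n", stdout); // writes exactly what you pass, nothing added
    printf("Hello, world\n");        // also works, but scans the string for % specifiers
}
```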
It is a very natural feature, especially when you are writing mathematical code, e.g. implementing different types of numbers: automatic differentiation, interval arithmetic, big ints, etc.
Overloading gives user-defined types the expressiveness of built-in types. Like all features, if they are used badly (e.g. when + is overloaded to an operation which can hardly be interpreted as addition) it makes things worse. But you can write bad code in any language, using any methodology.
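As an illustrative sketch (the `Dual` type here is invented for the example): forward-mode automatic differentiation reads like ordinary arithmetic once + and * are overloaded.

```
// f(x) and f'(x) carried together; the overloaded operators keep the math readable.
struct Dual {
    double value;       // f(x)
    double derivative;  // f'(x)
};

Dual operator+(Dual a, Dual b) { return {a.value + b.value, a.derivative + b.derivative}; }

Dual operator*(Dual a, Dual b) {
    // product rule: (fg)' = f'g + fg'
    return {a.value * b.value, a.derivative * b.value + a.value * b.derivative};
}

// f(x) = x*x + x, evaluated together with its derivative in one pass:
Dual f(Dual x) { return x * x + x; }

int main() {
    Dual x{3.0, 1.0};  // seed derivative dx/dx = 1
    Dual y = f(x);     // y.value == 12, y.derivative == 7 (2x + 1 at x = 3)
    return static_cast<int>(y.derivative);
}
```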
Operator overloading is a useful feature that saves a bunch of time and makes code way more readable.
You can quibble whether operator<<() is a good idea on streams and perhaps C++ takes the concept too far with operator,() but the basic idea makes a lot of sense.
string("hello ") + string("world");
complexNumber2 * complexNumber2;
for (int i : std::views::iota(0, 6)
| std::views::filter(even)
| std::views::transform(square))
someSmartPtr->methodOnWrapperClass();
“Fried shrimp should be removed from the all you can eat chinese buffay because i cant help myself from eating at least 20 of them in a single sitting and now i have stomach cramp”
How far are you prepared to take this stance, exactly? C has operators that are generic over both integral and floating point types. Was that a mistake? Did OCaml do it better?
For my part, I've been persuaded that generic operators like that are a net win for math-heavy code, especially vector and matrix math. Sure, C++ goes too far, but there are middle grounds that don't.
Operator overloading is critical for building ergonomic frameworks.
The modern web is built on overloading the . operator (e.g. ORMs like Rails and Django). We will never see a Tier-1 ORM in Golang simply because it lacks it.
I'll byte: complex-number and matrix support is bad in languages without operator overloading. Why should only the primitive types of the language be privileged with proper math notation?
Not having operator overloading is anti-human. To think so highly of yourself that there is no other thing that can properly be the subject of the field operators (or other basic operators) is the height of hubris. The compiler typically must handle the operators on certain types due to the compilation target's semantics, but in reality, there's nothing special about these 'built-ins'.
Operators like +, -, /, *, etc have meanings independent of integers and floats and to not allow these meanings to be expressed is sad.
I've heard many programmers express this sentiment and what they actually are attempting to argue is that having overloads of these operators that do not respect the corresponding group, ring, or field laws is confusing. This I agree with. Operators should mainly follow the proper abstract semantics.
BS. I thought that Java already demonstrated to the world how dumb it is to disallow operator overloading altogether.
Allowing ANY operator to be overloaded was dumb, like C++ did, where you could do batshit crazy stuff like overloading unary & (addressof) or the comma operator (!), or stuff like the assignment operator (that actually opens a parenthesis about how copy/move semantics in C++ are a total hack that completely goes OT compared to this).
Sensible operator overloading makes a lot of sense, especially when combined with traits that clearly define what operations are supported and disallow arbitrary code to define new operators on existing types. Rust does precisely that, and IMHO it works great and provides a much nicer experience than Java's verbose mess of method chaining.
I'm on your side, but only after many years of being on the other side. I used to think they were "graceful" and "minimalist", and refused to acknowledge they can be the source of many surprises.
The Google C++ style guide has a very nice overview. There are only two pros listed, and a large number of cons. And this document is old by Internet (dog) years -- at least 10 years.
Ref: https://google.github.io/styleguide/cppguide.html#Operator_O...
Consider the humble + operator. In most compiled languages -- even those that don't support operator overloading -- it is in fact overloaded. int + int. long + long. float + float. double + double. pointer + int. Would every language be better with it?
Built in operators don't always map 1-1 to CPU instructions so don't appeal to that authority. There are still plenty of CPUs -- old and new -- without multiplication, division, or floating point support.
I disagree; it's heavily abused, but very useful for types where it's obvious what the operation is (inherently mathematical types like vectors and matrices). I wrote a macro library for C that does vector/matrix math in prefix notation with _Generic overloads, and it's still too clumsy to get used to.
I've never understood why people complain so much about `std::cout << "string"`. If the problem is that this operator is used for bit shifting, simply stop thinking of it that way (genius, I know); do you think of addition when you see `string + "concatenate"`? Operator overloading is awesome if, like everything in programming, it is used correctly: constructing paths with / is sweet, and I find << with streams visually appealing and expressive. It's feeding data to stdout/a file/etc., and the same goes for `std::cin >> var`: data goes from stdin to the variable.
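For example (the path components here are made up for the sketch):

```
#include <filesystem>
#include <iostream>

int main() {
    namespace fs = std::filesystem;
    // operator/ joins path components and picks the platform separator:
    fs::path cfg = fs::path("/etc") / "myapp" / "settings.conf";
    // operator<< feeds the path to the stream (quoted, as the library defines it):
    std::cout << cfg << '\n';
}
```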
In C++, with its templates, there are only a few alternatives:
1. Operator overloading
2. Operator desugaring (e.g. __subscript__(), which substitutes the intrinsic function for basic types, but can also be defined for user defined types)
3. Writing templates with weird adaptors for primitive types.
Given that its design goal was to embed C, there were already operators that worked with various and mixed types. Adding (+.), etc., would have been unacceptable to the users. So I think that, in general, it was good for this language; but, unfortunately, iostream made people think you should overload the behavioral expectations, too.
Operator overloading has been a feature of many languages dating back to the very concept of using operator notation in programming. I know of no language that has the * operator dedicated solely to a single type. Typically you have at least signed and unsigned overloads, as well as various bit sizes (including larger than the machine word size) and floating-point representations. Extending that to vectorial operations, arbitrary precision, and others only seems to make sense and to be going with the flow...
Most programming languages use infix notation for mathematical operations but Polish (prefix) notation for function calls. This creates an inconsistency. In languages like LISP that use Polish notation throughout, the inconsistency does not exist.
One could argue that if a programming language has this inconsistency, then one should at least try to be consistent with one's notation, i.e. use infix notation for mathematical operations (operator overloading).
Agree. It looks fun when I am writing the code and re-inventing abstract algebra and category theory types for classifying cat pictures. However, at some point I have to read someone else's code, or even my own code weeks later, and then I start cursing operator overloading.
Operator overloading is one of the cornerstones of generic programming in C++. And perhaps it is a failure of imagination on my part, but it’s difficult to think of a more elegant approach.
If you just need nice printing: fmtlib is a really nice C++23-style implementation without needing C++23 compiler support. Highly recommend it. It's simple. It's fast.
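A minimal sketch, assuming the {fmt} library is installed and on your include path:

```
#include <fmt/core.h>

int main() {
    // fmt::print/fmt::format mirror the std::print/std::format interface,
    // but don't need a C++23 compiler.
    fmt::print("Hello, {}! pi is roughly {:.2f}\n", "world", 3.14159);
}
```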
I think Barry under-estimates how long it will be before C++ programmers actually get the equivalent of #[derive(Debug)] and as a result the impact on ergonomics. But of course I might be wrong.
This works on my RHEL9-compatible system for a .c file (using gcc). The type specifier for main is implicitly `int`. You get some warnings about implicit types and implicit declarations, but you get a binary that, when executed, writes "Hello, world".
Are there now in C++, after all these years, f-strings like Python has, or at least something coming close? If not, I'll keep being in my disappointed state about C++.
Slightly off topic, but I recently learned that implementing the opposite of what you've asked for, bit-shifting to stdout in Python, is only a few lines of code (a class that overloads `__lshift__` around `sys.stdout`).
My guess is you never had to parachute into a project using operator overloading in strange, inconsistent, and undocumented ways with no original maintainers to show you the ropes
I actually like operator overloading, but overloading the shift operators for I/O was still a mistake IMO. It's a mistake even if you ignore that it's a theoretical misuse (I/O and binary shifting have nothing to do with each other semantically). The operator precedence of the binary shift operators is just wrong for I/O.
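A small example of the precedence point (a sketch, nothing project-specific):

```
#include <iostream>

int main() {
    int a = 1, b = 2;
    // << binds tighter than ==, so the next line would parse as (std::cout << a) == b,
    // which compares a stream to an int and does not compile:
    // std::cout << a == b;
    std::cout << (a == b) << '\n'; // the parentheses are required
}
```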
First, includes either need to be wrapped in angle brackets (for files included from the include path passed to the compiler) or quotes (for paths relative to the current file).
Second, the whole standard library would be huge to pull in, so it is split into many headers, even for symbols in the top level of the std namespace.
C++ has namespacing which makes sense because this language has an enormous amount of available 3rd party libraries and without name spacing you can't help stepping on each others toes.
There are two ways you might want to have this work anyway despite namespacing. One option would be that you just import the namespace and get all the stuff in it; this is popular in Java, for example. However, in C++ this is a bit fraught, because while you can import a namespace, you actually get everything from that namespace and all containing namespaces.
Because the C++ standard library defines a huge variety of symbols, if you do this you get almost all of those symbols. Most words you might think of are in fact standard library symbols in C++, std::end, std::move, std::array, and so on. So if you imported all these symbols into your namespace it's easy to accidentally trip yourself, thus it's usual not to do so at all.
Another option would be to have some magic that introduces certain commonly used features, Rust's preludes do this, both generally (having namespaces for the explicit purpose of exporting names you'd often want together) and specifically ("the" standard library prelude for your Edition is automatically injected into all your Rust software by default). C++ could in principle do something like this but it does not.
The std:: prefix is the namespace; the function itself comes from the <print> standard header. It's not just `print` because, while you might want it in the global namespace, other people do not. For example, my code isn't a CLI and doesn't need to print to the CLI, but perhaps I want to print to a printer or something else and have my own print function.
Leave un-namespaced identifiers to those that are declared in the current file and namespace everything else. If you really want, you’re free to add “using namespace std” or otherwise alias the namespace, but keeping standard library functions out of the global namespace as a default is a good thing! (In any language, not just C++)
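For instance, a minimal sketch of the usual options (assuming a C++23 standard library with <print>):

```
#include <print>

using std::println;        // pull in a single name...
// using namespace std;    // ...or everything in std (usually discouraged)
namespace stdlib = std;    // ...or give the namespace a shorter alias

int main() {
    println("hello from the using-declaration");
    stdlib::println("hello from the namespace alias");
}
```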
C++ has been my main language for a very long time, but I've been a grumpy skeptic of C++ since around C++14 due to the language spec's total complexity. So I've mostly stuck with C++11.
But now that C++ has modules, concepts, etc., I'm starting to wonder if C++23 is worthwhile for new projects. I.e., the language-spec complexity is still there, but the new features might tip the balance.
I'd been thinking to walk away from C++ in favor of Rust for new projects. But now I might give C++23 a chance to prove itself.
C++ deserves its rep for complexity. But, it comes from a promise to avoid a version upgrade debacle in the style of Python 3. C++ promises that you will forever be able to interleave new-style code right into the middle of ancient, battle-tested old-style code.
To do that, it can only add features and never take them away. Instead, it adds features that deprecate the practice of PITA patterns that were common, necessary and difficult.
Like, SFINAE was necessary all over the place to make libraries "just work" the way users would expect. But it is a PITA to write and a PITA to read. Now, constexpr if and auto return types can usually collapse all that scattered, implicit, templated pattern matching down to a few if statements. Adding those features technically made the standard more complicated. But it made new code moving forward much simpler to understand.
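A before/after sketch of that point (the names are made up for the example):

```
#include <string>
#include <type_traits>

// Before: two SFINAE-gated overloads scattered around the library.
template <class T, std::enable_if_t<std::is_arithmetic_v<T>, int> = 0>
std::string describe(T) { return "a number"; }

template <class T, std::enable_if_t<!std::is_arithmetic_v<T>, int> = 0>
std::string describe(T) { return "something else"; }

// After (C++17): one template, one readable compile-time branch.
template <class T>
std::string describe2(T) {
    if constexpr (std::is_arithmetic_v<T>)
        return "a number";
    else
        return "something else";
}
```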
Similarly: before variadic templates, parameter packs and fold expressions, you had the hell of recursive templates. Auto lambdas make a lot of one-off templates blend right into the middle of regular code. Deduction guides let library writers set you up to write, say, `std::pair(1, 2.0)` instead of `std::pair<int, double>(1, 2.0)`.
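Roughly what those features buy you, as a sketch:

```
#include <iostream>
#include <utility>

// A C++17 fold expression replaces the old recursive-template "sum" dance:
template <class... Ts>
auto sum(Ts... xs) { return (xs + ... + 0); }

int main() {
    std::cout << sum(1, 2, 3, 4) << '\n';  // 10
    std::pair p(1, 2.0);                   // deduction guides: no <int, double> spelled out
    std::cout << p.second << '\n';
}
```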
> But, it comes from a promise to avoid a version upgrade debacle in the style of Python 3.
There is a very wide middle ground between C++'s "Your horrific unsafe code from the 80s still compiles" and Python's "We changed the integer values common operations on strings return at runtime with absolutely no way to statically tell how to migrate the code".
In Dart, we moved to a sound static type system in 2.0, moved to non-nullable types in 2.13 (sound and defaulting to non-nullable!), and removed support for the pre-null safety type system in 3.0. We brought almost the entire ecosystem with us.
Granted, our userbase is much smaller than C++'s and the average age of a given Dart codebase is much younger.
But you can deprecate and remove old features without causing a decade of misery like Python did. You just need good language support for knowing which version of the language a given file is targeting, good static typing support, and good automated migration tooling. None of those is rocket science.
> To do that, it can only add features and never take them away. Instead, it adds features that deprecate the practice of PITA patterns that were common, necessary and difficult.
The result being that programmers have to learn every single one of those ways of doing things in order to read your coworkers' code. Give me Python 3 breaking changes any day.
Most serious C++ developers will tell you that C++14 was basically a bug fix for some oversight in the C++11 spec. You should probably use it if you can if that’s the standard you’re happy with.
C++ 17 is the sweet spot for me. It has most of C++ 11/14, with many new features I use a ton: constexpr, <charconv>, <string_view>, [[maybe_unused]], new SFINAE feature (if constexpr), and more. (And this was considered a small release!?)
I guess I’m just right at home and therefore a bit reluctant to jump into C++20. That and the constantly changing “””correct way of doing things.”””
Really the cognitive load of modern Rust is no less than C++$RECENT in my experience. Both require a ton of prima-facie concepts before you can productively read code from other developers. Of en vogue languages, Zig is the only one that seems to view keeping the "metaphor flood" under control as a design goal, we'll see how things evolve.
But really, and I say this from the perspective of someone in the embedded world who still does most of his serious work in C and cares about code generation and linkage: I think the whole concept of these Extremely Heavy Systems Programming Languages is not long for this world. In modern systems, managed runtimes a-la Go/Swift/JVM/.NET and glue via crufty type-light environments like Python and Javascript are absolutely where the world has ended up.
And I increasingly get the feeling that those of us left arguing over which monstrosity to use in our shrinking domain are... kinda becoming the joke.
> Really the cognitive load of modern Rust is no less than C++$RECENT in my experience. Both require a ton of prima-facie concepts before you can productively read code from other developers.
Eh. I don't know if I agree. I've worked on a few large C++ codebases, and the cognitive load between Rust and C++ is incomparable.
The ownership/borrowing stuff is complexity you deal with implicitly in other systems languages; here it's just more explicit and semi-automated most of the time by the compiler.
In C++ the terse thing is never the correct thing. If I'm using metaphors: a Rust sentence, in C++ usually has to be expressed through one or more paragraphs. The type system is so loose you have to do "mental" expansions half the time (or run the build and pray for no errors, or at the very least that the error is somewhat comprehensible[1]).
There's some low-level stuff that can be ugly (for various definitions of ugly), but that's every language. The low-level bits of async are a bit weird, but once the concepts "click" it becomes fairly intuitive.
At least the ugly parts are cordoned behind library code for the most part, and rarely leak out.
I guess it could just boil down to familiarity, but it took me much less time to familiarise myself with Rust than it took me to familiarise myself with C++. We're talking months vs years to consider myself comfortable/adept at it. Although, maybe just some C++ or general programming wisdom transferred over?
[1]: This happens in Rust too, I must admit. But it's usually an odd situation that I've encountered once or twice with some very exotic types. In C++ it's the norm, and usually also reported with bizarre provenance
IMO the cognitive load of Rust is different in practice just from basic nuts-and-bolts things like having an ecosystem of libraries that culturally emphasizes things like portability and heavy testing, a standard package manager, and an existent module system (C++26, fingers crossed). I dislike Cargo, but it's incredibly effective in the sense that any Rust programmer can pick up another project and almost all of the tools instantly work with no change. I mean, Rust is frankly most popular in the same space Go/Swift et cetera are: services and application-layer programming, where those conveniences aren't taken for granted. I see it used way more for web services and desktop/application services/command line/middleware libraries than I do for, like, device drivers or embedded kernels. Those are limited enough in number anyway.
Really, the ergonomics of both the language features and the standard tooling meant it was always going to appeal to people and go places C++ would not, even if in theory they are both "systems languages" with large surface areas, and they overlap at this extreme end (kernels/embedded/firmware/etc.) of the spectrum that little else fits into.
I don't write embedded software, but I see Rust breaking into many domains outside of systems and winning developer mind share everywhere. Seemingly fulfilling its promise of being able to target low-level and higher-level software domains (something Chris Lattner wanted for Swift, but it hasn't yet materialised on the low-level side, we'll see).
Actually, if you look back at 8- and 16-bit home computers, with C, C++, Object Pascal, BASIC, and Modula-2 full of inline Assembly, in many cases that being the biggest code surface, it is quite similar.
Nowadays C and C++ took the role of that inline Assembly, with higher level systems languages taking over the role of the former.
I think the future of these languages is largely as a target for code generation and transformation. Their legacy, tooling-unfriendly designs are what's slowing this transition.
>I'd been thinking to walk away from C++ in favor of Rust for new projects. But now I might give C++23 a chance to prove itself.
Modules were a great step forward for C++, but one of the features I enjoy the most about Rust is that adding a library just works. Especially for single person projects, which are on the smaller side, it feels absolutely great having to do near zero build management.
I've always been aware of C++ (obviously!) but it seemed impenetrable to me coming from my experience with C# and JavaScript. So I was really pleasantly surprised when I tried it out a while ago using the C++20 standard: it felt entirely usable! But then I tried Rust, which felt like that plus a pile of even better stuff. It's been difficult to find a reason to go back since but I'm glad to see even more progress is being made.
There are a zillion reasons that C++ is still widely used (existing software, brand new software that relies heavily on existing libraries / game engines), so it's really nice to have lots of helpful features being added to the language and the standard library.
I'm ambivalent about Rust, but its best feature compared to C++ is a universal package manager and build system. I like vcpkg well enough, but it's not Cargo, it can't be.
The thing that really ticks me off about RUST is that it has compiler optimizations that can't be turned off. In C++, you can typically turn off all optimizations, including optimizing away unused variables, in most compilers. In RUST it's part of the language specification that you cannot have an unused variable, which kills the language for me, since I like having unused variables for debugging while I'm stepping through the code.
The tradeoff is C++ is an amazing language for building a type system and zero overhead abstractions, where Rust is extremely limiting in that area (orphan rule, constant evaluating const-generics, no implicit pedagogically equivalent type conversions, users don't define what move semantics mean, terrible variadic programming, macros can't interact with CTFE or generics, functions/lambdas cannot be const-generics). Some of that will probably improve over time, though, although metaprogramming in Rust advances slower than C++ since 2014 and it has to play catch up already.
I learned C++ in the late 90s and didn't touch it until recently and all the new stuff basically melted my brain. It feels like a different language now.
Indeed. I was working in/on C++ back in the late 80s and the 90s and I decided a few years ago to do a project with it.
I took the approach of learning it as a brand new language that I knew nothing about, and especially avoiding thinking about what I already knew about C++ and especially C. The result was a language that yes, is a pleasure to work in.
Of course some context could not be ignored: there are about six ways to do many things because of back compatibility. But that back compatibility is why so many people use it. I write in one style, and just know the others for reading other people's code.
As someone who learned JavaScript in the late 90s I feel the same way sometimes! If I'd gone away from the ecosystem and returned recently I think it would feel extremely alien.
Hopefully clang++ can catch up to the new standard faster, since clangd (the LSP) is used in so many IntelliSense tools that depend on the clang++ implementation. Even though C++23 will compile fine with the newest g++, clangd will still complain about all the new syntax, probably for a few years ahead; at the moment quite a few C++20 features are still unsupported by clangd (which depends on clang++).
Or gcc could have its own C/C++ LSP, which is not the case.
Hoping compilers will get their C++20 modules implementation working well enough that we'll get C++23 standard library modules soon.
As an outside observer, it seems like progress is happening in spurts, but it feels kinda slow.
Last time I checked (which was roughly a year ago), even a toy project with modules slowed down IntelliSense to the point of it being unusable and having to be disabled. Do you know if it's better now?
For people who are proficient in other languages but haven't touched C++ since CS036 in undergrad, some 15+ years ago, what would be the best way to learn what "modern C++", both the code and the dev stack, looks like?
I'm a little over 10 years out from writing C++ professionally and I found this cheat sheet [0] useful. Basically, if you have an inkling of the concept you're looking for, just search that cheat sheet to find the relevant new C++ thing. Specifically for me, we used Boost for smart pointers, which are now part of the stdlib, and threads are now part of the stdlib as well.

[0] https://github.com/AnthonyCalandra/modern-cpp-features
I don't really learn stuff in a structured way so this might not be helpful at all, but a youtube walk got me into watching CPPCon talks (https://www.youtube.com/@CppCon) and I found them generally useful for getting an idea of what's going on in C++ and what all the magic words to research are.
When a bunch of people talk about weird gibberish like SFINAE it becomes easy to figure out it's something you should search for on wikipedia (https://en.wikipedia.org/wiki/SFINAE). note: SFINAE is simply a way to make overloading work better by failing gracefully if an overload fails to compile.
There's a series of talks called Back to Basics that seems to have quite a few talks every year where they discuss C++ features in general like move semantics or how testing works, etc. There have also been talks from the creators of CMake or the guys working on the recently added ranges proposal, so it does cover tooling as well.
I also quite enjoy following Jason Turner's C++ weekly series (https://www.youtube.com/@cppweekly) which also has quite a few episodes that are dedicated to new C++ features or new ways of handling old problems made available by the new features. They're generally quite short, each episode on average is 12 minutes.
Just looking down the list of videos I see this is also kind of a direct response to your question, from only 8 months ago.
https://youtu.be/VpqwCDSfgz0 [ C++ Weekly - Ep 348 - A Modern C++ Quick Start Tutorial - 90 Topics in 20 Minutes ]
For experimenting:
https://gcc.godbolt.org/ is a tool called compiler explorer, which is a really good place to experiment with toy code.
It's a site where you can write some code and let it execute, to see the results, as well as see what ASM it compiles down to for various compilers.
That last feature really helped me figure out whether the compiler really does pick up an optimisation I'm trying out. (and it's how I got really impressed by how powerful constexpr is (that's one of the new keywords))
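For example, a toy you could paste into Compiler Explorer (with optimizations on) to watch constexpr folding happen:

```
// Everything here is usable at compile time, so the optimizer typically
// collapses main down to a single constant (fib(10) == 55) in the generated ASM.
constexpr int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

int main() { return fib(10); }
```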
For References:
Generally the https://en.cppreference.com site is a really well maintained wiki that has good explanations of every concept and standard library feature.
It sticks to the standard and has use examples and is heavily interlinked, as well as some concept pages to give an overview of certain topics like overloading, templates, etc. (they also have a SFINAE article https://en.cppreference.com/w/cpp/language/sfinae)
Deducing this seems like a drastic change to the language, not a minor incremental one. People will be doing CRTP with it without realizing or fully appreciating the consequences now.
When I was doing C++, one of my interview questions was an open-ended one: `std::cout << "Hello world!" << endl;` What exactly is this doing? Let's dive into how it works.
You touch on kind of a lot here, and it's pretty esoteric even to devs with 3-5 years of experience: functors, operator overloading, namespaces, passing by reference to make the << chaining work; there is a lot you really have to know that is non-obvious. You can even jump into inheritance, templates and such if you really want to dive deep.
I thought this was normal after doing C++ for ~11 years, but when I finally broke out of the Stockholm syndrome and started a job working in other languages, I found it absurd how complex it is, or, maybe a better way to put it, how much language-specific trivia you need to learn before you can even begin to unravel how a basic "hello world!" program really works.
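A stripped-down sketch of the chaining part (MiniStream is invented for the example): each operator<< returns a reference to the same object, so the next << applies to it again.

```
#include <iostream>
#include <string_view>

struct MiniStream {
    MiniStream& operator<<(std::string_view s) { std::cout << s; return *this; }
    MiniStream& operator<<(int v)              { std::cout << v; return *this; }
};

int main() {
    MiniStream out;
    out << "value: " << 42 << "\n";  // parsed as ((out << "value: ") << 42) << "\n"
}
```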
My favourite C++ question is asking what std::move does. It's amazing the knots people twist themselves into explaining how it supposedly works, when in reality it's just a type cast to an r-value so that the move constructor/assignment gets called instead of the copy one.
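Roughly (ignoring the exact standard wording), it amounts to something like this hand-rolled version:

```
#include <type_traits>

// No moving happens here: it only casts to an rvalue reference so that
// overload resolution later picks the move constructor/assignment.
template <class T>
constexpr std::remove_reference_t<T>&& my_move(T&& t) noexcept {
    return static_cast<std::remove_reference_t<T>&&>(t);
}
```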
If they do, there's no harm done. The new this inference stuff is just a brevity enhancement, yes?
You have to understand that it's a Sisyphean struggle to get people to use modern C++ features at all. You still see people passing around std::string instances by value and manually calling new and delete. They're ignorant of variants and think "final" is an advanced feature. I'm happy if they get as far as understanding even how to use CRTP.
There's a vast gulf in skill between the expert C++ practitioner who appreciates a blog post like the one linked and one who's been dragooned without training into writing some enterprise C++ thing. The latter relates to C++ as an inscrutable Lovecraftian god who might eat him slightly later if he makes the right cultist noises and does the right syntax dance.
There is yet another thing with deducing this: no more pointer-to-members:
```
class A {
int f(this A & self, float b);
};
```
The type of &A::f is `int (*)(A&, float)`, not `int (A::*)(float)`.
This is huge for template metaprogramming and function-type deduction, if you stick to it, because pointer-to-member types used to generate a lot of template instantiations to cover all the cases.
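A sketch of what that means in practice (needs a C++23 compiler):

```
#include <type_traits>

struct A {
    int f(this A& self, float b) { return static_cast<int>(b); } // explicit object parameter
    int g(float b) { return static_cast<int>(b); }               // ordinary member function
};

// &A::f is a plain function pointer, not a pointer-to-member:
static_assert(std::is_same_v<decltype(&A::f), int (*)(A&, float)>);
static_assert(std::is_same_v<decltype(&A::g), int (A::*)(float)>);

int main() {}
```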
I wouldn't call it harmless. The feature might just be brevity enhancement for CRTP (and I think it's incredibly well-designed for that), but the advertisement/education around it usually just mentions constness-agnosticism and code deduplication as the use cases, which are precisely the wrong cases for deducing this. CRTP was never the solution for those and that hasn't changed with the syntax -- because both of these have effects on code semantics, not just syntax. But I will bet most people using it will use it for those purposes, rather than as a briefer syntax for when they would've otherwise used CRTP.
It feels a lot like the push_back -> emplace_back thing, where the feature has important use cases, but plenty of people will use it merely because they're mistaught that it's the "modern way" to do things without being told it's a footgun that bypasses existing safety features in the language, and it becomes impossible to hold back the flood. And they get away with it the majority of the time, but every now and then they end up introducing subtle bugs that are painful to debug.
But hey it's the shiny modern hammer, so obviously you can't just let it sit there. People can't just see a modern hammer and let you watch it sit there until you actually need it. You have to use it (and people will tell you to use it during code reviews) or everyone will look down on you for being so old-fashioned and anti-"progress".
Probably that's also partly because they are being dumped into a large, sprawling codebase already full of C++98 idioms. Even if you point them to the "newer sections" that are more modern, they will fall back to what they are working with all the time.
std::expected and a monadic interface for std::optional are very welcome changes. I've ended up with funky utility types to accomplish much the same thing in a couple projects, so an official interface is definitely nifty.
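A small sketch of the monadic style (parse_int is a made-up helper; needs a C++23 standard library):

```
#include <charconv>
#include <expected>
#include <string>
#include <string_view>

std::expected<int, std::string> parse_int(std::string_view s) {
    int value{};
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{}) return std::unexpected(std::string("not a number"));
    return value;
}

int doubled_or_zero(std::string_view s) {
    // transform only runs on success; value_or supplies the fallback.
    return parse_int(s).transform([](int v) { return v * 2; }).value_or(0);
}

int main() { return doubled_or_zero("21"); }  // 42
```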
I remember reading that clang was finally shipping std as a module, albeit experimentally. So this ought to be an interesting couple of years for C++ -- though I suppose it remains to be seen to what degree the ecosystem will keep up with all these changes vs using headers/exceptions/the traditional ways of doing things.
Some compilers can keep format strings type-safe, but they are going above and beyond the standard to make it happen.
To be fair to C++, this is undefined behaviour in C until C23. Prior to that the () is interpreted as (...), varargs, and not (void).
So, some mutually beneficial cross pollination of ideas.
OK.
Gamedev and AI also benefit heavily from it.
Like many things in C++, it's another grenade, but when used appropriately it's pretty great.
In Scala, infix operators and methods are the same thing.
Or, it's never been clear to me why there should be special "operator" things.
Lunch, behind the cafeteria. Closed fists allowed, nothing below the belt.
OK, come to my glass labyrinth where you don't know which of a dozen images of me are the real instance.
You will soon appreciate obfuscation techniques.
https://brevzin.github.io/c++/2023/01/02/rust-cpp-format/
acktually, you should use println.
There's also a format-like thing using integers surrounded by curly braces, but it's horrific and I'm afraid to copy-paste it here.
Of all the things C++ annoys me with, I don't care that much about syntax.
https://en.cppreference.com/w/cpp/header/print
This is a problem, btw; AFAIK C++ modernizations are few and rarely successful.
Don't forget it isn't only the language: there are also the various implementations, the standard library, breaking changes, and major 3rd-party libraries.
So I embrace complexity, as the alternative is being stuck with something like Go.
If you want to mess around with machine learning, learn enough Python for it.
If you want to make a mobile app, learn Swift or Kotlin.
I get that this crowd will nerd out on language specifics... but at the end of the day, if we're good engineers, we should use the best tool for the job.
Users of 1996's Java won't recognise Java 21, or 2001's C# versus C# 12.
Asking for a friend.