> Code is read more often than written, and still needs to be reviewed, understood and maintained.
Which takes us back to the points above. AI is really good at generating repetitive patterns, like plain types or code that implements a certain interface. If you reduce the cost of creating the verbose code [at write time], we can all enjoy the benefit of reduced complexity [at read time] without resorting to generics.
I'm also not saying this as an absolute truth; it's certainly more nuanced than that. But in the big picture, generics reduce the amount of code you have to write, at the cost of more layers of abstraction and a step away from the simplicity that made Go popular in the first place. Overall, I'm not convinced it was a net positive, yet.
Though humans are very bad at reviewing repetitive patterns. I'd much rather review a single generic implementation than 10 monomorphic ones that look the same, but where I can't be sure and actually have to check.
So unless you are making the argument that generated code doesn't require review (compliance auditors would disagree) I would personally still much rather have generics.
Sure, that is a C++-specific design decision. Just like Go made the design decision of not type-checking interface satisfaction at declaration time, leading to tens of thousands of lines of dummy checks of concrete types against interfaces in popular Go repos.
I understand the design thinking even if I don't fully agree as a standard user of Go. Thanks for the detailed explanation in the blog.
Minor nitpick: It isn't all that difficult to come up with structural/generic type edge cases for ANY language compiler where compilation takes forever and times out in a playground. Here is a small program of ~100 lines leveraging Go generics: https://go.dev/play/p/XttCbEhonXg
This will build for several minutes on your laptop if you use `go build`. It can be easily extended to several hours with a few modifications.
Fair point
template void foo<Dummy>(Dummy);
This can be done on the consumer side as well. I don't see this as a big deal. Dummy checks are common in Go too, for example to check that a type satisfies an interface:

var _ MyInterface = (*MyType)(nil)
var _ SomeInterface = GenericType[ConcreteType]{}
After all, Go checks that a type implements an interface only at the point where you assign or use it as that interface type.

Thanks for your blog post. Unfortunately, the intentional limitations make the design space a massive headache and many times lead to very convoluted APIs. I would actually make the argument that it explodes complexity for the developer, instead of constraining it.
But that just ensures that the code type-checks for `Dummy`. It doesn't ensure that the code type-checks for any type you can put into `foo`. And that's the point of type constraints: To give you the necessary and sufficient conditions under which a generic function can be used.
That is simply not the case with C++ templates and concepts. That doesn't mean you can't still like them. I'm not trying to talk you out of liking C++ or even preferring it over Go. I'm just trying to explain that C++ concepts were something that we looked at specifically and found to have properties that we don't like. And that - to us - the fact that Go generics are limited in comparison is a feature, not a bug.
And let's not forget that despite specifically reducing the safety of concepts in this way, the design ended up NP-complete anyway, and you can make a compiler use functionally infinite memory and time to compile a very small program: https://godbolt.org/z/crK89TW9G
For a language like Go, which prides itself on fast compilation times, it is simply unacceptable to require a SAT solver to type-check. Again, that doesn't mean one has to dislike C++. But one should be able to acknowledge that it is reasonable to choose a different tradeoff.
> I would actually make the argument that it explodes complexity - for the developer, instead of constraining it.
The title is a pun. Because it is about the computational complexity of checking constraints.
struct S {
bool M() { return true; }
};
int main() {
S s;
foo(s); // this now will check foo<S>
}
Now you will get compile errors saying that the constraint is not satisfied and that there is no matching function for call to 'bar(S&)' at line 14.

But it turns out that's because they only ever tested it with types for which there is no conflict (obviously the conflicts can be more subtle than my example). And now a user instantiates it with a type that does trigger the conflict. And they get an error message for code in a library they neither maintain nor even (directly) import. And they are expected to find that code and figure out why it breaks with this type in order to fix their build.
Or maybe someone changes one of the constraints deep down. In a way that seems backwards compatible to them. And they test everything and it all works fine. But then one of the users upgrades to a new version of the library which is considered compatible, but the build suddenly breaks.
These kinds of situations are unacceptable to the Go project. We want to ensure that they categorically can't happen. If your library code compiles, then the constraints are correct, full stop. As long as you don't change your external API, it doesn't matter what your dependencies do - if your library builds, so will your users'.
This doesn’t have to be important to you. But it is to the Go project and that seems valid too. And it explains a lot of the limitations we added.
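A minimal Go sketch of that guarantee (`Barer`, `Describe`, and `Thing` are illustrative names I've made up): the generic body is type-checked against its constraint at the definition site, so an instantiation can never surface a new type error from inside the library.

```go
package main

import "fmt"

// The constraint's method set is the entire contract.
type Barer interface {
	Bar() string
}

// This body is checked against Barer when the library is compiled.
// Calling anything not declared in Barer here is a compile error,
// regardless of which concrete types callers later pass in.
func Describe[T Barer](v T) string {
	return "bar says: " + v.Bar()
	// return v.Baz() // would not compile: Baz is not in Barer
}

type Thing struct{}

func (Thing) Bar() string { return "hi" }

func main() {
	fmt.Println(Describe(Thing{}))
}
```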
Exactly. That is what I said:
> because you need to know the actual type arguments used, regardless of what the constraints might say.
It is because type-checking concept code is NP-complete - it is trivial to check that a particular concrete type satisfies constraints, but you cannot efficiently prove or disprove that all types which satisfy one constraint also satisfy another. Which you must do to type-check code like that (and give the user a helpful error message such as "this is fundamentally not satisfiable, your constraints are broken").
And it’s one of the shortcomings of C++ templates that Go was consciously trying to avoid. Go’s generics are intentionally limited so you can only express constraints for which you can efficiently do such proofs.
I described the details a while back: https://blog.merovius.de/posts/2024-01-05_constraining_compl...
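As a rough illustration of the tradeoff (the names here are mine, not from the blog post): Go constraints are explicit method sets or type sets, so checking whether a type argument satisfies a constraint is essentially a containment check rather than a proof search over arbitrary predicates.

```go
package main

import "fmt"

// Number is an explicit type set: satisfaction is checked by
// testing whether the type argument's underlying type is listed,
// not by solving for what the constraint implies.
type Number interface {
	~int | ~int64 | ~float64
}

func Sum[T Number](xs []T) T {
	var total T
	for _, x := range xs {
		total += x // valid because every type in the set supports +
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{1, 2, 3}))
	fmt.Println(Sum([]float64{1.5, 2.5}))
}
```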
No longer true after C++20. When you leverage C++20 concepts in templates, type-checking happens more precisely and earlier than with unconstrained templates.
In the below, a C++20-compliant compiler tries to verify that T satisfies HasBar<T> during template argument substitution, before trying to instantiate the body:
template<typename T>
concept HasBar = requires(T t) { t.bar(); }; // definition implied by the example

template<typename T>
requires HasBar<T>
void foo(T t) {
    t.bar();
}
The error messages when you use concepts are also more precise and helpfully informative, like Rust generics.

Or am I grossly holding this wrong?
And likening that to the Berlin wall, where people literally got shot dead, is honestly pretty disgusting.
Yes, it is literally an option, you dunce. There is no law requiring you to keep ownership of a business. You might not like that option very much, but it is an option, which is infinitely better than what the denizens of the GDR got.
Man, this post got my blood boiling with its callous stupidity.