Although, at least with 2D arrays, I prefer to just use a 1D array and index it with [x * width + y], because one problem with dynamically allocated multidimensional arrays in C is that they need multiple allocations/frees.
edit: Isn't it just:
    float (*arr)[m][n] = malloc(sizeof(*arr));
Source: https://stackoverflow.com/questions/36647286/are-c-multidime...
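Roughly, a minimal sketch contrasting the two approaches (assuming C99-style VLA support; m and n are placeholder dimensions, and the pointer-to-row form below is a slight variation on the one in the Stack Overflow answer):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t m = 3, n = 4;              /* example dimensions */

        /* Approach 1: flat 1D buffer, manual index arithmetic. */
        float *flat = malloc(m * n * sizeof *flat);
        if (!flat) return 1;
        flat[2 * n + 1] = 1.0f;           /* element (row 2, col 1) */

        /* Approach 2: one allocation through a pointer-to-array type,
         * natural arr[i][j] indexing, still a single free. */
        float (*arr)[n] = malloc(m * sizeof *arr);
        if (!arr) { free(flat); return 1; }
        arr[2][1] = 1.0f;

        printf("%f %f\n", flat[2 * n + 1], arr[2][1]);

        free(flat);
        free(arr);                        /* one free per array, not per row */
        return 0;
    }

Either way it is a single allocation and a single free; the pointer-to-array version just lets the compiler do the index arithmetic for you.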
Meaning that without any kind of metadata, the GC has to assume that any value on the stack or in global memory segments is a possible pointer, but it cannot be sure: it might be just a numeric value that happens to look like a valid pointer to GC-allocated data.
So any algorithm that needs to be certain about the exact data types before it risks moving the wrong data, such as a compacting or copying collector, is already off the table as far as C is concerned.
See https://hboehm.info/gc/ for more info, including the references.
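A tiny illustration of the ambiguity (plain C, no actual collector involved; the names are made up for the example):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *obj = malloc(sizeof *obj);   /* pretend this came from the GC heap */

        /* To the compiler this is just an integer, but its bit pattern equals
         * the address of obj.  A conservative GC scanning the stack sees only
         * raw words, so it must treat this value as a possible reference:
         * it has to keep obj alive, and it can never relocate obj, because
         * rewriting this "integer" might corrupt a genuine number. */
        uintptr_t ambiguous = (uintptr_t)obj;

        printf("pointer %p, integer with the same bits 0x%" PRIxPTR "\n",
               (void *)obj, ambiguous);

        free(obj);
        return 0;
    }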
The GC for Objective-C failed because of the underlying C semantics: it could never be better than a typical conservative GC, and there were routinely application crashes when mixing code compiled with and without the GC option.
Thus they picked the next best strategy, which was to automate Cocoa's retain/release message pairs, and sold that as being much better than GC "because performance and such", not because the GC approach had failed.
Naturally, as proven by the complexity of .NET's interop layer with COM, and given Objective-C's evolution, it was also much better for Swift to adopt the same approach than to create a complex layer similar to CCW/RCW.
Now everyone who wasn't around for this kind of believes and resells the whole "ARC because performance!" story.
Not as bad as Apple nowadays though, quite far from the Inside Macintosh days.
Glad to know about the C23 features, as they had gone silent on their C23 plans.
C++23 support looks quite bad for anything that requires frontend changes; there are even developer feedback issues asking us what to prioritise, as if the answer weren't logically all of it. There is another one for C++26 as well.
Personally, I think that with the improvements in low-level coding and AOT compilation from managed languages, we are reaching a local optimum where C and C++ are good enough for the low-level glue; C23 and C++23 (eventually C++26, thanks to static reflection) might be the last revisions that are actually relevant.
It is similar to COBOL and Fortran: the standards keep being updated, but how many compilers compliant with the ISO 2023 revisions are you going to find for portable code?
That's really unfortunate.
> Not as bad as Apple nowadays though, quite far from the Inside Macintosh days.
Funny story: I know a guy who wanted to write a personal Swift project for an esoteric spreadsheet format, and the quality of the SwiftUI documentation made him ragequit. After that, he switched to Kotlin Native and GTK and he is much happier.
> Personally, I think that with the improvements in low-level coding and AOT compilation from managed languages, we are reaching a local optimum where C and C++ are good enough for the low-level glue; C23 and C++23 (eventually C++26, thanks to static reflection) might be the last revisions that are actually relevant.
I agree on the managed-language point, but the fact that other languages are getting more capable at low-level work does not mean that improvements to C/C++ are a bad idea and will not be used. In fact, I think that features like the transcoding functions in <stdmchar.h> in C2y (ironically, those are relevant to the current HN post) are useful to those languages too! So even if C, C++ and Fortran end up being used only for numerical kernels, emulators, hardware stuff, glue code and other "dirty" code, the advancements made to them are not going to waste.
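<stdmchar.h> is still only a C2y proposal, so as a rough stand-in, here is a minimal sketch of the kind of transcoding it generalises, using the <uchar.h> API that has existed since C11 (mbrtoc16); the UTF-8 locale name is an assumption and varies per platform:

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>
    #include <uchar.h>

    int main(void)
    {
        setlocale(LC_ALL, "en_US.UTF-8");   /* assumed locale name */

        const char *in = "na\xC3\xAFve";    /* UTF-8 bytes for "naïve" */
        const char *p = in, *end = in + strlen(in);
        char16_t out[16];
        size_t nout = 0;
        mbstate_t st = {0};

        while (p < end && nout < sizeof out / sizeof out[0]) {
            char16_t c16;
            size_t rc = mbrtoc16(&c16, p, (size_t)(end - p), &st);
            if (rc == (size_t)-1 || rc == (size_t)-2)
                return 1;                   /* invalid or truncated sequence */
            out[nout++] = c16;
            if (rc != (size_t)-3)           /* (size_t)-3: trailing surrogate, no input consumed */
                p += rc;
        }

        printf("decoded %zu UTF-16 code units\n", nout);
        return 0;
    }

As I understand the proposal, <stdmchar.h> covers this ground with more complete, restartable interfaces (including direct conversions between the Unicode forms), which is exactly the part that would also be useful as a building block for other languages' runtimes.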