dang · a year ago
Related ongoing thread:

Memory Safety Without Lifetime Parameters - https://news.ycombinator.com/item?id=41899828

pjmlp · a year ago
That is an alternative proposal, created in response to the backlash against this one.

Deleted Comment

netbioserror · a year ago
It's becoming increasingly obvious why Apple and some newer languages like Nim went with RC by default. You get safety, the near-elimination of memory-semantic clutter, AND deterministic performance that's good enough for most use cases.
pjmlp · a year ago
The only deterministic thing about reference counting is the call sites.

There is nothing deterministic about how long those calls take, or stack size requirements, especially if they trigger cascade deletions, or are moved into a background thread to avoid such scenarios.

Reference counting optimizations slowly turn into a tracing GC.

marssaxman · a year ago
Indeed, "A Unified Theory of Garbage Collection" shows that tracing and counting are duals: https://dl.acm.org/doi/10.1145/1028976.1028982
pornel · a year ago
It's possible, but that's an exaggeration of a degenerate case. RC doesn't mean all data must be tangled into an unknowably large randomly connected web of objects.

The behavior is deterministic enough to be profiled and identified if it actually becomes an issue.

Identifying causes of pressure on a mark-and-sweep style GC is much more difficult, and depends on specialized GC instrumentation, not just a regular profiler.

In practice, you have predictable deallocation patterns for the vast majority of objects. The things that are "young generation" in a GC are the things that get deallocated right away in RC.

Time required to deallocate is straightforwardly proportional to the dataset being freed. This can be a predictable bounded size if you're able to control what is referencing what. If you can't do that, you can't use more constant-time alternatives like memory pools either, because those surprise references would be UAFs.

I was there when Apple moved Cocoa from GC to ARC, and the UI stutters disappeared. It's much more palatable to users to have the RC cost happen in line with the work the application is doing than to have it deferred, causing jank at unexpected times, seemingly for no reason.

pizlonator · a year ago
Or you could use Fil-C++ and get memory safety without any changes. Unlike this proposal, Fil-C++ can run real C and C++ programs safely today (including interesting stuff like CPython, OpenSSH, and SQLite).

I don’t buy that adding an extension that is safe if you use it will move the needle. But making the language safe wholesale is practical. We should do that instead.

felipefar · a year ago
Hard pass on the garbage collector. We don't need that, and the minimal GC support that was in the standard has been removed from C++23.
pizlonator · a year ago
> Hard pass on the garbage collector.

Why?

> We don't need that

You do if you want comprehensive use-after-free protection.

> and the minimal GC support that was in the standard has been removed from C++23.

Not related to what I'm doing. The support you cite is for users of the language to write garbage collectors "on top" of the language. Fil-C++'s garbage collector is hidden in the implementation's guts, "below" the language. Fil-C++ is compliant to C++ whether C++ says that GC is allowed or not.

pjmlp · a year ago
Unreal C++, C++/CLI, and V8 C++ do need one.

It should never have been there in the first place, because it ignored their requirements, and thus it was never adopted by them or anyone else.

gmueckl · a year ago
From the github README:

> On the other hand, Fil-C is quite slow. It's ~10x slower than legacy C right now (ranging from 3x slower in the best case, xz decompress, to 20x in the worst case, CPython).

That performance loss is severe and makes the approach totally uninteresting for most serious use cases. Most applications written in C or C++ don't get to waste that many cycles.

pizlonator · a year ago
Those are old perf numbers. It’s sub-2x most of the time now, and I’m working on optimizations to make it even faster.

Note that at the start of this year it was 200x slower. I land speed ups all the time but don’t always update the readme every time I land an optimization. Perf is the main focus of my work on Fil-C right now.

elliotpotts · a year ago
Wow, Fil-C++ looks very interesting! I wonder what % of programs make its pointer tracking fail due to stuffing things in the higher bits, doing integer conversions and so on. It reminds me of CHERI.
pizlonator · a year ago
You can put stuff in the high and low bits of pointers in Fil-C so long as you clear them before access, otherwise the access itself traps.

Python does shit like that and it works fine in Fil-C, though I needed to make some minor adjustments, like a ~50KB patch to CPython.

Svoka · a year ago
sorry, what's Fil-C++?
gdiamos · a year ago
Glad to see Sean Baxter is working on this
rfmi · a year ago
The concept of a plan, compared with a working language. (Signed, a long-time C++ programmer that went rusty.)
29athrowaway · a year ago
So everything is unsafe by default, until you turn it on. Great...
quotemstr · a year ago
Yes, but there's hope. Making safety opt-in is a necessary prerequisite of backwards compatibility. Requiring safety annotations, however, is something that linters can enforce, and every major C++ codebase uses one form or another of supererogatory checking. By enforcing safety via linter, we've transformed a robustness problem into an ergonomic one. Specifying "safe" over and over is hideous, aesthetically.

I think C++ needs a broader "resyntaxing" --- something like what Elixir is to Erlang and Reason is to OCaml. Such a resyntaxing wouldn't change language semantics, but it would allow us to adopt new defaults --- ones that benefit from decades of hindsight. A C++ Elixir wouldn't only mark functions safe by default, but would probably make variables const by default too. And it would be 100% compatible with today's C++, since it would be the same language, merely spelled differently.

pizlonator · a year ago
Weird this got downvoted since this is a big deal.

The default matters. So long as the language makes it easy to write unsafe code, people will do it and there will be security bugs.

112233 · a year ago
Usability trumps safety every day.

You can upgrade your compiler to version A. It produces safe code. It also produces 500kB of error messages for each of your source files. You will not ship any new builds until you fix all of them.

Or you can pick version B. It compiles all your code with only a few errors. It also supports opting in to multiple safety features.

If your salary depends on your software project, which will you pick?

kreetx · a year ago
I think the reason is that you don't want existing codebases to start erroring with the default setting.
29athrowaway · a year ago
Safety should be opt-out not opt-in.

But this would break backwards compatibility, and the C++ people do not want that.

blastonico · a year ago
There will always be security bugs. Why do you think otherwise?
ChrisArchitect · a year ago
Earlier discussion in September: https://news.ycombinator.com/item?id=41528124
worik · a year ago
It is not just memory safety, it is thread safety too.

When I was working with Swift that was not available. Swift wraps "fork" in multi-syllable function names and half a dozen variations. But it is (was?) still just fork.

I do not know about the other languages mentioned.

Rust shines on that front. It takes a bit of getting used to, but once you are, it is awesome.

I loved C++ back in the day. I have left it there, where it belongs. It was a fantastically successful experiment, but move on now.