Destructors are vastly superior to the finally keyword because they only require us to remember to release resources in one place (the destructor) rather than in every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by whoever opened it. The syntax is also less cluttered, with less indentation, especially when multiple objects are created that would otherwise require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.
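A minimal C++ sketch of that point: two resources, zero cleanup code, no nesting, where the try...finally equivalent would need two nested blocks, each remembering the right release call:

    #include <fstream>
    #include <mutex>

    std::mutex m;

    void update_log(const char* path) {
        std::lock_guard<std::mutex> lock(m);  // unlocked on every exit path
        std::ofstream out(path);              // closed on every exit path
        out << "updated\n";
    }  // destructors run here, in reverse order, even on exceptions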
I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.
To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.

[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...
In a function that inserts into 4 separate maps, and might fail between each insert, I'll add a scope exit after each insert with the corresponding erase.
Before returning on success, I'll dismiss all the scopes.
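For illustration, a minimal sketch of that pattern in C++, assuming a hand-rolled dismissible guard (ScopeExit and insert_both are made-up names; absl::Cleanup or std::experimental::scope_exit have the same shape). Two maps shown instead of four, but it extends the same way:

    #include <map>
    #include <string>
    #include <utility>

    // Runs fn_ at scope exit unless dismissed.
    template <class F>
    class ScopeExit {
        F fn_;
        bool armed_ = true;
    public:
        explicit ScopeExit(F fn) : fn_(std::move(fn)) {}
        ~ScopeExit() { if (armed_) fn_(); }
        void dismiss() { armed_ = false; }
        ScopeExit(const ScopeExit&) = delete;
        ScopeExit& operator=(const ScopeExit&) = delete;
    };

    bool insert_both(std::map<int, std::string>& a,
                     std::map<int, std::string>& b,
                     int key, const std::string& value) {
        if (!a.emplace(key, value).second) return false;
        ScopeExit undo_a([&] { a.erase(key); });  // rolls back insert #1 on failure

        if (!b.emplace(key, value).second) return false;  // undo_a fires here

        undo_a.dismiss();  // success: keep both inserts
        return true;
    }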
I suppose the tradeoff vs RAII in the mutex example is that with the guard you still need to actually call it every time you lock a mutex, so you can still forget it and end up with the unreleased mutex, whereas with RAII that is not possible.
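One way to close that gap in C++ is to approximate Rust's Mutex<T> design, where the data is reachable only through the guard returned by lock(), so forgetting the guard is impossible. A hypothetical sketch (Locked and Guard are illustrative names):

    #include <mutex>
    #include <utility>

    template <class T>
    class Locked {
        std::mutex m_;
        T value_;
    public:
        explicit Locked(T v) : value_(std::move(v)) {}

        class Guard {
            std::unique_lock<std::mutex> lk_;
            T* p_;
        public:
            Guard(std::mutex& m, T& v) : lk_(m), p_(&v) {}
            T& operator*() { return *p_; }
            T* operator->() { return p_; }
        };  // the mutex is released in lk_'s destructor

        Guard lock() { return Guard(m_, value_); }
    };

    int main() {
        Locked<int> counter(0);
        {
            auto g = counter.lock();  // the only way to reach the int
            *g += 1;
        }  // unlocked here
    }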
Scope guards are neat, particularly since D has had them since 2006! (https://forum.dlang.org/thread/dtr2fg$2vqr$4@digitaldaemon.c...) But they are syntactically confusing, since they look like function invocations with some kind of aliased magic value passed in.
A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.
You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.
You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
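A rough sketch of what that encapsulation might look like, using C stdio for brevity (AtomicFileWriter and all names here are hypothetical):

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    class AtomicFileWriter {
        std::string tmp_, final_;
        std::FILE* f_;
        bool committed_ = false;
    public:
        explicit AtomicFileWriter(std::string path)
            : tmp_(path + ".tmp"), final_(std::move(path)),
              f_(std::fopen(tmp_.c_str(), "w")) {
            if (!f_) throw std::runtime_error("open failed: " + tmp_);
        }

        void write(const std::string& data) {
            if (std::fwrite(data.data(), 1, data.size(), f_) != data.size())
                throw std::runtime_error("write failed: " + tmp_);
        }

        // Happy path: close, check the result, rename into place.
        void commit() {
            std::FILE* f = f_;
            f_ = nullptr;  // the destructor must not double-close
            if (std::fclose(f) != 0)
                throw std::runtime_error("close failed: " + tmp_);
            if (std::rename(tmp_.c_str(), final_.c_str()) != 0)
                throw std::runtime_error("rename failed: " + final_);
            committed_ = true;
        }

        // Error path: just let the destructor unlink the temp file.
        ~AtomicFileWriter() {
            if (f_) std::fclose(f_);
            if (!committed_) std::remove(tmp_.c_str());
        }
    };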
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.
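Since C++17 that condition is even checkable at runtime with std::uncaught_exceptions(), so a destructor can propagate its error on the happy path and stay quiet during unwinding. A minimal sketch (the destructor must be marked noexcept(false), because destructors default to noexcept since C++11):

    #include <cstdio>
    #include <exception>
    #include <stdexcept>

    class CheckedFile {
        std::FILE* f_;
        int exceptions_on_entry_ = std::uncaught_exceptions();
    public:
        explicit CheckedFile(const char* path) : f_(std::fopen(path, "w")) {
            if (!f_) throw std::runtime_error("open failed");
        }
        ~CheckedFile() noexcept(false) {
            bool ok = std::fclose(f_) == 0;
            if (!ok && std::uncaught_exceptions() == exceptions_on_entry_)
                throw std::runtime_error("close failed");  // no unwind in progress
            // Otherwise an exception is already live: swallow (or log) the error.
        }
    };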
For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
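For the record, the closest standard mechanism C++ offers is std::throw_with_nested / std::rethrow_if_nested, which works from a catch block (not from a destructor mid-unwind). A small sketch:

    #include <exception>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    void do_write() { throw std::runtime_error("disk full"); }

    void write_all() {
        try {
            do_write();
        } catch (...) {
            // Wraps the in-flight "disk full" inside the new exception.
            std::throw_with_nested(std::runtime_error("write_all failed"));
        }
    }

    // Walks the chain back out, innermost last.
    void report(const std::exception& e, int depth = 0) {
        std::cerr << std::string(depth, ' ') << e.what() << '\n';
        try {
            std::rethrow_if_nested(e);
        } catch (const std::exception& inner) {
            report(inner, depth + 2);
        }
    }

    int main() {
        try {
            write_all();
        } catch (const std::exception& e) {
            report(e);  // prints "write_all failed", then "  disk full"
        }
    }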
> The entire point of the article is that you cannot throw from a destructor.
You need to read the article again, because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is let those exceptions escape the destructor uncaught, because per the standard that will immediately terminate the application.
It's not the same thing at all, because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor; it just happens automatically.
Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.
> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
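For illustration, a minimal C++23 sketch of that style, where failure is an ordinary return value instead of unwinding (parse_port is a made-up example):

    #include <charconv>
    #include <expected>  // C++23
    #include <string>

    std::expected<int, std::string> parse_port(const std::string& s) {
        int port = 0;
        auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), port);
        if (ec != std::errc{} || ptr != s.data() + s.size())
            return std::unexpected("not a number: " + s);
        if (port < 1 || port > 65535)
            return std::unexpected("out of range: " + s);
        return port;  // the error channel is part of the signature
    }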
> Most of the languages that have finally clauses also have destructors.
Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
I always wonder whether C++ syntax ever becomes readable when you sink more time into it, and if so - how much brain rewiring we would observe on a functional MRI.
It does... until you switch employers. Or sometimes even just read a coworker's code. Or even your own older code. Actually no, I don't think anyone achieved full readability enlightenment. People like me just hallucinated it after doing the same things for too long.
And yet, somehow Lisp continues to be everyone's sweetheart, even though creating literal new DSLs for every project is one of the features of the language.
In my opinion, C++ syntax is pretty readable. Of course there are codebases that are difficult to read (heavily abstracted, templated codebases especially), but it's not really that different from most other languages. The same problem exists almost everywhere; even C can be just as bad once macros get involved.
By far the worst in this respect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than with C++.
It does get easy to read, but then you unlock a deeper level of misery which is trying to work out the semantics. Stuff like implicit type conversions, remembering the rule of 3 or 5 to avoid your std::moves secretly becoming a copy, unwittingly breaking code because you added a template specialization that matches more than you realized, and a million others.
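The rule-of-five trap in particular is easy to demonstrate: a user-declared destructor suppresses the implicit move operations, so std::move quietly degrades to a copy:

    #include <iostream>
    #include <string>
    #include <utility>

    struct Logged {
        std::string data;
        ~Logged() {}  // user-declared: move ctor/assign are no longer generated
    };

    int main() {
        Logged a{std::string(1000, 'x')};
        Logged b = std::move(a);             // silently copies instead of moving
        std::cout << a.data.size() << '\n';  // prints 1000; a real move would
                                             // typically leave it empty
    }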
This is correct - it does get easy to read, but you are constantly weighing the semantics above, often needing to check a reference or Compiler Explorer to confirm.
Unless you are many of my coworkers, then you blissfully never think about those things, and have Cursor reply for you when asked about them (-:
"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.
I wish we had something like JavaScript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
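C++17 gets part of the way there: a single using-declaration can pull in several names, though you still have to #include each header yourself:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // One declaration, several names, no blanket "using namespace std;".
    using std::string, std::unordered_map, std::vector;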
I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write

    foo
    defer revert_foo

then, when scanning the code, it's easier to verify that you didn't forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.
A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
The oldest defer-like feature I can find reference to is the ON_BLOCK_EXIT macro from this article in the December 2000 issue of the C/C++ Users Journal:

https://jacobfilipp.com/DrDobbs/articles/CUJ/2000/cexp1812/a...

A similar macro later (2006) made its way into Boost as BOOST_SCOPE_EXIT:

https://www.boost.org/doc/libs/latest/libs/scope_exit/doc/ht...

I can't say for sure whether Go's creators took inspiration from these, but it wouldn't be surprising if they did.
I'll disagree here. I'd much rather have a Python-style context manager, even if it introduces a level of indentation, rather than have the sort of munged-up control flow that `defer` introduces.
I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it (https://docs.rs/defer-rs/latest/defer_rs/). It's syntactic sugar for the destructor/RAII approach.
Calling arbitrary callbacks from a destructor is a bad idea. Sooner or later someone will violate the requirement about exceptions, and your program will be terminated immediately. So I'd only use this pattern in -fno-exceptions projects.
In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure - because the callback may well change the data structure being iterated (classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
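A small sketch of the hazard and the usual fix, iterating over a snapshot so a handler can unsubscribe mid-iteration (EventSource is a made-up name):

    #include <functional>
    #include <vector>

    struct EventSource {
        std::vector<std::function<void()>> handlers;

        void fire() {
            // Naive version: for (auto& h : handlers) h();
            // This breaks if h() adds or erases handlers while we iterate.
            auto snapshot = handlers;       // iterate over a copy
            for (auto& h : snapshot) h();   // handlers may now mutate freely
        }
    };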
Throwing exceptions from destructors is "fine", except when the destructor is executed by stack unwinding (triggered by an earlier exception), in which case throwing will terminate the program with std::terminate.
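A minimal demonstration of that failure mode:

    #include <stdexcept>

    struct Throws {
        // noexcept(false) is required; destructors are noexcept by default.
        ~Throws() noexcept(false) { throw std::runtime_error("from dtor"); }
    };

    int main() {
        try {
            Throws t;
            throw std::runtime_error("original");  // unwinding destroys t...
        } catch (const std::exception&) {
            // Never reached: ~Throws() throws during unwinding,
            // so the runtime calls std::terminate instead.
        }
    }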
This is a good “how C++ does it” explanation, but I think it’s more accurate to say destructors implement finally-style cleanup in C++, not that they are finally. finally is about operation-scoped cleanup; destructors are about ownership. C++ just happens to use the same tool for both.
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost.
Pet peeve of mine: all these languages got it wrong. (And C++ got it extra-wrong.)
The error you want to log or report to the user is almost certainly the original exception, not the one from the finally block. The error from the finally block is probably a side effect of the original exception. Reporting the finally exception obscures information about the root cause, making it harder to debug the problem.
Many of these languages do attach the original exception to the new exception in some way, so you can get at it if you need to, but whatever actually catches and logs the exception later has to go out of its way to make sure to log the root cause rather than some stupid side effect. The hierarchy should be reversed: the exception thrown by `finally` should be added as an attachment to the original exception, perhaps placed in a list of "secondary" errors. Or you could even just throw it away, honestly the original exception is almost always all you care about anyway.
(C++ of course did much worse by just crashing in this scenario. I imagine this to be the outcome of some debate in the committee where they couldn't decide which exception should take priority. And now everyone has internalized this terrible decision by saying "well, destructors shouldn't throw" without seeming to understand that this is equivalent to saying "destructors shouldn't have bugs". WELL OF COURSE THEY SHOULDN'T BUT GOOD LUCK WITH THAT.)
This part is not correct. I can't speak for the other languages, but in Python the exception that is originally thrown is the one that creates the traceback. If the finally block also throws an exception, then the traceback includes that as additional information. The author includes an addendum, yet he is still wrong about which exception is first raised.
The traceback is actually shown based on the last-thrown exception (that thrown from the finally in this example), but includes the previous "chained exceptions" and prints them first. From CPython docs [1]:
> When raising a new exception while another exception is already being handled, the new exception’s __context__ attribute is automatically set to the handled exception. An exception may be handled when an except or finally clause, or a with statement, is used. [...] The default traceback display code shows these chained exceptions in addition to the traceback for the exception itself. [...] In either case, the exception itself is always shown after any chained exceptions so that the final line of the traceback always shows the last exception that was raised.
So, in practice, you will see both tracebacks. However, if you, say, just catch the exception with a generic "except Exception" or whatever and log it without "__context__", you will miss the first-thrown exception.

[1]: https://docs.python.org/3.14/library/exceptions.html#excepti...
I was hoping absl::Cleanup would get a shoutout. I worked hard to make it ergonomic and performant. For those looking for something (imo) better than the standard types, check it out!
https://dlang.org/articles/exception-safe.html
https://dlang.org/spec/statement.html#ScopeGuardStatement
Yes, D also has destructors.
https://github.com/isocpp/CppCoreGuidelines/issues/2203
Sure, destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
If you can't, it's not remotely "basically the same as C++ RAII".
I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.
> whether C++ syntax ever becomes readable when you sink more time into it,
Yes, and the easy approach is to learn as you need/go.
(1) Why doesn't it look like C++?
(2) Why does it look so much like C++?
What exactly are you referring to?
In Java the following is perfectly valid:
    try {
        throw new IllegalStateException("Critical error");
    } finally {
        return "Move along, nothing to see here";
    }

And equally so with a catch instead of a finally:

    try {
        throw new IllegalStateException("Critical error");
    } catch (Exception e) {
        return "Move along, nothing to see here";
    }
The existence of two different patterns, each with its own pitfalls, is why we can't have nice things. Finally shouldn't return a value; it should simply be a void expression. Exception-driven APIs need to be snuffed out.

If your method throws, mark it as such and force me to handle the exception if it does; do not return a non-value value in a finally.

Using Java as the example shows just how far we have come with this thinking, why old-school Java-style exception handling sucks, and why C++ by proxy does too.

It's difficult to break old mental habits, but it's easier when the compiler yells at you for doing bad things.