The submitted title is missing the salient keyword "finally" that motivates the blog post. The actual subtitle Raymond Chen wrote is: "C++ says “We have try…finally at home.”"
It's a snowclone based on the meme, "Mom, can we get <X>? No, we have <X> at home.": https://www.google.com/search?q=%22we+have+x+at+home%22+meme
In other words, Raymond is saying... "We already have the Java feature 'finally' at home in the C++ refrigerator, and it's called 'destructor'."
To continue the meme analogy, the kid's idea of <X> doesn't match mom's idea of <X>, and the kid disagrees that they're equivalent. E.g. "Mom, can we order pizza? No, we have leftover casserole in the fridge."
So some kids would complain that the C++ destructor/RAII philosophy requires creating a whole "class X { public: ~X(); };", which is sometimes inconvenient, so it doesn't exactly equal "finally".
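For readers who don't write C++: a minimal sketch of the usual answer to that complaint, assuming nothing beyond the standard library (the ScopeGuard name is invented). The "whole class" is written once, generically; after that, any lambda becomes a destructor.

#include <cstdio>
#include <utility>

// Write the "whole class" once, generically...
template <class F>
struct ScopeGuard {
    F f;
    explicit ScopeGuard(F fn) : f(std::move(fn)) {}
    ~ScopeGuard() { f(); } // runs on every path out of the scope
};

int main() {
    std::puts("acquire");
    ScopeGuard g{[] { std::puts("release"); }}; // ...then use any lambda as "finally"
    std::puts("work");
} // prints: acquire, work, release (also when "work" throws)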
Yeah it's a huge mistake IMO. I see it fucking up titles so frequently, and it flies in the face of the "do not editorialise titles" rule:
[...] please use the original title, unless it is misleading or linkbait; don't editorialize.
It is much worse, I think, to regularly drastically change the meaning of a title automatically until a moderator happens to notice to change it back, than to allow the occasional somewhat exaggerated original post title.
As it stands, the HN title suggests that Raymond thinks the C++ 'try' keyword is a poor imitation of some other language's 'try'. In reality, the post is about a way to mimic Java's 'finally' in C++, which the original title clearly (if humorously) encapsulates. Raymond's words have been misrepresented here for over 4 hours at this point. I do not understand how this is an acceptable trade-off.
Personally, I would rather we have a lower bar for killing submissions quickly with maybe five or ten flags and less automated editorializing of titles.
A better approach would be to not so aggressively modify headlines.
Relying on somebody to detect the error, email the mods (significant friction), and then hope the mods act (after discussion has already been skewed) is not really a great solution.
It has been up with the incorrect title for over 7 hours now. That's most of the Hacker News front-page lifecycle. The system for correcting bad automatic editorialisation clearly isn't working well enough.
Oh, come on man! These are trivial bugs. Whoever noticed it first should have sent an email to the mods. I did it before I posted my previous comment, and I now see that the title has been changed appropriately.
It's rare to see the mangling heuristics improve a title these days. There was a specific type of clickbait title that was overused at the time, so a rule was created. And now that the original problem has passed, we're stuck with it.
I'm curious about the actual origin now, given that a quick search shows only vague references or claims that it is recent, but this meme is present in Eddie Murphy's "Raw" from 1987, so it is at least that old.
> So some kids would complain that the C++ destructor/RAII philosophy requires creating a whole "class X { public: ~X(); };", which is sometimes inconvenient, so it doesn't exactly equal "finally".
Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has had destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning, because if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.
It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.
Calling arbitrary callbacks from a destructor is a bad idea. Sooner or later someone will violate the requirement about exceptions, and your program will be terminated immediately.
In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure - because the callback may well change the data structure being iterated (classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
https://github.com/isocpp/CppCoreGuidelines/issues/2203
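To make the first point concrete, a hedged sketch (NaiveGuard is an invented name): destructors are implicitly noexcept in modern C++, so an exception escaping from one doesn't propagate; it ends the process via std::terminate.

#include <cstdio>
#include <stdexcept>

// Invented example: a naive guard that runs an arbitrary callback.
template <class F>
struct NaiveGuard {
    F f;
    explicit NaiveGuard(F fn) : f(fn) {}
    ~NaiveGuard() { f(); } // destructors are noexcept by default, so an
                           // exception escaping here calls std::terminate
};

int main() {
    try {
        NaiveGuard g{[] { throw std::runtime_error("callback threw"); }};
    } catch (const std::exception& e) {
        std::puts(e.what()); // never reached: the process dies in ~NaiveGuard
    }
}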
Destructors are vastly superior to the finally keyword because they only require us to remember to release resources in one place (the destructor), as opposed to in every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by the person who opened the file. Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.
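A small illustration of the indentation point, with an invented Logger type standing in for any resource: each extra object would cost another try...finally nesting level elsewhere, but here the scope stays flat and cleanup runs in reverse order of construction.

#include <cstdio>

struct Logger { // invented stand-in for any RAII resource
    const char* name;
    explicit Logger(const char* n) : name(n) { std::printf("open %s\n", name); }
    ~Logger() { std::printf("close %s\n", name); }
};

int main() {
    Logger a{"a"}; // with try...finally, each of these would add
    Logger b{"b"}; // one more level of nesting
    std::puts("work");
} // prints: open a, open b, work, close b, close a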
A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.
You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.
You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
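A hedged sketch of that encapsulation; the AtomicFileWriter name and API are invented, and error handling is kept minimal:

#include <filesystem>
#include <fstream>
#include <stdexcept>
#include <string>
#include <system_error>
#include <utility>

// Invented name and API: write to a temp file, rename into place on
// commit, unlink in the destructor if we never got that far.
class AtomicFileWriter {
public:
    explicit AtomicFileWriter(std::filesystem::path target)
        : target_(std::move(target)),
          tmp_(target_.string() + ".tmp"),
          out_(tmp_, std::ios::binary) {
        if (!out_) throw std::runtime_error("cannot open temp file");
    }

    std::ofstream& stream() { return out_; }

    // Happy path: close, check for errors, rename into place.
    // This may throw, which is fine here: we are not in a destructor.
    void commit() {
        out_.close();
        if (out_.fail()) throw std::runtime_error("close failed");
        std::filesystem::rename(tmp_, target_);
        committed_ = true;
    }

    // Error path: the caller just lets the object go out of scope.
    ~AtomicFileWriter() {
        if (!committed_) {
            out_.close(); // ignore errors, the data is being discarded
            std::error_code ec;
            std::filesystem::remove(tmp_, ec); // non-throwing overload
        }
    }

private:
    std::filesystem::path target_, tmp_;
    std::ofstream out_;
    bool committed_ = false;
};

The happy path calls commit() and can observe close/rename failures as exceptions; every error path just lets the object go out of scope.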
I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.
To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.
[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...
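For comparison, the C++ analogue of that inline-guard pattern is only a few lines of library code. This is a sketch with invented names (several C++ libraries ship polished versions of the same idea): a closure runs at scope end unless the guard is dismissed.

#include <cstdio>
#include <utility>

// Invented DropGuard-alike: runs a closure at scope end unless dismissed.
template <class F>
class Guard {
public:
    explicit Guard(F f) : f_(std::move(f)) {}
    ~Guard() { if (armed_) f_(); }
    void dismiss() { armed_ = false; } // the "dismiss" half of the pattern
private:
    F f_;
    bool armed_ = true;
};

int main() {
    Guard rollback{[] { std::puts("rolling back"); }};
    // ... multi-step operation; any early exit above triggers the rollback ...
    rollback.dismiss(); // everything succeeded, cancel the cleanup
}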
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.
> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
> Most of the languages that have finally clauses also have destructors.
Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.
using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception
Or, to avoid nesting:
using var foo = new Foo(); // same but scoped to closest current scope
There is also `await using`, in case the cleanup is async (`await foo.DisposeAsync()`).
I think Java has something similar, called try-with-resources.
I don't view finalizers and destructors as different concepts. The notion only matters if you actually need cleanup behavior to be deterministic rather than just eventual, or you are dealing with something like thread locals. (Historically, C# even simply called them destructors.)
There's a huge difference in programming model. You can rely on C++ or Rust destructors to free GPU memory, close sockets, free memory owned through an opaque pointer obtained through FFI, implement reference counting, etc.
I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
They are fundamentally different concepts.
See Destructors, Finalizers, and Synchronization by Hans Boehm - https://dl.acm.org/doi/10.1145/604131.604153
Technically CPython has deterministic destructors: __del__ always gets called immediately when the ref count goes to zero. But it's just an implementation detail, not a language spec thing.
It's not the same thing at all, because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor; it just happens automatically.
Sure destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
I always wonder whether C++ syntax ever becomes readable when you sink more time into it, and if so - how much brain rewiring we would observe on a functional MRI.
It does... until you switch employers. Or sometimes even just read a coworker's code. Or even your own older code. Actually no, I don't think anyone achieved full readability enlightenment. People like me just hallucinated it after doing the same things for too long.
And yet, somehow Lisp continues to be everyone's sweetheart, even though creating literal new DSLs for every project is one of the features of the language.
Lisp doesn't have much syntax to speak of. All of the DSLs use the same basic structure and are easy to read.
C++ has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.
The syntax and the concepts (const, move, copy, etc.) are orthogonal. You could possibly write a Lisp / s-expression syntax for C++, and all it would improve would be the macros in the preprocessor. And a DSL with uniform syntax can still be hard to read if it uses unfamiliar or uncommon project-specific concepts.
Well-designed abstractions do that in every language. And badly designed ones do the opposite, again in all languages. There's nothing special about Lisp here
Sure, but it's you who singled out Lisp here. The whole point of a DSL is designing a purpose-built formalism that makes a particular problem easy to reason about. That's hardly a parallel to the ever-growing vocabulary of standard C++.
In my opinion, C++ syntax is pretty readable. Of course there are codebases that are difficult to read (heavily abstracted, templated codebases especially), but it's not really that different compared to most other languages. But this exists in most languages, even C can be as bad with use of macros.
By far the worst in this aspect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs etc. There seems to be very little agreement on how the language should be used. Much, much less than with C++.
"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.
I wish we had something like Javascript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
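Partial consolation, for what it's worth: since C++17, a single using-declaration can name several things at once, which gets part of the way to that JS import line (it brings in names, not the headers themselves):

#include <string>
#include <unordered_map>
#include <vector>

// One using-declaration, several names (valid since C++17).
using std::vector, std::string, std::unordered_map;

int main() {
    vector<string> words{"we", "have", "import", "at", "home"};
    unordered_map<string, int> counts;
    for (const auto& w : words) ++counts[w];
}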
Last time I tried, modules were a sparsely supported mess. I'll give them another try once they have ironclad support in CMake, GCC, Clang and Visual Studio.
I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.
> whether C++ syntax ever becomes readable when you sink more time into it,
Yes, and the easy approach is to learn as you need/go.
I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write
foo
defer revert_foo
then when scanning the code, it's easier to verify that you didn't forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.
A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.
https://docs.rs/defer-rs/latest/defer_rs/
It's easy to demonstrate that destructors run after evaluating `return` in Rust:
struct PrintOnDrop;

impl Drop for PrintOnDrop {
    fn drop(&mut self) {
        println!("dropped");
    }
}

fn main() {
    let p = PrintOnDrop;
    return println!("returning");
}
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.
It gets even better in Swift, because you can put the return statement in the defer, creating a sort of named return value:
func getInt() -> Int {
    let i: Int // declared but not defined yet!
    defer { return i }
    // all code paths must define i exactly once, or it's a compiler error
    if foo() {
        i = 0
    } else {
        i = 1
    }
    doOtherStuff()
}
At $((job-1)) I think I used it a grand total of once, and I had to comment it heavily to explain what was going on. I think you need 3 things to be the case for it to help:
- Multiple return branches
- A complex, branchy bit of code to find a value you’re looking for, where some of the branches return early
- The actual return value is derived from said value, but the same way every time
I think it was something like:
func callFoo() -> FooResult {
    let fooParam: Int
    defer {
        let actualResult: FooResult = foo(fooParam)
        // ... more stuff you don't want to repeat ...
        return actualResult
    }
    if bar() {
        fooParam = 1
    } else {
        fooParam = 0
        // early return: actual return value is done in defer
        return
    }
    doOtherStuff()
}
So for any return path we want to call `foo()` with some Int we’ve calculated, and return that result. But we don’t want to repeat that logic in every early return. And we don’t want it to be possible to forget to define the Int before returning.
The magic is that a code path can just do a bare `return` statement, and it causes the defer to run which then actually does the logic to return the final value, and you don’t have to repeat that logic more than once.
In fact it’s easier to think of it as the `defer` being the whole point of the function (call foo() with some value, do some processing with it, then return the result), and then everything after the defer is all about finding the right argument to call foo() with. Swift will make sure all possible code paths assign to the value exactly one time, which is nice.
Without the defer here, you’d have to call foo() and do the “more stuff you don’t want to repeat” section in every place you want to return, which gets messy and error-prone. Or you’d have to restructure everything to not have any early returns (so even more if statements) but that can easily become less readable.
Meh, I was going to use the preprocessor for __LINE__ anyways (to avoid requiring a variable name) so I just made it an "old school lambda." Besides, scope_exit is in C++23 which is still opt-in in most cases.
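For the curious, the __LINE__ trick usually looks something like this sketch (the DEFER and DeferGuard names are invented): pasting the line number into the variable name makes each guard unique, so call sites never have to name one.

#include <cstdio>
#include <utility>

template <class F>
struct DeferGuard {
    F f;
    explicit DeferGuard(F fn) : f(std::move(fn)) {}
    ~DeferGuard() { f(); }
};

#define DEFER_CAT_(a, b) a##b
#define DEFER_CAT(a, b) DEFER_CAT_(a, b)
// __LINE__ gives each expansion a unique variable name.
#define DEFER(...) \
    DeferGuard DEFER_CAT(defer_guard_, __LINE__){[&] { __VA_ARGS__; }}

int main() {
    std::FILE* f = std::fopen("out.txt", "w");
    if (!f) return 1;
    DEFER(std::fclose(f)); // cleanup declared right next to the acquisition
    std::fputs("we have defer at home\n", f);
}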
This is a good “how C++ does it” explanation, but I think it’s more accurate to say destructors implement finally-style cleanup in C++, not that they are finally. finally is about operation-scoped cleanup; destructors are about ownership. C++ just happens to use the same tool for both.
how old is this post that 3.2 is "now"?