Tangent: I love Swift's new typed throws! One frustration with Python is that you can't tell, just by looking at a function signature, that it raises exceptions. With Swift (and, afaik, Java) you declare it in the function signature, but you still wouldn't know what _type_ of exception to handle. Now that problem is solved!
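For anyone who hasn't seen it yet, a minimal sketch of the new syntax (the names are made up for illustration):

```swift
enum ParseError: Error {
    case notANumber
}

// The error type is part of the signature: callers know the only
// thing they need to handle is ParseError.
func parseScore(_ text: String) throws(ParseError) -> Int {
    guard let value = Int(text) else {
        throw ParseError.notANumber
    }
    return value
}

do {
    print(try parseScore("42"))
} catch {
    // With typed throws, `error` here is statically a ParseError.
    print("bad input: \(error)")
}
```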
More generally though, I wish we could avoid having exceptions to begin with. What's the reason behind their prevalence in almost every language?
Typed throws (of errors; Swift does not support exceptions) are for pretty specific usage scenarios - there was a years-long recurring discussion around adding them to the language (of which I was an opponent).
They are useful within your own module, e.g. when they aren't part of an exported API, as an alternative to other error handling methods.
You can support them in generic utility methods like map, because you will just rethrow the typed throw.
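A sketch of that pattern - `myMap` is a stand-in here, but if I recall correctly this is roughly how the Swift 6 standard library declares map:

```swift
extension Array {
    // The closure's error type E flows straight through to the caller:
    // if `transform` throws E, myMap throws E, and if E == Never the
    // compiler knows the call can't throw at all.
    func myMap<T, E: Error>(_ transform: (Element) throws(E) -> T) throws(E) -> [T] {
        var result: [T] = []
        result.reserveCapacity(count)
        for element in self {
            result.append(try transform(element))
        }
        return result
    }
}
```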
Otherwise, they are meant for systems programming or use in embedded environments - basically the 'leaf code' that doesn't have upstream variability or independent upstream dependencies. They are meant to indicate things like 'errno' in a POSIX API. I actually feel they are a poor functional fit there; the valid error results differ both across errno-setting functions and across UNIX implementations of a particular function. The symbolic assignment (semantic error code to errno numeric value) also varies across implementations. However, typed throws let the compiler know errors are an int-sized stack type, vs. a witness for a heap-allocated object.
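To make the errno case concrete, a minimal sketch of that 'leaf code' shape (assuming Darwin; `Errno` is a made-up wrapper):

```swift
import Darwin

// An int-sized error type - the case where the compiler can keep the
// thrown value on the stack instead of boxing it behind a witness.
struct Errno: Error {
    let code: Int32
}

func openReadOnly(_ path: String) throws(Errno) -> Int32 {
    let fd = open(path, O_RDONLY)
    guard fd >= 0 else { throw Errno(code: errno) }
    return fd
}
```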
To look at it differently, errors are meant either for recovery or for indicating a general failure that code should attempt to clean up from. Once you delegate an error to other code by rethrowing it, you are no longer conveying proper knowledge for recovery - so there's no purpose to having it be typed; all you can really do is try to fail gracefully.
A specific example - something like SwiftData is a poor fit for typed throws, because the database layer itself is adaptable. My application doesn't know whether a failure is due to the local disk being out of space or a transient network issue connecting to a remote server, and attempting to recover from these in my application code is a pretty bad pattern, because I'd be baking assumptions about a particular configuration of SwiftData into my application code (or into applications which use my module).
My opposition to the feature is that it is never required (you can indicate failure in the return signature via something like Result), adds overall language complexity, and is likely to be misused in cases where it doesn't provide value - it just adds ABI complexity over simply documenting expected Error types.
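For comparison, the same information expressed through the return type instead (made-up names):

```swift
enum FetchError: Error {
    case notFound
}

// Conveys the same thing as `throws(FetchError)`, with no new
// language feature needed.
func fetchUser(id: Int) -> Result<String, FetchError> {
    guard id == 42 else { return .failure(.notFound) }
    return .success("Arthur")
}

switch fetchUser(id: 7) {
case .success(let name):
    print(name)
case .failure(let error):
    print("lookup failed: \(error)")
}
```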
Also nice: a throw in Swift is like a return! It sets the error in a specific register and then returns.
The ‘throws’ signature indicates two things to the caller: the user needs to write some error handling, and the callsite needs to check the special return register in case it is filled with an error.
This way you do not need to do expensive stack walking.
> What's the reason behind their prevalence in almost every language?
Unexpected errors are inescapable when you consider OOMs and other things, so it's almost required to support that. Ending scopes and bubbling up an error without a ton of unwrap boilerplate is actually really compelling.
I'd also say there is a bit of a "when you have a hammer…" thing going on. Java in particular spent a good portion of its formative years with relatively few language features, so there was a lot of cobbling them together in interesting ways to solve new classes of problems.
Exceptions became a way to capture diagnostic context for where an error happened and a (limited) view of state, so a lot of Java server applications use them as part of their issue reporting and diagnosis process.
Other languages may expect, say, the application to halt on unexpected errors, and build tools to locally evaluate the core dump (such as crash reports and symbolication in Apple-land).
This does have some influence on developers in terms of the ramifications of a failure (failed request vs stopped application/server), so it can have interesting effects in how seriously a team evaluates potential errors and designs for recovery/cleanup.
OOMs are usually a bad thing to make into an error because there is no way to recover from them. You might as well just kill the process instead of making people try to handle it.
In Java, typed (checked) exceptions are now considered an antipattern, because they caused problems in multiple parts of the language, such as with lambdas. Results from Rust tend to be more composable.
Do you know of any blogs or writing on this that I could read?
Naively, untyped exceptions in Swift always felt like an obvious type system loophole that we should be closing as soon as possible.
I trust that the "typed exceptions are an antipattern" people know what they're talking about, but I really don't understand the reasoning behind that position.
An exception allows you to handle exceptional errors at any level, without having to handle errors, or write a single line of code, at every other level. You can assume perfection and put that catch at the level of abstraction where there's an actual concern for the exceptional, keeping all the other code simple.
I've always worked strongly on the physical-world side of software - network stacks, test equipment, robots, etc. - so I can't see a sane alternative that doesn't involve increasing the LOC by 20% and being error prone (it's trivial to accidentally eat an exception if you're relying on returned values).
If you don't want to bubble up the exception to the user, that's trivial too. You just catch it at whatever level you choose, handle whatever, then return something nice.
> More generally though, I wish we could avoid having exceptions to begin with. What's the reason behind their prevalence in almost every language?
Because nobody really wants low-level stuff like divide by zero, memory errors, or index out of bounds wrapped as a `Result` type. Both Rust and Go have panics. Every function would effectively return a `Result` type, since any non-constant expression can hypothetically return an error.
> What's the reason behind their prevalence in almost every language?
More often than not it's not the caller that is responsible for handling errors/exceptions.
When you force the caller to take care of every single error, you end up with unreadable boilerplate code which hides the actual logic. There's a reason why Rust ended up with the `?` syntax sugar.
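Swift's `try` inside a throwing function plays a similar role: each call either yields a value or forwards the error to the caller, so the happy path stays linear. A rough sketch with made-up names:

```swift
import Foundation

struct MalformedLine: Error {
    let line: String
}

func parseConfig(_ text: String) throws -> [String: String] {
    var config: [String: String] = [:]
    for line in text.split(separator: "\n") {
        let parts = line.split(separator: "=", maxSplits: 1)
        guard parts.count == 2 else { throw MalformedLine(line: String(line)) }
        config[String(parts[0])] = String(parts[1])
    }
    return config
}

// Each `try` either produces a value or returns the error to our
// caller - no unwrap boilerplate at every step, much like `?` in Rust.
func loadConfig(at path: String) throws -> [String: String] {
    let text = try String(contentsOfFile: path, encoding: .utf8)
    return try parseConfig(text)
}
```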
On top of that, exceptions will occur. You can't pretend they won't, and you can't just kill the app when they do. Again, even Rust and Go ended up adding handlers for their brain-dead panics.
Exceptions (when wielded correctly) end up simplifying your program. You develop for the happy path (mostly), and let code at the higher level of hierarchy make decisions about unhappy paths. That's how you get Erlang's supervision trees (https://erlang.org/documentation/doc-4.9.1/doc/design_princi...)
IMO, the problem with tools that are great "when used correctly" is that if they don't force that "correctly" part, or work in such a way that people fall into using them correctly by path of least resistance, then people don't use them correctly. This, again IMO, is why people have problems with exceptions: they don't have these qualities, and they are almost universally used incorrectly... thus the new languages have eschewed them, much like they eschewed heavy-handed OO abstractions. They were tried and found to be lacking for their intended purpose, and alternatives are being tried.
I’m not familiar with Rust, but Swift has three options and it sounds like it may be similar.
In Swift, code that throws must have "try" in front of it at the call site, making it really obvious where that's going on. Your three options:
try - calls the code and either returns the value like normal, or throws an error that you're forced to handle in a catch.
try? - calls the code and returns the value, or nil (a null value) if an error is thrown.
try! - calls the code and returns the value. If the function throws an error, your app panics.
It’s quite nice. You can choose to handle the error when you need to. If you don’t really care about the specifics of the error and just want to treat any kind as a failure, try? cleans up your code.
And try! lets you avoid writing boilerplate when you know it’s impossible for the error to be thrown but the compiler can’t deduce that from the source alone.
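A quick sketch of all three (the function is made up):

```swift
enum ScoreError: Error {
    case notANumber
}

func score(from text: String) throws -> Int {
    guard let n = Int(text) else { throw ScoreError.notANumber }
    return n
}

// try: handle the failure in a catch.
do {
    print(try score(from: "41") + 1)
} catch {
    print("failed: \(error)")
}

// try?: collapse any error into nil.
let maybe = try? score(from: "oops")    // nil

// try!: assert failure is impossible; traps at runtime if it isn't.
let sure = try! score(from: "42")       // 42
print(maybe as Any, sure)
```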
The article isn't about Swift; it's about the history of concurrency on Macs, dating back to the last millennium.
The article doesn't even really explain the modern push - the difference between threads and fibers, or between actors and concurrency domains, or most critically between x86 and ARM in their memory models.
> Apple’s first Macs with dual processors came in 2000
There was also the Power Macintosh 9500/180MP back in 1996, equipped with two 180 MHz PowerPC 604e CPUs.
Classic Mac OS didn't support multiple processors natively (it couldn't schedule different programs on different CPUs or anything), so the second CPU was only useful as basically a "Photoshop accelerator".
> Naively, untyped exceptions in Swift always felt like an obvious type system loophole that we should be closing as soon as possible.
Swift is getting typed throws now because the untyped ones require memory allocation, and so aren't suitable for embedded programming.
If only it was a problem to begin with ;)
> The article doesn't even really explain the modern push - the difference between threads and fibers, or between actors and concurrency domains, or most critically between x86 and ARM in their memory models.
Would someone please write that article?
The libdispatch source code is an enlightening read for those interested in diving deeper into the C runtime powering Swift concurrency.