In my experience, the "best" code (defining "best" as some abstract melange of "easy to reason about", "easy to modify", "easy to compose", and "easy to test") ends up following the characteristics outlined by the sum of these three essays — strictly and rigorously elevating exceptions/failures/nulls to first-class types and then pushing them as high in the stack as possible so callers _must_ deal with them.
What constitutes the "best" code depends on the incidental complexity of the problems you're trying to solve. Great code has just enough of all those things; too much or too little and the code is worse.
You're right, of course — there are parts of my codebase that flagrantly disregard these rules, and did so for good reasons that I don't regret.
But I've found that while "everything is relative and should be situated in the context of the problem you're trying to solve" is a useful truism, it makes for poor praxis. It's hard to improve existing code or develop newer engineers without _some_ set of compasses and heuristics for what "good code" is, and once you develop that set the patterns and strategies for implementing "good code" naturally follows.
I hold these two essays similarly high in influence for myself. The pipeline/railway-oriented programming essay really made it click for me how to use first-class types to deal with error cases elegantly.
Unfortunately, a lot of languages make it difficult to have the compiler enforce exhaustiveness.
The author has conflated two concepts into "maybe function". Parsing is "maybe" in the sense that the parser will either return your object or fail. But it doesn't have to do any hidden, surprising behaviour like the "if (!loggedIn) {" line in the article.
The kicker here is that the author implemented a functor and called it a monad. So of course readers are going to think "the monad approach" is confusing and stay away.
I mean even if you implement a more standard Monad interface plenty of functional programmers still find working with Monads to be ugly. It's really not a solved area.
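For readers wondering about the distinction: a functor only gives you `map`, while a monad also gives you `flatMap` (also called `bind` or `chain`), which is what lets you sequence operations that each might fail. A hand-rolled sketch with an `Option` type (names are my own, not from the article):

```typescript
// Hand-rolled Option: `map` alone makes it a functor; adding `flatMap` makes it monad-like.
type Option<T> = { some: true; value: T } | { some: false };

const some = <T>(value: T): Option<T> => ({ some: true, value });
const none = <T>(): Option<T> => ({ some: false });

// Functor: transform the contained value, if any.
function map<A, B>(o: Option<A>, f: (a: A) => B): Option<B> {
  return o.some ? some(f(o.value)) : none();
}

// Monad: sequence a second operation that may itself fail.
function flatMap<A, B>(o: Option<A>, f: (a: A) => Option<B>): Option<B> {
  return o.some ? f(o.value) : none();
}

// With only `map`, chaining fallible steps would nest: Option<Option<B>>.
// `flatMap` flattens that, which is the point of the monadic interface.
const half = (n: number): Option<number> => (n % 2 === 0 ? some(n / 2) : none());
const result = flatMap(flatMap(some(8), half), half); // some(2)
```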
Can’t disagree more. Solution 1 just presents the risk that someone calls getUser() without doing the log-in check. Then what happens?
It is false that getUser being a “maybe function” forces the other functions like getFriends to be maybe functions. Don’t let them take null in their arguments. Force the caller to deal with the null when it is returned by getUser.
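Concretely, TypeScript with `strictNullChecks` enforces exactly this: let `getUser` return `User | null`, but declare `getFriends` to take a non-nullable `User`, and the null check is forced at the call site. (The names mirror the article's examples; the bodies are my own sketch.)

```typescript
interface User { id: number; name: string }

// Only this function is a "maybe" — its type admits failure.
function getUser(loggedIn: boolean): User | null {
  return loggedIn ? { id: 1, name: "alice" } : null;
}

// Downstream functions take a real User; null is not a valid argument.
function getFriends(user: User): string[] {
  return [`friend-of-${user.name}`];
}

const user = getUser(true);
// getFriends(user);  // compile error under strictNullChecks: user may be null
const friends = user !== null ? getFriends(user) : [];
```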
The first solution looks too easy. If there is no logged-in user, which User object is fetchUser going to return? Which friends? At the top level, if I were to forget to check whether someone is logged in, who knows what would happen here.
I've worked on codebases where people were so allergic to the "billion dollar mistake" of nulls, that they created empty objects to return instead of returning null. This bit us in the ass a couple of times, e.g., when caller code was mistakenly passing the wrong ID variable into a fetch method, and just happily continued working and writing garbage into the DB, because it did not realize that its fetch had actually failed. It took data from the empty result object and happily continued its computation with it.
> This looks too easy, the first solution. If there is no logged on user, which User object is fetchUser going to return? Which friends? At the top level, if I were to forget to check if someone is logged in, who knows what would happen here.
It feels like the most likely thing to happen is that the `getUser()` call would throw a Null Pointer Exception?
I think the author is avoiding the pitfall of the NullObject pattern applied incorrectly with solution #1 because they're not masking the 'null-ness' in the code further down, they're just assuming that `null` will never get passed as a value. If it is, code blows up & then gets patched.
I’ve had limited success with the null object pattern, but there is one case where it worked really well for me. I worked on a feature that was highly dynamic: users could compose reports by selecting data points from tangentially related models. Null objects were a really helpful pattern because it was hard to anticipate how models would be composed, and if a developer made a mistake it was hard to notice because there was no visible effect. Our null objects would raise exceptions in development and explain what you needed to change, but wouldn’t prevent execution in production.
You could easily argue we should have just presented this exception to the user in all cases but this is where we landed. It’s probably the only case this pattern was beneficial for me.
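A sketch of that dev-loud, production-quiet behaviour (the environment flag, class, and interface names are assumptions for illustration, not from the original system):

```typescript
// Null object that is loud in development but inert in production.
const IS_DEV: boolean = true; // assumed environment flag

interface Report { rows(): string[] }

class NullReport implements Report {
  constructor(private reason: string) {}

  rows(): string[] {
    if (IS_DEV) {
      // Fail fast while developing so a missing model is noticed...
      throw new Error(`NullReport used: ${this.reason}`);
    }
    // ...but degrade to "no rows" in production.
    return [];
  }
}
```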
Another option is Exceptions. The function either does what it's supposed to, or freaks out.
You can remove the null checks and the software will raise a null pointer exception. In the first example, it could raise a NotLoggedInException.
It's still a maybe function, but you have a mechanism for expressing the why-notness of the function run, as opposed to returning a generic null.
As an aside, I prefer the "Unless" model of thinking vs the "Maybe" model of thinking. It's biased towards success. It presumes that the function is most likely to do something unless a precheck fails. filterBestFriendsUnless vs maybeFilterBestFriends. getUserUnless vs maybeGetUser. If we go this far down the rabbit hole, we can assume there's always an "unless". Programs run out of memory, stacks have limited depth. There are maybe conditions for which we cannot account.
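The exception-based variant described above might look like this (NotLoggedInException modeled as a plain Error subclass; the function body is my own sketch):

```typescript
class NotLoggedInException extends Error {
  constructor() { super("no user is logged in"); }
}

interface User { id: number; name: string }

// Instead of returning a generic null, the function states *why* it could not run.
function getUser(loggedIn: boolean): User {
  if (!loggedIn) throw new NotLoggedInException();
  return { id: 1, name: "alice" };
}

try {
  const user = getUser(false);
  console.log(user.name);
} catch (e) {
  if (e instanceof NotLoggedInException) console.log("please log in first");
  else throw e;
}
```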
I think that's true for checked exceptions; in Typescript, I'd rather see that a function may return a null, rather than get surprised by a possible exception that's not telegraphed.
I think that's my biggest problem with exceptions. I have to rely on the doc comments to figure out whether a method can throw exceptions and which and when. And who knows if that covers all the possible exceptions from all the code that method relies on. It entirely sidesteps the type system and means I can't rely on the input/output types when using a method.
What the exceptional case is depends on what the pre- and post-conditions of the function are. If a function assumes that the user is logged in, then the user not being logged in is indeed exceptional. That's not to say it is good design, though; the function is quite fragile like this. If it must assume that a user is logged in, then it could simply require a user to be given as an argument, which removes the whole possibility.
In this case, it's not being used for basic control flow. It's a prerequisite of the function that the user is logged in - and you violated that so it's an error. Returning null masks the reason why that happened.
As others have already said: ideally, you shouldn't even be able to call this function when its prerequisites are violated. You can achieve that by putting the function inside some sort of object which can't be created without a logged-in user. If you don't have that object, you can't ask for user information.
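A sketch of that idea: a `Session` whose constructor is private, so the only way to obtain one is a successful login, and the user-data accessors live on it (all names and the credential check are illustrative):

```typescript
interface User { id: number; name: string }

// A Session can only be obtained by logging in successfully, so every
// method on it can assume a logged-in user and never needs the check.
class Session {
  private constructor(private user: User) {}

  static login(username: string, password: string): Session | null {
    // Hypothetical credential check.
    if (password !== "secret") return null;
    return new Session({ id: 1, name: username });
  }

  getUser(): User { return this.user; }
  getFriends(): string[] { return [`friend-of-${this.user.name}`]; }
}

const session = Session.login("alice", "secret");
if (session !== null) {
  console.log(session.getFriends());
}
```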
The proliferation of conditional "maybe" functions is a sign that your call graph is contrived and unnatural. You shouldn't be checking "userLoggedIn == true" in each and every accessor function. Ideally, such checks should bubble up towards the top of the call stack, and be performed once in an event loop iteration. The calling code should make sure that some basic prerequisites are met.
I use maybe functions a lot for things like "maybeShowReminderDialog". The conditions for displaying the reminder are wrapped in this maybe function.
Surely that's simpler than specifying those conditions before every call to show this dialog, resulting in plenty of duplicated code. And if those conditions change, there is only one place I need to update it.
Of course I can make a single operation to check those conditions like "shouldShowReminder", but that too is doubling the surface area of this code.
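A sketch of that pattern, with the display conditions centralized in one function (the conditions and names are invented for illustration):

```typescript
interface AppState { remindersEnabled: boolean; lastShownDaysAgo: number }

// All the display conditions live here; callers just invoke it.
// Returns whether the dialog was actually shown.
function maybeShowReminderDialog(state: AppState): boolean {
  if (!state.remindersEnabled) return false;
  if (state.lastShownDaysAgo < 7) return false;
  showReminderDialog();
  return true;
}

function showReminderDialog(): void {
  console.log("reminder shown");
}
```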
I see the merit of the argument here but disagree with the absolutist stance against "maybe" functions.
I would argue that the vast majority of functions in real world software are maybe functions in that they can fail. You need to be able to deal with failure. Not only can the user not be logged in, there can be a network issue, etc that makes even downstream functions fail.
Also, you have to deal with developer mistakes and what happens when they call something incorrectly. This can be something as simple as getting the first element of a collection. What happens when the collection is empty? You can adopt the C++ approach of “undefined behavior”, but it turns out to be dangerous.
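TypeScript illustrates the trade-off with the empty-collection case: plain indexing silently yields `undefined` on an empty array, whereas a small helper can make the "maybe" explicit in the type (the `first` helper is my own sketch):

```typescript
// Plain indexing: without noUncheckedIndexedAccess, the type system claims
// `number`, but at runtime an empty array yields undefined.
const empty: number[] = [];
const sneaky: number | undefined = empty[0]; // undefined at runtime

// A helper that makes the possible absence part of the type:
function first<T>(xs: readonly T[]): T | undefined {
  return xs.length > 0 ? xs[0] : undefined;
}

const a = first([1, 2, 3]); // 1
const b = first(empty);     // undefined — and the type says so
```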
Monads provide a nice, disciplined way of dealing with this and of composing functions that can potentially fail.
Thankfully, newer languages are providing support for monads, and older languages are evolving features/libraries for monadic error handling.
1. Parse, don't validate (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...)
2. Pipeline-oriented programming (https://fsharpforfunandprofit.com/pipeline/)
Parsing is determining whether you should do it or not; it's about setting up a boundary past which you never attempt something that would be a maybe.
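In the "parse, don't validate" style, the boundary produces a new type that encodes the check, so downstream code can't be a "maybe". A sketch using a branded type (the types and names here are my own, not from the essay):

```typescript
// A branded type that is only produced by the parser at the boundary.
type LoggedInUser = { name: string; readonly __brand: "LoggedInUser" };

function parseLoggedInUser(raw: { name?: string; loggedIn: boolean }): LoggedInUser | null {
  if (!raw.loggedIn || raw.name === undefined) return null;
  return { name: raw.name } as LoggedInUser; // the only place the brand is applied
}

// Past the boundary there is no "maybe": this can't be called without proof.
function greet(user: LoggedInUser): string {
  return `hello, ${user.name}`;
}

const u = parseLoggedInUser({ name: "alice", loggedIn: true });
if (u !== null) console.log(greet(u)); // prints "hello, alice"
```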
* Monads naturally arise out of many problems in programming.
* But I don't want my language to support monads.
* So here's something you can do to stay in denial about how much you need monads.
At least this example only involves writing hard-to-analyse code and doesn't lead to you trying to invent green threads.
There is only one safe(ish) way to deal with programmer errors: crash. Hopefully loudly and early enough so it gets discovered in testing.
Predicting every possible failure reason for a function is impossible. Every function is a maybe function.