Let me illustrate a common code organization issue some programmers run into on their first day. The novice writes
terminal.print("Hello world")
Then they decide they want to make the text red, so they edit their program to
terminal.print("Hello world")
terminal.setPrintColor("red")
And then they're confused that it didn't come out red. They haven't internalized that the first line of code happens before the second. They just get a soup of code on the screen that kind of has the ingredients for a program, and expect the computer to do what they want.
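The point can be made concrete with a small sketch. The `Terminal` class below is hypothetical (it just records output instead of drawing anything), but it shows why the statement order matters:

```python
# A minimal sketch of the hypothetical terminal API from the example above.
# Output is recorded in a list so the effect of statement order is visible.
class Terminal:
    def __init__(self):
        self.color = "default"
        self.output = []

    def set_print_color(self, color):
        self.color = color

    def print(self, text):
        # Each printed line is tagged with whatever color is current *now*.
        self.output.append((self.color, text))

# The novice's version: the color is set *after* printing, so it has no effect.
t = Terminal()
t.print("Hello world")
t.set_print_color("red")
assert t.output == [("default", "Hello world")]

# Fixed version: statements run top to bottom, so set the color first.
t = Terminal()
t.set_print_color("red")
t.print("Hello world")
assert t.output == [("red", "Hello world")]
```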
I find this _extremely_ surprising. But one doesn't even need to learn category theory. I assume that everybody has learned abstract algebra in high school: monoids, rings, groups, vector spaces and all that. A monad is just another kind of structure like that. If you have studied abstract algebra in school then it should take 5 seconds to read the definition of a monad, a minute to understand it, and perhaps 10 minutes to see how various things such as errors or lists form monads.
Learning category theory, or indeed any sort of math, from Wikipedia is an absolutely futile endeavour.
I think there is a definite "step up" in complexity between the structures of abstract algebra such as monoids, rings, groups and vector spaces, and monads. All of those algebraic structures are basically just sets equipped with operations satisfying some equations. Monads are endofunctors equipped with natural transformations satisfying some equations. "Endofunctor" and "natural transformation" are considerably more advanced and abstract concepts than "set" and "operation", and they are concepts that belong to category theory (so I don't see how you can read and understand the definition of a monad without that basic level of category theory).
Your time estimates also seem wildly optimistic. A common rule of thumb is that reading a maths textbook at your level takes around an hour per page. I think the definition of a monad can be compared to one page of a textbook. So I'd say it'll take on the order of hours to read and understand the definition of a monad, and that's assuming you're already entirely comfortable with the pre-requisite concepts (natural transformations, etc.)
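For what it's worth, the *programmer's* formulation of a monad is small enough to sketch without any category theory. Here is the list monad in Python, with `unit` (a.k.a. `return`) and `bind` written out, plus the three monad laws checked on sample values (the function names are my own choices, not a standard API):

```python
# The list monad in the programmer's formulation:
# unit wraps a plain value, bind sequences a list-producing computation.
def unit(x):
    return [x]

def bind(xs, f):
    # f maps a value to a list of results; bind applies f and flattens.
    return [y for x in xs for y in f(x)]

# Example: all sums of one element from each list (a "nondeterministic" sum).
result = bind([1, 2], lambda x: bind([10, 20], lambda y: unit(x + y)))
assert result == [11, 21, 12, 22]

# The three monad laws, checked on sample values:
f = lambda x: [x, x + 1]
g = lambda x: [x * 2]
assert bind(unit(3), f) == f(3)                # left identity
assert bind([1, 2, 3], unit) == [1, 2, 3]      # right identity
assert bind(bind([1, 2], f), g) == bind([1, 2], lambda x: bind(f(x), g))  # associativity
```

Seeing that the laws actually hold for a concrete instance like this is, in my experience, faster than parsing the endofunctor definition cold.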
Seriously, I've read things about lists and nondeterminism a few times in this thread, and I can't help but wonder if "you guys" (functional programming nerds, maybe?) use the word "nondeterministic" differently from the rest of the world?
If not, I'd love a good explanation about what makes lists non-deterministic, and why we would want that, and why they seem to be perfectly deterministic in imperative programming languages.
Think of a computation as a process of changing state. At a given point in time, the computer is in a certain state, the current state. The computation can be described in terms of a function that acts on the current state.
In a deterministic computation, the function takes in the current state, and produces a single state as output which will be the state the computer enters on the next tick.
In a non-deterministic computation, the function takes in the current state and produces a set of states as output. These states are all the possible states the computer might enter on the next tick. We don't know (or just don't care) which one of these states it will enter.
You can model a non-deterministic computation as a deterministic one, by using a list `currentStates` to store the set of all possible current states of the computation. At each "tick", you do `currentStates = flatMap(nextStates, currentStates)` to "progress" the computation. In the end `currentStates` will be the set of all possible end states (and you could do some further processing to choose a specific end state, e.g. at random, if you wish, but you could also just work with the set of end states as a whole).
It's in this sense that "a list is a non-deterministic result", although this is really just one thing a list can represent; a list is a generic data structure which can represent all sorts of things, one of which is a non-deterministic result.
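The `flatMap` progression described above can be sketched in a few lines. Here `next_states` is a hypothetical transition function (a counter that can either stay put or increment on each tick), chosen just to make the possible-states set visible:

```python
# Model a non-deterministic computation deterministically: track the list of
# all possible current states, and progress it with flatMap on each tick.
def flat_map(f, xs):
    return [y for x in xs for y in f(x)]

def next_states(n):
    # Hypothetical non-deterministic step: the machine may stay or increment.
    return [n, n + 1]

current_states = [0]           # one known starting state
for _ in range(2):             # run the machine for two ticks
    current_states = flat_map(next_states, current_states)

# All possible end states after two ticks (one entry per path taken):
assert current_states == [0, 1, 1, 2]
assert set(current_states) == {0, 1, 2}
```

Note that the list records one entry per *path*, so duplicates reflect the number of ways a state can be reached; collapsing to a set gives just the reachable states.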
People sometimes think that's a silly thing to ponder: it's obviously obvious! But at most places I've worked, we spend lots of time defining the technical skills required for a job and handwave the rest.
I guess people assume "they'll know it when they see it". But there's a lot of ambiguity. Parent comment suggests that being comfortable sitting in a conference room for an hour is an important part of their job. In some workplaces that would be an odd requirement. I've worked at places where the important thing was being able to go away and make progress on something for a few weeks.
I suspect there are people with autism reading these threads and feeling disheartened. It would be easy to leave with the impression that neurotypical people expect you to make all the effort and they won't try to meet you half way. Some workplaces are like that. But in all the talk about neurotypical vs neurodivergent, it's easy to forget that neurotypical people are a varied lot, just like neurodivergent people. Workplaces are a varied lot too.
In this respect interviewing is a bit like LeetCode. LeetCode problems and writing code to satisfy business requirements are both "coding" but they're quite different kinds of coding; someone being able to do the former is probably good evidence they can do the latter, but there are also plenty of people who can do the latter without being able to do the former. So it is, in my view, with interviewing vs. interacting with people on the job.
Did the author overthink it or are the simple solutions not correct?
Right...
It's not impossible, it just has zero probability of occurring.
Is it? Where can I read a proof? I have a feeling it’s an uncountable set, but I would be happy to see a proof one way or another.
(The latter statement holds because for any given n, the set X_n of all strings of length n is finite. So you can count the members of X_0, then count the members of X_1, and so on, and by continuing on in that way you'll eventually count out all members of X. You never run out of numbers to assign to the next set because at each point the set of numbers you've already assigned is finite (it's smaller in size than X_0, ..., X_n combined, for some n).
In fact, even if you allow countably infinitely many phonemes to be used, the X_n sets will still be countable, if not finite, and in that case their union is still countable: to see that, you can take enumerations of each set and put them together as the columns of a matrix. Even though the matrix is infinite in both dimensions, it has finite diagonals, so you can enumerate its cells by going one diagonal at a time, like this (the numbers reflect the cells' order in the enumeration):
1 3 6 10 15
2 5 9 14
4 8 13
7 12
11
However if you allow sentences to be countably infinitely long, then even when you only have finitely many phonemes, the set of all sentences will be uncountable, because in that case each countably infinitely long sentence can be mapped to a real number represented as an expansion in some base, and you can apply Cantor's diagonal argument. The "just count out each X_n separately" argument doesn't work in this case because it only applies to the sentences of finite length.)
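The diagonal numbering above has a simple closed form, which makes the enumeration easy to check. A small sketch (the `diagonal_number` helper is my own, not a standard function):

```python
# Number the cells of an infinite matrix one anti-diagonal at a time.
# Cell (row, col) lies on diagonal d = row + col; all d*(d+1)//2 cells of
# the earlier diagonals come first, then we walk along the current one.
def diagonal_number(row, col):
    d = row + col
    return d * (d + 1) // 2 + col + 1

# Reproduce the corner of the matrix shown above.
matrix = [[diagonal_number(r, c) for c in range(5 - r)] for r in range(5)]
assert matrix[0] == [1, 3, 6, 10, 15]
assert matrix[1] == [2, 5, 9, 14]
assert matrix[2] == [4, 8, 13]
assert matrix[3] == [7, 12]
assert matrix[4] == [11]
```

Since every cell gets a distinct finite number, this exhibits an explicit bijection between the cells and the naturals, which is exactly what countability of the union requires.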
Obviously, since f(f) = f, it should be a -> a as well. But to infer it without actually evaluating it, using the standard Hindley-Milner algorithm, we reason as follows: the two f's can have different specific types, so we introduce a new type variable b. The type of the first f will be a substitution instance of a -> a, the type of the second f will be a substitution instance of b -> b. We introduce a new type variable c for the return type, and solve the equation (a -> a) = ((b -> b) -> c), using unification. This gives us the substitution {a = (b -> b), c = (b -> b)}, and so we see that the return type c is b -> b.
But if we use pattern matching rather than unification, the variables in one of the two sides of the equation (a -> a) = ((b -> b) -> c) are effectively treated as constants referring to atomic types, not variables. Now if we treat the variables on the left side as constants, i.e. we treat a as a constant, we have to match b -> b to the constant a, which is impossible; the type a is atomic, the type b -> b isn't. If we treat the variables on the right side as constants, i.e. we treat b and c as constants, then we have to match a to both b -> b and c, and this means our substitution will have to make b -> b and c equal, which is impossible given that c is an atomic type and b -> b isn't.
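The unification step can be sketched in a few lines. The representation below is my own choice (a type is either a variable, written as a string, or a tuple `("->", domain, codomain)`), and the occurs check is omitted for brevity:

```python
# Minimal first-order unification over simple types.
def walk(t, subst):
    # Follow substitution bindings until we hit a non-variable or unbound var.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):               # t1 is a variable: bind it
        return {**subst, t1: t2}
    if isinstance(t2, str):               # t2 is a variable: bind it
        return {**subst, t2: t1}
    # Both are arrow types: unify domains, then codomains.
    return unify(t1[2], t2[2], unify(t1[1], t2[1], subst))

# The equation from the text: (a -> a) = ((b -> b) -> c)
arr = lambda dom, cod: ("->", dom, cod)
s = unify(arr("a", "a"), arr(arr("b", "b"), "c"), {})
assert walk("a", s) == ("->", "b", "b")   # a = b -> b
assert walk("c", s) == ("->", "b", "b")   # c = b -> b, so f(f) : b -> b
```

Pattern matching would amount to refusing the first `isinstance(t1, str)` branch (or the second), i.e. treating one side's variables as rigid constants, and with either side frozen the equation above indeed has no solution, as described.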