This is a bit defeatist. Parsing the definition in your head is only the first level of understanding you can have about a mathematical structure. You don't really understand something until you can reinvent it (and in particular give a plausible answer to "why these axioms and not others?")
For example, to motivate groups, you could introduce the concept of a symmetry as a mapping from an object to itself that doesn't change its properties, introduce the idea of an isomorphism as a mapping with an inverse (where f and g being inverse means they compose to identity maps), put them together and postulate that a reasonable formalization of a symmetry is an automorphism (isomorphism from an object to itself), note that isomorphisms are closed under composition, and arrive at the definition of a group by considering sets of symmetries closed under finite composition (thinking of identity maps as the composition of 0 morphisms).
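A tiny concrete version of that construction (a sketch, assuming nothing beyond modular arithmetic): the four rotations of a square are symmetries, composing two rotations adds their quarter-turn counts mod 4, and closure, identity, and inverses all fall out.

```javascript
// The four rotations of a square, encoded as quarter-turn counts 0..3.
// Composing two rotations adds their turn counts mod 4.
const compose = (a, b) => (a + b) % 4;
const identity = 0;                 // the composition of zero symmetries
const inverse = a => (4 - a) % 4;   // undo a rotation

// Closure, identity, and inverses - the group axioms - all check out:
const rotations = [0, 1, 2, 3];
const closed = rotations.every(a =>
  rotations.every(b => rotations.includes(compose(a, b))));
const hasInverses = rotations.every(a =>
  compose(a, inverse(a)) === identity);
```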
I'm sure there's a similarly conceptual way to motivate monads in functional programming. Hyland and Power have papers on algebraic theories of "effectful" operations and how they give rise to (finitary) monads, as one starting point.
Indeed the typical wisdom in math is that the best way to understand a group (and the reason we usually care about one) is its actions.
There's some quote to the effect of: "Just as with men, a group is known by its actions." This was the only reference I could find off the top of my head https://gowers.wordpress.com/2011/11/06/group-actions-i/, but I've definitely heard the sentiment repeated in a couple different texts.
The sentiment expressed in this article might be useful to hear if you're solely interested in learning how to use monad transformers or whatever in Haskell. E.g., I know a fair amount of math, but very little category theory, and manage to use monads just fine without any deep intuition as to why they're an important formalism. But if you want to develop your problem-solving skills, for example, I think "quiet contemplation" of axioms is pretty much always worse than working through several examples and absorbing some good exposition.
If I remember correctly, you could forget the "inverses" bit and define a group action with only the monoid corresponding to the group.
So not all the "groupiness" is captured by actions.
Also, Bill Thurston's beautiful "On Proof and Progress in Mathematics" [1] contains several ways to understand the derivative, in addition to it being a look into what a very distinguished mathematician thought of the nature of the mathematical pursuit. Pretty similar ideas, explored in depth.
Even the brightest brain will have been stuck on something which, after the penny has dropped, will seem trivial.
I agree with the author that there is a danger of "dumbing down" concepts which risks casting a lower ceiling on our capacity for lofty thoughts. There is another risk, though, that as someone becomes overfamiliar with a subject, they lose the ability to empathise with people who cannot immediately grasp the same concept. See [0] for a mathematician's reflections on revisiting his own DPhil thesis five years on.
I would counter that making abstract concepts accessible broadens their reach, which allows for greater potential for new concepts to be synthesised. Put another way, the more people who have some grasp of an idea, the more humanity will progress.
Using Group Theory (strictly Point Groups) as an example, there are plenty of Chemists out there who are doing great work building catalytic organometallic compounds using Group Theory. I'm sure plenty of them wouldn't be able to tell you what a Group was, nor even express what the general basis of a Group is.
So yes, while there is a need to push our capacity to think increasingly abstractly, we should also not be afraid to ruthlessly simplify, as the world is big enough that individuals will derive benefit from diluted concepts for generations to come. What's more, as concepts get simplified, the next generation can short-circuit their learning to achieve greater understanding at an earlier stage in their lives.
As I've always thought: if every human is born roughly equal, thank god we haven't had to "re-derive" making fire every generation otherwise how would we ever have time in our lifespans to learn about quantum mechanics and more.
Indeed. I think the point about allowing people to understand the subject at various stages in their lives is very apt. We apply pedagogical simplification to almost everything we teach children anyways, but it's quite possible that we could be much more far-reaching in our attempts. It would be interesting to see a world where children are taught basic abstractions of group theory.
Mathematics is one of those fields where it is difficult to simplify. Period. But in addition, the practitioners often actively resist this simplification. Mathematics is firmly rooted in proof, but not all mathematics education needs to be firmly rooted in proofs, and for that matter, not all mathematics education must be rooted in mechanistic repetition of solution techniques. There is perhaps a way to endow people with these concepts when they couldn't otherwise arrive at them through rigorous proving.
>> There is perhaps a way to endow people with these concepts when they couldn't otherwise arrive at them through rigorous proving.
Indeed, it would be great if there were such a way. But sadly, without proofs, these concepts are half-baked and thus harmful: they give a false sense of understanding, and thence the more harmful overconfidence.
I remember the great thought that applies to all of math education (and to education in general) - "There is no royal road to geometry." [1]
> not all mathematics education must be rooted in mechanistic repetition of solution techniques.
Actually, coming up with proofs often requires creativity. If anything, the emphasis on memorizing formulas and solution procedures comes from an aversion to teaching proofs.
I am coming to believe that computational thinking != mathematical thinking.
I'm able to reason logically in code because I'm able to reason logically about things in the real world and natural language supports that. I was able to code (early teens) and think computationally long before I ever knew there was such a thing as formal logic and truth tables and all that.
And let's consider names. In Ruby I knew what the module Enumerable did before I ever read the docs. Monoid? I don't think so. And I have some math background. What about all the potentially very talented programmers that have a non-math background? I don't buy that we need the beginnings of 3rd-level math to be decent programmers. In fact, I'll go further. I think that the programming landscape is too math-oriented. I know someone who had a curiosity about recursion so I started explaining the concept via factorials as one does. Said person gets a little panic attack. Our default explanations oughtn't involve math. Our names should avoid high-level math concepts divorced from everyday speech and concepts. By all means use aliases if you want, but give the rest of us something to hang our conceptual hats on.
> What about all the potentially very talented programmers that have a non-math background?
If they don't know what something means, they can educate themselves. I learned about monoids in my first course in linear algebra at the university in the first month. My math background at that point was "finished highschool". Just because it has a name that does not appear in everyday language (unlike enumerable) it does not mean the concept is difficult to grasp.
> I am coming to believe that computational thinking != mathematical thinking.
How exactly are you defining “computational thinking”? Is it “running the calculation yourself, as if you were the computer”? “Putting yourself in the computer's shoes”? “Empathizing with the computer”?
> And let's consider names. In Ruby I knew what the module Enumerable did before I ever read the docs. Monoid? I don't think so.
Every technical discipline has a jargon, precisely because their practitioners need to express ideas that can't be conveyed using only everyday language. Programming is not an exception.
And you picked a particularly bad example, because, while there exist mathematical objects whose definitions are very obscure and hard to get an intuition for, monoids aren't among them.
> I know someone who had a curiosity about recursion so I started explaining the concept via factorials as one does.
Recursion is just self-reference. For example, you could've told said person “your ancestors are your parents and their ancestors” - a perfectly good example of a recursive definition.
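That definition translates directly into code without any math; a sketch, where the `parentsOf` table is made up for illustration:

```javascript
// Hypothetical lookup table from a person to their parents.
const parentsOf = {
  you: ["mom", "dad"],
  mom: ["grandma"],
  dad: [],
  grandma: [],
};

// "Your ancestors are your parents and their ancestors" - the
// function calls itself on each parent, stopping when there are none.
function ancestors(person) {
  const parents = parentsOf[person] || [];
  return parents.concat(parents.flatMap(ancestors));
}

ancestors("you"); // → ["mom", "dad", "grandma"]
```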
I'm not against all jargon, I'm advocating for less mathematical jargon and more ordinary language jargon.
If I didn't know about functions your explanation of “your ancestors are your parents and their ancestors” would give me some insight into the nature of recursion but not a whole lot.
Yes and No to this article. Yes, abstraction is important, and yes, the point of mathematical abstraction is not to be vague, but to create a level of expression where one can be precise.
No, the point of abstraction can be to be vague, and to delimit the boundary of a concept that is not quite clear (yet). It is not a mathematical abstraction then, but something you are experimenting with to find the right kind of laws that are supposed to govern that concept. Most real-world programs contain many such concepts for which either no good mathematical abstraction exists, or for which nobody has had yet the time and money to find a good mathematical abstraction.
The reader monad is like read-only thread-local storage. (State or ST would be thread-local storage.) It provides a command to get the "current state":

    function f() {
      var m = getState().chain(state => {
        // ...
      });
      return m;
    }
or with generators acting as a do-syntax replacement:

    var f = wrapped(function* () {
      var state = yield getState();
      // ...
      return;
    });
At the end you get an object that contains an unexecuted sequencing of computations that you can run with any state. For example you could run it from a HTTP route handler with the current user contained in the "state":
var m = f();
m.runWith({currentUser: request.user})
and now you have access to the user without passing it around everywhere.
All the other analogies in this thread are wrong in terms of what the real API looks like. The real API looks almost exactly like Promises, with `chain` instead of `then`.
This in turn also explains why monad transformers are so important. Having access to just the current user w/o passing it around isn't very useful if e.g. you can't do IO, but having both - now we're talking!
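To make that concrete, here is a minimal sketch of such an API. The names `chain`, `runWith`, `of`, and `getState` mirror the snippets above; this is an illustration, not any particular library:

```javascript
// A Reader wraps a function from an environment to a value.
function Reader(run) {
  return {
    runWith: run, // supply the environment, get the result
    // chain: sequence a computation that itself needs the environment
    chain: f => Reader(env => f(run(env)).runWith(env)),
  };
}
Reader.of = x => Reader(_ => x);        // a computation ignoring the env
const getState = () => Reader(env => env); // read the whole environment

// Build an unexecuted computation, then run it with any state:
const greet = getState().chain(state =>
  Reader.of(`hello, ${state.currentUser}`));

greet.runWith({ currentUser: "alice" }); // → "hello, alice"
```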
I do not understand how the reader monad is different from a promise.
It seems the reader monad just delays execution of some code until you call 'runWith' and you can define state variables throughout your code before calling runWith.
Reader fulfills a bunch of OO patterns. I like to think of it as "Environment", "Capability", "Configuration", etc., and is related in a way to Dependency Injection.
Basically, you have some computation/object that isn't complete without some "setup info" which, from the point of view of the code being written, is "just there". Like if you had a program that needed access to a particular interface, but you didn't want to manually pass it to every function that cares about it; instead you'd make an object with that interface as its state and write your program as methods on that object. That's just Reader in disguise.
In fact, C++ methods are basically in the Reader monad; they all have an implicit "this" argument which the compiler plumbs around for you.
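A small sketch of that correspondence (all names here are made up for illustration): the OO version stores the "setup info" once as object state, while the Reader-style version makes every function a `env => result` and threads the environment through.

```javascript
// OO style: the environment (a db handle) is stored once, and every
// method reads it implicitly via `this` - the compiler/runtime plumbs
// it around for you, like the implicit `this` in C++ methods.
class UserService {
  constructor(db) { this.db = db; }
  getUser(id) { return this.db.lookup(id); }
}

// Reader style: every function takes the environment explicitly as its
// final argument; nothing is stored anywhere.
const getUser = id => env => env.db.lookup(id);

// Both run against the same stub environment:
const db = { lookup: id => ({ id, name: "user-" + id }) };
new UserService(db).getUser(7).name; // "user-7"
getUser(7)({ db }).name;             // "user-7"
```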
well, a 'reader' monad is a monad over a Function[A, B]. So yes, when you say 'the reader monad', what you (probably?) mean is the instance it operates on, which would be a function.
You use the reader monad to thread the same input to multiple functions that may also take additional input, and also have their own outputs.
You have to keep in mind that since Monads are so common and useful, many languages have syntactic support for them. C#'s LINQ query syntax, Scala for-expressions, and Haskell 'do' are all 'monadic comprehensions'.
But partial functions are not always as clean as you think they are. When composing with pure functions for example, you'll save a lot of boilerplate lambdas if you use the monad.
The Reader Monad lets you abstract away local state. You're on the right track with partial functions, but using a monad is cleaner (at least in Haskell...and I'd guess anywhere else). Essentially using the monad instead will let you remove all of the redundant function application you'd have to do otherwise. If you've managed to abstract this step away into a library, take a look at your abstraction. It's probably equivalent to Reader.
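For a sense of the boilerplate being discussed, a hypothetical sketch (the tiny inline Reader and all names below are illustrative only): with plain partial application the environment parameter shows up in every intermediate lambda, even in steps that don't use it, while a Reader-style `map`/`chain` threads it automatically.

```javascript
// Plain style: the env parameter is plumbed through by hand.
const shout  = s => s.toUpperCase();                   // pure - no env needed
const greetP = env => `hi ${shout(env.names[env.userId])}`; // manual plumbing

// Reader style: pure steps lift in with `map`, no env lambdas in sight.
const Reader = run => ({
  run,
  map:   f => Reader(env => f(run(env))),
  chain: f => Reader(env => f(run(env)).run(env)),
});
const ask = Reader(env => env); // read the environment

const greetR = ask
  .map(env => env.names[env.userId])
  .map(shout)
  .map(s => `hi ${s}`);

const env = { userId: 1, names: { 1: "ada" } };
greetP(env);     // "hi ADA"
greetR.run(env); // "hi ADA"
```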
I don't think this is on the right track. Obviously, learning is a matter of personal preference and experience, but trying to teach someone about groups by explaining the definition of a group defeats the whole purpose of even studying groups in the first place. The power of Group theory, and of mathematical abstraction in general, is that it allows us to generalize and extend existing concepts. I agree, that it would be ridiculous to rename all "groups" to "clocks", just as it would be ridiculous to rename all rectangles to squares. But for a student who has never seen a rectangle that is not a square before, it would be unfair and counterproductive for you to ask that student to put aside the example of a square when studying rectangles. For teaching purposes, it is rarely wrong to draw analogies between the generalized object and its everyday analogue.
I like using moves on Rubik's Cube to convey an example of a group. The idea of an identity becomes one of cancellation, there are obvious subgroups, etc.
Here's a recent article that I found very thought-provoking: http://cognitivemedium.com/invisible_visible/invisible_visib.... They quote some mathematicians discussing their visual intuition for one property of groups.
Edit: Apparently this is linked to in TFA :)
[1]: https://arxiv.org/abs/math/9404236
[0] https://medium.com/@fjmubeen/ai-no-longer-understand-my-phd-...
[1] https://en.wikipedia.org/wiki/Royal_Road#A_metaphorical_.E2....
You just proved my point.
To use a lever do I have to understand physics?
https://github.com/fantasyland/fantasy-land
Also - can anyone show me a practical use case of the reader monad? Can't you just use partial functions?
If I am not mistaken, isn't the reader monad just:
foobar.config ({'name':'hello'});
foobar.run ({'age':'34'});
The resulting class can also have other variables and inputs.
Is that a correct analogy?
https://news.ycombinator.com/item?id=10880839