I remember reading a Perl 6 design document back in the day that made a convincing argument that operator precedence should be a partial order instead of a total order. The language was supposed to have user-defined operators. The idea was that you would declare that your new operator binds, e.g., "between * and +" or "tighter than *". The system would build up a minimal partial order satisfying these constraints.
One of the nice things about such a system is its compositionality: if you mix and match custom operators from different modules, you don't get a conflict or an arbitrary precedence order; instead, you're simply forced to disambiguate with parentheses.
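To make that concrete, here's a toy sketch in Python (the operators and declared relations are all invented; this is not actual Perl 6/Raku syntax) of how incomparable operators would force parenthesization:

    # Declared "tighter than" relations plus their transitive closure.
    TIGHTER = {("*", "+"), ("**", "*"), ("**", "+")}

    def compare(a, b):
        if (a, b) in TIGHTER:
            return "tighter"
        if (b, a) in TIGHTER:
            return "looser"
        return None  # incomparable: the parser demands parentheses

    print(compare("**", "+"))   # 'tighter', follows from the declarations
    print(compare("<+>", "*"))  # None, e.g. an operator from another module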
Contrast this with Haskell, which also has custom operators but only a fixed number of precedence slots (10, IIRC), and some libraries really suffer from the fact that there's no slot available that works well for the intended usage.
Swift works the way this article describes, except that instead of the rules being built into the compiler, they are defined by the standard library, and you can define your own precedence levels and associativity for custom operators: https://github.com/apple/swift/blob/3ea9e9e55281b9957d2b5486...
I don't know Swift, but does this mean there are expressions in Swift where the compiler returns an error, because the expression is ambiguous without additional parentheses?
(My understanding of the article is that it mainly argues for _partial_ precedence, not so much that precedence/associativity can be defined in the program - the latter is also the case in other languages, e.g., Haskell.)
My programming teacher told me: "operator precedence is complicated, always use parentheses if you have multiple operators in a single expression". I simply do that, and nobody has complained so far :)
I would not personally go that far. I might avoid parentheses for whatever is conventionally written without them in the code base I work on, and default to parentheses otherwise.
But another tool for working around this issue is to give names to subexpressions by storing them in constants. Long expressions quickly get unreadable anyway.
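For example (a made-up billing snippet; every name here is invented for illustration):

    qty, unit_price, discount, tax_rate = 3, 9.99, 0.10, 0.07

    # one dense expression...
    total = qty * unit_price * (1 - discount) + qty * unit_price * tax_rate

    # ...versus the same computation with a named subexpression:
    subtotal = qty * unit_price
    total = subtotal * (1 - discount) + subtotal * tax_rate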
Writing it is annoying, yeah, but it is more comprehensible. Since code is read more than it's written, the annoyance of writing it is a fine price to pay. And even more annoying than writing it is having to debug the subtle bugs caused by a programmer's incorrect understanding of precedence.
I do the same, as well as break up the expression with intermediate variables to help explain to someone else (or myself three years down the line) what I was thinking. Any compiler/interpreter (for a language you're using where speed is important) is going to flatten it anyway, so long expressions aren't going to make your code faster!
I do the same. I'm pretty confident with my BIMDAS, less so with which operators are left- vs. right-associative in which language (and I hop around languages a lot, so this is a bit of a pain). My general rule is that if I have to look it up, chances are the next guy will too (and in my line of work he may not even be a coder at all, and will be properly flummoxed), so chucking in brackets is a double win: it saves us both time.
It's rather disturbing that you're being taught that way. Parentheses signal that the operators are being used in a way that differs from their normal precedence, so a reader who sees them expects the grouping to differ from the default, and is surprised when it doesn't.
As the saying goes, "the fish rots from the head"...
Or just use S-expressions. I don't mean that flippantly, either. I truly believe all of mathematics should use them as well. That way you learn the basics in grade school and have the syntactic skills needed going forward.
I remember being confronted with an HP calculator for the first time in college. I understood RPN just fine and knew what I wanted to do, e.g., 6 7 ×, but turning that into actual keystrokes on the calculator was not immediately obvious. I think I ended up doing something like 6 ⎆ 7 ⎆ × ⎆ instead of 6 ⎆ 7 × and got the understandable garbage result. I ended up doing a big chunk of my freshman physics homework by hand because I didn't own a suitable calculator.
They also suck to read. Operator precedence rules are complicated, but they actually increase readability once you're used to them, so much so that the parens seem to get in the way. Which is weird; why could that be?
Perhaps the extra tree parsing required by the brain is more than made up for by some kind of visual compression economy going through the eyes?
How about relation chaining, such as with inequalities? Is there some nice S-expression notation for, say, "0 < x ≤ y < 1"? In many programming languages you would have to write "0 < x and x <= y and y < 1", but in mathematics this kind of repetition would quickly become very cumbersome.
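For what it's worth, Python builds chaining in: 0 < x <= y < 1 means exactly the mathematical conjunction, evaluating each operand once. In Lisps, < and <= are typically variadic, so (< 0 x 1) handles a uniform chain; a mixed chain needs a helper along these lines (the name and the encoding are invented):

    import operator

    REL = {"<": operator.lt, "<=": operator.le}

    def chain(*parts):
        # chain(0, "<", x, "<=", y, "<", 1) -- roughly what an
        # S-expression like (chain 0 < x <= y < 1) would denote
        vals, rels = parts[::2], parts[1::2]
        return all(REL[r](a, b) for a, b, r in zip(vals, vals[1:], rels))

    x, y = 0.3, 0.7
    print(chain(0, "<", x, "<=", y, "<", 1))  # True
    print(0 < x <= y < 1)                     # True: Python's built-in chaining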
I find infix operators and their standard mathematical precedence rules perfectly intuitive. Equally good are the extended precedence rules of languages like R, which were designed by professional users of mathematics.
I hate reading S-expressions and reverse polish notation. To me, proposals that we write in those notations are like saying that we should write assembly code. I think "No, we have compilers so that humans can write human-readable code rather than being forced to cater to the machine."
I do too, and it's fine as long as you (a) have a relatively small number of operators, and (b) use one language, or several whose precedence rules agree.
Does "<<" in C have the same precedence as "shl" in Pascal? How does it compare to multiplication, of which it is (essentially) a specialization? Does a<b<c in C mean what it does in Math? What about Python?
When I learned APL (and J and K), my first instinctive response to the lack of operator precedence was wtf? -- everything has the same precedence, everything is "right associative" / "right to left" / "left of right" / "long right scope" (same meaning, different terms).
But after using it for a day, I realized all the other programming languages have it wrong. Math notation gets a pass because you handwrite it, so a set of rules that minimizes writing does make sense. Not so for programming languages.
APL/J/K have tens, perhaps even a hundred, operators -- so there isn't really any other practical way. But it just works so well that it puts the Algol/C/Pascal decision to copy math notation in an unfavorable light.
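The rule is small enough to fit in a few lines. A sketch in Python, with ASCII stand-ins for the APL glyphs:

    import operator

    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    def eval_rtl(tokens):
        # Equal precedence, long right scope: reduce right to left.
        if len(tokens) == 1:
            return tokens[0]
        return OPS[tokens[1]](tokens[0], eval_rtl(tokens[2:]))

    print(eval_rtl([2, "*", 3, "+", 4]))  # 14: read as 2 * (3 + 4)
    # C-style precedence would give (2 * 3) + 4 == 10 instead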
...and yet the narrow-minded often tend to call it and the APL family "unreadable". It definitely has a learning curve, but as the analogy goes, someone who knows only English or some other Latin-script language would probably have the same impression on seeing Chinese for the first time.
Solved this problem years ago for Algol68 (which allows you to specify relative precedence) by parsing with an LALR-generated parser that left the resulting shift/reduce conflict to be resolved at run time, by looking at the operator on the stack and the one just scanned and comparing their current precedences.
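Not the actual implementation, obviously, but the run-time comparison can be sketched in Python, with a mutable table standing in for Algol68's priority declarations:

    PREC = {"+": 6, "-": 6, "*": 7, "/": 7}  # current precedences, changeable at run time

    def parse(tokens):
        out, ops = [], []  # operand/tree stack and operator stack

        def reduce():
            rhs, lhs = out.pop(), out.pop()
            out.append((ops.pop(), lhs, rhs))

        for tok in tokens:
            if tok in PREC:
                # the shift/reduce decision, made by comparing precedences now
                while ops and PREC[ops[-1]] >= PREC[tok]:
                    reduce()
                ops.append(tok)
            else:
                out.append(tok)
        while ops:
            reduce()
        return out[0]

    print(parse([1, "+", 2, "*", 3]))  # ('+', 1, ('*', 2, 3))
    PREC["+"] = 8                      # re-prioritise '+' above '*'
    print(parse([1, "+", 2, "*", 3]))  # ('*', ('+', 1, 2), 3)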
Always code for someone to read your code later.
RPN might be a better starting point for a revolution.
S-exprs without pair matching are horrid, but that's been a solved problem for a few decades now (paredit was made by Zeus).
https://github.com/codr7/ampl
Write it out clearly and use extra () for clarity, so that it's completely obvious what is meant.