I find software engineers spend too much time focused on notation. Maybe they are right to do so, and notation can certainly help or hinder, but the goal of any mathematical field is understanding. It's not even to prove theorems. Proving theorems is useful (a) because it identifies what is true and under what circumstances, and (b) because the act of proving forces one to build a deep understanding of the phenomenon under study. This requires looking at examples, making a hypothesis more specific or sometimes more general, using formal arguments, using geometric arguments, studying algebraic structures: basically anything that leads to better understanding. Ideally, one understands a subject so well that notation barely matters. In a sense, the key ingredients are really the definitions, because the objects are chosen carefully to be interesting but workable.
If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Someone quoted von Neumann about getting used to mathematics. My interpretation has always been that once one is immersed in a topic, it slowly becomes natural enough that one can think about it without getting thrown off by relatively superficial strangeness. As a very simple example, someone might get thrown off the first time they learn about point-set topology. It might feel very abstract coming from analysis, but after a standard semester course, almost everyone gets comfortable enough with the basic notions of topological spaces and homeomorphisms.
One thing mathematics education is really bad at is motivating the definitions. Definitions are often presented unmotivated because actual progress is meandering and chaotic, and exposing the full lineage of ideas would just take far too long. Physics education is generally far better at this. I don't know of a general solution except to pick up appropriate books that go over the history (e.g. https://www.amazon.com/Genesis-Abstract-Group-Concept-Contri...)
Understanding new math is hard, and a lot of people don't have a deep understanding of the math they use. Good notation has a lot of understanding already built-in, and that makes math easier to use in certain ways, but maybe harder to understand in other ways. If you understand something well enough, you are either not troubled by the notation, because you are translating it automatically into your internal representation, or you might adapt the notation to something that better suits your particular use case.
Notation makes a huge difference. I mean, have you TRIED to do arithmetic with Roman numerals?
> If the idea is that the right notation will make getting insights easier, that's a futile path to go down. What really helps is looking at objects and their relationships from multiple viewpoints. This is really what one does in both mathematics and physics.
Seeing the relationships between objects is partly why math has settled on terse notation (the other reason being that you need to write stuff over and over). This helps up to a point, but mainly if you are writing the same things again and again. If you are not exercising your memory in that way, it is often easier to make sense of more verbose names. But at all times there is tension between convenience, visual space consumed, and memory consumption.
I haven't thought about or learned a systematic way to add Roman numerals. But I would argue that the difference is not notation but a fundamental conceptual advance: representing quantities with b (base) symbols, where each position advances by a power of b and the base symbols let one increment by 1. The notation itself doesn't really make a difference; we could call X=1, M=2, C=3, V=4, and so on. (A small sketch of this follows after this comment.)
I also don't know what historically motivated the development of this system (the Indian system). Why did the Romans not think of it? What problems were the Indians solving? What was the evolution of ideas that led to the final system that still endures today?
I don't mean to underplay the importance of notation. But good notation is backed by a meaningfully different way of looking at things.
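To make that point concrete, here is a minimal sketch with hypothetical glyphs of my own choosing: the value a digit string denotes depends only on the base and on each digit's position, not on which symbols we pick.

```python
def positional_value(digits, symbol_values, base=10):
    """Evaluate a digit string under an arbitrary glyph-to-value mapping."""
    total = 0
    for glyph in digits:
        # Each step shifts everything one position left (one power of the
        # base) and drops in the next digit's value.
        total = total * base + symbol_values[glyph]
    return total

# Conventional glyphs...
assert positional_value("203", {"0": 0, "2": 2, "3": 3}) == 203
# ...and the same quantity with made-up glyphs M=2, C=3, as suggested above:
assert positional_value("M0C", {"0": 0, "M": 2, "C": 3}) == 203
```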
Considering that post-arithmetic math rarely uses numbers at all, and even the ancient Greeks used lots of lines and angles instead of numbers, I don't think Roman numerals would really have held math back that much.
> One thing mathematics education is really bad at is motivating the definitions.
I was annoyed by this in some introductory math lectures, where the prof skipped explaining the general idea and motivation behind some lemmata and instead just went through the proofs line by line.
It felt a bit like being asked to use vi without knowing what the program does, let alone the key combinations, and with only the source code instead of a manual.
> If the idea is that the right notation will make getting insights easier, that's a futile path to go down.
I agree wholeheartedly.
What I want to see is mathematicians employing the same rigor journalists apply to abbreviations: define (numerically) your notation or terminology the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
> What I want to see is mathematicians employing the same rigor journalists apply to abbreviations: define (numerically) your notation or terminology the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
They do.
The purpose of papers is to teach working mathematicians who are already deeply into the subject something novel. So of course only novel or uncommon notation is introduced in papers.
Systematic textbooks, on the other hand, nearly always introduce a lot of notation and background knowledge that is necessary for the respective audience. As every reader of such textbooks knows, this can easily be dozens or often even hundreds of pages (the (in)famous Introduction chapter).
> What I want to see is mathematicians employing the same rigor journalists apply to abbreviations: define (numerically) your notation or terminology the first time you use it, then feel free to use it as notation or jargon for the remainder of the paper.
They already do this. That is how we all learn notation. I'm not sure what you mean by "numerically", though; a lot of concepts cannot be defined numerically.
A little off topic perhaps, but out of curiosity - how many of us here have an interest in recreational mathematics? [https://en.wikipedia.org/wiki/Recreational_mathematics]
Math education rarely emphasizes this. Either you have talent and get the intuition for free, or you're average and you swim as hard as you can to the next float. It's sad, because the internal and external value is immense.
I think this would be extremely valuable:
“We need to focus far more energy on understanding and explaining the basic mental infrastructure of mathematics—with consequently less energy on the most recent results.”
I’ve long thought that more of us could devote time to serious maths problems if they were written in a language we all understood.
> I’ve long thought that more of us could devote time to serious maths problems if they were written in a language we all understood.
That assumes it’s the language that makes it hard to understand serious math problems. That’s partially true (and the reason why mathematicians keep inventing new language), but IMO the complexity of truly understanding large parts of mathematics is intrinsic, not dependent on terminology.
Yes, you could spell out “A monad is just a monoid in the category of endofunctors” in terms more people know, but it would take many pages, and that would make it hard to understand too.
Players of Magic: The Gathering will say a creature "has flying", by which they mean "it can only be blocked by other creatures with reach or flying".
Newcomers obviously need to learn this jargon, but once they do, communication is greatly facilitated by not having to spell out the definition.
Just as in games, the definitions in mathematics are ethereal and purely formal, and it would be a pain to spell them out on every occasion. The jargon stems more from the need for efficient communication than from gatekeeping.
You expect the players of the game to learn the rules before they play.
He separates conceptual understanding from notational understanding, pointing out that the interface for using math has a major impact on utility and understanding. For instance, Roman numerals inhibit understanding and utilization of multiplication.
Better notational systems can be designed, he claims.
Yeah, I don't want to be uncharitable, but I've noticed that a lot of STEM fields make heavy use of esoteric language and syntax, and I suspect they do so as a means of gatekeeping.
I understand that some degree of formalism is required to enable the sharing of knowledge amongst people across a variety of languages, but sometimes I'll read a white paper and think "wow, this could be written a LOT more simply".
Statistics is a major culprit of this.
> Yeah, I don't want to be uncharitable, but I've noticed that a lot of STEM fields make heavy use of esoteric language and syntax, and I suspect they do so as a means of gatekeeping.
I think you're confusing "I don't understand this" with "the man is keeping me down".
All fields develop specialized language and syntax because a) they handle specialized topics and words help communicate these specialized concepts in a concise and clear way, b) syntax is problem-specific for the same reason.
See, for example, tensor notation, or how some cultures have many specialized terms for things like snow, each communicating a different nuance.
> "wow, this could be written a LOT more simply"
That's fine. A big part of research is digesting findings. I mean, we still see things like novel proofs of the Pythagorean theorem. If you can express things more clearly, why don't you?
I'm surprised you arrived at this conclusion. Formalisms, esoteric language, and syntax are hard for everyone. Why would people invest in them if their only use was gatekeeping? Especially when it's the same people who publish their articles in the open for everyone to read.
A more reasonable interpretation is that those fields use the things you don't like because they're actually useful to them and to their main audience, and that if you want to actually understand the concepts they talk about, that syntax will end up being useful to you too. And a lack of syntax would not make things easier to understand, just less precise.
> I understand that some degree of formalism is required to enable the sharing of knowledge amongst people across a variety of languages, but sometimes I'll read a white paper and think "wow, this could be written a LOT more simply".
OK, challenge accepted: find a way to write one of the following papers much more simply:
Fabian Hebestreit, Peter Scholze; A note on higher almost ring theory
https://arxiv.org/abs/2409.01940
Peter Scholze; Berkovich Motives
https://arxiv.org/abs/2412.03382
---
What I want to tell you with these examples (these are, of course, papers far above my mathematical level) is: often what you read in math papers is insanely complicated; simplifying even one such paper is often a huge academic achievement.
My opinion on this is that mathematical material is often presented in a very dry and formal way, usually in service of rigor, and that this is unnecessarily unwelcoming.
But I don’t believe it to be used as gatekeeping at all. At worst, hazing (“it was difficult for me as newcomer so it should be difficult to newcomers after me”) or intellectual status (“look at this textbook I wrote that takes great intellectual effort to penetrate”). Neither of which should be lauded in modern times.
I’m not much of a mathematician, but I’ve read some new and old textbooks, and I get the impression there is a trend towards presenting the material in a more welcoming way, not necessarily to the detriment of rigor.
What, as opposed to using ambiguous language and getting absolutely nothing done?
The saying, "What one fool can do, another can," is a motto from Silvanus P. Thompson's book Calculus Made Easy. It suggests that a task someone without great intelligence can accomplish must be relatively simple, implying that anyone can learn to do it if they put in the effort. The phrase is often used to encourage someone, demystify a complex subject, and downplay the difficulty of a task.
In this modern era of easily accessible knowledge, how gatekeep-y is it, though? It's inscrutable at first glance, but ChatGPT is more than happy to explain what the hell ℵ₀, ℵ₁, ♯, ♭, or Σ mean, and you can ask it to read the arXiv PDF and have it explained to you.
I say the same thing about the universe. There is some gatekeeping going on there. My three-inch chimp brain at the age of 3 was itself quite capable of imagining a universe. No quantum field equations required. Then by 6 I was doing it in Minecraft. And by 10 I was doing it with a piano. But then they started wasting my time telling me to read Kant.
A lot of people here are suggesting they'd be great mathematicians if only it weren't for the pesky notation. What they are missing is that the notation is the easy part.
Not at all. Over and over, I find that really intimidating math notation actually represents pretty simple concepts. Sigma notation is a good example of this. Hmm, giant sigma or sum()?
Imagine how much unnecessary time would be added to a course about series if the lecturer had to write sum() instead of ∑ every time. If you find it hard to remember that ∑ means sum, math might not be for you, and that’s fine.
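For what it's worth, the two spellings line up one for one; a hypothetical side-by-side, using the standard closed form for the sum of the first n integers:

```python
# ∑_{k=1}^{n} k and sum(range(1, n + 1)) denote exactly the same thing;
# only the surface notation differs. n(n+1)/2 is the familiar closed form.
n = 100
assert sum(range(1, n + 1)) == n * (n + 1) // 2  # both equal 5050
```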
Wait until you learn about integration. Measures, limits and the quirks of uncountable spaces don't become simpler once you call the operation integrate().
It's like saying that learning Arabic is the easy part of writing a great Saudi novel. True, but you have to understand that being literate is the price of admission. Clearly you consider yourself very facile with mathematical notation, but you might have some empathy for the innumerate. Not everyone had the good fortune of great math teachers or even the luxury of attending a good school. I believe there is valid frustration borne out of poor mathematical education.
Well, yeah, but this empathy and frustration are simply misplaced. I have empathy for people who didn't get a good education, and they should direct their frustration at their bad schooling. Math notation is simply the wrong target.
If they can't see that, it's hard to think they have much chance with the actual math. "A mathematician is a person who knows how to separate the relevant from the irrelevant", a saying I was told in school.
This is so wrong it can only come from a place of inexperience and ignorance.
Mathematics is flush with inconsistent, abbreviated, and overloaded notation.
Show a child a matrix numerically and they can understand it; show them Ax+s=b, and watch the confusion.
The fact that there is a precise analogy between how Ax + s = b works when A is a matrix and the other quantities are vectors, and how it works when everything is scalars or what have you, is a fundamental insight that is worth encoding notationally. It's good to be able to readily reason that in either case x = A^(-1)(b - s) if A is invertible, and so on.
It's good to be able to think and talk in terms of abstractions that do not force viewing analogous situations in very different terms. This is much of what math is about.
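A minimal sketch of that analogy in NumPy (the particular 2x2 system is made up for illustration): the identity x = A^(-1)(b - s) reads the same whether A is a scalar or a matrix.

```python
import numpy as np

# A hypothetical 2x2 system Ax + s = b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
s = np.array([1.0, 0.0])
b = np.array([5.0, 7.0])

# np.linalg.solve(A, y) computes A^(-1) y without forming the inverse
# explicitly, which is the numerically preferred route.
x = np.linalg.solve(A, b - s)
assert np.allclose(A @ x + s, b)
```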
Well, obviously they will be confused, because you jumped from a square of numbers to a bunch of operations. They’d be equally confused if you presented those operations numerically. I am not sure what it is you want to prove with that example. I am also not sure that a child can actually understand what a matrix is if you just show them some numbers (i.e., will they actually understand that a matrix is a linear transformation of vectors, and the properties it has, just from seeing some numbers?)
> This is so wrong it can only come from a place of inexperience and ignorance.
Thanks for the laughs :D
> Show a child a matrix numerically and they can understand it; show them Ax+s=b, and watch the confusion.
Show an HN misunderstood genius the Riemann zeta function as Zeta() and they'll think they can figure out its zeros. Show it as a Greek letter and they'll lament how impossible it is to understand.
I love math, but the symbology and notation get in my way. Two ideas:
1. Can we reinvent notation and symbology? No superscripts or subscripts or Greek letters and weird symbols? Just functions with input and output? Verifiable by type systems AND human-readable?
2. Also, make the symbology hyperlinked, i.e., if it uses a theorem or axiom that's not in the paper, hyperlink to its proof, and so on.
Notation and symbology come out of a minimax optimization: minimizing complexity while maximizing reach. As with every local critical point, it is probably not the only state we could have ended up at.
For example, for your point 1: we could probably start there, but once you got familiar with the notation you wouldn't want to keep writing a huge list of parameters, so you would probably come up with a higher-level, more abstract data structure to pass as the input. And then the next generation would complain that the data structure is too abstract and takes too much effort to communicate to someone new to the field, because they did not live through the problem that made you come up with the solution first-hand.
And for your point 2: where do you draw the line with your hyperlinks? If you mention the real plane, do you reference the construction of the real numbers? And of dimension? If you reason by contradiction, do you reference the axioms of logic? If you say "let {x_n} be a converging sequence", do you reference convergence, natural numbers, and sets? Or just convergence? It's not that simple, so we ended up with a minimax solution, which is what everybody uses now.
Having said this, there are a lot of articles and books that are not easy to understand. But that is probably more an issue of their being written by someone who is bad at communicating than of the notation.
(1) I always tell my students that if they don't understand why things are done a certain way, they should try to do it in the way most natural to them and then iterate to improve it. Eventually they will settle on something very similar to the most common practice.
(2) Higher-level proofs use so many ideas simultaneously that doing this would be tantamount to writing Lean code from scratch: painful.
1. I work in finance, and here people sometimes write math using words as variable names. I can tell you it gets extremely cumbersome to do any significant amount of formula manipulation or writing with this notation. Keep in mind that pen and paper are still pretty much universally used in actual mathematical work, and writing full words takes a lot of time compared to single Greek letters.
A large part of math notation exists to compress the writing so that you can actually fit a full equation in your field of vision.
Also, something like what you want already exists; see e.g. Lean: https://lean-lang.org/doc/reference/latest/. It is used to write math for the purpose of automatically proving theorems. No one wants to use it for actually studying math or manually proving theorems, because it looks horrible compared to conventional mathematics notation (at least once you are used to the conventional notation).
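For a taste of what fully explicit, machine-checkable notation looks like, here is a minimal Lean 4 sketch; the statement and the lemma Nat.add_comm are from Lean's standard library, while the theorem name is my own:

```lean
-- Commutativity of addition on the naturals, stated fully explicitly.
-- Every symbol here has a formal definition; nothing is left to convention.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```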
I'd love to get rid of all the weird symbols in favor of clear-text functions or whatever. As someone who never learnt all the weird symbols, it's really preventing me from getting into math again... It is just not intuitive.
Those are used because they make things easier. If you wrote everything out, basically nobody would manage to learn math; that is how it used to be, and then everything got shortened, and suddenly average people could learn calculus.
I'm not sure that symbols are the thing actually keeping you away. Clear-text functions might not be as clear: they would be harder to scan, and they would still contain names you might not be familiar with. Those "weird symbols" are not there because people liked making weird symbols. No one likes them; it's just that they make things easier to understand.
Probably not. The conventional math notation has three major advantages over the "[n]o superscripts or subscripts or [G]reek letters and weird symbols" you're proposing:
1. It's more human-readable. The superscripts and subscripts and weird symbols permit preattentive processing of formula structures, accelerating pattern recognition.
2. It's familiar. Novel math notations face the same problem as alternative English orthographies like Shavian (https://en.wikipedia.org/wiki/Shavian_alphabet) in that, however logical they may be, the audience they'd need to appeal to consists of people who have spent 50 years restructuring their brains into specialized machines to process the conventional notation. Aim t3mpted te rait qe r3st ev q1s c0m3nt 1n mai on alterned1v i6gl1c orx2grefi http://canonical.org/~kragen/alphanumerenglish bet ai qi6k ail rez1st qe t3mpt8cen because, even though it's a much better way to spell English, nobody would understand it.
3. It's optimized for rewriting a formula many times. When you write a computer program, you only write it once, so there isn't a great burden in using a notation like (eq (deriv x (pow e y)) (mul (pow e y) (deriv x y)) 1), which takes 54 characters to say what the conventional math notation¹ says in 16 characters³. But, when you're performing algebraic transformations of a formula, you're writing the same formula over and over again in different forms, sometimes only slightly transformed; the line before that one said (eq (deriv x (pow e y)) (deriv x x) 1), for example². For this purpose, brevity is essential, and as we know from information theory, brevity is proportional to the logarithm of the number of different weird symbols you use.
We could certainly improve conventional math notation, and in fact mathematicians invent new notation all the time in order to do so, but the direction you're suggesting would not be an improvement.
People do make this suggestion all the time. I think it's prompted by this experience where they have always found math difficult, they've always found math notation difficult, and they infer that the former is because of the latter. This inference, although reasonable, is incorrect. Math is inherently difficult, as far as anybody knows (an observation famously attributed to Euclid) and the difficult notation actually makes it easier. Undergraduates routinely perform mental feats that defied Archimedes because of it.
______
¹ \frac d{dx}e^y = e^y\frac{dy}{dx} = 1
² \frac d{dx}e^y = \frac d{dx}x = 1
³ See https://nbviewer.org/url/canonical.org/~kragen/sw/dev3/logar... for a cleaned-up version of the context where I wrote this equation down on paper the other day.
> ... It's optimized for rewriting a formula many times.
It's not just "rewriting" arbitrarily either, but rewriting according to well-known rules of expression manipulation such as associativity, commutativity, distributivity of various operations, the properties of equality and order relations, etc. It's precisely when you have such strong identifiable properties that you tend to resort to operator-like notation in any formalism (including a programming language) - not least because that's where a notion of "rewriting some expression" will be at its most effective.
(This is generally true in reverse too; it's why e.g. text-like operators such as fadd() and fmul() are far better suited to the actual low-level properties of floating-point computation than FORTRAN-like symbolic expressions, which are sometimes overly misleading.)
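To illustrate the parenthetical with a concrete, standard example: floating-point addition is not associative, so infix notation that invites ordinary algebraic rewriting can mislead, while explicit operator calls make the evaluation order unmistakable.

```python
# Rounding makes float addition order-dependent, so the "obvious"
# rewrite (a + b) + c -> a + (b + c) is not behavior-preserving.
a, b, c = 0.1, 0.2, 0.3
assert (a + b) + c != a + (b + c)  # 0.6000000000000001 vs 0.6
```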
I was writing a small article about [Set, Set Builder Notation, and Set Comprehension](https://adropincalm.com/blog/set-set-builder-natatio-set-com...), and while I was investigating, it surprised me how many different ways there are to describe the same thing. E.g., see all the notations for a set or a tuple.
One last rant point is that you don't have "the manual" for math the way you can go to your programming language's man page, so there is no single source of truth.
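As a small aside, one of those notations did survive the jump into programming nearly verbatim: Python's set comprehensions mirror set-builder notation. A hypothetical example:

```python
# Set-builder: { x² : x ∈ ℕ, x < 10, x even }
# The comprehension below is a direct transliteration.
squares_of_evens = {x * x for x in range(10) if x % 2 == 0}
assert squares_of_evens == {0, 4, 16, 36, 64}
```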
I find it strange to compare "math" with one programming language. Mathematics is a huge and diverse field, with many subcommunities and hence also differing notation.
Your rant would be akin to this, with the sides reversed: "It's surprising how many different ways there are to describe the same thing. E.g., see all the notations for dictionaries (hash tables? associative arrays? maps?) or lists (vectors? arrays?). You don't have 'the manual' for programming languages."
I wrote about overlapping intervals a while ago and used what I thought was the standard math notation for closed and half-open intervals. From the comments, I learned that half-open intervals are written differently in French mathematics: https://lobste.rs/s/cireck/how_check_for_overlapping_interva...
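A minimal sketch of the check itself (the function and names are mine, not from the linked post): half-open intervals make the overlap test pleasantly symmetric.

```python
def overlaps(a_start, a_end, b_start, b_end):
    """Half-open intervals [a_start, a_end) and [b_start, b_end) overlap
    iff each one starts before the other ends. (The same interval is
    written [a, b) in English-language texts and often [a, b[ in French.)"""
    return a_start < b_end and b_start < a_end

assert overlaps(0, 2, 1, 3)      # [0, 2) and [1, 3) share [1, 2)
assert not overlaps(0, 2, 2, 4)  # [0, 2) and [2, 4) merely touch at 2
```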
"The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration... the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it... yet finally it surrounds the resistant substance."
A. Grothendieck
Understanding mathematical ideas often requires simply getting used to them
Mathematics is hard when there is not much time invested in processing the core idea.
For example, the Dvoretzky-Rogers theorem is hard to understand in isolation.
As more applications of it appear, more generalizations of it appear, and more alternative proofs of it appear, it becomes clearer. So it takes time for something to become digestible, but the effort spent yields the real insights.
Last but not least is the presentation of the theorem. Some authors are cryptic; others refactor the proof into discrete steps or find similarities with other proofs.
Yes, it is hard, but part of the work of the mathematician is to make it easier for others.
Exactly like in code: there is a lower bound on the hardness, but that is no excuse to keep it harder than that.
Mathematics is such an old field, older than anything except arguably philosophy, that it's too broad and deep for anyone to really understand everything. Even in graduate school I often took classes on things discovered by Gauss or Euler centuries before. A lot of the mathematical topics the HN crowd seems to like--things like the Collatz conjecture or Busy Beavers--are 60, 80 years old. So you end up having to spend years specializing and then struggle to find others with the same background.
All of which is compounded by the desire to provide minimal "proofs from the book" and leave out the intuitions behind them.
> A lot of the mathematical topics the HN crowd seems to like--things like the Collatz conjecture or Busy Beavers--are 60, 80 years old.
Do you know the reason for that? The reason is that those problems are open and easy to understand. For the rest of open problems, you need an expert to even understand the problem statement.
I'll argue for astronomy being the oldest. Minimal knowledge would help pre-humans navigate and keep track of the seasons. Birds are known to navigate by the stars.
I would argue that some form of mathematics is necessary for astronomy, with “astronomy” defined as anything more than simply recognizing and following stars.
The desire to hide all traces of where a proof comes from is a real problem, and having more context would often be very helpful. I think some modern authors and teachers are getting good at giving more context. But mostly you have to be thankful that the people from the minimalist era (Bourbaki, ...) at least gave precise consistent definitions for basic terminology.
Mathematics is old, but a lot of basic terminology is surprisingly young. Nowadays everyone agrees what an abelian group is. But if you look into some old books from 1900 you can find authors that used the word abelian for something completely different (e.g. orthogonal groups).
Reading a book that uses "abelian" to mean "orthogonal" is confusing, at least until you finally understand what is going on.
> [...] at least gave precise consistent definitions for basic terminology.
Hopefully interactive proof assistants like Lean or Rocq will help to mitigate at least this issue for anybody trying to learn a new (sub)field of mathematics.
Actually, a lot of minimal proofs expose more intuition than the earlier proofs people found first. Counterintuitively, I usually don't find it extremely enlightening to read the first proofs of results.
> Mathematics is such an old field, older than anything except arguably philosophy
If we are already venturing outside the scientific realm with philosophy, I'm sure the fields of literature or politics are older. Especially since philosophy is just a subset of literature.
As far as anybody can tell, mathematics is way older than literature.
The oldest known proper accounting tokens are from 7000ish BCE, and show proper understanding of addition and multiplication.
The people who made the Ishango bone 25k years ago were probably aware of at least rudimentary addition.
The earliest writings are from the 3000s BCE, and are purely administrative. Literature, by definition, appeared later than writing.