Math is mind-bogglingly huge. We've been at it for thousands of years. The area of our knowledge is enormous. But so is its boundary. We aren't running out of problems any time soon. Including "interesting" problems.
> These days, I can look at any of these journals and find at most one or two papers that are even remotely amusing, and algebra was my specialty. On the other hand, I can take a journal in biology like the Journal of Animal Behavior and still find quite a few papers in each journal that are interesting to me even though I’m not even a research biologist! Keep in mind I still like mathematics a lot, and I still enjoy algebra.
I'm sure that if you had more than passing knowledge in animal behavior, you would also find most of the papers dull. Learning completely new things is of course always a blast. Learning about the latest bleeding edge advances in a field where you already know a lot is not as exciting. I'm not sure what point you were trying to make there. When I read papers in physics I'm always thinking "holy shit electrons are so cool and crazy" because I'm always discovering something new at basically every paragraph. But for an expert the novelty eventually wears off.
I second this last paragraph.
I want to add my pet theory that it is not a purely psychological phenomenon. It's like that thing where we only remember the good movies (the "classics"), and the others are forgotten. When you first get started in the field, you're reading just the "classics". When you get to the current state of the art, the classics have not been sorted out yet, and so you get a lot of trash.
> I'm sure that if you had more than passing knowledge in animal behavior, you would also find most of the papers dull
Disagree. I read papers in biology and ecology regularly, and their relevance is much greater, or at least easier to appreciate. (I do some science popularization now, and people care much more about recent research in biology than math graduate students care about the latest papers in neighbouring fields of math.)
The article suggests this test to establish that math is running out of interesting research: "Take a fairly generalist journal, like the Journal of Algebra (take a topic in which you have expertise — my doctoral thesis was in algebra). Look at some of the papers. How many of them are truly interesting to you?".
But it seems to me fairly likely that applying the same test to math journals from 100 or 200 years ago would produce similar results. Most published papers will not be of great interest to any one particular person.
No way, if you take a generalist algebra journal from 50 years ago, chances are many more graduate students interested in abstract algebra will be able to understand and appreciate the nature of the research. I mean, the best and most interesting papers from algebra were published between 1950-1990, IMO.
Is the test about ability to understand or about finding it very interesting? Of course it's easier to understand what has already become old hat and popularized in the field.
The 20th century was just a completely abnormal period, where people decided to change their understanding of math at the same time that physics, statistics, and engineering were demanding more and more different ideas from it. On top of that, CS was created and branched off from math.
Is that based on the average level of maths at the time? I would argue that there are far more mathematicians now who understand the results from that period than there were then, because our mathematical literacy, especially in higher education and in the developing world, has increased significantly.
The fact that those results are easier to understand is because of our increased literacy. Trigonometry was the cutting edge of maths at one point, and mathematical literacy was even lower then. Now it’s material for tweens.
My take on this is that each narrow field goes through an S curve with a very exciting exponential beginning followed by a long tail of slow and boring progress. Then there is a new field with a new S curve.
The current hotness is experimental and theoretical investigations of large language models. This is just maths, and the papers coming out recently have been amazing.
I’ve read papers showing, for example, that neurons pack information into nearly-orthogonal subspaces with beautiful geometric symmetries.
Just yesterday there was a paper showing that the middle layers of a deep language model can be interchanged and still work!
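A quick self-contained sketch of the geometric fact behind the "nearly-orthogonal" claim (my own illustration, not code from any paper mentioned above): in high dimensions, independent random unit vectors are close to orthogonal, so a layer with n neurons has room for far more than n almost-distinct directions.

```python
import math
import random

def random_unit_vector(dim, rng):
    # Gaussian components give a uniformly random direction after normalizing.
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def max_abs_cosine(dim, n_vectors, seed=0):
    # Largest |cosine similarity| over all pairs of random unit vectors.
    rng = random.Random(seed)
    vs = [random_unit_vector(dim, rng) for _ in range(n_vectors)]
    worst = 0.0
    for i in range(n_vectors):
        for j in range(i + 1, n_vectors):
            c = abs(sum(x * y for x, y in zip(vs[i], vs[j])))
            worst = max(worst, c)
    return worst

# In 3 dimensions, 20 random directions collide badly;
# in 1000 dimensions, even 100 directions stay nearly orthogonal.
print(max_abs_cosine(3, 20))     # typically > 0.8
print(max_abs_cosine(1000, 100)) # typically < 0.2
```

The typical pairwise cosine shrinks like 1/sqrt(dim), which is why high-dimensional layers can "pack" many features into nearly-orthogonal directions.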
I think this is a great point. The author seems to have a selection bias. All the “great” problems in maths are the ones that have remained hard to solve.
All the stuff in the middle has been solved and then taught and is no longer “interesting”. And as we build on those results, the new problems sit a bit further from the fundamentals, so you have to look in more specialized domains to find new areas.
What the author seems to forget is that most of the stuff we take for granted now was at one point the cutting edge of maths and obscure to all but the leading mathematicians of the time.
I think this note also misses that there are idiosyncratic factors related to the Journal of Algebra. This used to be a quite good generalist journal focused on algebra -- the Tits Alternative appeared there in the 70s, for example. Elsevier greatly increased the page count in the ensuing decades and it's now mostly dreck. These are papers that might be good to have in print for the sake of completeness of the literature, but nobody is going to send an actual interesting result in algebra there anymore. - An algebraist
I think if you look at a new algebra paper in a good journal, it's as likely to be interesting as a random algebra paper in a good journal from 1980. (Of course neither is anywhere near 100%, there were many boring papers back then too.)
If you look at a new algebra paper in J Algebra, of course it's not going to be interesting, what do you expect.
I don't think the average people from the 1600s cared much about John Napier and his treatise on logarithms published in 1614.
But we care a lot about logarithms now.
Maybe people never cared about current mathematics. Maybe that's just the pace of progress.
If most of our current problems are solved by results from 50 years ago, could it just be that our future problems will be solved by results from right now?
Maybe not the average butcher or baker, but to engineers and scientists it was incredibly important.
The slide rule was invented just a few years later based on Napier's work and was used continuously for the next 350 years, until the invention of the modern calculator/computer.
I don't think I disagree with your overall point I just think you chose the worst example :)
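The slide-rule connection is just the identity log(ab) = log(a) + log(b): adding two lengths on logarithmic scales multiplies the numbers. A minimal sketch:

```python
import math

def slide_rule_multiply(a, b):
    # Physically: line up two log scales, add the lengths log(a) + log(b),
    # then read the answer back off the scale (i.e., exponentiate).
    return math.exp(math.log(a) + math.log(b))

print(slide_rule_multiply(7.0, 6.0))  # ≈ 42.0, up to floating-point error
```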
Firstly: this can probably be neither proven nor disproven, but my intuition tells me that it's by definition impossible for mathematics to run out of problems.
Secondly:
> It cannot remain healthy with its incredible publication rate today of mostly useless generalizations.
So the issue isn't that mathematics is running out of problems. The issue is that there are more publications than there are new problems being discovered / solved, and, ergo, the majority of publications are of limited value / interest. And that isn't an issue unique to mathematics, that's just how academic research is in the 21st century!
My definition of "running out of problems" as I stated was "running out of problems that more than a handful of people care about", and I think this is definitely true in math.
I don't want to be snarky, but I do seriously wonder how many people really cared about calculus when Newton/Leibniz developed it. It honestly couldn't have been more than a handful, because Newton slept on it for the better part of twenty years.
I honestly think Math as a field has always been defined by "problems that only a handful of people care about".
The only exception I can think about is maybe basic addition and multiplication.
Which problems people work on is dictated to a large extent by the need to publish to keep your job. There is a lot of incentive to work on publishable low-hanging fruit problems. Hence the abundance of “write-only” journals in mathematics.
I don’t think there is by any means a shortage of hard, interesting problems. But working on them directly comes with significant career risk.
Well, a general statement about technology and invention is not exactly the same as a highly specific branch of knowledge becoming mature and not having anything innovative left to add to it.
If there's one thing that history has taught us, regardless of which field, it's that anything we manage to answer raises at least half a dozen other questions. Saying that math, of all fields, is running out of problems is one of the most absurd statements in this day and age. I was blown away by the capabilities of dumb SVMs in university, and today SVMs look like child's play - just over a decade later.
For as long as humanity has existed, whenever we feel like we've reached our peak, something happens and completely shatters our understanding. Take prime numbers: we say their occurrence is completely unpredictable - a conviction that may one day look the way Aristotle's conviction that objects come to rest because they get tired looks now. This isn't about one specific branch of knowledge; the claim is outrageous, as many have already pointed out. Could it be that math is running out of problems? Yes - in the same way that the universe might vanish tomorrow. Both of those claims are equally absurd.
I am sort of surprised that no one has countered with the quote often misattributed to Lord Kelvin. More interesting was the advice given to Max Planck:
One of Jolly's students at the University of Munich was Max Planck, whom he advised in 1878 not to go into theoretical physics. Nevertheless, Planck's later work led to the discovery of quantum mechanics. Later in life Planck reported:
As I began my university studies I asked my venerable teacher Philipp von Jolly for advice regarding the conditions and prospects of my chosen field of study. He described physics to me as a highly developed, nearly fully matured science, that through the crowning achievement of the discovery of the principle of conservation of energy it will arguably soon take its final stable form. It may yet keep going in one corner or another, scrutinizing or putting in order a jot here and a tittle there, but the system as a whole is secured, and theoretical physics is noticeably approaching its completion to the same degree as geometry did centuries ago. That was the view fifty years ago of a respected physicist at the time.

https://en.wikipedia.org/wiki/Philipp_von_Jolly

Related: https://en.wikipedia.org/wiki/Timeline_of_geometry#20th_cent...
Math is not running out of problems, just like physics didn't stop advancing (or merely become a matter of more precise measurement) in the 19th and 20th centuries. Something new and innovative might be around the corner — it might not. It might result in a new field entirely. It might not.
I’m struggling to think of a field that is more general. Information theory? Oh wait, that’s a sub-field of math. Communications? Eh it’s pretty general but math has it beat handily I think.
The idea that it is “done” or “out of interesting problems” is just absurd.
I’m open to the possibility that it’s harder than it was before to find and articulate interesting problems, but that is a very different claim.
> Or take a look at any undergraduate text in mathematics. How many of them will mention recent research in mathematics from the last couple decades? I’ve never seen it. Now take an undergraduate text in biology and you’ll still find quite a few citations to modern research.
That’s because, in the natural sciences, a lot of what was considered knowledge long ago has been found out to be incorrect. If you study Galen (https://en.wikipedia.org/wiki/Galen) or Hippocrates (https://en.wikipedia.org/wiki/Hippocrates), or Newton’s works on alchemy, you aren’t studying medicine or chemistry, but the history thereof.
On the other hand, look at the Pythagorean theorem. There has been a bit of chipping at its corners when non-Euclidean geometry was discovered/invented, but it remains true in large branches of mathematics.
And this isn’t a matter of centuries. A lot of genetics work that predates the discovery of the structure of DNA isn’t worth studying anymore.
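The "chipping at its corners" above can be made precise with a standard derivation (my addition, not from the thread): on a unit sphere the right-triangle relation changes form, but the Euclidean theorem survives as its small-triangle limit.

```latex
% Spherical Pythagorean theorem for a right triangle with legs a, b
% and hypotenuse c (arc lengths on the unit sphere):
\cos c = \cos a \,\cos b
% For small triangles, expand \cos x \approx 1 - x^2/2:
1 - \frac{c^2}{2} \approx \Bigl(1 - \frac{a^2}{2}\Bigr)\Bigl(1 - \frac{b^2}{2}\Bigr)
\approx 1 - \frac{a^2}{2} - \frac{b^2}{2}
% so, dropping the fourth-order term a^2 b^2 / 4:
c^2 \approx a^2 + b^2
```

So non-Euclidean geometry generalizes the theorem rather than overturning it, unlike the outdated genetics work mentioned above.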
> At what point can we still say with a straight face that it makes sense to pour millions of dollars into mathematics research when its only objective seems reaching the next highest peak of hyper-specialization?
Also, I find it ironic for them to gripe about alleged "hyper-specialization" given that part of the beauty of math is discovering how seemingly unrelated areas are in fact connected AND discovering generalizations that can be easily applied.
Caveat: I am nowhere close to being a mathematician.
> Most published papers will not be of great interest to any one particular person.

Most of our history wasn't like that.
> Now it’s material for tweens.

Maybe the average 17-year-old, but that's it.
> At what point can we still say with a straight face that it makes sense to pour millions of dollars into mathematics research when its only objective seems reaching the next highest peak of hyper-specialization?
Luckily, lots of mathematics research is fairly cheap. As Alfréd Rényi said (https://en.wikipedia.org/wiki/Alfréd_Rényi#Quotations), it runs on coffee.
Or for Erdös, amphetamine.