Why is taking methods from other fields an unorthodox move? I come from an engineering background and there it is the common case. The use of harmonic analysis is a staple in many fields (audio, waves, electrical analysis, statistics), and of course the algorithms are pure math under the hood. If I want to find a recurring structure in an underlying system, wouldn't it be normal to try different plotting techniques and choose the one that suits my problem best?
This quote doesn't suggest that the only thing unorthodox about their approach was using some ideas from harmonic analysis. There's nothing remotely new about using harmonic analysis in number theory.
1. I would say the key idea in a first course in analytic number theory (and the key idea in Riemann's famous 1859 paper) is "harmonic analysis" (and this is no coincidence because Riemann was a pioneer in this area). See: https://old.reddit.com/r/math/comments/16bh3mi/what_is_the_b....
2. The hottest "big thing" in number theory right now is essentially "high dimensional" harmonic analysis on number fields https://en.wikipedia.org/wiki/Automorphic_form, https://en.wikipedia.org/wiki/Langlands_program. The 1-D case that the Langlands program is trying to generalize is https://en.wikipedia.org/wiki/Tate%27s_thesis, also called "Fourier analysis on number fields," one of the most important ideas in number theory in the 20th century.
3. One of the citations in the Guth Maynard paper is the following 1994 book: H. Montgomery, Ten Lectures On The Interface Between Analytic Number Theory And Harmonic Analysis, No. 84. American Mathematical Soc., 1994. There was already enough interface in 1994 for ten lectures, and judging by the number of citations of that book (I've cited it myself in over half of my papers), much more interface than just that!
What's surprising isn't that they used harmonic analysis at all, but where in particular they applied harmonic analysis and how (which are genuinely impossible to communicate to a popular audience, so I don't fault the author at all).
To me your comment sounds a bit like saying "why is it surprising to make a connection." Well, breakthroughs are often the result of novel connections, and breakthroughs do happen every now and then, but that doesn't make the novel connections not surprising!
Nothing close to this is known.
The nontrivial zeros of zeta lie within the critical strip, i.e., 0 <= Re(s) <= 1 (in analytic number theory, the convention, going back to Riemann's paper, is to write a complex variable as s = sigma + it)*. The Riemann Hypothesis (RH) states that all nontrivial zeros of zeta are on the line Re(s) = 1/2. The functional equation implies that the zeros of zeta are symmetric about the line Re(s) = 1/2. Consequently, RH is equivalent to the assertion that zeta has no zeros for Re(s) > 1/2.

A "zero-free region" is a region in the critical strip that is known to contain no zeros of the Riemann zeta function. So RH is equivalent to the assertion that Re(s) > 1/2 is a zero-free region.

The main reason we care about RH is that it would give essentially the best possible error term in the prime number theorem (PNT) https://en.wikipedia.org/wiki/Prime_number_theorem. A weaker zero-free region gives a weaker error term in the PNT. The PNT in its weakest, ineffective form is equivalent to the assertion that Re(s) >= 1 is a zero-free region (i.e., that there are no zeros on the line Re(s) = 1).
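If you want to poke at this numerically, here is a minimal self-contained sketch (Borwein's alternating-series algorithm for zeta, valid for Re(s) > 0, s != 1; the zero location used below is the well-known first nontrivial zero, not something this code discovers):

```python
import math

def zeta(s, n=40):
    """Riemann zeta via Borwein's alternating-series algorithm (Re(s) > 0, s != 1)."""
    # d_k = n * sum_{j=0}^{k} (n+j-1)! * 4^j / ((n-j)! * (2j)!)
    d, acc = [0.0] * (n + 1), 0.0
    for j in range(n + 1):
        acc += math.factorial(n + j - 1) * 4**j / (
            math.factorial(n - j) * math.factorial(2 * j))
        d[j] = n * acc
    total = sum((-1) ** k * (d[k] - d[n]) / (k + 1) ** s for k in range(n))
    return -total / (d[n] * (1 - 2 ** (1 - s)))

# Sanity check against the classical value zeta(2) = pi^2 / 6.
print(abs(zeta(2) - math.pi ** 2 / 6))  # close to 0

# The first nontrivial zero, rho_1 ~= 1/2 + 14.134725...i, sits on the
# critical line Re(s) = 1/2, exactly as RH predicts for all nontrivial zeros.
rho1 = 0.5 + 14.134725141734693j
print(abs(zeta(rho1)))  # close to 0
```

Of course, checking any finite number of zeros proves nothing about RH; this is only to make the geometry of the critical strip concrete.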
The best zero-free region currently known for zeta is the Vinogradov--Korobov zero-free region. This is the best explicit form of Vinogradov--Korobov known today https://arxiv.org/abs/2212.06867 (a slight improvement of https://arxiv.org/abs/1910.08205).
I think your confusion stems from the fact that approximately the reverse of what you said above is true. That is, the best zero-free regions we know get arbitrarily close to the Re(s) = 1 line (i.e., get increasingly "useless") as the imaginary part tends to infinity. Your statement seems to suggest that the area we know contains the zeros gets arbitrarily close to the 1/2 line (which would be amazing). In other words, rather than our knowledge being about as close to RH as possible (as you suggested), our knowledge is about as weak as it could be. (See this image: https://commons.wikimedia.org/wiki/File:Zero-free_region_for.... The blue area is the zero-free region.)
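To make the shape concrete: the Vinogradov--Korobov region has the form sigma > 1 - c / ((log t)^(2/3) (log log t)^(1/3)) for some explicit constant c > 0. A quick numeric sketch (the constant c = 0.05 below is a placeholder for illustration, NOT the explicit value from the cited papers):

```python
import math

def vk_boundary(t, c=0.05):
    """Left edge of a Vinogradov--Korobov-shaped zero-free region at height t.

    Shape: sigma > 1 - c / ((log t)^(2/3) * (log log t)^(1/3)), for t large
    enough that log log t > 0. c = 0.05 is an illustrative placeholder, not
    the explicit constant from the literature.
    """
    L = math.log(t)
    return 1 - c / (L ** (2 / 3) * math.log(L) ** (1 / 3))

for t in (1e2, 1e6, 1e12, 1e100):
    print(t, round(vk_boundary(t), 6))
# The boundary creeps toward Re(s) = 1 as t grows: the known zero-free
# region becomes arbitrarily thin and never gets anywhere near Re(s) = 1/2.
```

Whatever explicit c you plug in, the qualitative picture is the same: the region shrinks toward the Re(s) = 1 edge at large height.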
* I don't like this convention; why is it s = sigma + it instead of sigma = s + it? Blame Riemann.
I did. See the paragraph that starts, "A model is any physical system whose behavior correlates in some way with another physical system."
> If you try to model math in a way that can actually be rooted in objective reality...
It's pretty clear you didn't actually read all the way to the end.
To expand on this: I think models are representations, and whether or not something is a model depends in some way on human minds. (In particular, it depends on whether a something would be understood by a human mind to be a representation.)
I don't think that any correlation between physical systems qualifies one as a model for the other. Your definition as written would include any two things that are connected causally, or have a common cause, as models for one another. One problem (though not the only one) that I have is that your definition removes any mention of human minds.
In particular, I think "representation" is, broadly speaking, some kind of correspondence relationship between linguistic or pictorial things (where I include mathematics as "linguistic") and physical reality, and "a representation" is some linguistic or pictorial thing that corresponds to reality. I think that a model is a kind of representation.
A model is a kind of representation where for convenience and tractability, certain aspects of reality are left out or "abstracted away" (deliberately), with the goal of understanding the real world by understanding the simpler representation of the real world.
To get Banach-Tarski you need either to accept Formalism (it is just a formal game whose concepts, like uncountable sets, do not "really" exist), or to accept a Platonic reality with uncountable things in it. If you try to model math in a way that can actually be rooted in objective reality, then you wind up with some form of Constructivism. And now Banach-Tarski goes away.
So when X is a model of Y, then Y is always also a model of X? (Since correlation is generally a symmetric relationship.) That seems like a strange definition of “model”.
Given your response, is it fair to say time as the 4th dimension is just a sci-fi concoction?
But "dimension" is something mathematical. I would say it doesn't quite make sense to ask "is the fourth dimension time?" in the same way that it wouldn't make sense to ask "is the number five an apple?" Just as numbers can refer to different things in different contexts (including in the context of different scientific theories), dimensions can correspond to different things in different contexts. For example, statistics and machine learning heavily use "high-dimensional" mathematics, but there the "dimensions" correspond to the different variables you are trying to predict or explain. E.g., if you were trying to predict the chance of a heart attack from 1000 different factors, then you would have 1000 + 1 total "dimensions," and in that case the "fourth dimension" might be "cigarettes smoked per week" (rather than time).
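A toy version of that counting (hypothetical variable names, just to show where the 1000 + 1 comes from):

```python
# Hypothetical patient record: 1000 predictor variables plus one outcome.
predictors = {f"factor_{i}": 0.0 for i in range(1000)}
predictors["factor_3"] = 12.0  # e.g. "cigarettes smoked per week"

# The outcome being predicted contributes one more "dimension".
record = dict(predictors, heart_attack_risk=0.0)
print(len(record))  # 1001 "dimensions" in the statistical sense
```

Each key is a coordinate axis in the statistical sense; nothing about any of them is intrinsically spatial or temporal.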