derektank · a month ago
My favorite story about the Fourier Transform is that Carl Friedrich Gauss stumbled upon the Fast Fourier Transform algorithm over a century before Cooley and Tukey's publication in 1965 (which itself revolutionized digital signal processing).[1] He was apparently studying the motion of the asteroids Pallas and Juno and wrote the algorithm down in his notes, but it never made it into public knowledge.

[1] https://www.cis.rit.edu/class/simg716/Gauss_History_FFT.pdf

aquafox · a month ago
There is a saying about Gauss: when another mathematician came to show him a new result, Gauss would remark that he had already worked on it, open a drawer in his desk, and pull out a pile of papers on the same topic.
libraryofbabel · a month ago
One of the things I admire about many top mathematicians today like Terence Tao is that they are clearly excellent mentors to a long list of smart graduate students and are able to advance mathematics through their students as well as on their own. You can imagine a half-formed thought Terence Tao has while driving to work becoming a whole dissertation or series of papers if he throws it to the right person to work on.

In contrast, Gauss disliked teaching and also tended to hoard those good ideas until he could go through all the details and publish them in the way he wanted. Which is a little silly, as after a while he was already widely considered the best mathematician in the world and had no need to prove anything to anyone - why not share those half-finished good ideas like Fast Fourier Transforms and let others work on them! One of the best mathematicians who ever lived, but definitely not my favorite role model for how to work.

3abiton · a month ago
> There is a saying about Gauss: when another mathematician came to show him a new result, Gauss would remark that he had already worked on it, open a drawer in his desk, and pull out a pile of papers on the same topic.

As if PhD students need more impostor syndrome to deal with. On a serious note, I wonder what conditions allow such minds to grow. I guess a big part is genetics, but I'm curious whether the "epi" part is relevant, and how much.

pastrami_panda · a month ago
Gauss's notes and margins are riddled with proofs he didn't bother to publish; he was wild.

Not sure if true, but allegedly he insisted his son not go into maths, as he would simply end up in his father's shadow; Gauss deemed it utterly impossible to surpass his own brilliance in maths :'D

cogman10 · a month ago
> as he would simply end up in his father's shadow; Gauss deemed it utterly impossible to surpass his own brilliance in maths

Definitely true but also bad parenting. Gauss was something of a freak of nature when it came to math. He and Euler are two of the most unreasonably productive mathematicians of all time.

njarboe · a month ago
When I interned at Chevron, someone said they (or some other oil company) were using Fourier transforms in the 1950s for seismic analysis but kept it a secret for obvious reasons. I think you couldn't (can't?) patent math equations.
hahahahhaah · a month ago
Gauss is gonna Gauss.
brcmthrowaway · a month ago
How was Gauss so productive with 6 children?
deepfriedchokes · a month ago
Probably sexual division of labor.
kowbell · a month ago
That's 30 more fingers he could count with.

skinkestek · a month ago
I only have 5 kids, and I'm also not nearly as productive as Gauss, but to a certain degree it feels like the responsibility forces me to be more effective.
0xdeadbeefbabe · a month ago
He wasn't as productive as he could have been. This seems like Chuck Norris and Jeff Dean territory after all.
defrost · a month ago
I'd hazard by not dealing with them until they'd been schooled enough to operate as computers and GI (general intelligence) assistants.

There's a spread of farmers, railroad and telegraph directors, and high-level practical information-management skills among the children.

wolfi1 · a month ago
no TV
kridsdale1 · a month ago
Chutzpah.
srean · a month ago
People go all dopey-eyed about "frequency space"; that's a red herring. The takeaway should be that a problem-centric coordinate system is enormously helpful.

After all, what Copernicus showed is that the mind-bogglingly complicated motion of the planets becomes a whole lot simpler if you change the coordinate system.

The Ptolemaic model of epicycles was an ad hoc form of Fourier analysis: decomposing periodic motions into circles upon circles.

Back to frequencies: there is nothing obviously frequency-like in real-variable Laplace transforms*. The real insight is that differentiation and integration become simple if the coordinates used are exponential functions, because exponential functions remain (scaled) exponentials when passed through those operations.

For digital signals, what helps is the Walsh-Hadamard basis. They are not like frequencies; they are not at all like the square-wave analogue of sinusoidal waves. People call it "sequency" space, a well-justified pun.

My suspicion is that we are in a Ptolemaic state as far as GPT-like models are concerned. We will eventually understand them better once we figure out the right coordinate system to think about their dynamics in.

* There is a connection, though, through the exponential form of complex numbers; or, more prosaically, when multiplying rotation matrices the angles combine additively. So angles and logarithms share a certain unity of character.
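
Spelling out the exponential point above (just the textbook relation, nothing specific to this thread): differentiation merely rescales an exponential, so in the Laplace domain differentiation becomes multiplication by s and an ODE collapses into algebra.

    \frac{d}{dt} e^{st} = s\, e^{st}
    \quad\Longrightarrow\quad
    \mathcal{L}\{f'\}(s) = s\, F(s) - f(0),
    \qquad F(s) = \int_0^\infty f(t)\, e^{-st}\, dt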

madhadron · a month ago
All these transforms are switching to an eigenbasis of some differential operator (one that usually corresponds to a differential equation of interest): spherical harmonics, Bessel and Hankel functions (which are the radial versions of sines/cosines and complex exponentials, respectively), and so on.

The next big jumps were to collections of functions not parameterized by subsets of R^n. Wavelets use a tree-shaped parameter space.

There's a whole interesting area of overcomplete basis sets that I have been meaning to look into, where you give up orthogonality and all those nice properties in exchange for having multiple options for adapting better to different signal characteristics.

I don't think these transforms are going to be relevant to understanding neural nets, though. They are, by their nature, doing something with nonlinear structures in high dimensions that are not smoothly extended across their domain, which is the opposite of the problem all our current approaches to functional analysis deal with.
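
To make the eigenbasis point concrete, here are the two standard examples (the Fourier basis and its spherical analogue), written as eigenvalue equations:

    -\frac{d^2}{dx^2}\, e^{ikx} = k^2\, e^{ikx},
    \qquad
    \Delta_{S^2}\, Y_{\ell m} = -\ell(\ell+1)\, Y_{\ell m}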

srean · a month ago
You may well be right about neural networks. Sometimes models that seem nonlinear turn linear if those nonlinearities are pushed into the basis functions, so one can still hope.

For GPT-like models, I see sentences as trajectories in the embedding space. These trajectories look quite complicated, and nothing obvious stands out from a geometric standpoint. My hope is that if we get the coordinate system right, we may see something more intelligible going on.

This is just a hope, a mental bias. I do not have any solid argument for why it should be as I describe.

fc417fc802 · a month ago
Note that I'm not great at math so it's possible I've entirely misunderstood you.

Here's an example of directly leveraging a transform to optimize the training process. ( https://arxiv.org/abs/2410.21265 )

And here are two examples that apply geometry to neural nets more generally. ( https://arxiv.org/abs/2506.13018 ) ( https://arxiv.org/abs/2309.16512 )

anamax · a month ago
> My suspicion is that we are in a Ptolemaic state as far as GPT-like models are concerned. We will eventually understand them better once we figure out the right coordinate system to think about their dynamics in.

Most deep learning systems are learned matrices that are multiplied by "problem-instance" data matrices to produce a prediction matrix. The time to do said matrix-multiplication is data-independent (assuming that the time to do multiply-adds is data-independent).

If you multiply both sides by the inverse of the learned matrix, you get an equation where finding the prediction matrix is a solving problem, where the time to solve is data dependent.

Interestingly enough, that time is sort-of proportional to the difficulty of the problem for said data.

Perhaps more interesting is that the inverse matrix seems to have row artifacts that look like things in the training data.

These observations are due to Tsvi Achler.
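
A toy sketch of the forward-vs-solve contrast, under assumptions of my own (a single learned weight matrix W and an off-the-shelf iterative solver; this is an illustration of why the "solving" view is data-dependent, not Achler's actual formulation):

    import numpy as np
    from scipy.sparse.linalg import gmres

    rng = np.random.default_rng(0)
    n = 64
    W = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for a "learned" matrix
    x = rng.standard_normal(n)                     # one problem instance

    # Feedforward view: one matrix-vector product, same cost for any x.
    y_forward = W @ x

    # "Solving" view: multiply y = W x by W^{-1} to get W^{-1} y = x,
    # then recover y iteratively. The iteration count (a stand-in for
    # "time to solve") now depends on the data and on conditioning.
    A = np.linalg.inv(W)
    iterations = []
    y_solved, info = gmres(A, x, callback=lambda arg: iterations.append(1))

    print(np.linalg.norm(y_forward - y_solved), len(iterations))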

srean · a month ago
Neural nets are quite a bit more than matrix multiplications, at least in their current representation.

There are layers upon layers of nonlinearity, be it with softmax or sigmoid. In the tangent kernel view it does linearize.

alexlesuper · a month ago
I feel like this is how Fourier and Laplace transforms should have been taught in my DSP class, not just blindly applying formulas and equations.
patentatt · a month ago
I'd argue that most if not all of the math I learned in school could be distilled down to analyzing problems in the correct coordinate system or domain! The actual manipulation isn't that esoteric once you're in the right paradigm. And those professors never explained things at that higher conceptual level; all I remember is the nitty-gritty of implementation. What a shame. I'm sure there are higher levels of mathematics that go beyond my simplistic understanding, but I'd argue it's enough to get one through the full sequence of undergraduate-level (electrical) engineering, physics, and calculus.
RossBencina · a month ago
> exponential functions remain (scaled) exponential when passed through such operations.

See also: eigenvalue, differential operator, diagonalisation, modal analysis

Xcelerate · a month ago
It’s kind of intriguing that predicting the future state of any quantum system becomes almost trivial—assuming you can diagonalize the Hamiltonian. But good luck with that in general. (In other words, a “simple” reference frame always exists via unitary conjugation, but finding it is very difficult.)
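
Written out for the finite-dimensional case (the standard identity, nothing exotic): once H is diagonalized, time evolution is just a phase on each eigenvector.

    H = U D U^\dagger,\quad D = \mathrm{diag}(E_1,\dots,E_n)
    \;\Longrightarrow\;
    e^{-iHt/\hbar} = U\,\mathrm{diag}\!\left(e^{-iE_1 t/\hbar},\dots,e^{-iE_n t/\hbar}\right) U^\dagger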
srean · a month ago
Indeed.

It's disconcerting at times, the scope of finite- and infinite-dimensional linear algebra, especially when done in a convenient basis.

Jun8 · a month ago
A signal cannot be both time-limited and frequency-band-limited. Many years ago I was amazed to read that this fact, which I learned as an undergraduate, is equivalent to the Uncertainty Principle!

On a more mundane note: my wife and I always argue about whose method of loading the dishwasher is better: she goes slowly and meticulously while I do it fast. It occurred to me that we were optimizing for the frequency and time domains, respectively, i.e., I was minimizing time spent while she was minimizing the number of washes :-)

ComplexSystems · a month ago
Signals can be approximately frequency- and time-bandlimited, though, meaning the set of points where the absolute value exceeds any given epsilon is compact in both domains. A Gaussian function is one example.
hammock · a month ago
It’s literally the Heisenberg uncertainty principle, applied to signal processing.
btilly · a month ago
For those who don't get this comment, the Heisenberg uncertainty principle applies to any two quantities that are connected in QM via a Fourier transform, such as position and momentum, or time and energy. It is really a mathematical theorem that there is a lower bound on the variance of a function times the variance of its Fourier transform.

That lower bound is the uncertainty principle, and that lower bound is hit by normal distributions.
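
In the angular-frequency convention, the theorem reads: for a unit-energy signal with time spread \sigma_t and frequency spread \sigma_\omega,

    \sigma_t\, \sigma_\omega \;\ge\; \frac{1}{2},
    \qquad
    f(t) = e^{-t^2/(2\sigma^2)} \;\longmapsto\; \hat f(\omega) \propto e^{-\sigma^2 \omega^2 / 2},

and the Gaussian pair on the right attains the bound with equality.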

OscarCunningham · a month ago
Another example: ears are excellent at breaking down the frequency of sounds, but are imprecise about where the sound is coming from; whereas eyes are excellent at telling you where light is coming from, but imprecise about how its frequencies break down.
PunchyHamster · a month ago
that's mostly due to light waves being FAR shorter, and to there being many orders of magnitude more "sensors"

Ears are essentially 2 "pixels" of sound sensing, and given that limitation they are ABSOLUTELY AMAZING at pinpointing the sound source.

bschwindHN · a month ago
> I was minimizing time spent while she was minimizing the number of washes

I'm probably just slow, but I'm not following. Do you mean because you went fast, you had to run another cycle to clean everything properly?

If you haven't already, you should watch the Technology Connections series on dishwashers.

https://www.youtube.com/watch?v=jHP942Livy0

Jun8 · a month ago
Since I’m rushing to load it as fast as possible the packing is not as good as hers so some dishes are left out. Overall this leads to more loads.
hahahahhaah · a month ago
The self loading dishwasher would be the greatest marriage saving invention since car navigation systems.
tsoukase · a month ago
One of the things I feel blessed by in my marriage is that my wife loads the dishwasher the same way I do. Anyway, fellow husbands: think about the dozens of other conflicts you might be avoiding.
PunchyHamster · a month ago
you just need to go "if you want it loaded your way, you do it" and all is solved

And if loading the dishwasher is at the top of your marital issues, you're probably in a very happy marriage.

A constant small degree of conflict and strife is key to happiness; people can't be permanently happy, they just find ways to sabotage things when they are.

hedgehog · a month ago
Once you start looking at the world through the lens of the frequency domain, a lot of neat tricks become simple. I have some demo code that uses a Fourier transform on webcam video to read a heart rate off a person's face, basically looking for the frequency that holds peak energy.
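
Not the exact demo code, but a minimal sketch of the idea, assuming a 1-D brightness trace has already been extracted from the face region and the frame rate is known:

    import numpy as np

    def estimate_bpm(signal, fps, lo_hz=0.7, hi_hz=3.0):
        """Estimate heart rate from a 1-D brightness trace sampled at fps.

        signal is assumed to be e.g. the mean green-channel value of the
        face region in each frame; lo_hz..hi_hz is roughly 42-180 BPM.
        """
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()                    # remove DC so it doesn't dominate
        spectrum = np.abs(np.fft.rfft(x))   # magnitude spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
        band = (freqs >= lo_hz) & (freqs <= hi_hz)
        peak = freqs[band][np.argmax(spectrum[band])]
        return peak * 60.0                  # Hz -> beats per minute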
cogman10 · a month ago
It's effectively the underpinning of all modern lossy compression algorithms. The DCT, which underlies codecs like JPEG, H.264, and MP3, is really just a modified FFT.
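
A toy version of the "transform, drop the small stuff, transform back" idea using SciPy's DCT-II on an 8-sample block (made-up values and threshold; real codecs quantize rather than hard-threshold):

    import numpy as np
    from scipy.fft import dct, idct

    block = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)  # made-up samples

    coeffs = dct(block, type=2, norm='ortho')     # to "frequency" space
    coeffs[np.abs(coeffs) < 5] = 0                # crude lossy step: drop small terms
    approx = idct(coeffs, type=2, norm='ortho')   # back to sample space

    print(np.round(approx, 1))  # close to the original, from fewer nonzero coefficients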
astrange · a month ago
Inter/intra-prediction is more important than the DCT. H.264 and later use simpler, degenerate forms of it because that's good enough and they can define it with bitwise accuracy.
Rzor · a month ago
>Once you start looking at the world through the lens of the frequency domain, a lot of neat tricks become simple.

Not the first time I've heard this on HN. I remember a user commenting once that it was one of the few perspective shifts in his life that completely turned things upside down professionally.

isolli · a month ago
There is also a loose analogy with finance: act (trade) when prices cross a certain threshold, not after a specific time.
Chrupiter · a month ago
I don't think pulsing skin (due to blood flow) is visible from a webcam though.
javawizard · a month ago
Plenty of sources suggest it is:

https://github.com/giladoved/webcam-heart-rate-monitor

https://medium.com/dev-genius/remote-heart-rate-detection-us...

The Reddit comments on that second one have examples of people doing it with low quality webcams: https://www.reddit.com/r/programming/comments/llnv93/remote_...

It's honestly amazing that this is doable.

3eb7988a1663 · a month ago
MIT was able to reconstruct voice by filming a bag of chips on a 60FPS camera. I would hesitate to say how much information can leak through.

https://news.mit.edu/2014/algorithm-recovers-speech-from-vib...

hedgehog · a month ago
It is, I've done it live on a laptop and via the front camera of a phone. I actually wrote this thing twice, once in Swift a few years back, and then again in Python more recently because I wanted to remember the details of how to do it. Since a few people seem surprised this is feasible maybe it's worth posting the code somewhere.
jpablo · a month ago
You will be surprised by The Unreasonable Effectiveness of opencv.calcOpticalFlowPyrLK
rcxdude · a month ago
It is, but there's a lot of noise on top of it (in fact, the noise is kind of necessary to avoid it being 'flattened out' and disappearing). The fact that it covers a lot of pixels and is relatively low bandwidth is what allows for this kind of magic trick.
zipy124 · a month ago
It totally is. Look for motion-magnification in the literature for the start of the field, and then remote PPG for more recent work.
neckro23 · a month ago
Sure it is. Smart watches even do it using the simplest possible “camera” (an LED).
moralestapia · a month ago
You can do it with infrared, and webcams see some of it, but I'm not sure if they're sensitive enough for that.
hahahahhaah · a month ago
I have seen apps that use the principle for HRV. Finger pushed on phone cam.

rcarmo · a month ago
I would heartily recommend Sebastian Lague's latest video, which covers this in a very approachable way: https://www.youtube.com/watch?v=08mmKNLQVHU
emil-lp · a month ago
Okay, who's gonna write the story

> The unreasonable effectiveness of The Unreasonable Effectiveness title?

cubefox · a month ago
It's a play on the famous 1960 essay "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

I agree this is getting old after 65 years. Not least because it seems slightly manipulative to disguise a declarative claim ("The Fourier transform is unreasonably effective."), which could be false, as a noun phrase ("The unreasonable effectiveness of the Fourier transform"), which doesn't look like a thing that can be wrong.

flufluflufluffy · a month ago
Also, most of the articles with this kind of title (those posted on HN at least) are about computational/logical processes, which are, by definition, reasonable.
observationist · a month ago
Unreasonable effectiveness is all you need.
nextaccountic · a month ago
"Unreasonable effectiveness is all you need" considered harmful
greenwallnorway · a month ago
I did some analysis on top title patterns. Both of these make the list pretty handily: https://projects.peercy.net/projects/hn-patterns/index.html
chaboud · a month ago
Someone will end up writing "Scholastic Parrots"...
temp123789246 · a month ago
I lol’d
SAI_Peregrinus · a month ago
Given how much of the talk is about the original paper the title references, and how the Fourier transform turns out to be unreasonably effective at allowing communication over noisy channels, I'd say it's a reasonable reference.
Yodan2025 · a month ago
The antipode of unreasonable effectiveness-ness.
kingkongjaffa · a month ago
Agreed, these kind of titles are very silly.

FTs are actually very reasonable, in the sense that they are easy to reason about conceptually and in practice.

There's another title referenced in that link which is equally asinine: Eugene Wigner's original discussion, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

Like, wtf?

Mathematics is the language of science; science would not compound or be explainable, communicable, or modelable in code without mathematics.

It's plainly obvious that mathematics would then be extremely effective (which it is), and it's also evident why; ergo, it is not unreasonably effective.

Also, the slides are just FTs 101, the same material as in any basic course.

jwise0 · a month ago
Hi, original presenter here :) The beginning is FTs 101. The end gets more application-centric around OFDM, and that's why it feels 'unreasonably effective' to me. If it feels obvious, there are a couple of slides at the end that are food-for-thought jumping-off points. And if that's obvious to you too, let's collab on building an open source LTE modem!
metalliqaz · a month ago
> FTs are actually very reasonable, in the sense that they are easy to reason about conceptually and in practice.

ok but it's not the FTs that are unreasonable, it's the effectiveness

I think we all understand at this point that "unreasonable effectiveness" just means "surprisingly useful in ways we might not have immediately considered"

Certhas · a month ago
I find it hard to parse the middle of your post. Are you saying Wigner's article, which is what all the "unreasonable effectiveness" titles reference, is silly?

If that is what you are saying I suggest that you actually go back and read it. Or at least the Wiki article:

https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

By means of contrast: I think it's clear that mathematics is, for example, not unreasonably effective in psychology. It's necessary and useful and effective at doing what it does, but not surprisingly so. Yet in the natural sciences it often has been. This is not a statement about mathematics but about the world.

(As Wittgenstein put it some decades earlier: "So too the fact that it can be described by Newtonian mechanics asserts nothing about the world; but this asserts something, namely, that it can be described in that particular way in which as a matter of fact it is described. The fact, too, that it can be described more simply by one system of mechanics than by another says something about the world.")

w10-1 · a month ago
> Mathematics is the language of science

So, biology and medicine are not sciences? Or are only sciences to the extent they can be mathematically described?

The scientific method and models are much more than math. Equating the reality with the math has led to myriad misconceptions, like vanishing cats.

And silly is good for a title -- descriptive and enticing -- to serve the purpose of eliciting the attention without which the content would be pointless.

staticshock · a month ago
It is likewise unreasonable to look down on any kind of world model from the past. Remember that you, in 2026, are benefitting from millions of aggregate improvements to a world model that you've absorbed passively through participation in society, and not through original thought. You have a different vantage point on many things as a result of the shoulders of giants you get to stand on.
redhed · a month ago
It is pretty funny to flippantly call an influential paper by someone who received a Nobel Prize in Physics 'asinine'.
jmyeet · a month ago
How about The unreasonable effectiveness title considered harmful?
seba_dos1 · a month ago
The unreasonable effectiveness of considering something harmful.
bitwize · a month ago
Coming up next on Hackernews:

Why "The \"Unreasonable Effectiveness\" Title Considered Harmful" Matters

The Unreasonable Effectiveness of "\"Why \\\"The \\\\\\\"Unreasonable Effectiveness\\\\\\\" Title Considered Harmful\\\" Matters\" Considered Harmful"

BrokenCogs · a month ago
"The unreasonable effectiveness" is all you need

threethirtytwo · a month ago
The Unreasonable Effectiveness of LLMs.

Ironically a very relevant and accurate title.

tverbeure · a month ago
Not at all relevant in this instance, because the blog post was not LLM-generated slop.
shihab · a month ago
If you are from the ML/data science world, the analogy that finally unlocked the FFT for me is dimensionality reduction using Principal Component Analysis. In both cases, you project data into a new, "better" coordinate system ("time to frequency domain"), filter out the basis vectors that have low variance ("ignore high-frequency waves"), and project the data back to real space from those truncated dimensions ("IFFT: inverse transform back to the time domain").

Of course some differences exist (e.g., the basis vectors are fixed in the FFT, unlike in PCA).
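
A compact way to see the analogy in code (a sketch; the signal and the 10-bin cutoff are arbitrary): truncating FFT coefficients plays the same role that truncating principal components does in PCA.

    import numpy as np

    t = np.linspace(0, 1, 500, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

    coeffs = np.fft.rfft(x)                     # project into the "better" coordinates
    coeffs[10:] = 0                             # drop the components you don't care about
    x_smooth = np.fft.irfft(coeffs, n=x.size)   # project back, like PCA reconstruction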

hinkley · a month ago
My biggest missing feature in Grafana is that I want a Fourier transform that can identify epicycles in spikes of traffic, like the first Monday of the month, or noon on Tuesdays.

I had a couple of charts that showed a trend line of the last n days, until someone in Ops noticed that three charts were fully half of our daily burn rate for Grafana. Oops. So I started showing a -7 days line instead, which helped me but confused everyone else.

physicsguy · a month ago
That wouldn't really work well, because the sparsity of the periodic spikes wouldn't fit the assumption that the signal has a frequency component "everywhere", even though it's periodic. You can see that mathematically: if you take the Fourier transform of an impulse signal, you get a smeared result in frequency space.

You'd probably want a tool like the cepstrum rather than a Fourier transform. Cepstral methods are commonly used in mechanical analysis to detect periodic impacts, such as where a gear tooth gets damaged.
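
For reference, the real cepstrum is short to compute (a sketch with a synthetic impact train; the sample rate, noise level, and lag window are all made up): a periodic train of impacts shows up as a peak at the corresponding "quefrency" (lag).

    import numpy as np

    fs = 1000.0                                  # samples per second (assumed)
    n = 4096
    x = np.random.default_rng(1).standard_normal(n) * 0.1
    x[::200] += 5.0                              # an impact every 200 samples (0.2 s)

    log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-12)   # eps avoids log(0)
    cepstrum = np.fft.irfft(log_mag)

    lag = np.argmax(cepstrum[50:n // 2]) + 50    # skip the very low quefrencies
    print(lag / fs, "seconds between impacts")   # expect roughly 0.2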