Honestly, it's not so bad. It's easy to pick any such attempt apart. This is close to my favorite pithy way of explaining it, too, which is to break it down component-wise using the idea of filter banks. It's not a single sentence, but here's what I tend to say:
Any signal—like sounds or electrical signals, or even images—can be thought of as having a certain amount of 'energy' at any choice of frequency. This makes the most sense in music where we might think of a 3-note chord as having 3 distinct packets of energy at 3 different frequencies.
For any given frequency, we can compute the amount of energy a signal contains at that frequency by comparing the signal with a test signal, a "pure tone" at that frequency. Pure tones are signals that have the unique property of putting all of the energy at exactly one frequency. The Fourier Transform is an equation which packages this idea up, showing us how to represent all of these measurements of energy at all frequencies.
The natural idea of a "pure tone" might be a sine wave. This is what we think of when we think of a musical pure tone and it certainly exists only at a single frequency. But sine waves make for bad comparison signals due to the problem of "phase": two sine waves played together can perfectly support one another and become twice as loud, or they can perfectly interrupt one another and become silence. This happens because sine waves oscillate between positive values and negative values and positive things can cancel out negative things.
When you look at the equation for the Fourier transform you'll see a complex number in the exponent. This is an improved version of a 'pure tone' which avoids the phasing issues of a sine wave. It does this by spinning like a clock hand in two dimensions, always remaining at the same length. This extra dimension lets us preserve enough information that things never cancel out the way they do with sine waves.
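If it helps to make that concrete, here's a minimal numpy sketch of the comparison described above, using a made-up 5 Hz test signal (illustration only, not the full transform):

```python
import numpy as np

# A toy signal: a pure 5 Hz sine wave, sampled for one second.
fs = 1000                     # sample rate in Hz (arbitrary choice)
t = np.arange(fs) / fs        # one second of sample times
signal = np.sin(2 * np.pi * 5 * t)

def energy_at(freq, signal, t):
    """Compare the signal against a spinning 'pure tone' at freq.

    The complex exponential is the two-dimensional clock hand described
    above; averaging the product gives one Fourier coefficient, and its
    magnitude can't be wiped out by an unlucky phase alignment.
    """
    tone = np.exp(-2j * np.pi * freq * t)
    return np.abs(np.mean(signal * tone))

print(energy_at(5, signal, t))   # large: the signal has energy at 5 Hz
print(energy_at(7, signal, t))   # essentially zero: nothing at 7 Hz
```

Sweeping `freq` over all frequencies and keeping every such measurement is, in essence, the Fourier transform.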
As someone who still doesn't understand the Fourier Transform, this explanation doesn't help, nor does the article's. As a complete noob, your explanation shows off what you know but doesn't help newcomers learn it.
Imagine you have a voice recording. You are looking at a long squiggly line with no uniformity.
Zoom way in. If you go far enough it’ll just look like a curved line.
That curved line can be estimated as a sine wave, or more accurately, a few sine waves that combine to make almost the same wave you have.
FFT is a way to take a complex wave and reduce it down to the sine wave components that would all combine to make it up.
To practically apply this: your voice recording is very high resolution and has a lot of bits at a high sample rate. If you instead used sine waves to represent the data, it could sound almost as good while being a lot less data to store.
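A rough numpy sketch of that storage idea, using a made-up three-tone "voice" instead of a real recording (illustration only):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
# Toy "voice": three tones of different strengths plus a little noise.
rng = np.random.default_rng(0)
wave = (1.00 * np.sin(2 * np.pi * 50 * t)
        + 0.50 * np.sin(2 * np.pi * 120 * t)
        + 0.25 * np.sin(2 * np.pi * 300 * t)
        + 0.01 * rng.standard_normal(fs))

spectrum = np.fft.rfft(wave)     # complex wave -> its sinusoidal components
keep = 3                         # store only the 3 strongest components
strongest = np.argsort(np.abs(spectrum))[-keep:]
compressed = np.zeros_like(spectrum)
compressed[strongest] = spectrum[strongest]
approx = np.fft.irfft(compressed, n=fs)   # rebuild from just those components

# The rebuilt wave is close to the original at a fraction of the storage.
error = np.sqrt(np.mean((wave - approx) ** 2))
print(f"kept {keep} of {len(spectrum)} components, RMS error {error:.4f}")
```

Real codecs are far more sophisticated, but this is the core trade: a few frequency components instead of a thousand raw samples.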
… Watch a video. It’s something many people need to see to get.
Apologies, there’s only so much one can do firing explanations into the void. I don’t really mean to show off, but instead to share meanings which I’ve personally had success sharing with others and having them come to better understanding. I can’t hope that it will be what each individual finds most helpful, however. I also have little hope for any “short” resource to accomplish that. I hope you find what you’re looking for.
This is good, but I think it's cyclical (heh). We want to compare our signal with a "pure tone". What's a pure tone? A sine wave. Why is a sine wave a pure tone? Because when we compare it with a pure tone, it's identical.
That's a question of why Fourier transforms are important though, not just how they're defined and computed. The next-level answer is presumably that sinusoids (or complex exponentials in general) are the eigenfunctions of general linear time-invariant systems, i.e. that if the input to an LTI system is exp(j*w*t), then its output will be A*exp(j*w*t) for some complex constant A. Some other comments here already alluded to that, noting that sinusoids are good for solving linear differential equations (which are LTI systems), or that the sum of two sinusoids of the same frequency shifted in time (which is an LTI operation, since addition and time shift are both LTI) is another sinusoid of that same frequency.
LTI systems closely model many practical systems, including the tuning forks and flutes that give our intuition of what a "pure tone" means. I guess there's a level after that noting that conservation laws lead to LTI systems. I guess there's further levels too, but I'm not a physicist.
That eigenfunction property means we can describe the response of any LTI system to a sinusoid (again, complex exponential in general) at a given frequency by a single complex scalar, whose magnitude represents a gain and whose phase represents a phase shift. No other set of basis functions has this property, thus the special importance of a Fourier transform.
We could write a perfectly meaningful transform using any set of basis functions, not just sinusoids (and e.g. the graphics people often do, and call them wavelets). But if we place one of those non-sinusoidal basis functions at the input of an LTI system, then the output will in general be the sum of infinitely many different basis functions, not describable by any finite number of scalars. This makes those non-sinusoidal basis functions much less useful in modeling LTI systems.
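That eigenfunction property is easy to check numerically. A sketch, using an arbitrary made-up FIR filter as the LTI system:

```python
import numpy as np

# Any fixed set of taps applied by convolution is an LTI system.
taps = np.array([0.5, 0.3, -0.2, 0.1])   # arbitrary filter, made up

w = 2 * np.pi * 0.05          # angular frequency, radians per sample
n = np.arange(200)
x = np.exp(1j * w * n)        # complex exponential input

# Filter the input; trim the startup transient at the edges.
y = np.convolve(x, taps)[len(taps) - 1:len(n)]
x_trim = x[len(taps) - 1:]

# Eigenfunction property: output = A * input for one complex scalar A,
# whose magnitude is the gain and whose angle is the phase shift.
A = y / x_trim
print(np.allclose(A, A[0]))   # one constant across all samples

# A matches the filter's frequency response evaluated at w.
A_theory = np.sum(taps * np.exp(-1j * w * np.arange(len(taps))))
print(np.isclose(A[0], A_theory))
```

Feed this filter any non-sinusoidal basis function instead and the output is no longer a scalar multiple of the input, which is the contrast being drawn above.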
I appreciate the feedback. In person I talk about pure tones more, mostly because I love how pretty complex spirals are.
Here, I just tried to define them “Pure tones are signals that have the unique property of putting all of the energy at exactly one frequency” and then use sine as an example.
Truly, complex rotations are a much better definition, but it’s somewhat reasonable to expect people to be familiar with sine waves.
A pure sine wave has only one frequency; that's why it is analogous to a "pure tone". It's not a circular definition; the key part is the "pure", which means no dilution by other frequencies, which admittedly is not obvious.
To be honest I think your explanation misses the point in two important ways.
First, the Fourier transform does not just measure the 'energy' or amplitude of a single wave; it must also take into account its phase.
Second, complex exponentials are in no way an essential ingredient for defining the Fourier transform. We just work with them for computational simplicity, essentially because exp(ix) exp(iy) = exp(i(x+y)).
I also have trouble assigning meaning to your last paragraph. After all, complex exponentials can cancel just as much as sine waves (proof: sine waves are a combination of complex exponentials).
I agree with these criticisms. Really, my goal when I talk about Fourier Transforms is to avoid talking about phase. It's important, clearly, but it's both less intuitive and less practically meaningful. For a lot of applications, the phase data is just discarded anyway. And if you're in a situation where it matters, you're probably reading more than 3 short paragraphs.
I'd love other thoughts on how to handle the ideas in the last paragraph. I'm not being technical, but am trying to allude to how the complex exponentials can capture more information more conveniently... and honestly waving my hands a lot. Really, I just want to explain why they're there instead of a more recognizable sine or cosine functions.
I've also tried to show the Fourier transform as the sum of a sine and a cosine transform, but that's too much, I think.
IME explanations of the Fourier transform focus far too much on its specific mechanics without motivating the reason for it. I find people often don't even realize the time-space and frequency-space functions are the same function! I dive in like so:
"There are many different ways to write down the number 5. Tally marks, the sigil 5, 4+1, 2.5 times 2, 10/2. Each of them is more or less useful in different circumstances. The same is true of functions! There are many different ways to write the same function. The mechanics of taking a function in one form and writing it in a new form is called a transformation."
Above needs to be said a hundred times. Then you talk about why a frequency space function can be useful. Then you talk about how to transform a time space function to the frequency space representation.
I think the best semi-intuitive, non rigorous explanation I've seen of the Fourier transform is still one that first explained signal correlation in the time domain and then described the transform as basically performing correlation on the signal for all the possible sines at different frequencies. Essentially you're just testing for the presence of individual sine waves (of different frequencies) within the signal.
Unfortunately I can't remember where I saw this explanation. It might have been in The Scientist and Engineer's Guide to DSP.
> best semi-intuitive, non rigorous explanation [...] signal correlation in the time domain [...] basically performing correlation on the signal for all the possible sines at different frequencies
That's only going to make sense for someone who understands your jargon usage of "correlate", and who groks that sine curves of different frequencies are orthogonal under integration. That's precisely the hard part the linked explanation is trying to explain.
If you've already gotten that far then the DFT is just calculator math.
True! Though tbh, I did not even know that property of integrals, since I've studied this stuff only in the discrete realm.
At the totally basic level, another thing that I think can really trip people up and that a lot of texts are not clear about is that the transform is about moving between representations and that we're not actually transforming anything. Not enough maths texts take the time to explain how the representation connects up with the objects we wish to study and how there are multiple possible ways to represent them, and in some cases, like Fourier, useful ways to "translate" between different representations.
I agree, the FFT is simply a correlation against a bunch of sine waveforms.
The extra tidbit: the sine waveforms are orthogonal to each other (their cross-correlation is zero), so the transform can be inverted. (In math terms: they form a basis set. I'm mostly referring to the STFT here, which is the 'common' FFT use, not the infinite FFT.)
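That orthogonality is cheap to verify numerically; a sketch over one full window, with arbitrarily chosen integer frequencies:

```python
import numpy as np

N = 256
n = np.arange(N)

def tone(k):
    """Complex sinusoid completing k full cycles in the window."""
    return np.exp(2j * np.pi * k * n / N)

# Cross-correlation (inner product) of different-frequency tones is zero...
print(abs(np.vdot(tone(3), tone(7))))        # ~0
# ...while each tone correlates perfectly with itself:
print(abs(np.vdot(tone(3), tone(3))))        # N

# Because the tones form a basis, the transform loses nothing
# and can be inverted exactly:
x = np.random.default_rng(1).standard_normal(N)
print(np.allclose(np.fft.ifft(np.fft.fft(x)), x))
```

Note the frequencies must complete whole cycles in the window; that's the "one full window" caveat that makes the STFT framing matter.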
I have taught these kinds of undergraduate subjects and, in the context of a course, the Fourier transform never struck me as something very complicated. First, it feels completely natural to write down a Fourier decomposition for periodic functions; the inverse transform to determine the coefficients is just (real or complex) calculus; and all that is left is to convince the audience of completeness, which is best done through a few examples as well as the time-honoured method of "proof by intimidation". (Note that the blog post does not do a better job here.)
By contrast, these posts seem to fulfil a demand where people look for an isolated explanation of concept X, like Fourier transforms, matrix determinants, monads, etc. But they are necessarily too isolated to really convey the subject. For example, one does not normally teach (complex) Fourier transforms without first dedicating significant time to topics like complex exponentials and integrals of trig functions, and likewise one normally teaches monads only after introducing functors and applicatives.
In other words, without having the necessary background at your fingertips it is hard to really grasp the core concepts. That is why, IMHO, reading these blog posts may feel good but will rarely make things really stick.
>I have taught these kind of undergraduate subjects and, in the context of a course, the Fourier transform never struck me as something very complicated.
It's great that it was "never complicated" for you. In contrast, the author (David Smith) of this blog post admits that he initially struggled with the Fourier Transform -- and wants to share some insights he gained after he understood it.
He has already graduated with a degree in statistics and computer science and also developed add-on software packages for R so he presumably first got exposed to FT within the _context_ of a college course. So he actually did what you suggested. Nevertheless, he wants to share why some supplementary material he didn't initially have might be helpful to others.
Alternative presentations of topics can sometimes flip the "light bulb" in some brains. I don't think it's a useless blog post.
> It's great that it was "never complicated" for you.
I don't think they meant "not complicated to learn", but "never complicated to teach, once its time had arrived in the natural progression of the course".
> It's great that it was "never complicated" for you.
Please stop making it sound like the commenter was bragging. That's just having bad faith in HN users.
The comment was about how the context and prior understanding of the underlying topics is a necessary prerequisite for understanding complex topics built on those and how blog learning is a necessarily contrived method of teaching.
It does not mean it won't work for some. If you happen to have the prior understanding of the prerequisites then it'd be a way to learn some additional insight, but you shouldn't assume the Venn diagram for that audience is much larger than a small % of the readers.
> If, like me, you struggled to understand the Fourier Transformation when you first learned about it
Nowhere does the author imply that he is teaching the Fourier Transform to someone who has never heard of it.
I agree with your overall point. Blog posts are not going to teach you difficult mathematics unless they are the length of college textbooks. But a blog post like this is great for reinforcement. If, like me, you are an ECE (Electrical and Computer Engineer) who has written software for the last 15 years, an explanation like the one in the post makes a lot of neurons reconnect.
There are a lot of math & physics subjects that I formally learned and passed exams on at uni, but frankly never really fully understood. I've found clarifications on many of these subjects later on the web, in exactly these types of blogs or YouTube videos. All you need is one or two core concepts better explained and suddenly you can connect all the dots, and that's what this type of content is fantastic for.
I think a lot of people have heard of the Fourier transform and kind of understand how any signal can be represented by a superposition of sine waves. Attempting to improve the intuition of how the algorithm works isn't a bad thing. The casual reader isn't going to invest the time to learn the pre-requisites so the alternative is nothing at all.
I mean, it's no replacement for learning something rigorously, but I still find little snippets like this helpful for building intuition. Frankly, I'm not really great at rigor myself, and for areas that aren't my areas of expertise (like signals & systems), intuition is all I really have to go off of. I took undergrad signals & systems years ago and probably could not say anything coherent about the Fourier transform today. Still had enough intuition kicking around somewhere to get the SNR of this ADC signal down via oversampling and an IIR filter, though. I remember churning through blog posts for things like PID loops and Kalman filters as a teen, and eventually got to the point where I kinda understood them. But then when it came time to actually learn them, having that small bit of background helped immensely, since I didn't have to build that basic intuition in the span of a single semester course.
What works for me is learning a single concept in many different ways. A textbook, a youtube video, a friend explaining it to me... Each next source of material (however casual or formal it is) is more likely to make the concept "click", or allow me to understand the concept more efficiently.
To that extent, this blog post seems valuable to me. The author is using a language and colors to convey an intuition that I have not precisely seen before.
This is why it's as rare as good Carolina barbecue to find a person self-educated in these topics who also truly understands the material. At least in my experience.
I do think "blog post learning" can work, but agree with you that this ain't it. The "explanation" on the blog post would really only work for someone who more-or-less understood Fourier Series already.
You're also right that explanations can't be isolated. I'd go further and say they never are. People only come to understand new things in terms of things they already understand. There's no "explaining X" in isolation, there's only "explaining X in terms of Y." A good explanation either supplies a suitable Y or makes it clear to the reader what Y they'll be relying on.
So, even if the author tries to isolate, a successful explanation always relies on collateral information. The only question is whether that collateral information is left to chance or intentionally exploited.
What's the "Y" here? Well, the author assumes the reader can make sense of phrases like "energy at a frequency", what it means to "spin your frequency around a circle", "average a bunch of points along a path", etc.
That's...a lot. I'd go so far as to say that a reader who can make sense of that probably already understands more fundamental concepts like vector spaces, bases, and Taylor Series.
And if they understand those things then there are much more straightforward explanations available!
I wish my university courses had started with the intuition behind and utility of the functions we were about to learn. Be it linear algebra, differential equations or Fourier transforms.
This is what all these blog posts excel at.
Of course I got it along the way, but I do believe it would have been easier if the background was something like this.
"I'm one, two, three, ..., n blog posts/YouTube videos away from fully grasping this complex mathematical concept, which not only requires fundamental knowledge I may not be familiar with, but also solving problems and writing code where at first I'll have no idea how it's supposed to be of any use, though over time it will eventually 'click'" is what it has become.
Beautiful animations and witty narrations are nice, but you're in for a (bad) surprise if you think they're adequate. Also, nothing beats the good old book -> pencil/paper approach.
It depends on what the intention is. Is it to promote comprehensive understanding, proof, and the ability to then apply the concept in all situations? Then no.
Is it just to point out something interesting and make a point? Sure. There's nothing wrong with that. It's OK to explore aspects of a subject without a dense canonical treatise. There's a time and place for all sorts of explanations.
I don't get the point. There really isn't anything particularly inherently special about blog posts as a medium versus, say, chapters in a textbook, other than the obvious things. You can, of course, learn about complex subject matter through a blog post. If you find someone with a similar mental model of things to your own, I think you can in fact learn from blog posts more easily, as they can get from zero to understanding with fewer detours.
On a similar note, I personally believe there are many concepts that have been explained much better by random YouTubers than academic textbooks. Textbooks are more comprehensive almost always, but having an intuitive understanding of something is obviously more important in many cases than simply remembering a long list of bullet points about it. You can always use reference material later. In real life, memorizing facts on their own is rarely useful.
The best answer here is that you should use both, or imo, "all three," of blog posts, video content, and academic textbooks/articles/courses. Taking a variety of approaches helps me get to a mental "click" quicker, personally.
You mean learning by writing a blog post, or learning by reading a blog post? In both cases, the answer is "yes".
> In other words, without having the necessary background at your fingertips it is hard to really grasp the core concepts. That is why, IMHO, reading these blog posts may feel good but will rarely make things really stick.
The problem these posts solve, at least for me, is that despite having the necessary background, some concepts didn't "click" in my mind when I first learned them. I found that if I then read a couple different posts on a specific concept, each with slightly different perspective or way of presenting it, one of them will eventually hit the "sweet spot" and make it "click" in context of all the background I already have (and often cascade into making me understand a few more things in the topical background).
I agree. Way back when I learned the FT, I happened to be taking a signals and statistics class in the same semester and rationalizing the Fourier transform as the correlation between a function and a sinusoid was an incredibly natural mental leap and didn’t require me to hold much additional knowledge in my head
I find them excellent if you’ve learned the technical details already in a book or by yourself. Blog posts often are high level and can give alternative perspectives on the same problem.
They’re also excellent to motivate a problem so the reader can find the technical details elsewhere.
I have a degree in physics, but have been a working programmer for 15 years. For me, intuition is the only thing that remains and that is slipping, too. This post is like visiting an old friend I haven’t talked to in as much time.
I feel like I've learned some things from high quality interactive blog posts. These posts from Bartosz Ciechanowski come to mind
https://ciechanow.ski/archives/
I would tend to agree. The only people I saw really struggle with learning Fourier series and the Fourier transform were people who had struggled with more basic concepts and failed to internalize them. This kind of explanation might be helpful to build someone's intuition up a bit but is not a substitute for a proper education on the subject and definitely not some magic key to understanding.
Disagree. If you truly understand something, you must have proven it to yourself; everything else is just a mental aid ("donkey bridge" in German) that helps you remember the statement. Whether it's right or not, you can only know once you've proven it, and before you do, it can often happen that your intuition about what's right is actually wrong.
I think you're confusing understanding with intuition. Which is fair; I'd say understanding builds to intuition as it's accessed more. The line is blurry at points.
This reminds me of an old joke in the Haskell community, where people who struggled to understand Monads would finally get it after a while, and would assume that whatever the last sentence they heard was the only necessary one for the explanation.
In that spirit, my pet "a monad is a monoid in the category of endofunctors, duh"-style one-line explanation of the Fourier transform is that it's just the decomposition in the common (Hilbert) eigenbasis for all translation operators. It makes it surprisingly clear (to some) why it is a both natural and important construction.
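For anyone who wants to unpack that one-liner, the computation behind it is short. Writing $T_a$ for the operator that translates a function by $a$:

```latex
(T_a f)(t) = f(t + a), \qquad f_\omega(t) = e^{i\omega t}

(T_a f_\omega)(t) = e^{i\omega(t + a)} = e^{i\omega a}\, e^{i\omega t}
                  = e^{i\omega a}\, f_\omega(t)
```

So every $f_\omega$ is a simultaneous eigenfunction of all the translations $T_a$, with eigenvalue $e^{i\omega a}$; expanding a signal in that common eigenbasis is exactly what the Fourier transform does.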
Same, but it's only natural after studying inner product vector spaces. Also being comfortable with some calculus is needed to be able to overlook the technicalities of this construction and focus on the actual idea.
I’ve understood monads multiple times in my life, but each time that understanding was so fragile that it crumbled when I tried explaining to someone else.
I’m currently in a phase where I don’t understand them.
A monad is a computational context, where the nature of that context is determined by two things: the shape of the data structure corresponding to it, and the definition of (>>=) which handles sequencing of two computations in that context.
Anything more specific than that should be handled case-by-case until you build an intuition for how any given monad will behave.
Understanding monads isn't particularly useful unless you also understand functors, which are a relatively much easier concept to understand long-term. They're really just functors with a few extra rules that make them a bit more useful.
It took me an embarrassingly long time to realize that it was a joke when people said "It's always the last place you look." Like well into my teens. But ever since I figured it out, I always look at least one more place after finding something.
I always thought the joke was that anybody who finally understands what a Monad is, at that precise moment, automatically loses the ability to explain it to others...
That was my experience studying statistical physics in Uni. For "some reason" the fourth book I read was the "only one" written in a way that made sense. No relation to the order I read them in, just written better.
This for sure. "I didn't understand the first 5 explanations but I did understand the sixth one, so that's the best one." Needs to be said that without the first 5, the sixth probably wouldn't have caught.
It's useful to test explanations on people unfamiliar with it. So in this spirit, I want to share: the sentence didn't explain it to me.
After watching 3Blue1Brown's video, I get it. So I can share with you what confused me:
It's this part of the sentence: "average a bunch of points along that path". I know what averaging is, but I don't know what averaging "along a path" means. Also, I was (wrongly) expecting the output of this function to be a real number, since I just expect "energy" to be a number. Instead, what we're doing here is averaging complex numbers, so the output is a complex number. What a complex energy means is left as an exercise to the reader.
Here's my suggestion for a sentence that would have worked better for me:
"To find the energy at a particular frequency, spin your signal around a circle at that frequency, and find the center of mass of the plot."
Presumably then we have to take the distance from the origin to get an amount of energy.
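For what it's worth, that rephrasing translates almost directly into numpy (a sketch with a made-up two-tone signal):

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)

def center_of_mass(freq):
    """Spin the signal around a circle at `freq` and return the
    center of mass of the resulting plot (a complex number)."""
    return np.mean(signal * np.exp(-2j * np.pi * freq * t))

# Distance from the origin measures the energy at that frequency;
# the angle is the phase, which is often discarded.
print(abs(center_of_mass(3)))   # ~0.5: strong component at 3 Hz
print(abs(center_of_mass(5)))   # ~0: nothing at 5 Hz
print(abs(center_of_mass(8)))   # ~0.25: weaker component at 8 Hz
```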
> I just expect "energy" to be a number. Instead, here what we're doing is averaging complex numbers so the output is a complex number. What a complex energy means is left as an exercise to the reader.
They shouldn't be using the term energy, really. That should be reserved for the squared magnitude of the complex number (so re(z)^2 + im(z)^2). You can write a complex number either as a real and imaginary part, or as an amplitude r (the distance you're referencing) and a phase (the rotation from r+0im).
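In code terms, the two notations and the energy convention look like this (a trivial numpy sketch with a made-up coefficient):

```python
import numpy as np

z = 3 + 4j                       # a made-up Fourier coefficient
energy = abs(z) ** 2             # re(z)^2 + im(z)^2
r, phase = abs(z), np.angle(z)   # amplitude-and-phase form of the same z

# Two notations for one complex number: r*exp(i*phase) reproduces z.
assert np.isclose(r * np.exp(1j * phase), z)
print(energy, r, phase)
```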
> "To find the energy at a particular frequency, spin your signal around a circle at that frequency, and find the center of mass of the plot."
As someone who's spent a lot of time with the subject, I like this better than the original.
https://kinder-chen.medium.com/denoising-data-with-fast-four...
"There are many different ways to write down the number 5. Tally marks, the sigil 5, 4+1, 2.5 times 2, 10/2. Each of them is more or less useful in different circumstances. The same is true of functions! There are many different ways to write the same function. The mechanics of taking a function in one form and writing it in a new form is called a transformation."
Above needs to be said a hundred times. Then you talk about why a frequency space function can be useful. Then you talk about how to transform a time space function to the frequency space representation.
Coupled with voidhorse's comment, it seems that "transform" obscures the concept, whereas "translation" would be more appropriate and clearer.
Unfortunately I can't remember where I saw this explanation. It might've been in the Scientist's and Engineers Guide to DSP book.
That's only going to make sense for someone who understands your jargon usage of "correlate", and who groks that sine waves of different frequencies are orthogonal under integration. That's precisely the hard part the linked explanation is trying to explain.
If you've already gotten that far then the DFT is just calculator math.
At the totally basic level, another thing that can really trip people up, and that a lot of texts are not clear about, is that the transform is about moving between representations; we're not actually transforming anything. Not enough maths texts take the time to explain how a representation connects up with the objects we wish to study, how there are multiple possible ways to represent them, and how, in some cases like Fourier, there are useful ways to "translate" between different representations.
The extra tidbit - the sine waveforms are orthogonal to each other (their cross-correlation is zero), so the transform can be inverted. (In math terms: they form a basis set. I'm mostly referring to the STFT here, which is the 'common' FFT use, not the infinite FFT.)
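That orthogonality is quick to demonstrate. A small sketch (frequency indices chosen arbitrarily): sampled sinusoids at distinct DFT frequencies have zero inner product, while a sinusoid against itself gives N/2 — which is exactly what makes the coefficients independently recoverable, i.e. the transform invertible.

```python
import numpy as np

N = 64
n = np.arange(N)
s = lambda k: np.sin(2 * np.pi * k * n / N)

print(np.isclose(np.dot(s(3), s(5)), 0.0))    # True: different frequencies cancel
print(np.isclose(np.dot(s(3), s(3)), N / 2))  # True: same frequency gives N/2
```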
You're going to need a longer sentence...
[Edit] If this sentence genuinely did what it said on the tin, the rest of the article - which gives a lot more detail - wouldn't be necessary.
I have taught these kinds of undergraduate subjects and, in the context of a course, the Fourier transform never struck me as something very complicated. First, it feels completely natural to write down a Fourier decomposition for periodic functions; the inverse transform to determine the coefficients is just (real or complex) calculus; and all that is left is to convince the audience of completeness, which is best done through a few examples as well as the time-honoured method of "proof by intimidation". (Note that the blog post does not do a better job here.)
By contrast, these posts seem to fulfil a demand where people look for an isolated explanation of concept X, like Fourier transforms, matrix determinants, monoids, etc. But they are necessarily too isolated to really convey the subject. For example, one does not normally teach (complex) Fourier transforms without first dedicating significant time to topics like complex exponentials and integrals of trig functions, and likewise one normally teaches monoids only after introducing functors and applicatives.
In other words, without having the necessary background at your fingertips it is hard to really grasp the core concepts. That is why, IMHO, reading these blog posts may feel good but will rarely make things really stick.
It's great that it was "never complicated" for you. In contrast, the author (David Smith) of this blog post admits that he initially struggled with the Fourier Transform -- and wants to share some insights he gained after he understood it.
He has already graduated with a degree in statistics and computer science and also developed add-on software packages for R so he presumably first got exposed to FT within the _context_ of a college course. So he actually did what you suggested. Nevertheless, he wants to share why some supplementary material he didn't initially have might be helpful to others.
Alternative presentations of topics can sometimes flip the "light bulb" in some brains. I don't think it's a useless blog post.
I don't think they meant "not complicated to learn", but "never complicated to teach, once its time had arrived in the natural progression of the course".
Please stop making it sound like the commenter was bragging. That's just having bad faith in HN users.
The comment was about how the context and prior understanding of the underlying topics is a necessary prerequisite for understanding complex topics built on those and how blog learning is a necessarily contrived method of teaching.
It does not mean it won't work for some. If you happen to have the prior understanding of the prerequisites then it'd be a way to learn some additional insight, but you shouldn't assume the Venn diagram for that audience is much larger than a small % of the readers.
> If, like me, you struggled to understand the Fourier Transformation when you first learned about it
Nowhere does the author imply that he is teaching the Fourier Transform to someone who has never heard of it.
I agree with your overall point. Blog posts are not going to teach you difficult mathematics unless they are the length of college textbooks. But a blog post like this is great for reinforcement. If, like me, you are an ECE (Electrical and Computer Engineer) who has written software for the last 15 years, an explanation like the one in the post makes a lot of neurons reconnect.
To that extent, this blog post seems valuable to me. The author is using a language and colors to convey an intuition that I have not precisely seen before.
You're also right that explanations can't be isolated. I'd go further and say they never are. People only come to understand new things in terms of things they already understand. There's no "explaining X" in isolation, there's only "explaining X in terms of Y." A good explanation either supplies a suitable Y or makes it clear to the reader what Y they'll be relying on.
So, even if the author tries to isolate, a successful explanation always relies on collateral information. The only question is whether that collateral information is left to chance or intentionally exploited.
What's the "Y" here? Well, the author assumes the reader can make sense of phrases like "energy at a frequency", what it means to "spin your frequency around a circle", "average a bunch of points along a path", etc.
That's...a lot. I'd go so far as to say that a reader who can make sense of that probably already understands more fundamental concepts like vector spaces, bases, and Taylor Series.
And if they understand those things then there are much more straightforward explanations available!
This is what all these blog posts excel at.
Of course I got it along the way, but I do believe it would have been easier if the background was something like this.
It's bothersome how popularized the notion has become. Beautiful animations and witty narrations are nice, but you're in for a (bad) surprise if you think they're adequate. Also, nothing beats the good old book -> pencil/paper.
It depends on what the intention is. Is it to promote comprehensive understanding, proof, and the ability to then apply the concept in all situations? Then no.
Is it just to point out something interesting and make a point? Sure. There's nothing wrong with that. It's OK to explore aspects of a subject without a dense canonical treatise. There's a time and place for all sorts of explanations.
On a similar note, I personally believe there are many concepts that have been explained much better by random YouTubers than academic textbooks. Textbooks are more comprehensive almost always, but having an intuitive understanding of something is obviously more important in many cases than simply remembering a long list of bullet points about it. You can always use reference material later. In real life, memorizing facts on their own is rarely useful.
The best answer here is that you should use both, or imo, "all three," of blog posts, video content, and academic textbooks/articles/courses. Taking a variety of approaches helps me get to a mental "click" quicker, personally.
You mean learning by writing a blog post, or learning by reading a blog post? In both cases, the answer is "yes".
> In other words, without having the necessary background at your fingertips it is hard to really grasp the core concepts. That is why, IMHO, reading these blog posts may feel good but will rarely make things really stick.
The problem these posts solve, at least for me, is that despite having the necessary background, some concepts didn't "click" in my mind when I first learned them. I found that if I then read a couple different posts on a specific concept, each with slightly different perspective or way of presenting it, one of them will eventually hit the "sweet spot" and make it "click" in context of all the background I already have (and often cascade into making me understand a few more things in the topical background).
They’re also excellent for motivating a problem so the reader can find the technical details elsewhere.
I’m currently in a phase where I don’t understand them.
Anything more specific than that should be handled case-by-case until you build an intuition for how any given monad will behave.
[1]: https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...
well, it’s so simple: they’re just burritos!
After watching 3Blue1Brown's video, I get it. So I can share with you what confused me:
It's this part of the sentence: "average a bunch of points along that path". I know what averaging is, but I don't know what averaging "along a path" means. Also, I was (wrongly) expecting the output of this function to be a real number; I just expect "energy" to be a number. Instead, what we're doing here is averaging complex numbers, so the output is a complex number. What a complex energy means is left as an exercise to the reader.
Here's my suggestion for a sentence that would have worked better for me:
"To find the energy at a particular frequency, spin your signal around a circle at that frequency, and find the center of mass of the plot."
Presumably then we have to take the distance from the origin to get an amount of energy.
They shouldn't be using the term energy, really. That should be reserved for the squared magnitude of the complex number (so re(z)^2 + im(z)^2). You can write a complex number either as a real and imaginary part, or as an amplitude r (the distance you're referencing) and a phase (the rotation from r + 0im).
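The "spin and find the center of mass" picture is only a few lines of numpy. A sketch (the function name, signal, and frequencies are mine): multiply the signal by a spinning complex exponential at a test frequency, average the resulting points to get the center of mass, and take the squared magnitude as the "energy".

```python
import numpy as np

def energy_at(signal, t, f):
    spun = signal * np.exp(-2j * np.pi * f * t)  # spin the signal around a circle
    center = spun.mean()                          # center of mass (a complex number)
    return np.abs(center) ** 2                    # energy: re^2 + im^2

t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.cos(2 * np.pi * 5 * t)                  # a pure 5 Hz tone

print(energy_at(sig, t, 5) > 0.2)   # True: center of mass sits away from origin
print(energy_at(sig, t, 3) < 1e-3)  # True: at the wrong frequency it all cancels
```

At the matching frequency the spun points all pile up on one side of the circle; at any other frequency they wrap around evenly and the center of mass falls back to the origin.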
> "To find the energy at a particular frequency, spin your signal around a circle at that frequency, and find the center of mass of the plot."
As someone who's spent a lot of time with the subject, I like this better than the original.