An Interactive Guide To The Fourier Transform (betterexplained.com)
218 points by lnmx on Dec 20, 2012 | 25 comments


Great article. I did Fourier Transforms in EE maths and could apply them but never properly understood them. Typical calculation versus understanding.

The greatest graphic in this entire article, if you ask me, is this one:

http://betterexplained.com/wp-content/uploads/images/Derived...

If all mathematics were explained like this, far fewer eyeballs would have been gouged out of their sockets.


Thanks. I love the conversion of equations into "math English" (that particular diagram is from Stuart Riffle).


Very good explanation; the smoothie-recipe decomposition is the approach I used when I taught this to students (I used drinks, but still). However, this example really only works for teaching the Fourier series, which should be the first step toward the FT anyway. If you want to understand this stuff, I think the order to approach it in is: the Fourier series, then the FT (ignoring many mathematical difficulties), then the discrete-time FT, which has its own quirks.


This line towards the end has a wonderful meta cleverness about it:

"The analogy is flawed, and that's ok: it's a raft to use, and leave behind once we cross the river."


I studied frequency domain/time domain when I was in eighth grade and high school, learning the technical aspects of how radios work. An oscilloscope was a big boost for developing this intuition, particularly when hooked to a microphone, where you could see how various sounds showed up and then how tones from the speaker of a radio looked. So was using radios day in, day out, and understanding what the spectrum of an AM radio signal looked like.

I am glad that this analogy works for many of you here, but I have always been deeply suspicious of using analogies to teach concepts. For one thing, there is always the moment of "OK, what I just told you and you just learned is not true in the following ways..." And the famous analogy between water in a pipe and electricity can actually be dangerous to a new student. In fact, I am sure that Fred remembers me railing that "all analogies are false." I have had very little success teaching complex topics using analogies.

Incidentally, the AM radio spectrum is quite easy to understand. It wasn't until an advanced signals course that I saw the math for the spectrum of an FM signal. Much more complicated.


I have desperately wanted to understand the Fourier Transform and signal processing for a long time. In high school I read every tutorial or book I could get my hands on, hoping that one would finally "click." The very first thing I did when I got to college was approach the CS faculty and ask for recommendations for literature that would help me understand it. My college didn't have a signals and systems class, but a few years later I heard that Richard Lyons's book "Understanding Digital Signal Processing" was supposed to be the friendliest book on the subject so I bought and read it.

I say all of this just to impress upon you that I was serious about wanting to understand this stuff.

Despite all the effort, the FFT and DFT were never more than opaque blobs of math to me, mysterious boxes that you put inputs into and could extract outputs from (albeit in a very obscure format that seemed to involve complex numbers for no apparent reason). I sort of gave up and forgot about it for several years.

Recently I came upon Stuart Riffle's article (http://www.altdevblogaday.com/2011/05/17/understanding-the-f...) which explained it in terms of the circular approach that this article adopts. Rotate the waveform about the origin at the frequency of interest and average the samples, simple as that.
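
That one-line summary translates almost directly into code. Here's a minimal sketch of the rotate-and-average idea (my own toy Python/numpy snippet, not from either article; the function and variable names are just illustrative):

    # "Rotate and average": for each frequency k, spin the samples around the
    # origin at k cycles per record and average the rotated points; that average
    # is the k-th DFT bin (up to a factor of N).
    import numpy as np

    def dft_bin(samples, k):
        n = np.arange(len(samples))
        rotated = samples * np.exp(-2j * np.pi * k * n / len(samples))
        return rotated.mean()

    x = np.array([1.0, 0.0, -1.0, 0.0])                      # a 4-sample cosine
    print([round(abs(dft_bin(x, k)), 3) for k in range(4)])  # strength shows up at k = 1 and 3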

I am not exaggerating when I say that after 10 minutes of reading that article, I had an understanding that years of highly motivated knowledge-seeking had not given me.

I take a few things from this story. One is that, given a specific learner (i.e., me), some ways of explaining things are infinitely better than others. I mean that literally, because this new, circle-based explanation gave me an understanding that I literally was not capable of achieving with all of the symbolic and algebraic explanations I had studied previously. Judging from other comments I've read, I'm not alone in this, which says to me that there could be exciting advances ahead of us in the way we learn. Just as the Khan Academy is making learning more accessible, I hope that it could also help discover and widely disseminate the best explanations for things. Sal Khan is very good at explaining things but we can't expect him to think of everything. This fantastic DFT explanation was created by a random systems/graphics programmer; I hope it will percolate into DFT curricula until everyone who is learning about the subject is at least exposed to it.

My other takeaway, though, is that hard-core math people really think in a fundamentally different way than I do. I'm a highly intuitive thinker, and formulas are a sea of meaningless symbols to me without an intuitive understanding of what is going on. That someone could understand the DFT without thinking in terms of the circular interpretation is amazing to me. I now know that I am indeed capable of understanding the concept, but only by thinking about it in a different way than most math people do. I suspect, however, that my intuitive way of thinking about it would be more difficult to formalize and make rigorous, so in the end I am dependent on the mathematicians and their way of thinking, even if I can't as easily understand things in their terms.

(One other example of this: I think calculus is simpler to understand in terms of infinitesimals rather than limits, but this is another example where the infinitesimals are more difficult to make rigorous).


(Author here)

I totally agree. There are certain explanations out there which have orders-of-magnitude differences in ease of understanding (seeing i as a rotation, seeing radians as the "mover's perspective", seeing integrals as "better multiplication"). My personal mission is finding these aha! moments which unravel years of confusing symbol manipulation. Calculus is definitely more easily understood with infinitesimals vs. limits (ask any physics major or engineer).

Personally, I'm looking forward to a world where the very best explanations / analogies can bubble to the top. It's ridiculous that 200+ years after Calculus was invented, we still teach it poorly, and nearly everyone struggles.

Rigor & Intuition have a delicate balance in math. I see it as language: children can speak fluently, even if they don't know the "rigorous" rules of grammar & spelling [which are left to linguists]. I suspect the reason most adults have trouble learning languages is because they try to start from rigor (vocab lists and grammar structure) vs. absorbing an intuitive notion of what's going on (and later refining with rigor, "me want food" => "I want food").


>Personally, I'm looking forward to a world where the very best explanations / analogies can bubble to the top.

Please do keep in mind that there isn't one best explanation per subject, just a "local maximum" explanation that is best to a class of people sharing a similar way of thinking. Personally, the normal explanation of "representing (approximating) a complex cyclic function as a linear combination of complex trigonometric functions" made the most sense, while your explanation reads like a convoluted mess of analogies. Therefore, try to have a few completely different explanations per topic, rather than just the one which makes most intuitive sense to you.


Definitely, appreciate the feedback here. It's easiest to share the explanations that come to me, but I love finding a few different ways to look at things, and as they emerge I like to include them.

As a simple example, here's the formula for adding the numbers 1...n:

http://betterexplained.com/articles/techniques-for-adding-th...

The explanation I was originally given (pair the first and last items, and count the pairs) seems gnarly because you have even/odd issues, off-by-one errors, etc. There are others which click better.
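
To make the pairing concrete (a quick worked case of my own, not from the article): for n = 10, group (1+10) + (2+9) + ... + (5+6) = 5 pairs of 11 = 55, which is n(n+1)/2 = 10*11/2. For an odd n like 9, the middle term 5 is left unpaired ((1+9) + (2+8) + (3+7) + (4+6) + 5 = 45), which is exactly where the even/odd and off-by-one worries creep in.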


I agree here. The solution for me on not quite getting concepts in calculus and engineering was to delve into greater abstraction. When you get linear algebra, you understand the inverse function theorem in analysis. When you understand Hilbert spaces, the Fourier series makes a lot more sense.

A lot of the simple analogies and explanations I can now use to talk about determinants or Fourier transforms, I can only conjure because the more complex study took years to soak through.


"Calculus is definitely more easily understood with infinitesimals vs. limits (ask any physics major or engineer)."

I don't think this is true, and suffers from the same fallacy as the opposite statement that limits are easier to understand than infinitesimals.

For example, I've always found limits very easy to understand and infinitesimals confusing, even before I was asked to work with them rigorously using epsilons and deltas. I'm also a math major. In the physics classes I took, I sucked at mechanics (like, almost-failed sucked) but loved relativity and QM.

Talk to me in terms of Lie algebras and invariant subgroups and I'll be ten steps ahead of you. I realize that's not usual, I'm just pointing out that there's no absolute "easiest way" to understand something.

Warning: personal theories of learning and pedagogy ahead, stated with more certainty than warranted.

I think a key to understanding is presenting the same idea using multiple models/representations. The learner already has some picture of the idea you're trying to explain in their head. The picture might be confused and poorly formed, but there's some version of it nonetheless.

I'd say, for that learner the "easiest way" to understand something is to find an accurate picture of the idea you want to explain and relate it to an idea the learner already understands clearly.

I have a picture I want to put into your head. Your version of that picture is fuzzy and confused. I need to relate the clear picture I want to give you to clear pictures of other ideas you already have in your head.

Ceci n'est pas une pipe, the map is not the terrain, the signifier is not the signified, etc.

http://www.brandonbird.com/signifier_signified.html


Great article! I'll say it is very easy to follow until the spike part. From there it is possible to follow what you are explaining, but not as easily as before. Maybe it's because you are introducing notation that is not clear yet, I don't know. But keep at it, really great work!


Appreciate the feedback! Yep, that transition from analogy -> math can get bumpy. Over time I'll keep getting smoother :).


I am excited to hear that your entire site is built around this mission of intuitive explanations for things! I'll have to bookmark it and read more.


> My other takeaway, though, is that hard-core math people really think in a fundamentally different way than I do.

Among other things, when somebody says something is "literally" infinitely better than something else, we compulsively start trying to figure out what that could mean and whether it is true, and then we feel foolish and stop :-)

> I'm a highly intuitive thinker, and formulas are a sea of meaningless symbols to me without an intuitive understanding of what is going on.

This is true of "hard-core math people," too, but studying math adds new intuitive concepts on top of the spatial and physical intuitions you already know how to apply. For me, compactness was the first mathematical intuition I developed, because real analysis was the first abstract mathematics that I studied in depth. I developed a feeling for when compactness played a role, a feeling that was much simpler than any way of defining or describing compactness. I think it's the same phenomenon as a cook knowing when to take a pan off the heat or a basketball player knowing when his opponent intends to shoot instead of pass -- the experienced mind provides hints to the rational mind via feelings and intuitions. The math books you were reading probably appealed to intuition that is developed elsewhere in the undergraduate math curriculum (possibly linear algebra.)

tl;dr A book that depends on relatively elementary facts may appear (and claim) to be accessible to someone with a modest mathematical background, despite requiring intuitions that must be developed via more advanced study.


> Among other things, when somebody says something is "literally" infinitely better than something else, we compulsively start trying to figure out what that could mean and whether it is true

It's funny that you mention it, because this may be a great example of something that to me appears to be intuitively meaningful, but may not actually be.

Here is what I meant by it: one way you could compare different explanations of a concept is to compare the amount of time and effort it takes the learner to understand it. If one explanation takes 4 hours of study (or 4 problem sets, or 4 lectures of listening, etc) to give understanding compared with another that takes only 2, you could say that the latter explanation is twice as good.

If you accept this model, then an explanation that does not give understanding even after an arbitrarily large amount of study is "infinitely" worse than one that does, in the sense that the ratio is arbitrarily large. You could use the reciprocal formulation instead and get a division by zero.

Is this rigorous? I dunno, it seems reasonable to me. :)

> The math books you were reading probably appealed to intuition that is developed elsewhere in the undergraduate math curriculum (possibly linear algebra.)

Maybe, though I did study linear algebra (coincidentally without getting a great intuition for it either; the shear mapping graphic on this page blew my mind when I first saw it, since I had computed tons of eigenvalues before without having any idea it corresponded to a geometrical concept like this: http://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors).
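
For anyone else who hasn't seen that graphic: the idea it illustrates is easy to reproduce numerically (a toy check of my own, not taken from the Wikipedia page):

    # A shear matrix and its eigenvector (Python/numpy): the horizontal direction
    # is mapped to itself (eigenvalue 1), which is the geometric picture the
    # shear-mapping graphic illustrates.
    import numpy as np

    shear = np.array([[1.0, 1.0],
                      [0.0, 1.0]])
    values, vectors = np.linalg.eig(shear)
    print(values)                          # [1. 1.]  (a repeated eigenvalue)
    print(vectors[:, 0])                   # [1. 0.]  the x-direction
    print(shear @ np.array([1.0, 0.0]))    # [1. 0.]  left unchanged by the shear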


It's the zero part that bothered me. The difference between zero and a very very very small quantity is not a big deal until it turns a finite number into something that is not a finite number, and then it's an infinitely big deal. So my brain got distracted wondering "is it really zero?" until I reminded myself that the important thing is that I knew what you meant :-)

There are two schools of thought in teaching linear algebra. One focuses on matrices, and the other takes the perspective of linear maps and vector spaces. The course I took in college was all about matrices, and I didn't understand the point at all. I hated it. When I reviewed my linear algebra for grad school, I got a book that took the abstract approach, and it felt a lot simpler.

The linear algebra perspective on Fourier analysis is that the functions e^2πisx form a basis for a vector space of functions, just like (1, 0, 0), (0, 1, 0), and (0, 0, 1) form a basis for R^3. Any function in that vector space can be represented as a linear combination of the basis elements. That representation is the Fourier series of the function. There are a lot of technical details to figure out, such as which functions are in the space, exactly how to calculate the coefficients of the linear combination, and how to figure out if a given Fourier series converges, but intuitively you can say:

"The Fourier transform is simply a method of expressing a function (which is a point in some infinite dimensional vector space of functions) in terms of the sum of its projections onto a set of basis functions.[1]"

There's a similar description on Wikipedia with more detail [2].

The neat thing is that even though Fourier transforms are a complicated subject, even though I barely scraped by learning the basics fifteen years ago, and even though I couldn't do any real calculations today to save my life, this way of looking at it is so simple that I can't forget it. When I look at the equations I am quickly oriented: the series is a linear combination of functions, the functions are an orthogonal basis of a vector space, and the coefficients of the linear combination are obtained by projecting the function onto the elements of the basis. It's a good place to start if I ever need to learn something about Fourier transforms again someday. It's also a good complement to the concrete spatiotemporal intuition that the article provides.

[1] http://undergraduate.csse.uwa.edu.au/units/CITS4240/Lectures... [2] http://en.wikipedia.org/wiki/Hilbert_space#Fourier_analysis
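
If it helps to see the projection numerically, here's a rough sketch (my own toy Python/numpy code, with the inner product taken as the average of f times the conjugate basis function over one period):

    # Fourier coefficients as projections onto the basis functions e^{2*pi*i*s*x}.
    # The coefficient for frequency s is the inner product of f with that basis
    # function, approximated here by averaging over equally spaced sample points.
    import numpy as np

    N = 1024
    x = np.arange(N) / N                          # one period, [0, 1)
    f = 3.0 * np.cos(2 * np.pi * 2 * x) + 1.0     # lives in the span of s = 0 and s = +/-2

    def coefficient(f, s):
        basis = np.exp(2j * np.pi * s * x)
        return np.mean(f * np.conj(basis))        # projection of f onto the basis function

    for s in range(-3, 4):
        print(s, np.round(coefficient(f, s), 3))
    # s = 0 gives ~1, s = +/-2 give ~1.5 each (3*cos = 1.5*e_2 + 1.5*e_{-2}); the rest are ~0.

The same picture shrinks down to the DFT: a length-N signal is a vector in C^N, and the N sampled exponentials form an orthogonal basis for it.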


Completely agree with you here, just want to add that if anyone is looking for a rigorous approach to infinitesimal calculus, check out this book: https://en.wikipedia.org/wiki/Elementary_Calculus:_An_Infini...

And more generally, look in to "non-standard analysis": https://en.wikipedia.org/wiki/Non-standard_analysis


Ended up playing with the animation for a while. This is probably nothing new, but I just thought I'd share a funny quirk I discovered. If you plug in the Fibonacci sequence for the time, you get symmetrical strengths and phases (except for the first term):

For (0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610) you get:

99.74 77.77:51.4 53.46:82.8 39.97:104.3 32.41:121.8 27.98:137.3 25.39:152 24.03:166.2 23.62:180 24.03:-166.2 25.39:-152 27.98:-137.3 32.41:-121.8 39.97:-104.3 53.46:-82.8 77.77:-51.4

Formatting magic:

        99.74
    1 - 77.78    51.4
    2 - 53.46    82.8
    3 - 39.97   104.3
    4 - 32.42   121.8
    5 - 27.98   137.3
    6 - 25.39   152
    7 - 24.03   166.2
    8 - 23.63   180
    7 - 24.03  -166.2
    6 - 25.39  -152
    5 - 27.98  -137.3
    4 - 32.42  -121.8
    3 - 39.97  -104.3
    2 - 53.46   -82.8
    1 - 77.78   -51.4

It works for all sequence lengths I've tried, although the app starts rounding off once you get into the hundreds... not that it matters in this case. I couldn't find any relationship between the first constant and the pairs, nor any relationship among the ratios of the pairs. Just something interesting.


Without wanting to quell any enthusiasm (or cause too much confusion), it's probably worth pointing out that any real-valued time-series input will result in a similarly symmetric pattern. I can't explain why that's true in a way that makes sense with the analogy used here, but it is.


Awesome, glad you were able to explore! As the other reply mentioned, it actually turns out that all real (1d) signals are symmetric.

The key reason is that for a circular path to stay on the real axis, it needs to be combined with another path rotating the opposite way, so that their sum stays on the x-axis.

The "opposite" rotation can be a negative frequency (1Hz vs -1Hz) or a very fast positive frequency (if you have a 12-hour clockface, 1 hour backwards = 11 hours forward, aka -1 (mod 12) = 11 (mod 12)). I'd like to explore this in the follow-up, glad you discovered it.


This is indeed an awesome explanation. And I'm regularly shocked when people do not explain the Fourier transform as a generalization of the Fourier series.

This makes it quite intuitive to me (well, at least as far as FT can be intuitive ...)


This amuses me. My friend and I spent part of our time at a bar trying to explain Fourier transforms to her philosophy-major boyfriend. My explanation of how optical trapping works, using Snell's law and light momentum, went much more smoothly.


> music recognition services compare recipes, not individual drops

What does this mean? I understand 'recipes' here, but what's 'drop' in this context? A continuation of the food metaphor?


I should probably change that. In this case, a "drop" would be something like a single second of audio.

Instead of saying "Does this single second of audio show up in other songs?" we should ask "Do the frequency components in this song show up in other songs?" (similar ratios of bass, treble, etc.)
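
As a toy illustration of that shift in the question (my own sketch; real fingerprinting services are far more involved than this): a clip that starts at a different moment barely matches the original drop-for-drop, but its magnitude spectrum, its "recipe", is unchanged.

    # "Compare drops" vs. "compare recipes" (Python/numpy): a time-shifted copy of a
    # signal matches poorly sample-by-sample, but its magnitude spectrum is identical.
    import numpy as np

    t = np.arange(256) / 256.0
    clip = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
    shifted = np.roll(clip, 64)                       # the same "song", started later

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(clip, shifted))                                      # ~0: raw samples ("drops") disagree
    print(cosine(abs(np.fft.rfft(clip)), abs(np.fft.rfft(shifted))))  # ~1: spectra ("recipes") agree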



