Derivation of the Fast Fourier Transform

| math

Step-by-step derivation of the fast Fourier transform that should be understandable without pen-and-paper calculations on the side, following Opfer (2008).

Trigonometric polynomials

A trigonometric polynomial of degree \(N\) is defined as the following function: $$ t_N(x) = \frac{a_0}{2} + \sum_{j=1}^N \left( a_j \cos(jx) + b_j \sin(jx) \right),\quad x \in [0, 2\pi[, $$ where \(N, a_0, a_1, \ldots, a_N, b_1, b_2, \ldots, b_N\) are given. For our purposes, the coefficients \(a_j\) and \(b_j\) are real numbers, so we are looking at real trigonometric polynomials.

Furthermore, we only care about the discrete inputs $$ x_k = \frac{2\pi k}{N} $$ at the \(N\) given values \(k=0,1,\ldots,N-1,\) a special case called a discrete Fourier polynomial. Naively evaluating this expression requires \(2N\) real multiplications for each \(k\), so \(2N^2\) in total, in addition to the evaluations of the trigonometric functions. We are looking for a way to evaluate the same polynomial with less effort.
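To make the starting point concrete, here is a minimal sketch of the naive evaluation in Python. The coefficient layout (`a = [a_0, ..., a_N]`, `b = [b_1, ..., b_N]`) is my own choice, not something from the derivation:

```python
import math

def t_naive(a, b, k):
    """Naively evaluate t_N at x_k = 2*pi*k/N.

    Coefficient layout (an assumption): a = [a_0, a_1, ..., a_N],
    b = [b_1, ..., b_N]. Each of the N terms costs two real
    multiplications by a coefficient, so 2N per point.
    """
    N = len(b)
    x_k = 2 * math.pi * k / N
    total = a[0] / 2
    for j in range(1, N + 1):
        total += a[j] * math.cos(j * x_k) + b[j - 1] * math.sin(j * x_k)
    return total
```

Evaluating at all \(N\) points with this function is exactly the quadratic cost we want to beat.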

Complex numbers and Euler's formula

In a first step, we complicate our expression a little bit by lifting it up to the realm of the complex numbers, which will allow us to apply Euler's formula. Remember that the complex numbers form a field just like the reals, so associativity and distributivity still hold, and with \(\gamma_j = a_j - ib_j\) and Euler's formula \(e^{i\phi} = \cos \phi + i \sin \phi\) we get $$ \begin{align*} \gamma_j e^{i j x_k} &= (a_j - ib_j) (\cos(j x_k) + i \sin(j x_k))\\ &= a_j \cos(j x_k) + i a_j \sin(j x_k) - i b_j \cos(j x_k) - i^2 b_j \sin(j x_k)\\ &= a_j \cos(j x_k) + b_j \sin(j x_k) + i (a_j \sin(j x_k) - b_j \cos(j x_k)) \end{align*} $$

This multiplication introduced an imaginary part, which we will simply discard since it was not there before. The real part is exactly what we had in our trigonometric polynomial definition. $$ t_N(x_k) = \frac{a_0}{2} + \Re \left\{ \sum_{j=1}^N \gamma_j e^{i j x_k} \right\} $$

We know that \(\Re z = \frac{1}{2} (z + \bar{z}),\) because the imaginary part of \(z\) and its conjugate cancel out and the real part is just doubled and halved. $$ t_N(x_k) = \frac{a_0}{2} + \frac{1}{2} \left\{ \sum_{j=1}^N \gamma_j e^{i j x_k} + \overline{\sum_{j=1}^N \gamma_j e^{i j x_k}} \right\} $$

Now we're getting a bit creative. $$ e^{i N x_k} = e^{i 2 \pi k} = 1 $$ Since \(e^{i0}\) is one and our function is \(2\pi\) periodic, \(e^{i 2 \pi k}\) must also be one. We could also verify this logic using Euler's formula and evaluating sine and cosine. Since this expression is one, we can introduce it as a factor without changing anything, so we'll do just that. Seems random, but what this does will become clear soon. $$ \begin{align*} t_N(x_k) &= \frac{a_0}{2} + \frac{1}{2} \left\{ \sum_{j=1}^N \gamma_j e^{i j x_k} + e^{i N x_k} \sum_{j=1}^N \overline{\gamma_j} e^{-i j x_k} \right\}\\ &= \frac{a_0}{2} + \frac{1}{2} \left\{ \sum_{j=1}^N \gamma_j e^{i j x_k} + \sum_{j=1}^N \overline{\gamma_j} e^{i (N - j) x_k} \right\}\\ \end{align*} $$

Repackaging the constants into coefficients, we get $$ t_N(x_k) = \sum_{j=0}^{N-1} c_j e^{i j x_k} $$ for \(k = 0, 1, \ldots, N-1,\) with \(c_0 = \frac{a_0}{2} + a_N\) and \(c_j = \frac{\gamma_j + \overline{\gamma_{N-j}}}{2}\) for \(j = 1, \ldots, N-1.\) Note that for \(j = N\) we have \(\gamma_N e^{i N x_k} + \overline{\gamma_N} e^{i 0 x_k} = \gamma_N + \overline{\gamma_N} = 2 a_N,\) which after the factor \(\frac{1}{2}\) becomes the \(a_N\) in \(c_0,\) so the sum runs from \(0\) to \(N-1.\)
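This repackaging is easy to get wrong off by one, so here is a small numerical sanity check in Python (the list layout for the real coefficients is again my own assumption):

```python
import cmath
import math

def coeffs_c(a, b):
    """Repackage real coefficients a_0..a_N, b_1..b_N into c_0..c_{N-1}."""
    N = len(b)
    gamma = [a[j] - 1j * b[j - 1] for j in range(1, N + 1)]  # gamma_1 .. gamma_N
    c = [a[0] / 2 + a[N]]                                    # c_0 = a_0/2 + a_N
    for j in range(1, N):
        c.append((gamma[j - 1] + gamma[N - j - 1].conjugate()) / 2)
    return c

def t_via_c(a, b, k):
    """Evaluate t_N(x_k) as the complex sum over c_j e^(i j x_k)."""
    c = coeffs_c(a, b)
    N = len(c)
    x_k = 2 * math.pi * k / N
    return sum(c[j] * cmath.exp(1j * j * x_k) for j in range(N))
```

At every grid point \(x_k\) the result agrees with the real trigonometric polynomial, and its imaginary part vanishes up to rounding, just as the derivation promises.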

For our convenience, we introduce another notation: $$ w_N = e^{\frac{2\pi i }{N}} $$ This is a generator of the cyclic group of the Nth roots of unity, which are \(N\) equally spaced points on the unit circle. And with this we have $$ t_N(x_k) = \sum_{j=0}^{N-1} c_j w_N^{jk} $$ with \(k\) as above.
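A quick check of the generator property, for (my arbitrarily chosen) \(N = 8\):

```python
import cmath
import math

N = 8
w_N = cmath.exp(2j * math.pi / N)       # the generator w_N
roots = [w_N ** k for k in range(N)]    # all N-th roots of unity

# every power lies on the unit circle ...
assert all(abs(abs(z) - 1.0) < 1e-12 for z in roots)
# ... the cycle closes after exactly N steps ...
assert abs(w_N ** N - 1.0) < 1e-12
# ... and halfway around we pass through -1
assert abs(roots[N // 2] + 1.0) < 1e-12
```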

Divide and conquer

The next idea is to divide odd and even \(k\) into separate sums, each with \(M = \frac{N}{2}\) terms. For this we'll simply assume that \(N\) is an even number, making our lives easy. We start with the even values \(k = 2l\) for some \(l.\) It's all relatively straightforward calculation, but you probably need to zoom in to read the tiny exponents. $$ \begin{align*} t_N(x_{2l}) &= \sum_{j=0}^{N-1} c_j e^{\frac{2\pi i j 2 l}{N}}\\ &= \sum_{j=0}^{M-1} c_j e^{\frac{2\pi i j \cancel{2} l}{\cancel{2} M}} + \sum_{j=0}^{M-1} c_{j+M} e^{\frac{2 \pi i (j+M) \cancel{2} l}{\cancel{2} M}}\\ &= \sum_{j=0}^{M-1} c_j w_M^{jl} + \sum_{j=0}^{M-1} c_{j+M} e^{\frac{2\pi i jl}{M}} e^{\frac{2\pi i \cancel{M} l}{\cancel{M}}}\\ &= \sum_{j=0}^{M-1} c_j w_M^{jl} + \sum_{j=0}^{M-1} c_{j+M} w_M^{jl}\\ &= \sum_{j=0}^{M-1} (c_j + c_{j+M}) w_M^{jl} \end{align*} $$ We start with the definition, which we rewrite into two sums with indices running from \(0\) to \(M-1.\) Some things cancel out and we again use the fact that \(e^{i 2 \pi l} = 1.\) In the end we managed to cut the number of operations we need to evaluate the sum for some \(x_{2l}\) in half, ignoring \(c_j + c_{j+M}\) as it's just a constant that can be calculated in advance, independent of \(x_{2l}\).
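The even-index identity can be verified numerically in a few lines of Python (function names are mine):

```python
import cmath
import math

def halved_even(c, l):
    """t_N(x_{2l}) via the length-M sum with coefficients c_j + c_{j+M}."""
    M = len(c) // 2
    w_M = cmath.exp(2j * math.pi / M)
    return sum((c[j] + c[j + M]) * w_M ** (j * l) for j in range(M))

def full_sum(c, k):
    """The original length-N sum over c_j w_N^(jk), for comparison."""
    N = len(c)
    w_N = cmath.exp(2j * math.pi / N)
    return sum(c[j] * w_N ** (j * k) for j in range(N))
```

For any coefficient list of even length, `halved_even(c, l)` matches `full_sum(c, 2 * l)` up to rounding, at half the number of multiplications.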

Analogously, for odd values \(k = 2l + 1.\) $$ \begin{align*} t_N(x_{2l + 1}) &= \sum_{j=0}^{N-1} c_j e^{\frac{2 \pi i j (2l + 1)}{N}}\\ &= \sum_{j=0}^{M-1} c_j e^{\frac{2 \pi i j (2l + 1)}{2M}} + \sum_{j=0}^{M-1} c_{j+M} e^{\frac{2 \pi i (j+M) (2l + 1)}{2M}}\\ &= \sum_{j=0}^{M-1} c_j e^{\frac{2\pi i j \cancel{2} l}{\cancel{2} M}} e^{\frac{2\pi i j}{2M}} + \sum_{j=0}^{M-1} c_{j+M} e^{\frac{2 \pi i (j+M) (2l +1)}{2M}}\\ &= \sum_{j=0}^{M-1} c_j w_M^{jl} w_N^j + \sum_{j=0}^{M-1} c_{j+M} e^{\frac{2 \pi i (j+M) (2l +1)}{2M}} \end{align*} $$ This one is a little bit more involved, so let's zoom in on the last exponent in a little sub-calculation. $$ \begin{align*} \frac{2 \pi i (j+M)(2l + 1)}{2M} &= \frac{2\pi i (j+M) 2l}{2M} + \frac{2\pi i (j+M)}{2M}\\ &= \frac{2 \pi i \cancel{2} l j}{\cancel{2} M} + \frac{2 \pi i \cancel{2} l \cancel{M}}{\cancel{2M}} + \frac{2\pi i j}{2M} + \frac{2\pi i M}{2M}\\ \end{align*} $$

If we raise \(e\) to the power of this expression, we get the following terms. $$ w_M^{lj} \cdot 1 \cdot w_N^j w_N^M = - w_M^{lj} w_N^j $$ where \(w_N^M = e^{\frac{\cancel{2} \pi i \cancel{M}}{\cancel{2M}}} = e^{i \pi} = -1,\) which is known as Euler's identity (and is just a special case of Euler's formula). So for odd \(k\) we get the following sum. $$ t_N(x_{2l + 1}) = \sum_{j=0}^{M-1} (c_j - c_{j+M}) w_M^{jl} w_N^j $$
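The odd-index formula can be checked the same way as the even one (again, the function name is mine):

```python
import cmath
import math

def halved_odd(c, l):
    """t_N(x_{2l+1}) via the length-M sum over (c_j - c_{j+M}) w_N^j w_M^(jl)."""
    N = len(c)
    M = N // 2
    w_N = cmath.exp(2j * math.pi / N)
    w_M = cmath.exp(2j * math.pi / M)
    return sum((c[j] - c[j + M]) * w_N ** j * w_M ** (j * l) for j in range(M))
```

Comparing against the full length-\(N\) sum at the points \(x_{2l+1}\) confirms the sign flip introduced by \(w_N^M = -1.\)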

A little more cosmetics: we define new coefficients \(d_j = c_j + c_{M+j}\) and \(d_{j+M} = (c_j - c_{M+j}) w_N^j\) for \(j = 0, 1, \ldots, M-1\) and get the following two sums. $$ \begin{align*} t_N(x_{2l}) &= \sum_{j=0}^{M-1} d_j w_M^{jl}\\ t_N(x_{2l +1}) &= \sum_{j=0}^{M-1} d_{j+M} w_M^{jl} \end{align*} $$ Formally, these look like the sum we started with, only with \(M\) instead of \(N\) terms. To compute the new \(d\) coefficients, we need \(M-1\) complex multiplications, not counting the factor \(w_N^0 = 1.\) If \(N\) happens to be a power of two \((2^n = N = 2M)\), we can apply this procedure recursively until each sum has only one term \(w_1 = 1,\) so the evaluation is based only on the coefficients.
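The whole recursion fits in a few lines of Python. One caveat: the text uses \(w_N = e^{2\pi i/N}\) with a positive exponent, which is the opposite sign convention of most FFT libraries, so this sketch matches the derivation here, not, say, `numpy.fft.fft`.

```python
import cmath
import math

def fft(c):
    """Evaluate t_N at all N points x_k, where N = len(c) is a power of two.

    Returns [t(x_0), ..., t(x_{N-1})] for t(x_k) = sum_j c[j] * w_N**(j*k),
    using the d-coefficient recursion from the text.
    """
    N = len(c)
    if N == 1:
        return list(c)             # w_1 = 1: the value is just the coefficient
    M = N // 2
    w_N = cmath.exp(2j * math.pi / N)
    d_even = [c[j] + c[j + M] for j in range(M)]
    d_odd = [(c[j] - c[j + M]) * w_N ** j for j in range(M)]
    values = [0j] * N
    values[0::2] = fft(d_even)     # t at the even points x_{2l}
    values[1::2] = fft(d_odd)      # t at the odd points x_{2l+1}
    return values
```

With the sign convention above, the result agrees with the naive evaluation of \(\sum_j c_j w_N^{jk}\) at every \(k\) (and, up to the normalization factor \(N\), with `numpy.fft.ifft`).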

Complexity

The number of sums doubles with each step while the number of terms in each sum halves, and multiplications are only needed for the odd-half coefficients, except the one with the factor \(w_N^0 = 1\). So the number of multiplications is: $$ \begin{align*} \sum_{i=0}^{n-2} 2^i \left( \frac{M}{2^i} - 1 \right) &= \sum_{i=0}^{n-2} (M - 2^i)\\ &= (n-1) M - \sum_{i=0}^{n-2} 2^i\\ &= (n-1) M - (2^{n-1} - 1)\\ &= (n-2) M + 1 \end{align*} $$ For the last step, keep in mind that \(M = 2^{n-1}\) by our definition. On top of that we need \(nN\) additions or subtractions. The naive evaluation took a number of real multiplications that was quadratic in \(N;\) now we're looking at something that is \(O(N \log N),\) so we definitely got somewhere with all of this.
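The closed form is easy to cross-check against the level-by-level count:

```python
def fft_mults(n):
    """Complex multiplications used by the recursion for N = 2**n.

    At level i there are 2**i sums, each contributing M/2**i - 1
    multiplications (the odd-half twiddle factors, minus the trivial
    w**0 factor), down to the level where the sums have one term.
    """
    M = 2 ** (n - 1)
    return sum(2 ** i * (M // 2 ** i - 1) for i in range(n - 1))
```

For every \(n\) this agrees with \((n-2)M + 1,\) e.g. \(n = 3\) (so \(N = 8\)) gives \(5\) multiplications.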


Opfer, Gerhard (2008). Numerische Mathematik für Anfänger, 5th edition. Vieweg+Teubner.