Thayer Watkins
Silicon Valley & Tornado Alley, USA
One of the most important theorems of mathematical analysis establishes that an arbitrary function f(x) which is periodic over the interval [0, 2π] can, subject to mild regularity conditions, be perfectly represented as a sum of sine and cosine functions of the form sin(nx) and cos(nx) for n=1 to ∞; i.e.,

f(x) = a₀ + Σ_{n=1}^{∞} [aₙcos(nx) + bₙsin(nx)]
In this formulation the constant term a₀ can be taken to be the coefficient of cos(0*x), which is just the constant function 1.0, and the coefficient of sin(0*x) can be defined as zero. With these conventions the summation can start at n=0 and there is no need for a special treatment of the constant term.
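The indexing convention above can be sketched numerically. In this minimal Python sketch the coefficient lists are hypothetical values chosen only for illustration, with a[0] the constant term and b[0] fixed at zero so that the sum runs uniformly from n=0.

```python
import math

def fourier_sum(a, b, x):
    """Evaluate a truncated Fourier series sum over n = 0, 1, ..., len(a)-1
    of a[n]*cos(n*x) + b[n]*sin(n*x).  With the convention of the text,
    a[0] multiplies cos(0*x) = 1 (the constant term) and b[0] is zero."""
    return sum(a[n] * math.cos(n * x) + b[n] * math.sin(n * x)
               for n in range(len(a)))

# Hypothetical coefficients: constant 0.5 plus sin(x)
a = [0.5, 0.0]
b = [0.0, 1.0]
print(fourier_sum(a, b, math.pi / 2))  # 0.5 + sin(π/2) = 1.5
```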
The set of sine and cosine functions {1, cos(nx), sin(nx); n=1, 2, …} constitutes an orthogonal set of functions. This means that each of these functions is orthogonal to every other one; i.e., the integral of their product over the interval [0, 2π] is equal to zero.
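This orthogonality can be checked numerically. The sketch below approximates the integrals over [0, 2π] by Riemann sums; the particular function pairs are arbitrary examples.

```python
import math

def inner(f, g, steps=10000):
    """Approximate the integral of f(x)*g(x) over [0, 2π] by a Riemann sum."""
    dx = 2 * math.pi / steps
    return sum(f(q * dx) * g(q * dx) for q in range(steps)) * dx

# Distinct members of the orthogonal set: each integral is (nearly) zero.
print(inner(lambda x: math.sin(2 * x), lambda x: math.cos(3 * x)))  # ~0
print(inner(lambda x: math.sin(2 * x), lambda x: math.sin(5 * x)))  # ~0
print(inner(lambda x: math.cos(1 * x), lambda x: math.cos(4 * x)))  # ~0
```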
Because of the orthogonality of the sine and cosine functions the coefficients can be expressed as:

aₙ = [∫ f(x)cos(nx)dx] / [∫ cos²(nx)dx]
bₙ = [∫ f(x)sin(nx)dx] / [∫ sin²(nx)dx]

where the integrals run over the interval [0, 2π].
The values of the denominators in these fractions are known from integration: π for n=1 to ∞, 2π for cos(0*x) and 0 for sin(0*x).
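These denominator values can be confirmed with a small numerical check (a Riemann-sum sketch, not a proof):

```python
import math

def integral(f, steps=10000):
    """Approximate the integral of f(x) over [0, 2π] by a Riemann sum."""
    dx = 2 * math.pi / steps
    return sum(f(q * dx) for q in range(steps)) * dx

# Squared norms of the basis functions over [0, 2π]:
print(integral(lambda x: math.cos(3 * x) ** 2))  # about π   (n >= 1)
print(integral(lambda x: math.cos(0 * x) ** 2))  # about 2π  (constant function)
print(integral(lambda x: math.sin(0 * x) ** 2))  # 0         (zero function)
```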
The values of the numerators may be approximated using the values of the function f(x) at N equally spaced points. The width of an interval is 2π/N, which will be denoted Δx, so

∫ f(x)cos(nx)dx ≈ Σ_{q=0}^{N−1} f(qΔx)cos(nqΔx)Δx

and similarly for the sine numerators.
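The discrete approximation of a numerator can be sketched as follows; the test function f(x) = cos(2x) is a hypothetical example, chosen because its cosine numerator for n=2 is known from the previous section to be π.

```python
import math

def discrete_numerator(f, n, N, kind="cos"):
    """Approximate the integral of f(x)*cos(nx) (or f(x)*sin(nx)) over
    [0, 2π] by the sum over N equally spaced sample points qΔx,
    with interval width Δx = 2π/N."""
    dx = 2 * math.pi / N
    trig = math.cos if kind == "cos" else math.sin
    return sum(f(q * dx) * trig(n * q * dx) for q in range(N)) * dx

# For f(x) = cos(2x) the n = 2 cosine numerator should come out near π.
approx = discrete_numerator(lambda x: math.cos(2 * x), 2, 64)
print(approx)  # close to π
```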
While the above approximation seems reasonable, there is in fact no mathematical basis for it: between the sampled points the function could behave in any way whatsoever, and the integrals could then take values completely at variance with the summation over the sampled values. In practice the functions being considered are usually smooth between the sampled points, but there is a difference between mathematical assertion and empirical practice.
When considering the Fourier analysis of a function given only at a discrete set of N points {q(2π/N); q=0, 1, 2, …, N−1}, it is essential for mathematical correctness to keep in mind that the sampled sine and cosine functions are simply N-dimensional vectors, and there can be no more than N independent vectors in an N-dimensional vector space. Therefore, as the vectors {sin(j(2πq/N)); q=0, …, N−1} and {cos(k(2πq/N)); q=0, …, N−1} are generated for various values of j and k, eventually each new vector must be a linear combination of previously generated ones. Usually it is a matter of the vector generated for some value k being identically equal to a vector generated with some other value j. This phenomenon is called aliasing and is illustrated below.
In the illustration the green lines represent the points at which the two functions are observed. At those points the functions have the same values and would thus generate the same vector.
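The identity behind such pictures can be verified at the sample points: at x_q = 2πq/N, cos((N−j)x) takes the same values as cos(jx), and sin((N−j)x) the same values as −sin(jx). The choice N=8, j=3 in this sketch is an arbitrary example, not necessarily the pair shown in the illustration.

```python
import math

N, j = 8, 3  # so the aliasing partner is N - j = 5
xs = [2 * math.pi * q / N for q in range(N)]

cos_j  = [math.cos(j * x) for x in xs]        # samples of cos(3x)
cos_Nj = [math.cos((N - j) * x) for x in xs]  # samples of cos(5x)
sin_j  = [math.sin(j * x) for x in xs]        # samples of sin(3x)
sin_Nj = [math.sin((N - j) * x) for x in xs]  # samples of sin(5x)

# cos(5x) generates the same vector as cos(3x); sin(5x) the negative of sin(3x).
print(all(abs(a - b) < 1e-12 for a, b in zip(cos_j, cos_Nj)))  # True
print(all(abs(a + b) < 1e-12 for a, b in zip(sin_j, sin_Nj)))  # True
```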
The nature of the aliasing for a particular number of observations N can be determined analytically, but it is more convenient to show the aliasing by way of a table of the dot products of the vectors generated from sin(jx) and cos(kx) for various values of j and k. The table below shows the dot products for N=4 and for j and k running from 0 to 5.

The squared norms shown on the diagonal should be 0 for the zero vector sin(0x), N for the unit vector cos(0x), and N/2 for all other independent vectors; these values are highlighted in green. When a vector for a higher value of j is aliased into the zero or unit vector, its squared norm is 0 or N; the diagonal entries whose squared norm is not N/2 are highlighted in blue. Off the diagonal, the dot product of two distinct vectors should be zero unless one is aliased into (a multiple of) the other; the cases of nonzero off-diagonal dot products are highlighted in pink.

For N=4 the vector for sin(2x) aliases into the zero vector sin(0x), and the vector for cos(4x) aliases into the unit vector cos(0x). Likewise the vectors for sin(3x) and cos(3x) alias into the vectors for −sin(1x) and cos(1x), respectively. Beyond j=0 and j=1 the only new independent vector is the one for cos(2x), whose squared norm is N rather than N/2; every vector for higher j or k is aliased into one already generated. Since an arbitrary 4-dimensional vector cannot be perfectly represented by fewer than four independent vectors, all four are needed to fit four observations. The inclusion of a vector that aliases into another vector makes the representation or fit of the Fourier series worse rather than better. In fact, the unintentional inclusion of an aliasing vector would make the determination of the coefficients impossible, because the matrix of dot products would not have an inverse.
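The highlighted entries described above can be reproduced numerically. This sketch builds the sampled vectors for N=4 and prints a few of the relevant dot products; it does not reproduce the full table.

```python
import math

N = 4
xs = [2 * math.pi * q / N for q in range(N)]

def vec(kind, j):
    """The N-dimensional vector of samples of sin(jx) or cos(jx)."""
    trig = math.sin if kind == "sin" else math.cos
    return [trig(j * x) for x in xs]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# sin(2x) aliases into the zero vector, cos(4x) into the unit vector cos(0x),
# and sin(3x), cos(3x) into -sin(1x) and cos(1x) respectively.
print(round(dot(vec("sin", 2), vec("sin", 2)), 10))  # 0.0  (zero vector)
print(round(dot(vec("cos", 4), vec("cos", 0)), 10))  # 4.0  (= N)
print(round(dot(vec("cos", 3), vec("cos", 1)), 10))  # 2.0  (= N/2)
print(round(dot(vec("sin", 3), vec("sin", 1)), 10))  # -2.0 (= -N/2)
```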
The extent of the aliasing depends upon number-theoretic relationships among the values of N, j and k. The display below shows the case in which N=5. As can be seen by comparing the table above with the table below, the nature of the aliasing is different.