San José State University

applet-magic.com
Thayer Watkins
Silicon Valley
USA

# The Spectra of Various Transformations of White Noise

Spectral analysis is the decomposition of a function into its cyclic components. It is carried out using the Fourier transform. The Fourier transform of a function y(t) is defined as:

#### Fy(ω) = ∫−∞∞exp(−iωt)y(t)dt

The Fourier transform is generally a complex function. The spectrum of a function is simply the absolute value of its Fourier transform.

The spectrum of white noise is constant over a broad frequency band. This is in analogy with white light, which contains light of all colors over the frequency band of visible light. Sometimes white noise is taken to extend over an infinite range, but this would be impossible to realize physically because such noise would have infinite energy. If the frequency band is too narrow the noise is said to be of a particular color. Therefore white noise is defined to be noise whose spectrum is constant over the frequency band of interest.
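As a rough numerical check of this flatness, a short simulation (a sketch assuming Python with numpy is available; band edges are arbitrary illustrative choices) compares the mean spectral magnitude in a low-frequency band with that in a high-frequency band:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(100_000)            # discrete white noise

spectrum = np.abs(np.fft.rfft(u))           # spectrum = |Fourier transform|
freqs = np.fft.rfftfreq(u.size)             # frequencies in cycles per sample

# For white noise the mean spectral magnitude is the same in any band.
low = spectrum[(freqs > 0.05) & (freqs < 0.15)].mean()
high = spectrum[(freqs > 0.35) & (freqs < 0.45)].mean()
print(low / high)                           # close to 1
```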

## The Cumulative Sum of White Noise

The cumulative sum is defined as the integral of white noise. If u(t) is white noise then

#### y(t) = ∫0t u(s)ds and, equivalently, dy/dt = u(t)

As stated previously, the spectrum is the magnitude of the Fourier transform of the variable, and therefore

#### |Fy(ω)| = |Fu(ω)/(iω)| = |Fu(ω)|/ω

The variable y is said to be pink noise.

Pink noise is thus any variable whose spectrum is of the form c/ω for some constant c.
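A discrete simulation illustrates the 1/ω character of this spectrum. In the sketch below (Python with numpy; the cumulative sum stands in for the integral), multiplying the spectrum of the cumulative sum by frequency approximately flattens it:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.standard_normal(2**16)              # white noise
y = np.cumsum(u)                            # cumulative sum (discrete integral)

spectrum = np.abs(np.fft.rfft(y))[1:]       # drop the zero-frequency term
freqs = np.fft.rfftfreq(y.size)[1:]

low_band = (freqs > 0.01) & (freqs < 0.05)
high_band = (freqs > 0.30) & (freqs < 0.40)

# Raw spectrum: low frequencies dominate, falling off roughly as 1/omega.
raw_ratio = spectrum[low_band].mean() / spectrum[high_band].mean()

# Multiplying by frequency approximately removes the 1/omega factor.
flat = spectrum * freqs
flat_ratio = flat[low_band].mean() / flat[high_band].mean()

print(raw_ratio, flat_ratio)                # raw_ratio large, flat_ratio near 1
```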

## The Spectrum of the Moving Average of a Variable

The general form of a moving average of a variable y(t) is

#### y̲(t) = ∫0H h(s)y(t−s)ds

where h(s) for 0 ≤ s ≤ H is a weighting function. The upper limit H could be finite or infinite. Note that the moving average of a variable is denoted by an underscore beneath that variable.

The Fourier transform of y̲(t) is

#### Fy̲(ω) = ∫−∞∞ exp(−iωt)y̲(t)dt = ∫−∞∞ exp(−iωt)(∫0H h(s)y(t−s)ds)dt

The reversal of the order of integration gives

#### Fy̲(ω) = ∫0H h(s)[∫−∞∞ exp(−iωt)y(t−s)dt]ds

If the variable of integration in ∫−∞∞ exp(−iωt)y(t−s)dt is changed to z = t−s, then t = z+s and dt = dz, so the integral becomes

#### ∫−∞∞exp(−iω(z+s))y(z)dz which reduces to exp(−iωs)∫−∞∞exp(−iωz)y(z)dz and finally to exp(−iωs)Fy(ω)

This is a standard theorem for Fourier transforms, the shift theorem, which says that the transform of y(t−s) is exp(−iωs) times the transform of y(t).

Therefore

#### Fy̲(ω) = ∫0H h(s)[exp(−iωs)Fy(ω)]ds, which reduces to Fy̲(ω) = Fy(ω)∫0H h(s)exp(−iωs)ds

If h(s) is extended over the interval [−∞, +∞] such that h(s) = 0 for s < 0 and s ≥ H, then the integral on the RHS of the above expression is just the Fourier transform Fh(ω).

The relationship is then

#### Fy̲(ω) = Fy(ω)·Fh(ω)
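This product relationship can be verified numerically for a discrete, circular analogue of the moving average (a sketch in Python with numpy; the circular shift stands in for integration over a periodic record):

```python
import numpy as np

rng = np.random.default_rng(2)
n, H = 1024, 8
y = rng.standard_normal(n)

# Weighting function for a simple moving average, zero elsewhere
h = np.zeros(n)
h[:H] = 1.0 / H

# Circular moving average computed directly from the definition
# ybar(t) = sum over s of h(s) * y(t - s)
ybar = np.zeros(n)
for s in range(H):
    ybar += h[s] * np.roll(y, s)

# Transform of the moving average equals the product of the transforms.
lhs = np.fft.fft(ybar)
rhs = np.fft.fft(y) * np.fft.fft(h)
print(np.allclose(lhs, rhs))        # True
```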

For a simple moving average h(s) = 1/H, and (1/H)∫0H exp(−iωs)ds reduces to

#### (1/H)[exp(−iωs)/(−iω)]0H = (1/H)[1 − exp(−iωH)]/(iω), which by factoring out a term of exp(−iωH/2) leads to exp(−iωH/2)[exp(+iωH/2) − exp(−iωH/2)]/(2i·ωH/2), which is exp(−iωH/2)[sin(ωH/2)/(ωH/2)] = exp(−iωH/2)sinc(ωH/2)

By labeling the t variable of the moving average with the midpoint of the H interval, the term exp(−iωH/2) can be eliminated, leaving

#### Fy̲(ω) = Fy(ω)sinc(½ωH)

Since the spectrum is the absolute value of the Fourier transform, the relevant function is |sinc(x)|.

The sinc function creates peaks in the spectrum of the moving average that were not there in the original data.
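A short computation (a sketch in Python with numpy; n and H are illustrative choices) shows that the transfer function of a discrete H-point average agrees with the continuous sinc formula at low frequencies, including the null at frequency 1/H that separates the spurious sidelobe peaks:

```python
import numpy as np

n, H = 4096, 32
h = np.zeros(n)
h[:H] = 1.0 / H                     # simple moving-average weighting function

Fh = np.abs(np.fft.rfft(h))         # spectrum of the averaging operation
freqs = np.fft.rfftfreq(n)          # cycles per sample
omega = 2 * np.pi * freqs

# Continuous prediction |sinc(omega*H/2)| with sinc(x) = sin(x)/x.
# np.sinc uses the normalized convention sin(pi*x)/(pi*x), hence the /pi.
pred = np.abs(np.sinc(omega * H / (2 * np.pi)))

# The discrete and continuous versions agree at low frequencies.
mask = freqs < 0.05
print(np.max(np.abs(Fh[mask] - pred[mask])))   # small
print(Fh[n // H])                              # essentially zero: a sinc null
```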

## Sampling and Intervalizing

Sampling in spectral analysis generally means taking the value of a variable at discrete intervals. A related procedure is to replace the instantaneous values within an interval by the sampled value; i.e., for ti−½H ≤ t ≤ ti+½H, replace y(t) with y(ti). The Fourier transform of the intervalized function is related to the Fourier transform of the sampled function through multiplication by a factor of the form

#### (1/H)∫−½H+½H exp(−iωt)dt, which reduces to sinc(½ωH)

Since the intervalizing procedure is applied to the moving average of the original variable the Fourier transform for the intervalized moving average function z(t) is given by

#### Fz(ω) = Fy(ω)sinc²(½ωH)

The function sinc²(x) has the following shape:

For y being pink noise, with Fy(ω) = c/ω, the spectrum of the interval-average function rises to a peak and then declines. Thus the low-frequency components dominate the interval average even more than they do the cumulative sum.

## A Moving Average of Annual Averages

Any manipulation or transformation of data which are the cumulative sums of random disturbances can introduce elements of stochastic structure which are peculiar, non-intuitive, and potentially dangerous for objective statistical analysis. For example, suppose annual averages are computed for variables which are the cumulative sums of random disturbances and then the annual averages are averaged over a five-year period. In the diagram below the upper graph shows the weights which are placed upon the rates of change. Annual averaging places a relatively high weight on changes which occur early in the year and a low weight on changes which occur near the end of the year. When values are averaged over a five-year period, the changes that occur near the beginning of the five-year period receive a much higher weight than those occurring near the end of the five-year period.

The five-year average would typically be identified with the third year whereas it is more closely associated with the changes occurring in the first year. This would confuse the analysis of time lags among variables.
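The weighting can be made explicit in a small computation. In the sketch below (Python with numpy; 365 daily changes standing in for a year of disturbances is an illustrative choice), the annual average of a cumulative sum is rewritten as a weighted sum of the underlying changes, and the weight falls from 1 for the first change to about 1/365 for the last:

```python
import numpy as np

N = 365                               # number of changes within the year
rng = np.random.default_rng(3)
u = rng.standard_normal(N)            # daily changes (white noise)

S = np.cumsum(u)                      # level of the variable through the year
annual_average = S.mean()

# The change on day s is contained in S(s), S(s+1), ..., S(N-1), so it
# enters the annual average with weight (N - s)/N for s = 0, ..., N-1.
weights = (N - np.arange(N)) / N
print(np.isclose(annual_average, np.dot(weights, u)))   # True
print(weights[0], weights[-1])        # 1.0 versus about 0.003
```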

## Illustrations

The following is the four-period moving average of a four-period moving average of a random variable uniformly distributed between 0 and +1.0. To illustrate how this double smoothing generates the appearance of cycles, a sinusoidal cycle about a level of 0.5 is plotted in the same graph.

## Autocorrelation

A physically measurable quantity, such as the temperature of an object, may be the cumulative sum of a stochastic variable. In the case of the temperature of an object the stochastic variable is proportional to the net heat input to the object. This variable however may be subject to autocorrelation; i.e., a dependence of its distribution on its past values.

For example, the temperature T(t) of a body at time t may be given by

#### T(t) = T(t-1) + U(t) but U(t) = λU(t-1) + V(t)

where the variables V(t) are independent random variables.

The variable U(t) is given by the formula

#### U(t) = V(t) + λV(t−1) + λ²V(t−2) + … or, in general, U(t) = Σj=0t λjV(t−j)

This is an exponentially weighted sum, a type of smoothing operation. Since temperature is the cumulative sum of the U(t)'s, another smoothing operation, temperature is a doubly smoothed variable. As in the case of a moving average of a moving average, the double smoothing will generate the appearance of cycles even when the original variables, the V(t)'s, are random white noise. When temperatures are subjected to averaging the result could be triply smoothed white noise, which would be even more subject to the generation of spurious trends and cycles.
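The equivalence of the recursive and exponentially weighted forms of U(t) can be verified directly (a sketch in Python with numpy; λ = 0.8 is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, n = 0.8, 200
V = rng.standard_normal(n)          # independent random disturbances

# Recursive form: U(t) = lam*U(t-1) + V(t), with U before t = 0 taken as 0.
U_rec = np.zeros(n)
U_rec[0] = V[0]
for t in range(1, n):
    U_rec[t] = lam * U_rec[t - 1] + V[t]

# Exponentially weighted form: U(t) = sum over j of lam^j * V(t-j)
U_sum = np.array([sum(lam**j * V[t - j] for j in range(t + 1))
                  for t in range(n)])

print(np.allclose(U_rec, U_sum))    # True
```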

(To be continued.)

## Differentiation and Differencing of Moving Averages

Let z(t) be a variable and Fz(ω) be its Fourier transform. Let y(t)=dz/dt, then

#### |Fy(ω)| = ω|Fz(ω)|

If z(t) is a moving average of the cumulative sum of white noise, its Fourier transform is of the form

#### Fz(ω) = (c/ω)sinc(½ωH)

and therefore

#### |Fy(ω)| = c*|sinc(½ωH)|

Thus the derivative of a moving average of the cumulative sum of white noise has a spectrum that indicates cycles but the spectrum comes from the moving average process rather than the original data.

More generally, the Fourier transform of a weighted moving average z(t) of a variable s(t) based upon a weighting function h(s) is of the form

#### Fz(ω) = Fs(ω)Fh(ω)

If s(t) is the cumulative sum of white noise then Fs(ω)=c/ω over some range of ω. Thus the Fourier transform of y(t) which is the derivative of the weighted moving average is then

#### Fy(ω) = ω(c/ω)Fh(ω) = c*Fh(ω)

Thus the spectrum of the derivative of a moving average of white noise is just the spectrum of the averaging process. This means that when cycles are found in the review of processed versions of moving averages they may be just an artifact of the averaging and processing procedures.

Differencing of moving averages would occur more commonly than differentiation. The results are similar. Let y(t) = [z(t)−z(t−H)]/H. The Fourier transform of y(t) is then

#### Fy(ω) = (1/H)(1 − e−iωH)Fz(ω)

Since (1 − e−iωH) = iωH + (ωH)²/2 − …

#### Fy(ω) = (iω + ω²H/2 − … )Fz(ω)

Thus the Fourier transform of the cumulative sum of white noise is multiplied by a factor that is, to leading order, a multiple of ω, and the effect is to cancel out the ω in the denominator of the Fourier transform of the cumulative sum of white noise, leaving approximately just the Fourier transform of the averaging procedure; i.e.,

#### Fy(ω) = (iω + ω²H/2 − … )(c/ω)Fh(ω) = (i + ωH/2 − …)*c*Fh(ω), which for small values of ωH reduces to |Fy(ω)| = c*|Fh(ω)|
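A simulation bears this out (a sketch in Python with numpy; H = 16 is an illustrative choice). Differencing the moving average of the cumulative sum of white noise yields a spectrum carrying the nulls and sidelobe peaks of the averaging and differencing operations, features absent from the white noise itself:

```python
import numpy as np

rng = np.random.default_rng(5)
H, n = 16, 2**14
u = rng.standard_normal(n)             # white noise

s = np.cumsum(u)                       # cumulative sum of white noise

# H-point moving average z of s
z = np.convolve(s, np.ones(H) / H)[:n]

# Difference of the moving average over a lag of H, scaled by 1/H
y = np.empty(n)
y[:H] = 0.0
y[H:] = (z[H:] - z[:-H]) / H

# Spectrum of the result, with the start-up transient dropped and a
# Hann window applied to suppress leakage from the ends of the record.
tail = y[2 * H:]
spec = np.abs(np.fft.rfft(tail * np.hanning(tail.size)))
freqs = np.fft.rfftfreq(tail.size)

# Signature of the processing: a null at frequency 1/H and a sidelobe
# peak near 1.5/H, neither of which is present in the white noise u.
null = spec[np.abs(freqs - 1.0 / H) < 0.002].mean()
peak = spec[np.abs(freqs - 1.5 / H) < 0.002].mean()
print(peak / null)                     # well above 1
```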

(To be continued.)