applet-magic.com
Thayer Watkins
Silicon Valley
USA

# The Characteristic Function of Algebraic Combinations of Random Variables and Cumulative Sums of Random Disturbances

The purpose of this material is to derive the characteristic function of a linear combination of random variables. The properties of the characteristic function of a probability distribution have been developed elsewhere. The characteristic function of a probability distribution is its Fourier transform. This results in the characteristic function of the sum of two independent random variables being the product of their characteristic functions. More conveniently, this relationship is expressed in terms of the logarithms of the characteristic functions; i.e., the logarithm of the characteristic function of the sum of two independent random variables is the sum of the logarithms of their characteristic functions.
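This product rule can be checked numerically. A minimal sketch using NumPy, where the sample size, seed, and the particular distributions are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Two independent random variables (distributions chosen arbitrarily for illustration).
x = rng.exponential(scale=1.0, size=n)
y = rng.normal(loc=2.0, scale=0.5, size=n)

def ecf(samples, w):
    """Empirical characteristic function E[exp(i*w*X)] at frequency w."""
    return np.mean(np.exp(1j * w * samples))

w = 0.7
lhs = ecf(x + y, w)            # characteristic function of the sum
rhs = ecf(x, w) * ecf(y, w)    # product of the individual characteristic functions
assert abs(lhs - rhs) < 0.02   # the two agree up to sampling error
```

The empirical characteristic function of the sum agrees with the product of the individual empirical characteristic functions up to sampling error.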

## The Characteristic Function of a Multiple of a Random Variable

Let the probability density function (a.k.a. distribution function) of a random variable y(t) be denoted as p(y) and its characteristic function as φ(ω). Let Y(t)=by(t), where b may be positive or negative. The distribution function for Y(t), P(Y), is given by the general rule

#### P(Y)|dY| = p(y)|dy| which reduces to P(Y) = p(y)/|dY/dy| and further, with y = Y/b, to P(Y) = p(Y/b)/|b|
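The rule P(Y) = p(Y/b)/|b| can be verified against a normal density, for which the scaled variable is again normal. A small sketch, where the values of b, μ, and σ are arbitrary:

```python
import numpy as np

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

b = -2.0                 # scale factor, deliberately negative
mu, sigma = 1.0, 0.5     # illustrative parameters of y
# If y ~ N(mu, sigma^2) and Y = b*y, then Y ~ N(b*mu, (|b|*sigma)^2),
# and the rule P(Y) = p(Y/b)/|b| must reproduce that density at any point.
Y = 0.3
lhs = normal_pdf(Y, b * mu, abs(b) * sigma)   # density of Y computed directly
rhs = normal_pdf(Y / b, mu, sigma) / abs(b)   # p(Y/b)/|b|
assert np.isclose(lhs, rhs)
```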

The characteristic function for P(Y) is defined as

#### Φ(ω) = ∫_{-∞}^{+∞} exp(iωY)P(Y)dY which, from the previous relation, becomes Φ(ω) = ∫_{-∞}^{+∞} exp(iωY)[p(Y/b)/|b|]dY

A change in the variable of integration from Y to y=Y/b results in

#### Φ(ω) = ∫_{-∞}^{+∞} exp(iωby)p(y)dy

If φ(ω) is the characteristic function of p(y) then the above relation means that

#### Φ(ω) = φ(bω)

To see the consequences of this rule consider the characteristic function for a normal distribution; i.e.,

#### log(φnormal(ω)) = iμω − ½σ²ω²

where μ is the mean value of the variable and σ is its standard deviation. The symbol i stands for the square root of −1.

Thus

#### log(φY(ω)) = iμbω − ½σ²b²ω² which can be expressed as log(φY(ω)) = i(μb)ω − ½(|b|σ)²ω²

This means that for the Y distribution the mean value is the mean value for the y distribution multiplied by a factor of b, and the standard deviation for the Y distribution is the standard deviation for the y distribution multiplied by a factor of |b|. The effect on the standard deviation is independent of the sign of b because b enters the formula squared.
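These scaling claims can be checked by simulation. A sketch with illustrative parameter values, using the normal characteristic function exp(iμω − ½σ²ω²) as the analytic benchmark:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
mu, sigma, b = 1.0, 1.0, -2.0    # illustrative values; b is negative on purpose
y = rng.normal(mu, sigma, size=n)
Y = b * y

# The sample mean scales by b, the sample standard deviation by |b|.
assert abs(Y.mean() - b * y.mean()) < 1e-9
assert abs(Y.std() - abs(b) * y.std()) < 1e-9

# Empirical characteristic function of Y at w versus the analytic phi(b*w)
# for a normal distribution, phi(w) = exp(i*mu*w - sigma^2 * w^2 / 2).
w = 0.5
Phi_emp = np.mean(np.exp(1j * w * Y))
phi_bw = np.exp(1j * mu * (b * w) - 0.5 * sigma**2 * (b * w) ** 2)
assert abs(Phi_emp - phi_bw) < 0.02   # agreement up to sampling error
```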

## The Characteristic Function of the Difference of Two Random Variables

One significant implication of the above is that the probability density function of the negative of a variable y is given by p(−y) and that its characteristic function is given by φ(−ω). Thus the characteristic function of z, the difference of independent random variables x and y (z = x − y), is

#### φz(ω) = φx(ω)φy(−ω) and hence log(φz(ω)) = log(φx(ω)) + log(φy(−ω))

For independent x and y with normal distributions with means μx and μy and standard deviations σx and σy, respectively, this works out to

#### log(φz(ω)) = i(μx − μy)ω − ½(σx² + σy²)ω²

so z is normal with mean μx − μy and standard deviation (σx² + σy²)^½.
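A quick simulation can confirm that the difference of two independent normals has mean μx − μy and variance σx² + σy²; the parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
mux, sigx = 3.0, 1.0     # illustrative parameters for x
muy, sigy = 1.0, 2.0     # illustrative parameters for y
x = rng.normal(mux, sigx, size=n)
y = rng.normal(muy, sigy, size=n)
z = x - y

# z should be normal with mean mux - muy and variance sigx^2 + sigy^2.
assert abs(z.mean() - (mux - muy)) < 0.02
assert abs(z.std() - np.sqrt(sigx**2 + sigy**2)) < 0.02
```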

## The Characteristic Function of a Linear Combination of Two Random Variables

Let z=ax+by. Then

#### φz(ω) = φx(aω)φy(bω) and hence log(φz(ω)) = log(φx(aω))+log(φy(bω))

For x and y independent and normally distributed this works out to

#### log(φz(ω)) = i(aμx + bμy)ω − ½(a²σx² + b²σy²)ω²

so z is normal with mean aμx + bμy and standard deviation (a²σx² + b²σy²)^½.
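The same kind of check works for a general linear combination, which should have mean aμx + bμy and variance a²σx² + b²σy²; the coefficients and parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500_000
a, b = 2.0, -3.0          # illustrative coefficients
mux, sigx = 1.0, 1.0
muy, sigy = 0.5, 2.0
x = rng.normal(mux, sigx, size=n)
y = rng.normal(muy, sigy, size=n)
z = a * x + b * y

# z should be normal with mean a*mux + b*muy and variance a^2*sigx^2 + b^2*sigy^2.
assert abs(z.mean() - (a * mux + b * muy)) < 0.05
assert abs(z.std() - np.sqrt(a**2 * sigx**2 + b**2 * sigy**2)) < 0.05
```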

## The Characteristic Function of the Cumulative Sum of Random Variables Having the Same Probability Distribution

Let zn = x1 + x2 + … + xn, where the xi's are independent with the common characteristic function φx(ω). Then

#### log(φzn(ω)) = n·log(φx(ω))

For the xi's being normal with mean μ and standard deviation σ,

#### log(φzn(ω)) = inμω − ½nσ²ω²

so zn is normal with mean nμ and standard deviation n^½σ.
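A simulation sketch of the n-fold sum, which should have mean nμ and standard deviation n^½σ; the values of n, μ, and σ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n_terms, n_trials = 25, 200_000
mu, sigma = 0.5, 1.0     # illustrative parameters of each term
x = rng.normal(mu, sigma, size=(n_trials, n_terms))
z_n = x.sum(axis=1)      # one cumulative sum of n_terms draws per trial

assert abs(z_n.mean() - n_terms * mu) < 0.05                # mean n*mu
assert abs(z_n.std() - np.sqrt(n_terms) * sigma) < 0.05     # std sqrt(n)*sigma
```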

## The Case for Continuous Time

Let Y(t) = ∫_0^t u(s)ds, where u(s) for each s is, independently, a random variable with a normal distribution having mean μ and standard deviation σ. Then

#### log(φY(ω)) = itμω − ½tσ²ω²
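This can be illustrated by discretizing the integral into small independent increments; each increment over a step dt is taken to have mean μ·dt and standard deviation σ·dt^½, so that Y(T) has mean Tμ and standard deviation T^½σ. The step size, horizon, and parameters below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 0.2, 1.0      # mean and std of u per unit time (illustrative)
T, dt = 4.0, 0.02         # horizon and discretization step (illustrative)
steps = int(T / dt)
n_trials = 20_000

# Each small increment of Y over dt is normal with mean mu*dt and std sigma*sqrt(dt).
du = rng.normal(mu * dt, sigma * np.sqrt(dt), size=(n_trials, steps))
Y_T = du.sum(axis=1)      # Y(T) as the accumulation of the increments

assert abs(Y_T.mean() - mu * T) < 0.05              # mean of Y(T) is T*mu
assert abs(Y_T.std() - sigma * np.sqrt(T)) < 0.05   # std of Y(T) is sqrt(T)*sigma
```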

Now consider a moving average of the Y values given by

#### Z(t) = (1/k)∫_{t-½k}^{t+½k} Y(s)ds

The integral ∫_{t-½k}^{t+½k} s ds is equal to ½[(t+½k)² − (t-½k)²] = tk.

This means that

#### log(φZ(ω)) = itkμ(ω/k) − ½tkσ²(ω/k)² which reduces to log(φZ(ω)) = itμω − ½(t/k)σ²ω² and further to log(φZ(ω)) = itμω − ½[(t/k)^½σ]²ω² and hence μZ = tμ and σZ = (t/k)^½σ

Now consider a variable which is a unit time difference in the moving average; i.e.,

#### V(t) = Z(t) − Z(t-1)

This variable represents the slope of the moving average function Z(t) and will have a normal distribution with

#### μV = tμ − (t-1)μ = μ and σ²V(t) = σ²Z(t) + σ²Z(t-1) = [(2t−1)/k]σ²

## The Average Curvature of a Variable Over an Interval

Let x(t) be a twice differentiable function of time. The curvature of this function is given by the second derivative x"(t). The average curvature over an interval of length L is given by

#### (1/L)∫_{t-½L}^{t+½L} x"(s)ds which is the same as (1/L)∫_{t-½L}^{t+½L} (dx'(s)/ds)ds which reduces to (1/L)[x'(t+½L) − x'(t-½L)]
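This identity can be checked against a concrete function; the sketch below uses x(t) = t³, for which x'(t) = 3t² and x"(t) = 6t, with an arbitrary point t and interval length L:

```python
import numpy as np

# Check the average-curvature identity for x(t) = t**3, so x'(t) = 3t^2, x''(t) = 6t.
t, L = 2.0, 0.5   # illustrative point and interval length

# Left side: (1/L) * integral of x'' over [t - L/2, t + L/2], via the midpoint rule.
N = 100_000
edges = np.linspace(t - L / 2, t + L / 2, N + 1)
mid = (edges[:-1] + edges[1:]) / 2
avg_curv_integral = np.sum(6 * mid) * (L / N) / L

# Right side: (1/L) * [x'(t + L/2) - x'(t - L/2)].
avg_curv_slopes = (3 * (t + L / 2) ** 2 - 3 * (t - L / 2) ** 2) / L

assert abs(avg_curv_integral - avg_curv_slopes) < 1e-8
assert abs(avg_curv_slopes - 6 * t) < 1e-12   # both equal the average of x'' = 6t
```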

Now consider the variable W(t) defined as the difference in the slope of the moving average Z(t) over an interval of length L; i.e.,

#### W(t) = (1/L)[V(t+½L) − V(t-½L)]

This variable represents the average curvature of the function Z(t) over an interval of length L. It has a normal distribution with characteristic function given by

#### log(φW(ω)) = i[μV(t+½L) − μV(t-½L)](ω/L) − ½[σ²V(t+½L) + σ²V(t-½L)](ω/L)² which reduces to log(φW(ω)) = −½[2(2t−1)/k]σ²(ω/L)² which means that μW = 0 and σW = [(2(2t−1)/k)^½/L]σ

The standard deviation of the average curvature is inversely proportional to the product of the square root of the averaging interval k and the interval length L. This is a powerful influence clustering the average curvature values around the mean value of 0. The curvature of a straight line is zero, so the time series for Z(t) appears to be piecewise linear. The moving average of random disturbances thus appears to have trends. A trend from a minimum to a maximum, or from a maximum to a minimum, necessarily has an average curvature of zero. From the foregoing, these apparent trends will generally appear to be linear.

The temperature of any body, including the Earth's surface, is the accumulation of the net heat energy inputs to it. Here is the record of the average global temperatures.

There appears to be a series of linear trends. The foregoing explains why these trends appear to be linear.

(To be continued.)