SAN JOSÉ STATE UNIVERSITY
applet-magic.com
Thayer Watkins, Silicon Valley & Tornado Alley, USA
The Characteristic Function of a Probability Distribution
Let z be a stochastic variable and p(z) be the probability density function for z; i.e., the probability of obtaining a value of z between a and b is

P(a ≤ z ≤ b) = ∫_{a}^{b} p(z)dz
The expected value of any function of z, say g(z), is defined as

E{g(z)} = ∫_{-∞}^{∞} g(z)p(z)dz
The expected value of the function exp(iωz) is called the characteristic function of the probability distribution p(z), where ω is a parameter that can have any real value and i is the square root of −1. That is to say, the characteristic function of p(z) is

Φ(ω) = E{exp(iωz)} = ∫_{-∞}^{∞} exp(iωz)p(z)dz
Note that Φ(0) = ∫_{-∞}^{∞}p(z)dz = 1.
The characteristic function will generally be a complex function; i.e., Φ(ω) = Χ(ω) + iΥ(ω). Since exp(iωz) = cos(ωz) + i·sin(ωz), the components of the characteristic function are given by:

Χ(ω) = ∫_{-∞}^{∞} cos(ωz)p(z)dz
Υ(ω) = ∫_{-∞}^{∞} sin(ωz)p(z)dz
Thus, given a probability distribution p(z), it is a straightforward computation to calculate the real and imaginary components of its characteristic function.
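These component integrals are easy to check numerically. The sketch below (plain Python with midpoint-rule integration; the function name char_fn_components and the choice of an exponential distribution are illustrative, not from the original) compares the computed components with the exact characteristic function λ/(λ − iω) of the exponential distribution, whose real and imaginary parts are λ²/(λ² + ω²) and λω/(λ² + ω²):

```python
import math

def char_fn_components(p, omega, z_lo, z_hi, n=200_000):
    # X(w) = ∫ cos(wz) p(z) dz and Y(w) = ∫ sin(wz) p(z) dz by the midpoint rule
    dz = (z_hi - z_lo) / n
    X = Y = 0.0
    for k in range(n):
        z = z_lo + (k + 0.5) * dz
        X += math.cos(omega * z) * p(z) * dz
        Y += math.sin(omega * z) * p(z) * dz
    return X, Y

lam, omega = 2.0, 1.5
p = lambda z: lam * math.exp(-lam * z)          # exponential density on [0, ∞)
X, Y = char_fn_components(p, omega, 0.0, 40.0)  # tail beyond 40 is negligible
exactX = lam**2 / (lam**2 + omega**2)
exactY = lam * omega / (lam**2 + omega**2)
print(abs(X - exactX) < 1e-5, abs(Y - exactY) < 1e-5)  # True True
```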
The crucial property of characteristic functions is that the characteristic function of the sum of two independent random variables is the product of those variables' characteristic functions. It is often more convenient to work with the natural logarithm of the characteristic function, so that instead of products one can work with sums. This property can be represented as follows. If Φ_{x}(ω) and Φ_{y}(ω) are the characteristic functions of the independent random variables x and y, respectively, then the characteristic function of the variable formed by taking an observation of x and an observation of y and adding them together is given by

Φ_{x+y}(ω) = Φ_{x}(ω)·Φ_{y}(ω)
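This multiplication rule can be illustrated with normal variables, whose characteristic function exp(iμω − σ²ω²/2) is known in closed form, since the sum of independent normals is again normal with the means and variances adding. A minimal sketch (the helper name normal_cf and the parameter values are assumptions for illustration):

```python
import cmath

def normal_cf(omega, mu, sigma):
    # Characteristic function of a normal(mu, sigma²) random variable
    return cmath.exp(1j * mu * omega - 0.5 * sigma**2 * omega**2)

mu_x, s_x = 1.0, 2.0
mu_y, s_y = -0.5, 1.5
omega = 0.7
# x + y is normal with mean mu_x + mu_y and variance s_x² + s_y²
lhs = normal_cf(omega, mu_x + mu_y, (s_x**2 + s_y**2) ** 0.5)
rhs = normal_cf(omega, mu_x, s_x) * normal_cf(omega, mu_y, s_y)
print(abs(lhs - rhs) < 1e-12)  # True
```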
If the two variables are not independent, the corresponding proposition involves the characteristic function of the conditional probability distribution.
The process of aggregating data, such as combining monthly data to obtain quarterly or annual data, is easily presented in terms of characteristic functions. If the smaller-unit data are statistically independent then the proposition concerning the characteristic function of the sum of random variables applies.
There is another operation that is often involved in combining random variables. Suppose x and y have different probability distributions but are treated as coming from the same population. In effect, the probability distribution of the combination involves the probabilities that an observation came from the x population or from the y population. Let these probabilities be represented as P_{x} and P_{y} and let the probability distributions be denoted f_{x} and f_{y}, respectively. The probability density f_{z} that an observation from the combined population has the value z is

f_{z}(z) = P_{x}·f_{x}(z) + P_{y}·f_{y}(z)
If the characteristic functions for f_{x} and f_{y} are Φ_{x} and Φ_{y}, then the characteristic function for the combined population is given by

Φ_{z}(ω) = P_{x}·Φ_{x}(ω) + P_{y}·Φ_{y}(ω)
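This weighted-sum rule can be checked numerically by integrating exp(iωz) against a two-component normal mixture density and comparing with the weighted sum of the exact normal characteristic functions (the weights and component parameters below are arbitrary illustrative choices):

```python
import math, cmath

def normal_pdf(z, mu, s):
    return math.exp(-((z - mu) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

def normal_cf(omega, mu, s):
    return cmath.exp(1j * mu * omega - 0.5 * s * s * omega * omega)

Px, Py = 0.3, 0.7      # mixing probabilities; must sum to one
omega = 0.9
lo, hi, n = -30.0, 30.0, 80_000
dz = (hi - lo) / n
cf = 0j                # ∫ exp(iωz) f_z(z) dz by the midpoint rule
for k in range(n):
    z = lo + (k + 0.5) * dz
    f_z = Px * normal_pdf(z, 1.0, 2.0) + Py * normal_pdf(z, -2.0, 1.0)
    cf += cmath.exp(1j * omega * z) * f_z * dz
mixture_cf = Px * normal_cf(omega, 1.0, 2.0) + Py * normal_cf(omega, -2.0, 1.0)
print(abs(cf - mixture_cf) < 1e-5)  # True
```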
It is usually more convenient to work with the logarithm of the characteristic function, log(Φ(ω)). The logarithm of the characteristic function will also be a complex function with real and imaginary components. The logarithm of a variable W is defined as w if

W = exp(w)
For a complex variable X+iY we must find x+iy such that

X + iY = exp(x+iy)

Since

exp(x+iy) = exp(x)·[cos(y) + i·sin(y)]

it follows that

X = exp(x)·cos(y) and Y = exp(x)·sin(y)

Thus the imaginary component y can be determined from

tan(y) = Y/X; i.e., y = tan^{-1}(Y/X)

The real component x can then be found from

X² + Y² = exp(2x); i.e., x = ½·log(X² + Y²)
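In Python these two components are exactly what cmath.log returns; the sketch below computes x and y from the formulas above, using atan2 rather than a bare arctangent so that the correct quadrant of y is chosen (the function name complex_log_components is illustrative):

```python
import cmath, math

def complex_log_components(X, Y):
    # x = (1/2)·log(X² + Y²), y = atan2(Y, X) — the principal value of log(X+iY)
    x = 0.5 * math.log(X * X + Y * Y)
    y = math.atan2(Y, X)
    return x, y

X, Y = 0.6, -0.3
x, y = complex_log_components(X, Y)
w = cmath.log(complex(X, Y))   # library value for comparison
print(abs(x - w.real) < 1e-12, abs(y - w.imag) < 1e-12)  # True True
```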
Note that the real and imaginary components of the log-characteristic function are not simply the logarithms of the corresponding real and imaginary components of the characteristic function.
The moments of a probability distribution are the expected values of the powers of the random variable; i.e.,

m_{n} = E{z^{n}} = ∫_{-∞}^{∞} z^{n}p(z)dz for n = 1, 2, 3, …
The value n=0 could also be included in this definition; however, for n=0 the moment is the area under the probability distribution, which is by definition equal to unity. Note that for ω=0 the characteristic function must likewise have the value unity.
The connection between the moments of a probability distribution and its characteristic function is seen by taking the derivative of the characteristic function with respect to the parameter ω. For the first derivative

dΦ(ω)/dω = ∫_{-∞}^{∞} iz·exp(iωz)p(z)dz
Thus when ω=0, dΦ(ω)/dω is equal to iE{z}. Likewise for ω=0, d²Φ(ω)/dω² is equal to i²E{z²}. In general

[d^{n}Φ(ω)/dω^{n}]_{ω=0} = i^{n}E{z^{n}}
However, since Φ(0) = 1, the case of n=0 also fits into this scheme.
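This relation between derivatives at ω=0 and moments can be checked numerically. For an exponential distribution with rate λ the characteristic function is λ/(λ − iω), the first moment is 1/λ and the second moment is 2/λ²; finite-difference derivatives of Φ at ω=0 recover i times the first moment and i² times the second (a sketch; the step size h is an arbitrary choice):

```python
import cmath

lam = 2.0
def phi(omega):
    # Exact characteristic function of an exponential(lam) variable
    return lam / (lam - 1j * omega)

h = 1e-4   # finite-difference step
d1 = (phi(h) - phi(-h)) / (2 * h)             # ≈ i·E{z} = i/lam
d2 = (phi(h) - 2 * phi(0) + phi(-h)) / h**2   # ≈ i²·E{z²} = -2/lam²
print(abs(d1 - 1j / lam) < 1e-6)    # True
print(abs(d2 + 2 / lam**2) < 1e-4)  # True
```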
Paul Lévy found the formula for the characteristic functions of all stable distributions. For a stable distribution the logarithm of the characteristic function Φ(ω) must be of the form

log(Φ(ω)) = iδω − ν^{α}|ω|^{α}·[1 − iβ·sign(ω)·tan(πα/2)]

(for α ≠ 1; when α = 1 the tan(πα/2) factor is replaced by a logarithmic term in |ω|).
The nature and allowable ranges of the parameters are as follows: α, the stability index, with 0 < α ≤ 2; β, the skewness parameter, with −1 ≤ β ≤ 1; ν, the dispersion (scale) parameter, with ν ≥ 0; and δ, the location parameter, which may be any real number.
For a normal distribution α=2 and the skewness parameter β has no effect, since tan(π) = 0; δ is equal to the mean and ν² is equal to half the variance (ν = σ/√2). Thus the log-characteristic function for a normal distribution is of the form

log(Φ(ω)) = iδω − ν²ω²
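Assuming the parameterization above (so that ν = σ/√2), this can be verified by numerically integrating E{exp(iωz)} for a normal density and comparing with exp(iδω − ν²ω²); the particular values of μ, σ and ω below are arbitrary:

```python
import math, cmath

mu, sigma, omega = 0.5, 1.3, 0.8   # arbitrary illustrative values
lo, hi, n = mu - 12 * sigma, mu + 12 * sigma, 100_000
dz = (hi - lo) / n
cf = 0j    # E{exp(iωz)} for a normal(mu, sigma²) density, midpoint rule
for k in range(n):
    z = lo + (k + 0.5) * dz
    pdf = math.exp(-((z - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
    cf += cmath.exp(1j * omega * z) * pdf * dz
nu = sigma / math.sqrt(2)          # dispersion parameter in the stable form
stable_form = cmath.exp(1j * mu * omega - nu**2 * omega**2)
print(abs(cf - stable_form) < 1e-6)  # True
```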
Prior to Paul Lévy's mathematical analysis, empirical investigators had found cases in which the histogram of some variable, while generally resembling a normal distribution, deviated from the best-fitting normal distribution in a systematic manner. For example, in 1915 the economist Wesley Clair Mitchell found that the distribution of percentage changes in stock prices consistently deviated from the best-fitting normal distribution in just this way.
This sort of deviation means that there were too many very small deviations from the average, too many very large deviations, and too few moderate deviations. The extremely large changes were of particular interest because those were the cases of stock market booms and busts. Because a higher proportion of the probability lies in the tails of such a distribution than in the tails of a normal distribution, these distributions were called fat-tailed. They were also given a name derived from Greek: leptokurtic.
There are Lévy-Pareto stable distributions that are leptokurtic. Furthermore, there is a generalization of the Central Limit Theorem which says that the limit of a suitably normalized sum of a large number of independent random variables is a stable distribution. Thus if some phenomenon, such as changes in stock prices or the rain from a storm, is the result of a large number of independent influences, then it would be expected to have a stable distribution. If that distribution is fat-tailed, this would account for the unexpectedly extreme changes in the variable, the sort of occurrences associated with catastrophes.
The real component of the log-characteristic function for a stable distribution is

χ(ω) = −ν^{α}|ω|^{α}
This last relationship implies that

log(−χ(ω)) = α·log(|ω|) + α·log(ν)
Thus for a stable distribution the graph of log(−χ(ω)) as a function of log(ω) is a straight line, the slope of which is the stability index α of the distribution.
The value of log(−χ(ω)) at ω=1 is the intercept of that straight line and is equal to α·log(ν). Thus knowledge of the intercept and of the value of α determines the value of ν, the dispersion parameter of the distribution; i.e.,

ν = exp(intercept/α)
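A sketch of this recovery of α and ν by an ordinary least-squares fit of log(−χ(ω)) against log(ω), using the analytic real component −ν^α|ω|^α with assumed "true" parameter values (in practice χ would be estimated from data, so the fit would not be exact):

```python
import math

alpha, nu = 1.7, 0.8    # assumed true stable parameters
def chi(omega):
    # Real component of the log-characteristic function
    return -(nu ** alpha) * abs(omega) ** alpha

# Least-squares fit of log(-chi(ω)) = α·log(ω) + α·log(ν)
omegas = [0.1 * k for k in range(1, 51)]
xs = [math.log(w) for w in omegas]
ys = [math.log(-chi(w)) for w in omegas]
m = len(xs)
xbar, ybar = sum(xs) / m, sum(ys) / m
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar
alpha_hat = slope                          # slope recovers α
nu_hat = math.exp(intercept / alpha_hat)   # intercept = α·log(ν)
print(abs(alpha_hat - alpha) < 1e-8, abs(nu_hat - nu) < 1e-8)  # True True
```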
The imaginary component of the log-characteristic function for stable distributions is

ψ(ω) = δω + βν^{α}|ω|^{α}·sign(ω)·tan(πα/2)
With α and ν known, the values of δ and β can be determined from the imaginary component ψ(ω) of the log-characteristic function. Evaluating ψ at any two points ω_{1}, ω_{2} > 0 gives two linear equations in the two unknowns δ and β:

ψ(ω_{1}) = δω_{1} + βν^{α}ω_{1}^{α}·tan(πα/2)
ψ(ω_{2}) = δω_{2} + βν^{α}ω_{2}^{α}·tan(πα/2)
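A sketch of this two-point solution, using the analytic imaginary component δω + βν^α·ω^α·tan(πα/2) for ω > 0, with α and ν assumed already known and the "true" δ and β chosen arbitrarily; the 2×2 system is solved by Cramer's rule:

```python
import math

alpha, nu = 1.5, 1.2               # assumed known (e.g. from the log-log fit)
delta_true, beta_true = 0.4, -0.6  # values to be recovered
c = (nu ** alpha) * math.tan(math.pi * alpha / 2)

def psi(omega):
    # Imaginary component of the log-CF for ω > 0: δω + β·ν^α·ω^α·tan(πα/2)
    return delta_true * omega + beta_true * c * omega ** alpha

w1, w2 = 0.5, 2.0
# Two linear equations psi(w) = δ·w + β·c·w^α, solved by Cramer's rule
a11, a12, b1 = w1, c * w1 ** alpha, psi(w1)
a21, a22, b2 = w2, c * w2 ** alpha, psi(w2)
det = a11 * a22 - a12 * a21        # nonzero because α ≠ 1
delta_hat = (b1 * a22 - a12 * b2) / det
beta_hat = (a11 * b2 - b1 * a21) / det
print(abs(delta_hat - delta_true) < 1e-9, abs(beta_hat - beta_true) < 1e-9)  # True True
```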
Thus if a probability distribution actually is a stable distribution, it is an easy matter to determine the values of its parameters from its log-characteristic function. The harder problem is how to make proper estimates when the true probability distribution is not known and only a sample estimate of it is available.