applet-magic.com — Thayer Watkins, Silicon Valley & Tornado Alley, USA
The Inversion of a Characteristic Function to Obtain a Probability Distribution Function
One of the interesting and important properties of the transformation that generates the characteristic function is that, if it is applied a second time, it generates the original function; i.e., the characteristic function of a characteristic function is essentially the original function. More precisely stated: if the characteristic function is

Φ(ω) = ∫exp(iωz)p(z)dz

then the probability distribution function is recovered by the inversion formula

p(z) = (1/2π)∫exp(-iωz)Φ(ω)dω
Note the factor of (1/2π) and the difference in the sign of the argument of the exponential function, -ωz instead of +ωz.
Caution: Because exp(-iωz) and Φ(ω) are both complex-valued functions, the real part of their product is not just the product of their real parts; it is the product of their real parts less the product of their imaginary parts.
The problem of the numerical approximation of the above inversion formula is not trivial. The range of the numerical integration must be finite rather than infinite, and the integration over a continuous variable must be replaced by summation over a discrete variable. Nevertheless, a simple implementation of an algorithm for the approximation of the inversion formula gives reasonable results.
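Such an approximation can be sketched as follows (this is a minimal illustration in Python with NumPy, not the author's implementation; the truncation limit and grid size are arbitrary choices):

```python
import numpy as np

def invert_cf(cf, z, omega_max=50.0, n=4096):
    """Approximate p(z) = (1/2π) ∫ exp(-iωz) Φ(ω) dω by truncating the
    integral to [-omega_max, omega_max] and summing over a discrete grid."""
    omega = np.linspace(-omega_max, omega_max, n)
    d_omega = omega[1] - omega[0]
    # The integrand is complex; the real part of the sum approximates p(z)
    # and the imaginary part should come out near zero.
    return (np.exp(-1j * omega * z) * cf(omega)).sum() * d_omega / (2.0 * np.pi)

# Check against the standard normal, whose characteristic function is exp(-ω²/2):
p0 = invert_cf(lambda w: np.exp(-w**2 / 2.0), 0.0)
# p0.real should be close to 1/√(2π) ≈ 0.3989, with p0.imag close to zero
```

Because the Gaussian characteristic function decays rapidly, the truncation and discretization errors here are negligible; for heavier-tailed characteristic functions the choice of omega_max matters more.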
Below is the case of the normal distribution, whose characteristic function happens to have the same functional form as the distribution itself. Only the positive-axis portion is shown. The distribution is for a normal variable with mean zero and standard deviation 1.0. The imaginary component, shown in red, is not perceptibly different from the zero function.
The numerical inversion algorithm can be used to find the general shape of a Lévy stable distribution for particular values of the parameters. The imaginary component of the inverse function is again shown in red. If all computations were exact it would be the zero function lying along the z axis.
What is seen in this last case is the rise of the probability distribution to another peak, another mode. However, it is also apparent that the inversion was not precise enough to produce a strictly zero imaginary component. The question of multiple modes requires more analysis.
A mode of a probability distribution p(z) is a value z* such that p(z*) is a relative maximum. For differentiable functions this corresponds to a value z* at which the derivative of p(z) is equal to zero. If the characteristic function is known but the probability distribution function is not known explicitly, the derivative function p'(z) can be found by differentiating the inversion formula under the integral sign:

p'(z) = (1/2π)∫(-iω)exp(-iωz)Φ(ω)dω = -i(1/2π)∫ω exp(-iωz)Φ(ω)dω
In words, the derivative of a probability distribution can be found from the inversion of ωΦ(ω); because of the factor of -i, the real component of p'(z) is the imaginary component of the inversion of ωΦ(ω). The imaginary component of p'(z) should be the zero function.
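This relation can be checked numerically. A sketch (the standard normal serves only as a test case here, since its derivative is known in closed form to be p'(z) = -z·p(z)):

```python
import numpy as np

def pdf_derivative(cf, z, omega_max=50.0, n=4096):
    """Approximate p'(z) as the imaginary component of the inversion of ωΦ(ω):
    p'(z) = Im[(1/2π) ∫ ω exp(-iωz) Φ(ω) dω]."""
    omega = np.linspace(-omega_max, omega_max, n)
    d_omega = omega[1] - omega[0]
    inv = (np.exp(-1j * omega * z) * omega * cf(omega)).sum() * d_omega / (2.0 * np.pi)
    # inv.real should be near zero; inv.imag approximates p'(z)
    return inv.imag

# For the standard normal, p'(1) = -exp(-1/2)/√(2π) ≈ -0.2420
d1 = pdf_derivative(lambda w: np.exp(-w**2 / 2.0), 1.0)
```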
The second derivative, p"(z), is given by

p"(z) = (1/2π)∫(-iω)²exp(-iωz)Φ(ω)dω = -(1/2π)∫ω² exp(-iωz)Φ(ω)dω
A mode z* is then a value such that

p'(z*) = 0 and p"(z*) < 0
The multimodality of a probability distribution is then a question of whether there exist multiple solutions to the complex equation

∫ω exp(-iωz)Φ(ω)dω = 0
Below is the inversion of ωΦ(ω) for two cases.
For this case the probability distribution has critical points, p'(z)=0, at z=0 and at z approximately equal to 3.1 and 3.7. The distribution is symmetric, so there are also critical points at approximately −3.1 and −3.7. The critical points at z=0 and z=±3.7 are modes (relative maxima), and at z=±3.1 there are relative minima.
For this case there is apparently a relative maximum only at z=0.
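The search for critical points can be sketched as a scan for sign changes of p'(z) over a grid. The sketch below uses the standard normal as a check, since it is known to have its single mode at z=0 and hence no critical points on the positive axis; the grid bounds are arbitrary:

```python
import numpy as np

def pdf_derivative(cf, z, omega_max=50.0, n=4096):
    # p'(z) as the imaginary component of the inversion of ωΦ(ω)
    omega = np.linspace(-omega_max, omega_max, n)
    d_omega = omega[1] - omega[0]
    return ((np.exp(-1j * omega * z) * omega * cf(omega)).sum()
            * d_omega / (2.0 * np.pi)).imag

def critical_point_brackets(cf, z_grid):
    """Return the intervals of z_grid on which p'(z) changes sign;
    each bracket contains at least one critical point."""
    d = np.array([pdf_derivative(cf, z) for z in z_grid])
    sign_changes = np.nonzero(np.diff(np.sign(d)) != 0)[0]
    return [(z_grid[i], z_grid[i + 1]) for i in sign_changes]

# The standard normal has p'(z) < 0 for all z > 0, so no brackets are found
brackets = critical_point_brackets(lambda w: np.exp(-w**2 / 2.0),
                                   np.linspace(0.1, 5.0, 50))
```

Applying the same scan to a characteristic function suspected of multimodality would bracket the additional critical points, which could then be refined by bisection; the precision of the inversion, not the root-finding, is the limiting factor.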
Some of the Lévy stable distributions are multimodal. The tails of such distributions are not only fat; they have lumps in them.