San José State University
applet-magic.com
Thayer Watkins
Silicon Valley & Tornado Alley, USA
The Distribution of the Sample Maximum as a Function of Sample Size
Let f_{X}(x) be the probability density function of a random variable X and let F_{X}(x) be its cumulative probability function; i.e., F_{X}(x)=∫_{−∞}^{x}f_{X}(z)dz. The cumulative probability function F_{X}(x) is the probability that the random variable X is less than or equal to x.
Let Y be the maximum of n observations of X. The cumulative probability distribution F_{Y}(y) is the probability that the maximum is less than or equal to y. Surprisingly this is very simple to calculate. It is simply the probability that all n sample observations are less than or equal to y; i.e.,

F_{Y}(y) = (F_{X}(y))^{n}
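The relation F_{Y}(y) = (F_{X}(y))^{n} is easy to confirm by simulation. The following sketch (Python, with an exponential distribution standing in for F_{X} purely for illustration) compares the empirical frequency of the event {max ≤ y} with (F_{X}(y))^{n}:

```python
import math
import random

random.seed(0)

n = 5            # sample size
y = 1.0          # threshold
trials = 200_000

# Exponential(1) observations: F_X(y) = 1 - exp(-y) (illustrative stand-in)
hits = 0
for _ in range(trials):
    sample_max = max(random.expovariate(1.0) for _ in range(n))
    if sample_max <= y:
        hits += 1

empirical = hits / trials
theoretical = (1.0 - math.exp(-y)) ** n   # F_Y(y) = (F_X(y))^n
print(empirical, theoretical)             # the two agree to Monte Carlo accuracy
```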
Consider for an example the log-normal distribution; the distribution such that the random variable's natural logarithm is normally distributed with a mean of μ and a standard deviation of σ:

f_{X}(x) = (C/x)exp(−(ln(x)−μ)²/(2σ²)) for x>0
The constant C must be chosen such that the limit of the cumulative probability distribution is 1 as x→+∞. The limit of f_{X}(x) as x→0 is 0.
The cumulative distribution is zero for x≤0.
Then

F_{X}(x) = ∫_{0}^{x}(C/z)exp(−(ln(z)−μ)²/(2σ²))dz

and with the substitution s=(ln(z)−μ)/(σ2^{½}) this becomes

F_{X}(x) = σ2^{½}C∫_{−∞}^{(ln(x)−μ)/(σ2^{½})}exp(−s²)ds
The integral (2/π^{½})∫_{0}^{r}exp(−s²)ds is called the error function erf(r). From this definition erf(0)=0, and erf(+∞)=+1. When the argument r is negative, erf(r)=−erf(|r|), and thus erf(−∞)=−1. Some implementations of the error function are not defined for negative arguments; in the following this is taken into account.
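These properties of erf can be checked directly. The sketch below (using Python's math.erf) verifies the definition by numerical integration and illustrates the odd-symmetry rule used here to handle negative arguments:

```python
import math

# Numerically integrate (2/sqrt(pi)) * integral_0^1 exp(-s^2) ds (trapezoid rule)
steps = 10_000
h = 1.0 / steps
total = 0.5 * (math.exp(0.0) + math.exp(-1.0))   # endpoint terms
for i in range(1, steps):
    s = i * h
    total += math.exp(-s * s)
erf_1_numeric = (2.0 / math.sqrt(math.pi)) * h * total

print(erf_1_numeric, math.erf(1.0))   # the two agree closely

# erf(0) = 0, and odd symmetry handles negative arguments: erf(-r) = -erf(r)
print(math.erf(0.0), math.erf(-1.5), -math.erf(1.5))
# erf(r) -> 1 as r -> +infinity
print(math.erf(6.0))
```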
Thus the cumulative probability distribution for a log-normal distribution is

F_{X}(x) = σ(π/2)^{½}C[1 + sgn(ln(x)−μ)erf(|ln(x)−μ|/(σ2^{½}))]

where sgn(z) is the sign of z; i.e., if z>0 then sgn(z)=+1, if z<0 then sgn(z)=−1, and sgn(0)=0. For the limit of F_{X}(x) as x→+∞ to be 1, C has to be 1/(σ(2π)^{½}), and thus σ(π/2)^{½}C is equal to (1/2); i.e.,

F_{X}(x) = (1/2)[1 + sgn(ln(x)−μ)erf(|ln(x)−μ|/(σ2^{½}))]
The cumulative probability distribution for the maximum of a sample of size n from a population with a log-normal distribution is then

F_{Y}(y) = (F_{X}(y))^{n} = {(1/2)[1 + sgn(ln(y)−μ)erf(|ln(y)−μ|/(σ2^{½}))]}^{n}
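For concreteness, here is a sketch in Python of F_{Y}(y) for the lognormal case, using math.erf with the sign convention above and checked against a direct simulation (μ=0.1 and σ=1.0 are the values used for the graphs in the text):

```python
import math
import random

MU, SIGMA = 0.1, 1.0   # parameters used for the graphs in the text

def lognormal_cdf(x, mu=MU, sigma=SIGMA):
    """F_X(x) = (1/2)[1 + sgn(ln x - mu) erf(|ln x - mu|/(sigma*sqrt(2)))]."""
    if x <= 0.0:
        return 0.0
    u = (math.log(x) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.copysign(math.erf(abs(u)), u))

def max_cdf(y, n):
    """F_Y(y) = (F_X(y))^n for the maximum of n observations."""
    return lognormal_cdf(y) ** n

# Direct simulation of the sample maximum
random.seed(1)
n, y, trials = 10, 3.0, 100_000
hits = sum(max(random.lognormvariate(MU, SIGMA) for _ in range(n)) <= y
           for _ in range(trials))
print(hits / trials, max_cdf(y, n))   # agree to Monte Carlo accuracy
```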
The probability density function f_{Y}(y) is then the derivative of F_{Y}(y). The following graphs are based upon the computation of F_{Y}(y) for a number of different sample sizes for a lognormal distribution with μ=0.1 and σ=1.0. The first graph shows the standard deviation of the sample maximum.
The interesting aspect of this graph is that the standard deviation reaches a maximum level at a sample size of 47 and thereafter declines.
The expected value of the sample maximum increases indefinitely with sample size, as shown.
The dependence of the expected value of the sample maximum on sample size is approximately logarithmic for sample sizes of 10 and above, as shown below.
For observations with a lognormal distribution the distribution of the sample maximum has an expected value which is approximately linear in the logarithm of the sample size. The standard deviation of the distribution rapidly increases to a maximum and declines slowly thereafter.
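The qualitative behavior described above can be reproduced numerically. The sketch below is a reimplementation under stated assumptions, not the author's original computation: it obtains the moments of the sample maximum by integrating against the density n·Φ(x)^{n−1}φ(x) after the substitution y = exp(μ+σx), where Φ and φ are the standard normal CDF and density.

```python
import math

MU, SIGMA = 0.1, 1.0   # the parameters used in the text

def phi(x):   # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def max_moment(n, k, lo=-8.0, hi=12.0, steps=20_000):
    """E[Y^k] for the maximum Y of n lognormal(MU, SIGMA) observations.
    With y = exp(MU + SIGMA*x) the moment integral becomes
    n * integral of exp(k*(MU + SIGMA*x)) * Phi(x)^(n-1) * phi(x) dx,
    evaluated here by the trapezoid rule."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(k * (MU + SIGMA * x)) * Phi(x) ** (n - 1) * phi(x)
    return n * h * total

for n in (10, 100, 1000):
    mean = max_moment(n, 1)
    sd = math.sqrt(max_moment(n, 2) - mean * mean)
    print(n, round(mean, 3), round(sd, 3))   # the mean grows with n
```

For n=1 the routine should reproduce the ordinary lognormal mean exp(μ+σ²/2), which provides a sanity check on the quadrature.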
This appendix provides the conventional derivation of the probability distribution of the sample maximum. The probability of getting (n−1) observations with values less than or equal to y is (F_{X}(y))^{n−1}. The probability density for one further observation having the value y is f_{X}(y). There are n possible positions of the y-value within the sample, so the product of these probabilities must be multiplied by n. Thus

f_{Y}(y) = n(F_{X}(y))^{n−1}f_{X}(y)

which is precisely the derivative of F_{Y}(y)=(F_{X}(y))^{n}.
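The appendix formula can be checked against a numerical derivative of F_{Y}(y)=(F_{X}(y))^{n}. The sketch below again uses an exponential distribution purely as an illustrative stand-in for F_{X}:

```python
import math

def F(x):   # CDF of an Exponential(1) variable, standing in for F_X
    return 1.0 - math.exp(-x)

def f(x):   # its density, standing in for f_X
    return math.exp(-x)

n, y, h = 7, 1.3, 1e-6
density = n * F(y) ** (n - 1) * f(y)                    # f_Y(y) per the appendix
derivative = (F(y + h) ** n - F(y - h) ** n) / (2 * h)  # central difference of F_Y
print(density, derivative)   # the two agree to numerical precision
```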