San José State University

applet-magic.com
Thayer Watkins
Silicon Valley
USA

The Distribution Functions of the Sample Maximum as a Function of Sample Size

Let fX(x) be the probability density function of a random variable X and let FX(x) be the cumulative probability function; i.e., FX(x) = ∫−∞^x fX(z)dz. The cumulative probability function FX(x) is the probability that the random variable X is less than or equal to x.

Let Y be the maximum of n observations of X. The cumulative probability distribution FY(y) is the probability that the maximum is less than or equal to y. Surprisingly, this is very simple to calculate: it is simply the probability that all n sample observations are less than or equal to y; i.e.,

FY(y) = [FX(y)]^n
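The relation FY(y) = [FX(y)]^n can be checked by simulation. The following Python sketch uses a standard normal X (an arbitrary choice made purely for illustration, as is the trial count) and compares the empirical frequency with [FX(y)]^n:

```python
import random
import statistics

# Monte Carlo check of F_Y(y) = [F_X(y)]^n.
# X is taken to be standard normal purely for illustration.
random.seed(0)
n, trials, y = 5, 100_000, 1.0

hits = sum(max(random.gauss(0.0, 1.0) for _ in range(n)) <= y
           for _ in range(trials))
empirical = hits / trials
theoretical = statistics.NormalDist(0.0, 1.0).cdf(y) ** n

print(f"empirical   = {empirical:.4f}")
print(f"theoretical = {theoretical:.4f}")  # the two should agree to about two decimals
```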

Consider as an example the log-normal distribution: the distribution such that the random variable's natural logarithm is normally distributed with mean μ and standard deviation σ:

fX(x) = C*exp(−(ln(x)−μ)²/(2σ²))/(xσ) for x>0

The constant C must be chosen such that the limit of the cumulative probability distribution is 1 as x→+∞. The limit of fX(x) as x→0 is 0.

The cumulative distribution is zero for x≤0.

Then

FX(x) = C∫0^x exp(−(ln(z)−μ)²/(2σ²))/(zσ) dz

or, equivalently,

FX(x) = C∫0^x (exp(−(ln(z)−μ)²/(2σ²))/σ)(dz/z)

Changing the variable of integration to y=ln(z), so that dz/z becomes dy, turns the limits of integration from 0 and x for z into −∞ and ln(x) for y:

FX(x) = C∫−∞^ln(x) (exp(−(y−μ)²/(2σ²))/σ) dy

A further change of variable to s=(y−μ)/(σ√2), for which dy=σ√2·ds, yields

FX(x) = C√2∫−∞^((ln(x)−μ)/(σ√2)) exp(−s²) ds

The integral (2/√π)∫0^r exp(−s²)ds is called the error function, erf(r). From this definition erf(0)=0 and erf(+∞)=+1. For a negative argument r, erf(r)=−erf(|r|), so erf(−∞)=−1. Some implementations of the error function are not defined for negative arguments; the formulas below take this into account.
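The odd symmetry erf(−r)=−erf(|r|) is all that is needed to extend an error function defined only for non-negative arguments. A minimal Python sketch of such a wrapper (the name erf_signed is invented here):

```python
import math

def erf_signed(r: float) -> float:
    """erf(r) for any real r, computed from erf on |r| via the odd
    symmetry erf(-r) = -erf(r) noted in the text."""
    if r >= 0:
        return math.erf(r)
    return -math.erf(-r)

print(erf_signed(0.0))                     # 0.0
print(erf_signed(-1.0) == -math.erf(1.0))  # True
```

Python's own math.erf already accepts negative arguments, so the wrapper only matters when porting to an environment that does not.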

Thus the cumulative probability distribution for a log-normal distribution is

FX(x) = (2π)^½·C·[(1/2) + (1/2)·sgn((ln(x)−μ)/(σ√2))·erf(|(ln(x)−μ)/(σ√2)|)]

where sgn(z) is the sign of z; i.e., sgn(z)=+1 if z>0, sgn(z)=−1 if z<0, and sgn(0)=0.

For the limit of FX(x) as x→+∞ to be 1, C must equal (2π)^−½; i.e., C=1/√(2π), so that (2π)^½·C=1 and

FX(x) = (1/2)[1 + sgn((ln(x)−μ)/(σ√2))·erf(|(ln(x)−μ)/(σ√2)|)]
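The log-normal cumulative distribution expressed through the error function can be transcribed directly into Python and cross-checked against the normal distribution of ln(x). This is only a sketch; the defaults μ=0.1 and σ=1.0 are borrowed from the graphs described later in the text:

```python
import math
import statistics

def sgn(z: float) -> int:
    # sign function as defined in the text
    return (z > 0) - (z < 0)

def lognormal_cdf(x: float, mu: float = 0.1, sigma: float = 1.0) -> float:
    """F_X(x) = (1/2)[1 + sgn(r)*erf(|r|)] with r = (ln x - mu)/(sigma*sqrt 2)."""
    if x <= 0:
        return 0.0
    r = (math.log(x) - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + sgn(r) * math.erf(abs(r)))

# Cross-check: F_X(x) must equal the normal CDF of ln(x)
check = statistics.NormalDist(0.1, 1.0).cdf(math.log(2.0))
print(lognormal_cdf(2.0), check)  # the two values should coincide
```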

The cumulative probability distribution for the maximum of a sample of size n from a population with a log-normal distribution is

FY(y) = [(1/2)(1 + sgn((ln(y)−μ)/(σ√2))·erf(|(ln(y)−μ)/(σ√2)|))]^n
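A short Python sketch of this distribution of the sample maximum (again taking μ=0.1 and σ=1.0 from the text as defaults; the function names are invented here):

```python
import math

def lognormal_cdf(x: float, mu: float = 0.1, sigma: float = 1.0) -> float:
    # F_X for the log-normal distribution; Python's math.erf accepts
    # negative arguments, so the sgn/|.| splitting in the text is not needed here.
    if x <= 0:
        return 0.0
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def sample_max_cdf(y: float, n: int, mu: float = 0.1, sigma: float = 1.0) -> float:
    """F_Y(y) = [F_X(y)]^n, the distribution of the maximum of n observations."""
    return lognormal_cdf(y, mu, sigma) ** n

# At a fixed y, F_Y(y) falls as n grows: a larger sample is less likely
# to have its maximum below y.
print(sample_max_cdf(2.0, 1), sample_max_cdf(2.0, 10))
```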

The probability density function fY(y) is then the derivative of FY(y). The following graphs are based upon the computation of FY(y) for a number of different sample sizes for a log-normal distribution with μ=0.1 and σ=1.0. The first graph shown is of the standard deviation of the sample maximum.

The interesting aspect of this graph is that the standard deviation reaches a maximum at a sample size of 47 and declines thereafter.

The expected value of the sample maximum increases indefinitely with sample size, as shown.

The dependence of the expected value of the sample maximum on sample size is approximately logarithmic for sample sizes of 10 and above, as shown below.
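The qualitative behaviour of the mean and standard deviation of the sample maximum can be illustrated by Monte Carlo simulation. This is only a rough sketch, not the text's exact computation from FY(y): the trial count is arbitrary, and at this sample size the location of the peak in the standard deviation is noisy.

```python
import random
import statistics

random.seed(1)
MU, SIGMA, TRIALS = 0.1, 1.0, 4000  # mu, sigma from the text; TRIALS is arbitrary

def max_mean_stdev(n: int):
    """Monte Carlo estimates of the mean and standard deviation of the
    maximum of n log-normal observations."""
    maxima = [max(random.lognormvariate(MU, SIGMA) for _ in range(n))
              for _ in range(TRIALS)]
    return statistics.mean(maxima), statistics.stdev(maxima)

# The mean keeps growing with n (roughly like log n for large n),
# while the standard deviation first rises and then slowly declines.
for n in (1, 10, 47, 100, 400):
    m, s = max_mean_stdev(n)
    print(f"n = {n:4d}   mean = {m:7.3f}   stdev = {s:6.3f}")
```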

Conclusion

For observations with a lognormal distribution the distribution of the sample maximum has an expected value which is approximately linear in the logarithm of the sample size. The standard deviation of the distribution rapidly increases to a maximum and declines slowly thereafter.

Appendix

This appendix provides the conventional derivation of the probability distribution of the sample maximum. The probability of getting (n−1) observations in a row with values less than or equal to y is [FX(y)]^(n−1). The probability density for the remaining observation having the value y is fX(y). There are n different positions in the sample that the maximum can occupy, so the product of these factors must be multiplied by n. Thus

fY(y) = n·[FX(y)]^(n−1)·fX(y)
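The conventional result fY(y) = n·[FX(y)]^(n−1)·fX(y) can be checked numerically against the derivative of FY(y) = [FX(y)]^n. A sketch in Python, using the log-normal case with the text's μ=0.1 and σ=1.0 and a central finite difference:

```python
import math

MU, SIGMA = 0.1, 1.0  # the parameters used in the text

def f_x(x: float) -> float:
    # log-normal density, with C = 1/sqrt(2*pi)
    return (math.exp(-(math.log(x) - MU) ** 2 / (2.0 * SIGMA ** 2))
            / (x * SIGMA * math.sqrt(2.0 * math.pi)))

def F_x(x: float) -> float:
    # log-normal cumulative distribution via the error function
    return 0.5 * (1.0 + math.erf((math.log(x) - MU) / (SIGMA * math.sqrt(2.0))))

def f_y(y: float, n: int) -> float:
    """Density of the sample maximum: n [F_X(y)]^(n-1) f_X(y)."""
    return n * F_x(y) ** (n - 1) * f_x(y)

# Compare with the numerical derivative of F_Y(y) = [F_X(y)]^n
n, y, h = 10, 2.0, 1e-6
numeric = (F_x(y + h) ** n - F_x(y - h) ** n) / (2.0 * h)
print(f_y(y, n), numeric)  # should agree to many decimal places
```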