Pivotal quantity
In statistics, a pivotal quantity is a function of observations whose distribution does not depend on unknown parameters. Note that a pivotal quantity need not be a statistic – the function and its value can depend on parameters of the model, but its distribution must not. If it is a statistic, then it is known as an ancillary statistic.
More formally, given an independent and identically distributed sample X = (X₁, …, Xₙ) from a distribution with parameter θ, a function g(X, θ) is a pivotal quantity if the distribution of g(X, θ) is independent of θ.
It is relatively easy to construct pivots for location and scale parameters: for the former we form differences, for the latter ratios.
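As a minimal illustrative sketch (the sampling distributions chosen here are assumptions, not from the source), the following simulation checks that a difference-based pivot for a location parameter and a ratio-based pivot for a scale parameter keep the same distribution as the parameter changes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 20_000

def location_pivot_sd(mu):
    """SD of the pivot Xbar - mu for N(mu, 1) samples; should not depend on mu."""
    x = rng.normal(mu, 1.0, size=(reps, n))
    return (x.mean(axis=1) - mu).std()

def scale_pivot_mean(sigma):
    """Mean of the pivot Xbar / sigma for Exponential(mean=sigma) samples;
    should not depend on sigma."""
    x = rng.exponential(sigma, size=(reps, n))
    return (x.mean(axis=1) / sigma).mean()

print(location_pivot_sd(0.0), location_pivot_sd(50.0))   # both near 1/sqrt(n)
print(scale_pivot_mean(1.0), scale_pivot_mean(10.0))     # both near 1
```

Whatever the true μ or σ, the pivot's empirical distribution is the same, which is exactly the property exploited below.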
Pivotal quantities provide one method of constructing confidence intervals, and the use of pivotal quantities improves performance of the bootstrap. In the form of ancillary statistics, they can be used to construct frequentist prediction intervals (predictive confidence intervals).
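To sketch how a pivot yields a confidence interval (assuming the standard t pivot √n(X̄ − μ)/s ~ t with n − 1 degrees of freedom, developed in the Normal distribution example below; the function name is illustrative):

```python
import numpy as np
from scipy import stats

def t_confidence_interval(x, level=0.95):
    """Invert the pivot T = sqrt(n)*(Xbar - mu)/s ~ t_{n-1}:
    P(-t* <= T <= t*) = level  =>  mu in Xbar +/- t* s / sqrt(n)."""
    n = len(x)
    xbar, s = x.mean(), x.std(ddof=1)
    tcrit = stats.t.ppf(0.5 + level / 2, df=n - 1)
    half = tcrit * s / np.sqrt(n)
    return xbar - half, xbar + half

rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=25)       # simulated data, true mean 10
lo, hi = t_confidence_interval(x)
print(f"95% CI for the mean: [{lo:.3f}, {hi:.3f}]")
```

Because the pivot's distribution is parameter-free, the same critical value t* works for every (μ, σ), which is what makes the inversion valid.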
Examples
Normal distribution
Given n independent, identically distributed (i.i.d.) observations X = (X₁, …, Xₙ) from the normal distribution N(μ, σ²) with unknown mean μ and variance σ², a pivotal quantity can be obtained from the function:

g(x, X) = √n (x − X̄) / s,

where

X̄ = (X₁ + … + Xₙ) / n

and

s² = Σ (Xᵢ − X̄)² / (n − 1)

are unbiased estimates of μ and σ², respectively. The function g(x, X) is the Student's t-statistic for a new value x, to be drawn from the same population as the already observed set of values X.

Using x = μ, the function g(μ, X) becomes a pivotal quantity, which is also distributed by the Student's t-distribution with ν = n − 1 degrees of freedom. As required, even though μ appears as an argument to the function g, the distribution of g(μ, X) does not depend on the parameters μ or σ of the normal probability distribution that governs the observations X₁, …, Xₙ.
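This parameter-independence can be checked by simulation: a quick sketch comparing the empirical 97.5% quantile of g(μ, X) against the t quantile with n − 1 degrees of freedom, for two very different (μ, σ) pairs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 10, 50_000

def t_pivot(mu, sigma):
    """g(mu, X) = sqrt(n) * (Xbar - mu) / s, simulated over many samples."""
    x = rng.normal(mu, sigma, size=(reps, n))
    xbar = x.mean(axis=1)
    s = x.std(axis=1, ddof=1)
    return np.sqrt(n) * (xbar - mu) / s

# The empirical distribution matches t_{n-1} for any (mu, sigma).
for mu, sigma in [(0.0, 1.0), (100.0, 25.0)]:
    g = t_pivot(mu, sigma)
    print(np.quantile(g, 0.975), stats.t.ppf(0.975, df=n - 1))
```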
This can be used to compute a prediction interval for the next observation Xₙ₊₁; see Prediction interval: Normal distribution.
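A short sketch of that prediction interval, using the standard fact that (Xₙ₊₁ − X̄)/(s √(1 + 1/n)) is t-distributed with n − 1 degrees of freedom (the data values here are illustrative):

```python
import numpy as np
from scipy import stats

# Prediction interval for the next observation from a normal sample:
# (X_{n+1} - Xbar) / (s * sqrt(1 + 1/n)) ~ t_{n-1} is pivotal.
x = np.array([4.1, 5.2, 4.8, 5.5, 4.9, 5.0, 4.6])   # illustrative data
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)
tcrit = stats.t.ppf(0.975, df=n - 1)
half = tcrit * s * np.sqrt(1 + 1 / n)
lo, hi = xbar - half, xbar + half
print(f"95% prediction interval: [{lo:.3f}, {hi:.3f}]")
```

Note the extra √(1 + 1/n) factor relative to a confidence interval for the mean: a prediction interval must cover the variability of the new observation itself, not just the uncertainty in X̄.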
Bivariate normal distribution
In more complicated cases, it is impossible to construct exact pivots. However, having approximate pivots improves convergence to asymptotic normality.
Suppose a sample of size n of vectors (Xᵢ, Yᵢ)′ is taken from a bivariate normal distribution with unknown correlation ρ.
An estimator of ρ is the sample (Pearson, moment) correlation

r = Σ (Xᵢ − X̄)(Yᵢ − Ȳ) / ((n − 1) s_X s_Y),

where s_X², s_Y² are the sample variances of X and Y. Being a U-statistic, the estimator r has an asymptotically normal distribution:

√n (r − ρ) / (1 − ρ²) →d N(0, 1).
However, the variance-stabilizing transformation

z = tanh⁻¹ r = ½ ln((1 + r) / (1 − r)),

known as Fisher's z-transformation of the correlation coefficient, makes the distribution of z asymptotically independent of the unknown parameters:

√n (z − ζ) →d N(0, 1),

where ζ = tanh⁻¹ ρ is the corresponding population parameter. For finite sample sizes n, the random variable z has a distribution closer to normal than that of r. An even closer approximation to the standard normal distribution is obtained by using the more accurate approximation for the exact variance, Var(z) ≈ 1/(n − 3).
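A minimal sketch of the standard use of this approximate pivot – an interval for ρ built by forming a normal interval for ζ = tanh⁻¹ ρ with variance 1/(n − 3) and back-transforming (the function name and simulated data are illustrative assumptions):

```python
import numpy as np
from scipy import stats

def correlation_ci(x, y, level=0.95):
    """Approximate CI for rho via Fisher's z: z = arctanh(r) is roughly
    N(arctanh(rho), 1/(n-3)), so back-transform a normal CI for zeta."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = np.arctanh(r)
    half = stats.norm.ppf(0.5 + level / 2) / np.sqrt(n - 3)
    return np.tanh(z - half), np.tanh(z + half)

rng = np.random.default_rng(2)
cov = [[1.0, 0.6], [0.6, 1.0]]          # true rho = 0.6
xy = rng.multivariate_normal([0.0, 0.0], cov, size=200)
lo, hi = correlation_ci(xy[:, 0], xy[:, 1])
print(f"95% CI for rho: [{lo:.3f}, {hi:.3f}]")
```

The tanh back-transform keeps the interval inside (−1, 1), which a naive normal interval for r itself would not guarantee.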
References
Shao, J. (2003). Mathematical Statistics. Springer, New York. ISBN 978-0-387-95382-3 (Section 7.1).