L-estimator

Simple L-estimators can be visually estimated from a box plot, and include interquartile range, midhinge, range, mid-range, and trimean.

In statistics, an L-estimator is an estimator which is an L-statistic – a linear combination of order statistics of the measurements. This can be as little as a single point, as in the median (of an odd number of values), or as many as all points, as in the mean.

The main benefits of L-estimators are that they are often extremely simple, and often robust statistics: assuming sorted data, they are very easy to calculate and interpret, and are often resistant to outliers. They are thus useful in robust statistics, as descriptive statistics, in statistics education, and when computation is difficult. However, they are inefficient, and in modern robust statistics M-estimators are preferred, although these are computationally much more difficult. In many circumstances L-estimators are reasonably efficient, and thus adequate for initial estimation.

Examples

A basic example is the median. Given $n$ values $x_1, \ldots, x_n$, write $x_{(1)} \le \dots \le x_{(n)}$ for the order statistics. If $n$ is odd, the median equals $x_{((n+1)/2)}$, the $\tfrac{n+1}{2}$-th order statistic; if $n$ is even, it is the average of two order statistics: $\tfrac{1}{2}\bigl(x_{(n/2)} + x_{(n/2+1)}\bigr)$. These are both linear combinations of order statistics, and the median is therefore a simple example of an L-estimator.
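For illustration, the definition can be written as a minimal Python sketch (the function name is illustrative, not a standard one):

    def median_from_order_statistics(values):
        """Median as a linear combination of order statistics."""
        x = sorted(values)                      # order statistics x_(1) <= ... <= x_(n)
        n = len(x)
        if n % 2 == 1:
            return x[n // 2]                    # the single middle order statistic
        return (x[n // 2 - 1] + x[n // 2]) / 2  # average of the two middle order statistics

    print(median_from_order_statistics([7, 1, 3, 5, 9]))      # 5
    print(median_from_order_statistics([7, 1, 3, 5, 9, 11]))  # 6.0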

A more detailed list of examples includes the following (several of these are computed in the sketch after the list):

  • with a single point: the maximum, the minimum, or any single order statistic or quantile;
  • with one or two points: the median;
  • with two points: the mid-range, the range, the midsummary (trimmed mid-range, including the midhinge), and the trimmed range (including the interquartile range and interdecile range);
  • with three points: the trimean;
  • with a fixed fraction of the points: the trimmed mean (including the interquartile mean) and the Winsorized mean;
  • with all points: the mean.
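Several of these can be computed together from the order statistics, as in the following minimal Python sketch; quantile conventions vary between textbooks and software, and the method='inclusive' choice below is only one possibility:

    import statistics

    def l_estimator_examples(values):
        """A few common L-estimators computed from a sorted sample."""
        x = sorted(values)
        q1, q2, q3 = statistics.quantiles(x, n=4, method='inclusive')  # quartiles
        return {
            'minimum': x[0],
            'maximum': x[-1],
            'median': q2,
            'mid-range': (x[0] + x[-1]) / 2,
            'range': x[-1] - x[0],
            'midhinge': (q1 + q3) / 2,
            'interquartile range': q3 - q1,
            'trimean': (q1 + 2 * q2 + q3) / 4,
            'mean': sum(x) / len(x),
        }

    for name, value in l_estimator_examples([1, 2, 3, 4, 5, 6, 7, 8, 9]).items():
        print(name, value)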

Note that some of these (such as median, or mid-range) are measures of central tendency, and are used as estimators for a location parameter, such as the mean of a normal distribution, while others (such as range or trimmed range) are measures of statistical dispersion, and are used as estimators of a scale parameter, such as the standard deviation of a normal distribution.

L-estimators can also measure the shape of a distribution, beyond location and scale. For example, the midhinge minus the median is a 3-term L-estimator that measures the skewness, and other differences of midsummaries give measures of asymmetry at different points in the tail.[1]
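For example, a minimal sketch of the midhinge-minus-median skewness measure (using the same illustrative quantile convention as above):

    import statistics

    def midsummary_skewness(values):
        """Midhinge minus median: zero for a symmetric sample,
        positive when the middle half of the data leans to the right."""
        q1, q2, q3 = statistics.quantiles(sorted(values), n=4, method='inclusive')
        return (q1 + q3) / 2 - q2

    print(midsummary_skewness([1, 2, 3, 7, 15]))  # 1.5 (right-skewed)
    print(midsummary_skewness([1, 2, 3, 4, 5]))   # 0.0 (symmetric)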

Sample L-moments are L-estimators for the population L-moments, and have rather complex expressions. L-moments are generally treated separately; see that article for details.

Robustness

L-estimators are often statistically resistant, having a high breakdown point. This is defined as the fraction of the measurements which can be arbitrarily changed without causing the resulting estimate to tend to infinity (i.e., to "break down"). The breakdown point of an L-estimator is determined by the order statistic it uses that is closest to the minimum or maximum: for instance, the median has a breakdown point of 50% (the highest possible), and an n% trimmed or Winsorized mean has a breakdown point of n%.

Not all L-estimators are robust: if an L-estimator includes the minimum or maximum, it has a breakdown point of 0%. Such non-robust L-estimators include the minimum, maximum, mean, and mid-range. Their trimmed equivalents are robust, however.
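A small numerical sketch of breakdown behaviour (the sample values are arbitrary): replacing a single observation with a wild value drags the mean and mid-range arbitrarily far, while the median is unaffected.

    import statistics

    def mid_range(values):
        x = sorted(values)
        return (x[0] + x[-1]) / 2

    sample = [9.8, 9.9, 10.0, 10.1, 10.2]
    corrupted = sample[:-1] + [1e6]   # one observation replaced by an outlier

    for name, estimator in [('mean', statistics.mean),
                            ('mid-range', mid_range),
                            ('median', statistics.median)]:
        print(name, estimator(sample), estimator(corrupted))
    # mean and mid-range break down (0% breakdown point);
    # the median is unchanged here (50% breakdown point).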

Robust L-estimators used to measure dispersion, such as the IQR, provide robust measures of scale.

Applications

In practical use in robust statistics, L-estimators have been replaced by M-estimators, which provide robust statistics that also have high relative efficiency, at the cost of being much more computationally complex and opaque.

While L-estimators lack efficiency compared with other estimators, they possess certain advantages, not least of which is simplicity. This simplicity means that they are easily interpreted and visualized, and makes them suited for descriptive statistics and statistics education; many can even be computed mentally from a five-number summary or seven-number summary, or visualized from a box plot.

Assuming sorted data, L-estimators involving only a few points can be calculated with far fewer mathematical operations than efficient estimates.[2][3] Before the advent of electronic calculators and computers, they provided a useful way to extract much of the information from a sample with minimal labour. They remained in practical use through the early and mid-20th century, when automated sorting of punch-card data was possible but computation remained difficult,[2] and they are still of use today for estimates given a list of numerical values in non-machine-readable form, where data input is more costly than manual sorting.

L-estimators are often much more robust than maximally efficient conventional methods – the median is maximally statistically resistant, having a 50% breakdown point, and the X% trimmed mid-range has an X% breakdown point, while the sample mean (which is maximally efficient) is minimally robust, breaking down for a single outlier.

Efficiency

In terms of efficiency, given a sample of a normally distributed numerical parameter, the arithmetic mean (average) of the population can be estimated with maximum efficiency by computing the sample mean: adding all the members of the sample and dividing by the number of members.

However, for a large data set (over 100 points) from a symmetric population, the mean can be estimated reasonably efficiently relative to the best estimate by L-estimators. Using a single point, this is done by taking the median of the sample, with no calculations required (other than sorting); this yields an efficiency of 64% or better (for all n). Using two points, a simple estimate is the midhinge (the 25% trimmed mid-range), but a more efficient estimate is the 29% trimmed mid-range, that is, averaging the two values 29% of the way in from the smallest and the largest values: the 29th and 71st percentiles; this has an efficiency of about 81%.[3] For three points, the trimean (average of median and midhinge) can be used, though the average of the 20th, 50th, and 80th percentiles yields 88% efficiency. Using further points yields higher efficiency, though it is notable that only three points are needed for very high efficiency.
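These comparisons can be illustrated with a short simulation in Python (the percentile interpolation is one arbitrary convention, and the quoted efficiencies are the asymptotic values from the text, not values measured by this single run):

    import random
    import statistics

    def percentile(sorted_x, p):
        """Percentile by linear interpolation on a sorted sample (one of many conventions)."""
        i = p * (len(sorted_x) - 1)
        lo = int(i)
        hi = min(lo + 1, len(sorted_x) - 1)
        return sorted_x[lo] + (i - lo) * (sorted_x[hi] - sorted_x[lo])

    random.seed(0)
    x = sorted(random.gauss(0.0, 1.0) for _ in range(1000))  # simulated normal sample, true mean 0

    estimates = {
        'sample mean (maximally efficient)': sum(x) / len(x),
        'median (~64% efficiency)': statistics.median(x),
        '29% trimmed mid-range (~81%)': (percentile(x, 0.29) + percentile(x, 0.71)) / 2,
        '20/50/80 percentile average (~88%)':
            (percentile(x, 0.20) + percentile(x, 0.50) + percentile(x, 0.80)) / 3,
    }
    for name, value in estimates.items():
        print(name, round(value, 4))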

For estimating the standard deviation of a normal distribution, the scaled interdecile range gives a reasonably efficient estimator, though taking instead the 7% trimmed range (the difference between the 7th and 93rd percentiles) and dividing by 3 (corresponding to the fact that 86% of the data of a normal distribution falls within 1.5 standard deviations of the mean) yields an estimate of about 65% efficiency.[3]
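A corresponding sketch for this scale estimate (again with an arbitrary percentile convention and an arbitrary simulated sample):

    import random
    import statistics

    def percentile(sorted_x, p):
        i = p * (len(sorted_x) - 1)
        lo = int(i)
        hi = min(lo + 1, len(sorted_x) - 1)
        return sorted_x[lo] + (i - lo) * (sorted_x[hi] - sorted_x[lo])

    random.seed(1)
    x = sorted(random.gauss(0.0, 1.0) for _ in range(1000))  # true standard deviation is 1

    # 7% trimmed range divided by 3 as an estimate of the standard deviation
    sd_from_trimmed_range = (percentile(x, 0.93) - percentile(x, 0.07)) / 3
    print(sd_from_trimmed_range, statistics.stdev(x))  # both should be close to 1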

For small samples, L-estimators are also relatively efficient: the midsummary of the 3rd point from each end has an efficiency of around 84% for samples of size about 10, and the range divided by $\sqrt{n}$ has reasonably good efficiency for sizes up to 20, though this drops with increasing n and the scale factor can be improved (efficiency 85% for 10 points). Other heuristic estimators for small samples include the range over n (for the standard error), and the range squared over the median (for the chi-squared of a Poisson distribution).[3]
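A rough sketch of these small-sample rules of thumb for scale (sample and seed are arbitrary; as noted above, the scale factors are only approximate):

    import math
    import random
    import statistics

    random.seed(2)
    x = sorted(random.gauss(0.0, 1.0) for _ in range(10))  # small normal sample, true sd 1
    n = len(x)
    sample_range = x[-1] - x[0]

    sd_estimate = sample_range / math.sqrt(n)  # quick estimate of the standard deviation
    se_estimate = sample_range / n             # quick estimate of the standard error of the mean

    print(sd_estimate, statistics.stdev(x))
    print(se_estimate, statistics.stdev(x) / math.sqrt(n))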

References

  1. ^ Velleman & Hoaglin 1981.
  2. ^ a b Mosteller 2006.
  3. ^ a b c d Evans 1955, Appendix G: Inefficient statistics, pp. 902–904.
  • Evans, Robley D. (1955). The Atomic Nucleus. New York: McGraw-Hill.
  • Fraiman, Ricardo; Meloche, Jean; García-Escudero, Luis; Gordaliza, Alfonso; He, Xuming; Maronna, Ricardo (1999). "Multivariate L-estimation". TEST (2). Springer Berlin / Heidelberg: 255–317. doi:10.1007/BF02595872.
  • Huber, Peter J. (2004). Robust statistics. New York: Wiley-Interscience. ISBN 0-471-65072-2.
  • Mosteller, Frederick (2006). "On Some Useful 'Inefficient' Statistics". Selected Papers of Frederick Mosteller. New York: Springer. doi:10.1007/978-0-387-44956-2_4.
  • Shao, Jun (2003). Mathematical statistics. Berlin: Springer-Verlag. pp. sec. 5.2.2. ISBN 0-387-95382-5.
  • Velleman, Paul F.; Hoaglin, David C. (1981). Applications, Basics, and Computing of Exploratory Data Analysis. Boston: Duxbury Press.