Per-comparison error rate


PCER stands for per-comparison error rate, which describes the probability of a Type I error (a false positive) for an individual comparison when no formal multiple hypothesis testing correction is applied.[1] When many hypotheses are tested at once, some tests will produce false positives by chance alone; statisticians therefore use the Bonferroni correction, the false discovery rate, and other methods to control the probability that a true null hypothesis is mistakenly reported as a positive result.
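One way to formalize this, following the framework of the cited Benjamini–Hochberg paper and assuming m hypotheses are tested of which V true null hypotheses are incorrectly rejected, is to define the per-comparison error rate as the expected proportion of false rejections among all comparisons:

    PCER = E\!\left(\frac{V}{m}\right) = \frac{E(V)}{m}

Under these assumptions, if each individual test is carried out at significance level α with no correction, the PCER does not exceed α, even though the probability of at least one false positive across all m tests (the family-wise error rate) can be considerably larger.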

References

  1. Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B (Methodological). 57 (1): 289–300. MR 1325392.