Per-comparison error rate


In statistics, the per-comparison error rate (PCER) is the probability of a Type I error (false positive) for an individual hypothesis test when no multiple hypothesis testing correction is applied.[1] When many hypotheses are tested simultaneously, each at some significance level, a number of tests are expected to produce false positives by chance alone; statisticians therefore use procedures such as the Bonferroni correction and false discovery rate control to limit the probability that true null hypotheses are incorrectly rejected.
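As a sketch of the standard formalization used in the multiple-testing literature (the symbols m, V, PCER and FWER follow common convention and are not defined in this revision of the article; they are shown here only for illustration): let m be the number of hypotheses tested and V the number of true null hypotheses that are incorrectly rejected. The per-comparison error rate is the expected fraction of false rejections among all tests, which can be contrasted with the family-wise error rate:

% m: total number of hypotheses tested
% V: number of true null hypotheses that are rejected (false positives)
\[
  \mathrm{PCER} \;=\; \frac{\mathbb{E}[V]}{m},
  \qquad
  \mathrm{FWER} \;=\; \Pr(V \ge 1).
\]
% Testing every hypothesis at level \alpha with no correction keeps the
% PCER at or below \alpha, but the FWER can be considerably larger.

Under this sketch, uncorrected testing of each hypothesis at level α controls the PCER at α, whereas procedures such as the Bonferroni correction are designed to control the more stringent family-wise error rate.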

References

  1. Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B. 57 (1): 289–300. MR 1325392.