Detection error tradeoff


A detection error tradeoff (DET) graph is a graphical plot of error rates for binary classification systems, plotting the false reject rate against the false accept rate.[1] The x- and y-axes are scaled non-linearly by their standard normal deviates, yielding tradeoff curves that are more linear than ROC curves and devoting most of the plot area to the critical operating region, where the differences between systems matter most.
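
The error-rate pairs that make up a DET curve are obtained by sweeping a decision threshold over a system's scores. A minimal sketch in Python (the names det_points, genuine, and impostor are illustrative, and higher scores are assumed to indicate the target class):

    import numpy as np

    def det_points(genuine, impostor):
        # Sweep a decision threshold over the pooled scores and
        # record the resulting error-rate pair at each setting.
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
        frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects
        return far, frr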

Axis warping

The normal deviate mapping (or normal quantile function, or inverse normal cumulative distribution) is given by the probit function, so that the horizontal axis is x = probit(Pfa) and the vertical axis is y = probit(Pfr), where Pfa and Pfr are the false-accept and false-reject rates.
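
In SciPy, for example, the probit function is available as the inverse cumulative distribution function of the standard normal distribution, norm.ppf (a standard correspondence, not something specific to this article):

    from scipy.stats import norm

    # probit is the inverse of the standard normal CDF (norm.ppf in SciPy)
    x = norm.ppf(0.01)  # probit of a 1% false-accept rate, about -2.326
    y = norm.ppf(0.05)  # probit of a 5% false-reject rate, about -1.645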

The probit mapping takes probabilities, which live in the unit interval [0, 1], to the extended real line [−∞, +∞]. Since this makes the axes infinitely long, the plot must be confined to some finite rectangle of interest.
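
Continuing the sketch above, the warped curve can be drawn with matplotlib and confined to a finite rectangle by clipping the axes; the 0.1%–50% range and the tick positions below are arbitrary illustrative choices:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    far, frr = det_points(genuine, impostor)  # from the earlier sketch

    fig, ax = plt.subplots()
    ax.plot(norm.ppf(far), norm.ppf(frr))  # warp both axes with probit

    # Confine the infinitely long probit axes to a finite rectangle,
    # here 0.1% to 50% on both error rates. Error rates of exactly
    # 0 or 1 map to -/+ infinity and fall outside the clipped view.
    ticks = np.array([0.001, 0.01, 0.05, 0.2, 0.5])
    ax.set_xlim(norm.ppf(0.001), norm.ppf(0.5))
    ax.set_ylim(norm.ppf(0.001), norm.ppf(0.5))
    ax.set_xticks(norm.ppf(ticks))
    ax.set_xticklabels(ticks)
    ax.set_yticks(norm.ppf(ticks))
    ax.set_yticklabels(ticks)
    ax.set_xlabel("false accept rate")
    ax.set_ylabel("false reject rate")
    plt.show()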

References

  1. ^ Martin, A. F. et al., "The DET Curve in Assessment of Detection Task Performance", Proc. Eurospeech '97, Rhodes, Greece, September 1997, Vol. 4, pp. 1899–1903.