
Platt scaling


In machine learning, Platt scaling or Platt calibration is a way of transforming the outputs of a classification model into a probability distribution over classes. The method was invented by John Platt in the context of support vector machines,[1] replacing an earlier method by Vapnik, but can be applied to other classification models.[2] Platt scaling works by fitting a logistic regression model to a classifier's scores.

Description

Let f be a real-valued function used as a binary classifier: for an example x, it predicts a label y from the set {+1, −1} as y = sign(f(x)) (disregarding the possibility of a zero output for now). When a probability P(y=1|x) is required instead, but the model does not provide one (or gives poor probability estimates), Platt scaling can be used. The method produces probabilities

P(y=1|x) = 1 / (1 + exp(A·f(x) + B)),

i.e., a logistic transformation of the classifier scores f(x). Note that predictions can now be made according to y = 1 iff P(y=1|x) > ½; if B ≠ 0, the probability estimates contain a correction compared to the old decision function y = sign(f(x)).[3]
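As a concrete illustration, the following sketch (Python with NumPy) applies this logistic transformation to raw classifier scores; the values of A and B are hypothetical, as if they had already been fitted.

    import numpy as np

    def platt_probability(scores, A, B):
        # Map raw classifier scores f(x) to P(y=1|x) = 1 / (1 + exp(A*f(x) + B)).
        scores = np.asarray(scores, dtype=float)
        return 1.0 / (1.0 + np.exp(A * scores + B))

    # Hypothetical fitted values; A is typically negative, so that larger
    # (more confidently positive) scores map to probabilities closer to 1.
    print(platt_probability([-2.0, 0.0, 2.0], A=-1.5, B=0.1))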

The (scalar) parameters A and B are estimated using a maximum likelihood method. The training set for parameter optimization is typically the same as that for the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels y to target probabilities

t₊ = (N₊ + 1) / (N₊ + 2) for positive samples (y = 1), and
t₋ = 1 / (N₋ + 2) for negative samples (y = −1).

Here, N₊ and N₋ are the numbers of positive and negative samples, respectively. This transformation follows by applying Bayes' rule to a model of out-of-sample data that has a uniform prior over the labels.[1]
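A minimal sketch of this label transformation (Python/NumPy; the helper name platt_targets is chosen here only for illustration):

    import numpy as np

    def platt_targets(y):
        # Replace hard labels in {+1, -1} with Platt's smoothed target
        # probabilities t+ = (N+ + 1)/(N+ + 2) and t- = 1/(N- + 2).
        y = np.asarray(y)
        n_pos = np.sum(y == 1)
        n_neg = np.sum(y == -1)
        return np.where(y == 1, (n_pos + 1.0) / (n_pos + 2.0), 1.0 / (n_neg + 2.0))

    # Example: 3 positives and 2 negatives give targets 0.8 and 0.25.
    print(platt_targets([1, 1, 1, -1, -1]))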

Platt himself suggested using the Levenberg–Marquardt algorithm to optimize the parameters, but a Newton algorithm was later proposed that should be more numerically stable.[4]
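Neither of those particular optimizers is reproduced here, but the following sketch shows the objective involved: the cross-entropy between the smoothed targets above and the calibrated probabilities, minimized with SciPy's general-purpose minimizer (BFGS is chosen purely for illustration).

    import numpy as np
    from scipy.optimize import minimize

    def fit_platt(scores, y):
        # Estimate (A, B) by maximizing the likelihood of Platt's smoothed targets.
        scores = np.asarray(scores, dtype=float)
        y = np.asarray(y)
        n_pos, n_neg = np.sum(y == 1), np.sum(y == -1)
        t = np.where(y == 1, (n_pos + 1.0) / (n_pos + 2.0), 1.0 / (n_neg + 2.0))

        def neg_log_likelihood(params):
            A, B = params
            z = A * scores + B
            log_p = -np.logaddexp(0.0, z)     # log P(y=1|x)  = -log(1 + exp(z))
            log_q = -np.logaddexp(0.0, -z)    # log P(y=-1|x) = -log(1 + exp(-z))
            return -np.sum(t * log_p + (1.0 - t) * log_q)

        result = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
        return result.x  # fitted A, B

    # Example: fit on a few hand-made scores; A comes out negative, so larger
    # scores map to higher probabilities.
    A, B = fit_platt([-2.0, -1.0, 1.0, 2.0], [-1, -1, 1, 1])
    print(A, B)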

Analysis

Platt scaling has been shown to be effective for SVMs as well as other types of classification models, including boosted models and even naive Bayes classifiers, which produce distorted probability distributions. It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but has less of an effect with well-calibrated models such as logistic regression, multilayer perceptrons and random forests.[2]

An alternative approach to probability calibration is to fit an isotonic regression model to an ill-calibrated probability model. This has been shown to work better than Platt scaling, in particular when enough training data is available.[2]
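For reference, both calibration maps are available in scikit-learn through CalibratedClassifierCV; a minimal usage sketch on synthetic data (assuming scikit-learn is installed):

    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=1000, random_state=0)

    # method="sigmoid" is Platt scaling; method="isotonic" fits an
    # isotonic regression as the calibration map instead.
    platt = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5).fit(X, y)
    iso = CalibratedClassifierCV(LinearSVC(), method="isotonic", cv=5).fit(X, y)

    print(platt.predict_proba(X[:3]))
    print(iso.predict_proba(X[:3]))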

References

  1. ^ a b Platt, John (1999). "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods" (PDF). Advances in large margin classifiers. 10 (3): 61–74.
  2. ^ a b c Niculescu-Mizil, Alexandru; Caruana, Rich (2005). Predicting good probabilities with supervised learning (PDF). ICML.
  3. ^ Olivier Chapelle; Vladimir Vapnik; Olivier Bousquet; Sayan Mukherjee (2002). "Choosing multiple parameters for support vector machines" (PDF). Machine Learning. 46: 131–159.
  4. ^ Lin, Hsuan-Tien; Lin, Chih-Jen; Weng, Ruby C. (2007). "A note on Platt's probabilistic outputs for support vector machines" (PDF). Machine Learning. 68 (3): 267–276.