Sequential minimal optimization


Sequential minimal optimization (SMO) is an algorithm for solving the optimization problem that arises during the training of support vector machines. It was invented by John Platt in 1998 at Microsoft Research.[1] SMO is widely used for training support vector machines and is implemented by the popular LIBSVM tool.[2][3]

The publication of the SMO algorithm in 1998 generated a lot of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers.[4]

Optimization problem

Consider a binary classification problem with a dataset (x1, y1), ..., (xn, yn), where xi is an input vector and yi ∈ {-1, +1} is the binary label corresponding to it. A soft-margin support vector machine is trained by solving an optimization problem whose dual is the following convex quadratic programming problem:

\[
\max_{\alpha} \; W(\alpha) = \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j K(x_i, x_j)\, \alpha_i \alpha_j,
\]

subject to:

\[
0 \le \alpha_i \le C, \qquad i = 1, 2, \ldots, n,
\]
\[
\sum_{i=1}^{n} y_i \alpha_i = 0,
\]

where C is an SVM hyperparameter and K(xi, xj) is the kernel function, both supplied by the user; the variables αi are Lagrange multipliers.

SMO is an iterative algorithm for solving this dual SVM optimization problem.
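
For concreteness, the dual objective W(α) above can be evaluated directly from a precomputed kernel (Gram) matrix. Here is a minimal sketch in Python; the helper name dual_objective and the NumPy formulation are illustrative assumptions, not part of the original article:

    import numpy as np

    def dual_objective(alpha, y, K):
        """W(alpha) = sum_i alpha_i
        - 1/2 * sum_{i,j} y_i y_j K(x_i, x_j) alpha_i alpha_j,
        where K is the precomputed n-by-n kernel (Gram) matrix."""
        v = alpha * y  # elementwise products alpha_i * y_i
        return alpha.sum() - 0.5 * v @ K @ v

SMO can be viewed as ascent on this objective along directions that change only two coordinates of α at a time.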

Algorithm

SMO breaks this large QP problem into a series of the smallest possible QP sub-problems, which are then solved analytically. Because of the linear equality constraint involving the Lagrange multipliers, the smallest possible sub-problem involves two such multipliers: at each step the algorithm picks a pair (αi, αj), optimizes the dual objective over that pair in closed form while holding all other multipliers fixed, and repeats until the Karush-Kuhn-Tucker (KKT) conditions are satisfied to within a tolerance.
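
The following is a minimal sketch of this procedure in Python, following the widely taught "simplified SMO" variant, which chooses the second multiplier at random rather than with Platt's selection heuristics; the function names, the linear kernel, and the default parameters are illustrative assumptions, not from the source:

    import numpy as np

    def linear_kernel(x1, x2):
        return np.dot(x1, x2)

    def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=5, kernel=linear_kernel):
        """Simplified SMO: optimizes the dual objective over two multipliers
        at a time; the second multiplier is picked at random (Platt's
        original algorithm uses heuristics here). Returns the multipliers
        alpha and the bias b."""
        n = len(y)
        alpha = np.zeros(n)
        b = 0.0
        K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

        def f(i):
            # SVM output at training point i: sum_k alpha_k y_k K(x_k, x_i) + b
            return np.sum(alpha * y * K[:, i]) + b

        passes = 0
        while passes < max_passes:
            num_changed = 0
            for i in range(n):
                E_i = f(i) - y[i]
                # Proceed only if alpha_i violates the KKT conditions
                if (y[i] * E_i < -tol and alpha[i] < C) or \
                   (y[i] * E_i > tol and alpha[i] > 0):
                    j = np.random.choice([k for k in range(n) if k != i])
                    E_j = f(j) - y[j]
                    a_i, a_j = alpha[i], alpha[j]
                    # Box bounds keeping the pair on the line y_i a_i + y_j a_j = const
                    if y[i] != y[j]:
                        L, H = max(0.0, a_j - a_i), min(C, C + a_j - a_i)
                    else:
                        L, H = max(0.0, a_i + a_j - C), min(C, a_i + a_j)
                    if L == H:
                        continue
                    eta = 2 * K[i, j] - K[i, i] - K[j, j]  # curvature along the pair
                    if eta >= 0:
                        continue
                    # Closed-form optimum for alpha_j, clipped to the box
                    alpha[j] = np.clip(a_j - y[j] * (E_i - E_j) / eta, L, H)
                    if abs(alpha[j] - a_j) < 1e-5:
                        continue
                    # The equality constraint fixes alpha_i given alpha_j
                    alpha[i] = a_i + y[i] * y[j] * (a_j - alpha[j])
                    # Update the bias so the KKT conditions hold at i or j
                    b1 = b - E_i - y[i] * (alpha[i] - a_i) * K[i, i] \
                         - y[j] * (alpha[j] - a_j) * K[i, j]
                    b2 = b - E_j - y[i] * (alpha[i] - a_i) * K[i, j] \
                         - y[j] * (alpha[j] - a_j) * K[j, j]
                    if 0 < alpha[i] < C:
                        b = b1
                    elif 0 < alpha[j] < C:
                        b = b2
                    else:
                        b = (b1 + b2) / 2
                    num_changed += 1
            passes = passes + 1 if num_changed == 0 else 0
        return alpha, b

    # Toy usage: two linearly separable classes
    X = np.array([[2.0, 2.0], [2.5, 3.0], [-1.0, -1.5], [-2.0, -1.0]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    alpha, b = simplified_smo(X, y)

Platt's full algorithm replaces the random choice with selection heuristics (an outer loop over KKT violators and an inner choice that maximizes |E_i - E_j|), which is what makes SMO fast in practice.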

References

  1. ^ Platt, John (1998). Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines.
  2. ^ Chang, Chih-Chung; Lin, Chih-Jen (2001). LIBSVM: a Library for Support Vector Machines.
  3. ^ Zanni, Luca (2006). Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems.
  4. ^ Rifkin, Ryan (2002). "Everything Old is New Again: a Fresh Look at Historical Approaches in Machine Learning". Ph.D. thesis, p. 18.