Sequential minimal optimization
Sequential minimal optimization (SMO) is an algorithm for solving the optimization problem which arises during the training of support vector machines. It was invented by John Platt in 1998 at Microsoft Research.[1] SMO is widely used for training support vector machines and is implemented by the popular libsvm tool.[2][3]
The publication of the SMO algorithm in 1998 generated a great deal of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers.[4]
Optimization problem
Consider a binary classification problem with a dataset (x1, y1), ..., (xn, yn), where xi is an input vector and yi ∈ {−1, +1} is the binary label corresponding to it. A soft-margin support vector machine is trained by solving an optimization problem whose dual is the following convex quadratic programming problem:

\[
\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} y_i y_j K(x_i, x_j)\, \alpha_i \alpha_j,
\]

subject to:

\[
0 \le \alpha_i \le C, \quad \text{for } i = 1, 2, \ldots, n,
\]
\[
\sum_{i=1}^{n} y_i \alpha_i = 0,
\]

where C is an SVM hyperparameter and K(xi, xj) is the kernel function, both supplied by the user, and the variables αi are Lagrange multipliers.
SMO is an iterative algorithm for solving this dual SVM optimization problem.
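For concreteness, the dual objective and its constraints translate directly into code. The following is a minimal sketch (assuming NumPy, with a precomputed kernel Gram matrix `K` and label vector `y` as hypothetical inputs; nothing here is part of Platt's paper) of evaluating the objective and checking feasibility of a candidate α:

```python
import numpy as np

def dual_objective(alpha, y, K):
    """Dual objective: sum_i alpha_i - 1/2 sum_ij y_i y_j K(x_i, x_j) alpha_i alpha_j.

    Writing the quadratic term as (alpha * y)^T K (alpha * y) avoids the
    explicit double sum.
    """
    v = alpha * y
    return alpha.sum() - 0.5 * (v @ K @ v)

def is_feasible(alpha, y, C, tol=1e-8):
    """Dual constraints: 0 <= alpha_i <= C for all i, and sum_i y_i alpha_i = 0."""
    return (np.all(alpha >= -tol)
            and np.all(alpha <= C + tol)
            and abs(y @ alpha) < tol)
```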
Algorithm
SMO breaks the large QP problem into a series of smallest possible QP sub-problems, which are then solved analytically. Because the dual includes a linear equality constraint involving the Lagrange multipliers αi, the smallest possible sub-problem involves two such multipliers. At each step, SMO selects two multipliers to optimize jointly, finds the optimal values for them analytically while holding all other multipliers fixed, and updates the SVM accordingly; heuristics are used to choose the pair of multipliers so as to make the largest progress toward the overall solution. A simplified sketch of this scheme follows.
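The following sketch illustrates the two-multiplier update, not Platt's full algorithm: it assumes NumPy, takes a precomputed Gram matrix `K`, and picks the second multiplier at random where Platt (1998) uses heuristics to choose the pair. All names are illustrative.

```python
import numpy as np

def smo_simplified(K, y, C, tol=1e-5, max_passes=10):
    """Simplified SMO sketch. K: symmetric kernel Gram matrix, y: +1/-1 labels,
    C: box constraint. Returns the multipliers alpha and the threshold b."""
    n = len(y)
    alpha = np.zeros(n)
    b = 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            # Prediction error for example i under the current model.
            E_i = (alpha * y) @ K[:, i] + b - y[i]
            # Proceed only if alpha_i violates the KKT conditions.
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                # Pick the second multiplier at random (Platt uses heuristics here).
                j = np.random.choice([k for k in range(n) if k != i])
                E_j = (alpha * y) @ K[:, j] + b - y[j]
                a_i_old, a_j_old = alpha[i], alpha[j]
                # Feasible range [L, H] for alpha_j given the box and
                # equality constraints.
                if y[i] != y[j]:
                    L, H = max(0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
                else:
                    L, H = max(0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
                if L == H:
                    continue
                # Second derivative of the objective along the constraint line.
                eta = 2 * K[i, j] - K[i, i] - K[j, j]
                if eta >= 0:
                    continue
                # Analytic optimum for alpha_j, clipped back to [L, H].
                alpha[j] -= y[j] * (E_i - E_j) / eta
                alpha[j] = np.clip(alpha[j], L, H)
                if abs(alpha[j] - a_j_old) < 1e-8:
                    continue
                # Restore the equality constraint sum(y * alpha) = 0.
                alpha[i] += y[i] * y[j] * (a_j_old - alpha[j])
                # Update the threshold b so the KKT conditions hold for i or j.
                b1 = (b - E_i - y[i] * (alpha[i] - a_i_old) * K[i, i]
                      - y[j] * (alpha[j] - a_j_old) * K[i, j])
                b2 = (b - E_j - y[i] * (alpha[i] - a_i_old) * K[i, j]
                      - y[j] * (alpha[j] - a_j_old) * K[j, j])
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b
```

Because only two multipliers change per step, each sub-problem reduces to maximizing a one-dimensional quadratic over an interval, which is why it can be solved in closed form with a clip, with no QP solver required.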
References
- ^ Platt, John (1998), Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines
- ^ Chih-Chung Chang and Chih-Jen Lin (2001). LIBSVM: a Library for Support Vector Machines.
- ^ Luca Zanni (2006). Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems.
- ^ Rifkin, Ryan (2002), "Everything Old is New Again: a Fresh Look at Historical Approaches in Machine Learning", Ph.D. thesis: 18