Regularization perspectives on support vector machines
Introduction
Regularization perspectives on SVM interpret SVM as a special case of Tikhonov regularization, specifically Tikhonov regularization with the hinge loss as the loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goal: to generalize without overfitting. SVM was first proposed in 1995 by Corinna Cortes and Vladimir Vapnik, and was framed geometrically as a method for finding hyperplanes that can separate multidimensional data into two categories.[1] This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to other machine-learning techniques for avoiding overfitting, such as regularization, early stopping, sparsity and Bayesian inference. However, once it was discovered that SVM is also a special case of Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms.[2][3][4] This has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and has provided theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss.[5]
Theoretical background
In the statistical learning theory framework, an algorithm is a strategy for choosing a function $f \colon \mathbf{X} \to \mathbf{Y}$ given a training set $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$ of inputs $x_i$ and their labels $y_i$ (the labels are usually $\pm 1$). Regularization strategies avoid overfitting by choosing a function that fits the data but is not too complex. Specifically:

$$ f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ \frac{1}{n} \sum_{i=1}^{n} V(y_i, f(x_i)) + \lambda \|f\|_{\mathcal{H}}^2 \right\}, $$

where $\mathcal{H}$ is a hypothesis space[6] of functions, $V \colon \mathbf{Y} \times \mathbf{Y} \to \mathbb{R}$ is the loss function, $\| \cdot \|_{\mathcal{H}}$ is a norm on the hypothesis space of functions, and $\lambda \in \mathbb{R}$ is the regularization parameter.[7]

When $\mathcal{H}$ is a reproducing kernel Hilbert space, there exists a kernel function $K \colon \mathbf{X} \times \mathbf{X} \to \mathbb{R}$ that can be written as an $n \times n$ symmetric positive-definite matrix $\mathbf{K}$. By the representer theorem,[8]

$$ f(x_i) = \sum_{j=1}^{n} c_j \mathbf{K}_{ij}, \qquad \text{and} \qquad \|f\|_{\mathcal{H}}^2 = \langle f, f \rangle_{\mathcal{H}} = \sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j K(x_i, x_j) = c^{T} \mathbf{K} c. $$
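To make this scheme concrete, here is a minimal Python/NumPy sketch of Tikhonov regularization in a reproducing kernel Hilbert space. It uses the square loss (regularized least squares) rather than the hinge loss, because the square loss admits a closed-form solution: by the representer theorem the problem reduces to the linear system $(\mathbf{K} + \lambda n I)c = y$. The Gaussian kernel, the toy data and the function names are illustrative assumptions, not taken from the sources cited here.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 sigma^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2 * sigma ** 2))

# Toy training set with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + X[:, 1])

# Tikhonov regularization with the square loss: minimizing
# (1/n) * sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2 over the RKHS reduces,
# via the representer theorem f(x) = sum_j c_j K(x, x_j),
# to solving the linear system (K + lam * n * I) c = y.
lam = 0.1
n = len(y)
K = gaussian_kernel(X, X)
c = np.linalg.solve(K + lam * n * np.eye(n), y)

# Evaluate the learned function on new points and classify by its sign.
X_test = rng.normal(size=(5, 2))
f_test = gaussian_kernel(X_test, X) @ c
print(np.sign(f_test))
```

With the hinge loss the same representer-theorem reduction applies, but the coefficients must be found with a convex solver (for example the quadratic programming used by standard SVM packages) rather than a single linear solve.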
Special properties of the hinge loss
The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 if $f(x_i) = y_i$ and 1 if $f(x_i) \neq y_i$, i.e. the Heaviside step function on $-y_i f(x_i)$. However, this loss function is not convex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss. The hinge loss, $V(y_i, f(x_i)) = (1 - y_i f(x_i))_+$, where $(s)_+ = \max(s, 0)$, provides such a convex relaxation. In fact, the hinge loss is the tightest convex upper bound to the 0–1 misclassification loss function,[9] and with infinite data returns the Bayes-optimal solution:[10][11]

$$ f_b(x) = \begin{cases} 1, & p(1 \mid x) > p(-1 \mid x), \\ -1, & p(1 \mid x) < p(-1 \mid x). \end{cases} $$
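The upper-bound property is easy to check numerically. The short sketch below (Python, with illustrative function names; the convention that a zero margin counts as an error is an assumption) verifies on a grid of margin values $m = y f(x)$ that the hinge loss dominates the 0–1 loss:

```python
import numpy as np

def zero_one_loss(margin):
    """0-1 loss as a function of the margin m = y * f(x): 1 if misclassified, else 0."""
    return (margin <= 0).astype(float)   # convention: treat m = 0 as an error

def hinge_loss(margin):
    """Hinge loss (1 - m)_+ as a function of the margin m = y * f(x)."""
    return np.maximum(0.0, 1.0 - margin)

margins = np.linspace(-3, 3, 601)
assert np.all(hinge_loss(margins) >= zero_one_loss(margins))  # convex upper bound
```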
Derivation[12]
To show that SVM is indeed a special case of Tikhonov regularization using the hinge loss, we will first state the Tikhonov regularization problem with the hinge loss, then demonstrate that it is equivalent to traditional formulations of SVM. With the hinge loss $V(y_i, f(x_i)) = (1 - y_i f(x_i))_+$, where $(s)_+ = \max(s, 0)$, the regularization problem becomes

$$ f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ \frac{1}{n} \sum_{i=1}^{n} (1 - y_i f(x_i))_+ + \lambda \|f\|_{\mathcal{H}}^2 \right\}. $$

Multiplying the objective by $\frac{1}{2\lambda}$ does not change the minimizer. If we then set $C = \frac{1}{2\lambda n}$, we get

$$ f = \underset{f \in \mathcal{H}}{\operatorname{argmin}} \left\{ C \sum_{i=1}^{n} (1 - y_i f(x_i))_+ + \frac{1}{2} \|f\|_{\mathcal{H}}^2 \right\}, $$

which is equivalent to the standard SVM minimization problem.
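This equivalence can be illustrated numerically. The sketch below is an illustration added here, not part of the cited derivation: it minimizes the hinge-loss Tikhonov objective for a linear function $f(x) = \langle w, x \rangle$ by subgradient descent and compares the result with scikit-learn's LinearSVC trained with $C = 1/(2\lambda n)$. The toy data, step-size schedule and iteration count are arbitrary choices, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([2.0, -1.0]))   # linearly separable toy labels in {-1, +1}

lam = 0.01
n = len(y)

# Minimize (1/n) * sum_i (1 - y_i <w, x_i>)_+ + lam * ||w||^2 by subgradient descent.
w = np.zeros(2)
for t in range(1, 20001):
    margins = y * (X @ w)
    active = margins < 1                  # points where the hinge loss is nonzero
    subgrad = -(y[active][:, None] * X[active]).sum(axis=0) / n + 2 * lam * w
    w -= (1.0 / (lam * t)) * subgrad      # decaying 1/(lam * t) step size

# Same problem in the usual SVM parameterization: C = 1 / (2 * lam * n).
svm = LinearSVC(C=1.0 / (2 * lam * n), loss="hinge", fit_intercept=False, max_iter=100000)
svm.fit(X, y)

print("Tikhonov/hinge w:", w)
print("LinearSVC      w:", svm.coef_.ravel())
```

Because the two objectives differ only by the constant factor $1/(2\lambda)$, they share the same minimizer, so the two weight vectors should roughly agree up to optimization error.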
Notes and References
- ^ Cortes, Corinna; Vapnik, Vladimir (1995). "Support-Vector Networks". Machine Learning. 20: 273–297. doi:10.1007/BF00994018.
- ^ Rosasco, Lorenzo. "Regularized Least-Squares and Support Vector Machines" (PDF).
- ^ Rifkin, Ryan (2002). Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning (PDF). MIT (PhD thesis).
- ^ Lee, Yoonkyung (2004). "Multicategory Support Vector Machines". Journal of the American Statistical Association. 99 (465): 67–81. doi:10.1198/016214504000000098.
- ^ Rosasco, Lorenzo (2004). "Are Loss Functions All the Same?". Neural Computation. 16 (5): 1063–1076. doi:10.1162/089976604773135104.
- ^ This hypothesis space of functions is a Hilbert space of all the functions the algorithm is allowed to pick.
- ^ For insight on choosing the parameter, see, e.g., Wahba, Grace (1990). "When is the optimal regularization parameter insensitive to the choice of the loss function?". Communications in Statistics - Theory and Methods. 19 (5): 1685–1700. doi:10.1080/03610929008830285.
- ^ See Schölkopf, Bernhard (2001). "A Generalized Representer Theorem". Computational Learning Theory: Lecture Notes in Computer Science. 2111: 416–426. doi:10.1007/3-540-44581-1_27.
- ^ Lee, Yoonkyung (2004). "Multicategory Support Vector Machines". Journal of the American Statistical Association. 99 (465): 67–81. doi:10.1198/016214504000000098.
- ^ Lin, Yi (2002). "Support Vector Machines and the Bayes Rule in Classification" (PDF). Data Mining and Knowledge Discovery. 6 (3): 259–275. doi:10.1023/A:1015469627679.
- ^ Rosasco, Lorenzo (2004). "Are Loss Functions All the Same?". Neural Computation. 16 (5): 1063–1076. doi:10.1162/089976604773135104.
- ^ For a detailed derivation, see Rifkin, Ryan (2002). Everything Old is New Again: A Fresh Look at Historical Approaches in Machine Learning (PDF). MIT (PhD thesis).
- Evgeniou, Theodoros (2000). "Regularization Networks and Support Vector Machines" (PDF). Advances in Computational Mathematics. 13 (1): 1–50. doi:10.1023/A:1018946025316.
- Joachims, Thorsten. "SVMlight".
- Vapnik, Vladimir (1999). The Nature of Statistical Learning Theory. New York: Springer-Verlag. ISBN 0-387-98780-0.