Random feature

Random features (RF) are a technique used in machine learning to approximate kernel methods, introduced by Ali Rahimi and Ben Recht in their 2007 paper "Random Features for Large-Scale Kernel Machines".[1]

RF approximates kernel functions by a Monte Carlo average over randomly sampled feature maps. It is used for datasets that are too large for traditional kernel methods such as support vector machines, kernel ridge regression, and Gaussian processes.

Mathematics

Kernel method

Given a feature map $\phi: \mathbb{R}^d \to V$, where $V$ is a Hilbert space (more specifically, a reproducing kernel Hilbert space), the kernel trick replaces inner products in feature space $\langle \phi(x_i), \phi(x_j) \rangle_V$ by a kernel function $k(x_i, x_j)$. Kernel methods perform linear operations in high-dimensional space by manipulating the kernel matrix instead:
$$K_X := [k(x_i, x_j)]_{i,j \in 1:N},$$
where $N$ is the number of data points.

Random kernel method

The problem with kernel methods is that the kernel matrix $K_X$ has size $N \times N$. This becomes computationally infeasible when $N$ reaches the order of a million. The random kernel method replaces the kernel function $k$ by an inner product in a low-dimensional feature space $\mathbb{R}^D$:
$$k(x, y) \approx \langle z(x), z(y) \rangle,$$
where $z$ is a randomly sampled feature map $z: \mathbb{R}^d \to \mathbb{R}^D$.

This converts kernel linear regression into linear regression in feature space, kernel SVM into feature SVM, etc. These methods no longer involve matrices of size $N \times N$, but only random feature matrices of size $N \times D$.
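
As a concrete illustration, the following sketch (in Python with NumPy) shows how kernel ridge regression reduces to ordinary ridge regression on the $N \times D$ random feature matrix; the helper feature_map stands in for any concrete random feature map, such as the random Fourier features described below, and the function names are illustrative.

import numpy as np

def ridge_with_random_features(X, y, feature_map, reg=1e-3):
    """Ridge regression in random feature space.

    X: (N, d) data, y: (N,) targets, feature_map: maps (N, d) -> (N, D).
    Solves the D x D system (Z^T Z + reg * I) w = Z^T y instead of the
    N x N system required by exact kernel ridge regression."""
    Z = feature_map(X)                                   # (N, D) random feature matrix
    D = Z.shape[1]
    w = np.linalg.solve(Z.T @ Z + reg * np.eye(D), Z.T @ y)
    return w

def predict(X_new, feature_map, w):
    """Predictions are plain inner products in feature space."""
    return feature_map(X_new) @ w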

Examples

Radial basis function kernel

The radial basis function (RBF) kernel on two samples $x, y \in \mathbb{R}^d$ is defined as[2]
$$K(x, y) = \exp\!\left(-\frac{\|x - y\|^2}{2\sigma^2}\right),$$
where $\|x - y\|^2$ is the squared Euclidean distance and $\sigma$ is a free parameter defining the shape of the kernel. It can be approximated by a random Fourier feature map $z: \mathbb{R}^d \to \mathbb{R}^{2D}$:
$$z(x) := \frac{1}{\sqrt{D}}\left[\cos\langle\omega_1, x\rangle, \sin\langle\omega_1, x\rangle, \ldots, \cos\langle\omega_D, x\rangle, \sin\langle\omega_D, x\rangle\right]^{\mathsf T},$$
where $\omega_1, \ldots, \omega_D$ are independent samples from the multidimensional normal distribution $N(0, \sigma^{-2} I)$.
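
A minimal NumPy sketch of this random Fourier feature map (the function name is illustrative): it draws $\omega_1, \ldots, \omega_D$ from $N(0, \sigma^{-2} I)$ and stacks cosine and sine features so that $\langle z(x), z(y) \rangle$ approximates the RBF kernel.

import numpy as np

def make_rff_map(d, D, sigma=1.0, seed=None):
    """Random Fourier feature map z: R^d -> R^{2D} for the RBF kernel
    exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=1.0 / sigma, size=(D, d))   # omega_i ~ N(0, sigma^{-2} I)

    def z(X):
        proj = np.asarray(X) @ omega.T                   # (N, D) matrix of <omega_i, x>
        return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

    return z

# Usage: compare the approximation with the exact kernel on two random points.
rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)
z = make_rff_map(d=3, D=5000, sigma=1.0, seed=1)
approx = float(z(x[None, :]) @ z(y[None, :]).T)          # ~ exp(-||x - y||^2 / 2)
exact = np.exp(-np.sum((x - y) ** 2) / 2)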

Theorem: (unbiased estimation) $\operatorname{E}_\omega[\langle z(x), z(y) \rangle] = e^{-\|x - y\|^2 / (2\sigma^2)}$.

Proof: It suffices to prove the case of $D = 1$. Use the trigonometric identity $\cos(a - b) = \cos a \cos b + \sin a \sin b$, the spherical symmetry of the Gaussian distribution, then evaluate the integral
$$\int_{-\infty}^{\infty} \cos(k\Delta) \, \frac{e^{-k^2/2}}{\sqrt{2\pi}} \, dk = e^{-\Delta^2/2}.$$
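
In slightly more detail, writing $\Delta := x - y$, the $D = 1$ case reads
$$\langle z(x), z(y) \rangle = \cos\langle\omega, x\rangle \cos\langle\omega, y\rangle + \sin\langle\omega, x\rangle \sin\langle\omega, y\rangle = \cos\langle\omega, \Delta\rangle,$$
and, by spherical symmetry, $\langle\omega, \Delta\rangle$ has the same distribution as $k\|\Delta\|$ with $k \sim N(0, \sigma^{-2})$, so
$$\operatorname{E}_\omega[\cos\langle\omega, \Delta\rangle] = \operatorname{E}_k[\cos(k\|\Delta\|)] = e^{-\|\Delta\|^2 / (2\sigma^2)}.$$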

Theorem: (variance bound) $\operatorname{Var}[\langle z(x), z(y) \rangle] \leq (2D)^{-1}$. (Appendix A.2[3]).

Corollary: (convergence) As the number of random features $D$ increases, the approximation converges to the true kernel with high probability.

Proof: By Chebyshev's inequality.
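
Concretely, combining Chebyshev's inequality with the variance bound above gives, for any $\varepsilon > 0$,
$$\Pr\left(\left|\langle z(x), z(y)\rangle - e^{-\|x - y\|^2/(2\sigma^2)}\right| \geq \varepsilon\right) \leq \frac{\operatorname{Var}[\langle z(x), z(y)\rangle]}{\varepsilon^2} \leq \frac{1}{2D\varepsilon^2},$$
which tends to $0$ as $D \to \infty$.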

Random binning features

A random binning features map partitions the input space using randomly shifted grids at randomly chosen resolutions and assigns to an input point a binary bit string that corresponds to the bins in which it falls. The grids are constructed so that the probability that two points $x, y$ are assigned to the same bin is proportional to $k(x, y)$. The inner product between a pair of transformed points is proportional to the number of times the two points are binned together, and is therefore an unbiased estimate of $k(x, y)$. Since this mapping is not smooth and uses the proximity between input points, random binning features work well for approximating kernels that depend only on the $L_1$ distance between data points.
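
A minimal one-dimensional sketch of this construction in NumPy, assuming the Laplacian kernel $k(x, y) = \exp(-|x - y| / \gamma)$, for which the grid pitch can be drawn from a Gamma$(2, \gamma)$ distribution (function names are illustrative):

import numpy as np

def make_binning_map(P, gamma=1.0, seed=None):
    """Random binning features for the 1-D Laplacian kernel exp(-|x - y| / gamma).

    Draws P random grids; grid p has pitch delta_p ~ Gamma(2, gamma) and a
    uniform shift in [0, delta_p). Two points land in the same bin of grid p
    with probability max(0, 1 - |x - y| / delta_p), which averages to the
    Laplacian kernel over the random pitch."""
    rng = np.random.default_rng(seed)
    deltas = rng.gamma(shape=2.0, scale=gamma, size=P)   # grid pitches
    shifts = rng.uniform(0.0, deltas)                    # grid shifts

    def bins(x):
        # bin index of each point in each of the P grids, shape (N, P)
        return np.floor((np.asarray(x)[:, None] - shifts) / deltas).astype(int)

    return bins

def binning_kernel_estimate(bx, by):
    """Fraction of grids in which two points share a bin: an unbiased
    estimate of the kernel value."""
    return np.mean(bx == by, axis=-1)

# Usage: estimate k(0.3, 1.0) = exp(-0.7) for gamma = 1.
bins = make_binning_map(P=10000, gamma=1.0, seed=0)
approx = binning_kernel_estimate(bins([0.3]), bins([1.0]))   # ~ exp(-0.7) = 0.497

In the explicit feature map, the bin indices would be one-hot encoded (concatenated across the $P$ grids and scaled by $1/\sqrt{P}$) so that the inner product counts exactly these collisions; the sketch computes that count directly.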

Orthogonal random features

Orthogonal random features[4] use a random orthogonal matrix instead of a random Fourier matrix.
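
A sketch of one such construction in NumPy, assuming $D = d$ for simplicity (independent blocks of this form can be stacked when more features are needed): the i.i.d. Gaussian frequency matrix of the RBF feature map is replaced by a random orthogonal matrix whose rows are rescaled to have chi-distributed norms.

import numpy as np

def orthogonal_random_frequencies(d, sigma=1.0, seed=None):
    """Sample d frequency vectors for RBF random Fourier features using
    orthogonal random features: orthogonal directions from the QR decomposition
    of a Gaussian matrix, with row norms drawn from the chi distribution with
    d degrees of freedom (the norm distribution of i.i.d. Gaussian rows)."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(G)                          # random orthogonal matrix
    norms = np.sqrt(rng.chisquare(df=d, size=d))    # chi-distributed row norms
    return (norms[:, None] * Q) / sigma             # rows replace omega ~ N(0, sigma^{-2} I)

# The returned rows can be used in place of the omegas in the random Fourier
# feature map sketched above.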

See also

References

  1. ^ Rahimi, Ali; Recht, Benjamin (2007). "Random Features for Large-Scale Kernel Machines". Advances in Neural Information Processing Systems. 20.
  2. ^ Vert, Jean-Philippe; Tsuda, Koji; Schölkopf, Bernhard (2004). "A primer on kernel methods". Kernel Methods in Computational Biology.
  3. ^ Peng, Hao; Pappas, Nikolaos; Yogatama, Dani; Schwartz, Roy; Smith, Noah A.; Kong, Lingpeng (2021-03-19). "Random Feature Attention". arXiv:2103.02143 [cs.CL].
  4. ^ Yu, Felix Xinnan X; Suresh, Ananda Theertha; Choromanski, Krzysztof M; Holtmann-Rice, Daniel N; Kumar, Sanjiv (2016). "Orthogonal Random Features". Advances in Neural Information Processing Systems. 29. Curran Associates, Inc.