Multilinear subspace learning
With the advances in data collection and storage technology, massive multidimensional data are being generated on a daily basis in a wide range of emerging applications. These data are usually very high-dimensional, contain a large amount of redundancy, and occupy only a subspace of the input space. Therefore, dimensionality reduction is frequently employed to map high-dimensional data to a low-dimensional space while retaining as much information as possible.
Linear subspace learning algorithms are traditional dimensionality reduction techniques that represent input data as vectors and solve for an optimal linear mapping to a lower dimensional space. Unfortunately, they often become inadequate when dealing with massive multidimensional data. They result in very high-dimensional vectors, lead to the estimation of a large number of parameters, and also break the natural structure and correlation in the original data.
Due to these challenges, there has been growing interest in multilinear subspace learning (MSL) [1], which reduces the dimensionality of massive data directly from their natural multidimensional representation: tensors. Research on MSL has recently progressed from heuristic exploration to systematic investigation.
Multilinear Projections
Tensor-to-Tensor Projection (TTP)
A TTP is a direct projection of a high-dimensional tensor to a low-dimensional tensor of the same order, using N projection matrices.
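A TTP can be sketched in NumPy as a sequence of mode-n products, one projection matrix per mode. The helper names, tensor sizes, and random data below are illustrative assumptions, not part of the survey:

```python
import numpy as np

def mode_n_product(X, U, n):
    """Multiply tensor X by matrix U along mode n (illustrative helper)."""
    # Contract mode n of X with the columns of U; the new mode-n size is U.shape[0].
    return np.moveaxis(np.tensordot(X, U, axes=(n, 1)), -1, n)

def ttp(X, Us):
    """Tensor-to-tensor projection: apply one projection matrix in each mode."""
    Y = X
    for n, U in enumerate(Us):
        Y = mode_n_product(Y, U, n)
    return Y

# Example: project a 10x10x10 tensor to a 3x4x5 tensor (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 10, 10))
# Projection matrices with orthonormal rows, one per mode.
Us = [np.linalg.qr(rng.standard_normal((10, p)))[0].T for p in (3, 4, 5)]
Y = ttp(X, Us)
print(Y.shape)  # (3, 4, 5)
```

Note that the order of the tensor is preserved: a third-order tensor is projected to a third-order tensor, only with smaller mode dimensions.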
Tensor-to-Vector Projection (TVP)
A TVP is a direct projection of a high-dimensional tensor to a low-dimensional vector, also referred to as the rank-one projection. Because TVP projects a tensor to a vector, it can be viewed as multiple projections from a tensor to a scalar: the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP). In an EMP, a tensor is projected to a point through N unit projection vectors, one in each mode; it is the projection of the tensor onto a single line, resulting in a scalar. Thus, the TVP of a tensor object to a vector in a P-dimensional vector space consists of P EMPs.
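The EMP and TVP definitions can be sketched as follows; the function names and the example data are hypothetical illustrations (with one-hot projection vectors, an EMP simply picks out one tensor entry):

```python
import numpy as np

def emp(X, vecs):
    """Elementary multilinear projection: tensor -> scalar, one unit vector per mode."""
    y = X
    for v in vecs:
        # Each contraction removes the current leading mode.
        y = np.tensordot(y, v, axes=(0, 0))
    return float(y)

def tvp(X, emps):
    """Tensor-to-vector projection: P EMPs stacked into a P-dimensional vector."""
    return np.array([emp(X, vecs) for vecs in emps])

# Example on a 2x3x4 tensor, using one-hot (unit) projection vectors.
X = np.arange(24.0).reshape(2, 3, 4)
e = lambda k, d: np.eye(d)[k]          # k-th standard basis vector of length d
assert emp(X, [e(1, 2), e(2, 3), e(3, 4)]) == X[1, 2, 3]

# A TVP to a 2-dimensional vector consists of P = 2 EMPs.
v = tvp(X, [[e(0, 2), e(0, 3), e(0, 4)],
            [e(1, 2), e(2, 3), e(3, 4)]])
print(v)  # [ 0. 23.]
```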
Typical Approach in MSL
There are N sets of parameters to be solved, one in each mode. The solution to one set often depends on the other sets (except when N=1, the linear case). Therefore, a suboptimal, iterative procedure is usually taken.
- Initialize the projection in each mode.
- For each mode, fix the projections in all the other modes and solve for the projection in the current mode.
- Repeat the mode-wise optimization for a few iterations or until convergence.
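The steps above can be sketched as the following alternating scheme, here with an unsupervised, MPCA-style variance criterion. The truncated-identity initialization and the scatter-matrix criterion are simplifying assumptions for illustration, not the exact procedure of any one cited algorithm:

```python
import numpy as np

def mode_n_product(X, U, n):
    # Contract mode n of X with the columns of U.
    return np.moveaxis(np.tensordot(X, U, axes=(n, 1)), -1, n)

def mode_n_unfold(X, n):
    # Matrix whose rows index mode n and whose columns index all other modes.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def msl_alternating(tensors, ranks, n_iter=5):
    """Alternating mode-wise solution with an MPCA-style criterion (sketch)."""
    N = tensors[0].ndim
    # Step 1: initialize each mode's projection (truncated identity, a simple choice).
    Us = [np.eye(tensors[0].shape[n])[:ranks[n]] for n in range(N)]
    for _ in range(n_iter):                      # Step 3: iterate a few times
        for n in range(N):                       # Step 2: solve one mode at a time
            S = np.zeros((tensors[0].shape[n],) * 2)
            for X in tensors:
                Y = X
                for m in range(N):               # fix projections in all other modes
                    if m != n:
                        Y = mode_n_product(Y, Us[m], m)
                Yn = mode_n_unfold(Y, n)
                S += Yn @ Yn.T                   # mode-n scatter of projected data
            w, V = np.linalg.eigh(S)             # eigh returns ascending eigenvalues
            Us[n] = V[:, ::-1][:, :ranks[n]].T   # top eigenvectors as rows
    return Us

# Example: 20 samples of 8x8x8 tensors, projected to 2x3x4.
rng = np.random.default_rng(0)
data = [rng.standard_normal((8, 8, 8)) for _ in range(20)]
Us = msl_alternating(data, ranks=(2, 3, 4))
print([U.shape for U in Us])  # [(2, 8), (3, 8), (4, 8)]
```

The procedure is suboptimal in the sense described above: each mode's subproblem is solved exactly while the other modes are held fixed, so the overall objective improves monotonically but need not reach a global optimum.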
Algorithms
- Multilinear Principal Component Analysis [2]
- Discriminant Analysis with Tensor Representation [3]
- General tensor discriminant analysis [4]
- Uncorrelated Multilinear Discriminant Analysis [5]
- Uncorrelated Multilinear Principal Component Analysis [6]
Features and Advantages
The advantages of MSL are:
- It operates on the natural tensorial representation of multidimensional data, so the structure and correlation in the original data are preserved.
- The number of parameters to be estimated is much smaller than in the linear case.
- It suffers less from the small sample size problem.
Disadvantages
- A closed-form solution cannot be obtained in most cases, so the algorithms are usually iterative.
References
- ^ Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos, "A Survey of Multilinear Subspace Learning for Tensor Data", Pattern Recognition, Vol. 44, No. 7, pp. 1540-1551, Jul. 2011.
- ^ H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: Multilinear principal component analysis of tensor objects," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 18–39, Jan. 2008.
- ^ S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.-J. Zhang, "Discriminant analysis with tensor representation," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, vol. I, June 2005, pp. 526–532.
- ^ D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and gabor features for gait recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, pp. 1700–1715, Oct. 2007.
- ^ H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization and aggregation for tensor object recognition," IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 103–123, Jan. 2009.
- ^ H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning," IEEE Trans. Neural Netw., vol. 20, no. 11, pp. 1820–1836, Nov. 2009.
Source code
- MPCA: the multilinear principal component analysis algorithm, written in Matlab.