
Edit filter log

Details for log entry 15671394

22:42, 9 May 2016: 75.82.159.128 (talk) triggered filter 3, performing the action "edit" on Multilinear principal component analysis. Actions taken: Warn; Filter description: New user blanking articles

Changes made in edit

{{context|date=June 2012}}
'''Multilinear principal component analysis (MPCA)'''
<ref name="MPCA2002b">M. A. O. Vasilescu, D. Terzopoulos (2002) [http://www.cs.toronto.edu/~maov/tensorfaces/Springer%20ECCV%202002_files/eccv02proceeding_23500447.pdf "Multilinear Analysis of Image Ensembles: TensorFaces"], ''Proc. 7th European Conference on Computer Vision (ECCV'02)'', Copenhagen, Denmark, May 2002</ref><ref name="MPCA2003">M. A. O. Vasilescu, D. Terzopoulos (2003) [http://www.cs.toronto.edu/~maov/tensorfaces/cvpr03.pdf "Multilinear Subspace Analysis of Image Ensembles"], ''Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'03)'', Madison, WI, June 2003</ref><ref name="MPCA2002a">M. Alex O. Vasilescu (2002) [http://www.cs.toronto.edu/~maov/motionsignatures/icpr02.pdf "Human Motion Signatures: Analysis, Synthesis, Recognition"], ''Proceedings of the International Conference on Pattern Recognition (ICPR'02)'', Quebec City, Canada, August 2002</ref><ref name="MPCA2008">H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos (2008) [http://www.dsp.utoronto.ca/~haiping/Publication/MPCA_TNN08_rev2010.pdf "MPCA: Multilinear principal component analysis of tensor objects"], ''IEEE Trans. Neural Netw.'', 19 (1), 18–39</ref> is a mathematical procedure that uses multiple orthogonal transformations to convert a set of multidimensional objects into another set of multidimensional objects of lower dimensions. There is one orthogonal (linear) transformation for each dimension (mode); hence ''multilinear''. The transformation aims to capture as much of the variability in the data as possible, subject to the constraint of mode-wise orthogonality.

MPCA is a multilinear extension of [[principal component analysis]] (PCA). The major difference is that PCA must reshape a multidimensional object into a [[Feature vector|vector]], whereas MPCA operates directly on multidimensional objects through mode-wise processing. For example, for 100×100 images, PCA operates on 10000×1 vectors, while MPCA operates on 100×1 vectors in each of the two modes. For the same amount of [[dimension reduction]], PCA needs to estimate 49*(10000/(100*2)-1) times more parameters than MPCA. Thus, MPCA is more efficient and better conditioned in practice.
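The efficiency gain above can be illustrated with a rough parameter count. The sketch below compares the size of a single PCA projection matrix with the two small mode-wise matrices MPCA would use; the 100×100 image size comes from the example above, while the reduced mode dimension of 10 is a hypothetical choice for illustration.

```python
# Rough parameter-count comparison for PCA vs. mode-wise projection (sketch).
I1, I2 = 100, 100   # image size, as in the article's example
P1, P2 = 10, 10     # hypothetical reduced dimension per mode

# PCA: one big (I1*I2) x (P1*P2) projection matrix on vectorized images.
pca_params = (I1 * I2) * (P1 * P2)

# MPCA: one small In x Pn projection matrix per mode.
mpca_params = I1 * P1 + I2 * P2

print(pca_params, mpca_params)
```

Here the vectorized projection needs 1,000,000 parameters against 2,000 for the mode-wise projections, which is the kind of gap the text refers to (the exact ratio depends on the chosen reduced dimensions).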

MPCA is a basic algorithm for dimension reduction via [[multilinear subspace learning]]. In wider scope, it belongs to [[tensor]]-based computation. Its origin can be traced back to the [[Tucker decomposition]]<ref>{{Cite journal|last1=Tucker| first1=Ledyard R
| authorlink1 = Ledyard R Tucker
| title = Some mathematical notes on three-mode factor analysis
| journal = [[Psychometrika]]
| volume = 31 | issue = 3 | pages = 279–311
|date=September 1966
| doi = 10.1007/BF02289464
}}</ref> in the 1960s, and it is closely related to the [[higher-order singular value decomposition]] (HOSVD)<ref name="HOSVD">L. D. Lathauwer, B. D. Moor, J. Vandewalle (2000) [http://portal.acm.org/citation.cfm?id=354398 "A multilinear singular value decomposition"], ''SIAM Journal of Matrix Analysis and Applications'', 21 (4), 1253–1278</ref> and to the best rank-(R1, R2, ..., RN) approximation of higher-order tensors.<ref>L. D. Lathauwer, B. D. Moor, J. Vandewalle (2000) [http://portal.acm.org/citation.cfm?id=354405 "On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors"], ''SIAM Journal of Matrix Analysis and Applications'', 21 (4), 1324–1342.</ref>

== The algorithm ==
MPCA performs [[feature extraction]] by determining a [[Multilinear_subspace_learning#Multilinear_projection|multilinear projection]] that captures most of the variation in the original tensorial input. As in PCA, MPCA works on centered data. The MPCA solution follows the alternating least squares (ALS) approach.<ref>P. M. Kroonenberg and J. de Leeuw, [http://www.springerlink.com/content/c8551t1p31236776/ Principal component analysis of three-mode data by means of alternating least squares algorithms], ''Psychometrika'', 45 (1980), pp. 69–97.</ref> It is therefore iterative in nature: it decomposes the original problem into a series of projection subproblems, each of which is a classical PCA problem that can be solved easily.
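The ALS scheme above can be sketched in NumPy for the simplest case of second-order samples (a stack of matrices). This is an illustrative sketch, not the reference implementation: the function name <code>mpca</code>, the fixed iteration count, and the identity initialization are all assumptions made here for brevity. Each mode update projects the data along the other mode and then solves a classical PCA (eigen)problem, as the text describes.

```python
import numpy as np

def mpca(X, ranks, n_iter=5):
    """ALS sketch of MPCA for a stack of 2-D samples X of shape (M, I1, I2).

    ranks = (P1, P2) are the target mode dimensions. Hypothetical helper,
    written for illustration only.
    """
    X = X - X.mean(axis=0)  # MPCA works on centered data
    M, I1, I2 = X.shape
    # Initialize mode projections with truncated identities (an assumption).
    U = [np.eye(I1)[:, :ranks[0]], np.eye(I2)[:, :ranks[1]]]
    for _ in range(n_iter):
        # Mode-1 subproblem: project mode 2, then solve a classical PCA.
        Y = np.einsum('mij,jq->miq', X, U[1])     # (M, I1, P2)
        C = np.einsum('miq,mkq->ik', Y, Y)        # mode-1 covariance, (I1, I1)
        w, V = np.linalg.eigh(C)
        U[0] = V[:, np.argsort(w)[::-1][:ranks[0]]]  # top eigenvectors
        # Mode-2 subproblem, using the freshly updated mode-1 projection.
        Y = np.einsum('mij,ip->mpj', X, U[0])     # (M, P1, I2)
        C = np.einsum('mpj,mpk->jk', Y, Y)        # mode-2 covariance, (I2, I2)
        w, V = np.linalg.eigh(C)
        U[1] = V[:, np.argsort(w)[::-1][:ranks[1]]]
    # Tensorial features: project each sample in both modes.
    Z = np.einsum('mij,ip,jq->mpq', X, U[0], U[1])
    return U, Z
```

Note how each subproblem reduces to an eigendecomposition of a small mode-wise covariance matrix, which is exactly the "classical PCA problem" the ALS decomposition produces.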

While PCA with orthogonal transformations produces uncorrelated features/variables, this is not the case for MPCA. Because of the tensor-to-tensor nature of the transformation, MPCA features are in general not uncorrelated, even though the transformation in each mode is orthogonal.<ref name="UMPCA">H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "[http://www.dsp.utoronto.ca/~haiping/Publication/UMPCA_TNN09.pdf Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning]," ''IEEE Trans. Neural Netw.'', vol. 20, no. 11, pp. 1820–1836, Nov. 2009.</ref> In contrast, uncorrelated MPCA (UMPCA) generates uncorrelated multilinear features.<ref name="UMPCA"/>

== Feature selection ==
MPCA produces tensorial features, but vectorial features are often preferred for conventional usage; for example, most classifiers in the literature take vectors as input. Moreover, since there are correlations among MPCA features, a further selection step often improves performance. Supervised (discriminative) MPCA feature selection is used in object recognition,<ref name="MPCA2003"/> while unsupervised MPCA feature selection is employed in visualization tasks.<ref>H. Lu, H.-L. Eng, M. Thida, and K. N. Plataniotis, "[http://www.dsp.utoronto.ca/~haiping/Publication/CrowdMPCA_CIKM2010.pdf Visualization and Clustering of Crowd Video Content in MPCA Subspace]," in ''Proceedings of the 19th ACM Conference on Information and Knowledge Management (CIKM 2010)'', Toronto, ON, Canada, October 2010.</ref>
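The vectorize-then-select step can be sketched as follows. The tensorial features <code>Z</code> here are synthetic stand-ins (randomly scaled noise, an assumption for illustration), and variance ranking is used as one simple unsupervised selection criterion; supervised selection would rank by a class-discriminability score instead.

```python
import numpy as np

# Hypothetical tensorial features Z for 50 samples, each 4 x 3,
# scaled so the individual features differ in variance.
rng = np.random.default_rng(0)
Z = rng.normal(size=(50, 4, 3)) * rng.uniform(0.1, 2.0, size=(4, 3))

F = Z.reshape(len(Z), -1)                # vectorize for conventional classifiers
order = np.argsort(F.var(axis=0))[::-1]  # unsupervised ranking: highest variance first
top5 = F[:, order[:5]]                   # keep the five most variable features
```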

== Extensions ==
Various extensions of MPCA have been developed:<ref>{{cite journal
|first=Haiping |last=Lu
|first2=K.N. |last2=Plataniotis
|first3=A.N. |last3=Venetsanopoulos
|url=http://www.dsp.utoronto.ca/~haiping/Publication/SurveyMSL_PR2011.pdf
|title=A Survey of Multilinear Subspace Learning for Tensor Data
|journal=Pattern Recognition
|volume=44 |number=7 |pages=1540–1551 |year=2011
|doi=10.1016/j.patcog.2011.01.004
}}</ref>
*Uncorrelated MPCA (UMPCA) <ref name="UMPCA"/>
*[[Boosting (meta-algorithm)|Boosting]]+MPCA<ref>H. Lu, K. N. Plataniotis and A. N. Venetsanopoulos, "[http://www.hindawi.com/journals/ivp/2009/713183.html Boosting Discriminant Learners for Gait Recognition using MPCA Features]", EURASIP Journal on Image and Video Processing, Volume 2009, Article ID 713183, 11 pages, 2009. {{doi|10.1155/2009/713183}}.</ref>
*Non-negative MPCA (NMPCA) <ref>Y. Panagakis, C. Kotropoulos, G. R. Arce, "Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification", IEEE Trans. on Audio, Speech, and Language Processing, vol. 18, no. 3, pp. 576–588, 2010.</ref>
*Robust MPCA (RMPCA) <ref>K. Inoue, K. Hara, K. Urahama, "Robust multilinear principal component analysis", Proc. IEEE Conference on Computer Vision, 2009, pp. 591–597.</ref>

== Resources ==
* '''MATLAB code''': [http://www.mathworks.com/matlabcentral/fileexchange/26168 MPCA].
* '''MATLAB code''': [http://www.mathworks.com/matlabcentral/fileexchange/35432 UMPCA (including data)].

==References==
{{Reflist}}

[[Category:Dimension reduction]]
[[Category:Machine learning]]
[[Category:Multivariate statistics]]

Action parameters

Variable / Value
Edit count of the user (user_editcount)
null
Name of the user account (user_name)
'75.82.159.128'
Age of the user account (user_age)
0
Groups (including implicit) the user is in (user_groups)
[ 0 => '*' ]
Global groups that the user is in (global_user_groups)
[]
Whether or not a user is editing through the mobile interface (user_mobile)
false
Page ID (page_id)
30928751
Page namespace (page_namespace)
0
Page title without namespace (page_title)
'Multilinear principal component analysis'
Full page title (page_prefixedtitle)
'Multilinear principal component analysis'
Last ten users to contribute to the page (page_recent_contributors)
[ 0 => 'AnomieBOT', 1 => '75.84.60.171', 2 => '142.206.2.12', 3 => 'Monkbot', 4 => 'MartinLjungqvist', 5 => 'Yobot', 6 => 'SchreiberBike', 7 => 'Violetriga', 8 => 'WikiMSL', 9 => 'Melcombe' ]
First user to contribute to the page (page_first_contributor)
'WikiMSL'
Action (action)
'edit'
Edit summary/reason (summary)
''
Whether or not the edit is marked as minor (no longer in use) (minor_edit)
false
New page wikitext, after the edit (new_wikitext)
''
New page size (new_size)
0
Old page size (old_size)
7822
Size change in edit (edit_delta)
-7822
Lines added in edit (added_lines)
[]
Whether or not the change was made through a Tor exit node (tor_exit_node)
0
Unix timestamp of change (timestamp)
1462833739