Talk:Kernel principal component analysis

From Wikipedia, the free encyclopedia
WikiProject Statistics (Unassessed)
This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the importance scale.

WikiProject Robotics (Start-class, Mid-importance)
This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Mid-importance on the project's importance scale.

On redirection to SVM

What is the relationship between kernel PCA and SVMs? I don't see any direct connection. //Memming 15:50, 17 May 2007 (UTC)[reply]

There is no direct relation; this is a common mistake. Not every kernel method involves an SVM; kernels are a more general concept in mathematics.
Then I'll break the redirection to SVM. //Memming 12:00, 21 May 2007 (UTC)[reply]

Data reduction in the feature space

In the literature, I found the way to center the input data in the feature space. Nevertheless, I never found a way to reduce the data in the feature space, so if anyone has knowledge about it, I would be glad if they could explain that topic here or give a few links.
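For what it is worth, here is a minimal sketch of how the reduction is usually carried out once the kernel matrix is available: the data are never reduced explicitly in the feature space; each point is instead projected onto the leading kernel principal components. The function name kpca_reduce and the variable names are illustrative only, and the rescaling follows the usual convention that the implicit feature-space eigenvectors have unit norm.

    import numpy as np

    def kpca_reduce(K, n_components):
        """Project the training points onto the leading kernel principal components."""
        N = K.shape[0]
        # Center the kernel matrix in feature space:
        # K' = K - 1_N K - K 1_N + 1_N K 1_N, with (1_N)_ij = 1/N
        one_n = np.ones((N, N)) / N
        Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

        # Eigendecomposition of the centered (symmetric) kernel matrix
        eigvals, eigvecs = np.linalg.eigh(Kc)
        idx = np.argsort(eigvals)[::-1]          # sort by decreasing eigenvalue
        eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]

        # Rescale the coefficient vectors so that (a^k)^T K' a^k = 1,
        # i.e. the implicit feature-space eigenvectors have unit length
        alphas = eigvecs[:, :n_components] / np.sqrt(eigvals[:n_components])

        # Row i holds the projection of point i onto the top components
        return Kc @ alphas

Whether this counts as "reducing the data" depends on what is wanted afterwards; recovering an actual pre-image back in the input space is a separate, and in general ill-posed, problem.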

Example of kPCA projection

There is something wrong with the example given. It looks like the kernel matrix was not centered before the eigendecomposition. Is this an acceptable modification of the algorithm? If it is, in which cases does it make sense not to center the kernel matrix before the other calculations?

See more on: http://agbs.kyb.tuebingen.mpg.de/km/bb/showthread.php?tid=1062 —Preceding unsigned comment added by 201.92.120.245 (talk) 01:20, 13 January 2010 (UTC)[reply]

Also, is it possible to show working Kernel PCA code to reproduce the example plots? I tried to use a Gaussian kernel, and I only obtain similar results when I use \sigma = 2, not \sigma = 1. —Preceding unsigned comment added by 141.211.51.212 (talk) 10:36, 5 February 2011 (UTC)[reply]
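Not the code that produced the article's figures, but a hedged sketch of one way to build the Gaussian kernel matrix and feed it to a projection routine such as the kpca_reduce sketch above; X and sigma are placeholders:

    import numpy as np

    def gaussian_kernel_matrix(X, sigma):
        # K_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2))
        sq_norms = np.sum(X ** 2, axis=1)
        sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))

    # Example usage (placeholder data, not the article's dataset):
    # X = ...                                   # (N, 2) array of input points
    # K = gaussian_kernel_matrix(X, sigma=2.0)
    # Y = kpca_reduce(K, n_components=2)        # first two kernel principal components

The apparent discrepancy in \sigma could simply reflect a different kernel-width convention (for example 2\sigma^2 versus \sigma^2 in the denominator), so it is hard to judge without seeing the original code.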

Expand?

This looks like an interesting subject, but I don't entirely follow what's here. I gather that the "kernel trick" essentially allows you to perform a nonlinear transform on your data. First, I think the example needs to be explained further. There are two output images that don't flow with the text. The text also only goes part way in describing how this is done. Here are some questions:

  1. How do you choose a kernel?
  2. How is the PCA performed? (Is it really just linear regression on transformed data by eigendecomposition of the covariance matrix of the transformed points?)
  3. If the nonlinear transformation is done implicitly by replacing the inner product in the PCA algorithm, then doesn't that mean you need to do something other than a simple eigendecomposition?

Thanks. —Ben FrantzDale (talk) 01:19, 28 April 2009 (UTC)[reply]
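As a partial answer to questions 2 and 3, here is a hedged sketch of the derivation that usually appears in the literature (the notation may differ slightly from the article's): the point of the kernel trick is that the eigendecomposition is carried out on the N x N kernel matrix rather than on the covariance matrix of the explicitly transformed points.

    % Covariance operator in the feature space (data assumed centered there):
    C = \frac{1}{N} \sum_{i=1}^{N} \Phi(\mathbf{x}_i)\,\Phi(\mathbf{x}_i)^{T}
    % Any eigenvector with nonzero eigenvalue lies in the span of the mapped points,
    % so it can be written as v = \sum_i a_i \Phi(\mathbf{x}_i).  Substituting into
    % C v = \lambda v and taking inner products with each \Phi(\mathbf{x}_j) gives
    K \mathbf{a} = \lambda N \mathbf{a},
    \qquad K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j),
    % an ordinary symmetric eigenproblem for the N x N kernel matrix.  The projection
    % of a (possibly new) point x onto the k-th nonlinear component is then
    \mathbf{v}^{k} \cdot \Phi(\mathbf{x}) = \sum_{i=1}^{N} a_i^{k}\, k(\mathbf{x}_i, \mathbf{x}).

So no explicit nonlinear transform of the data is ever computed; everything is expressed through evaluations of the kernel function.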


One easy, but significant, improvement that could be made is to include in the example a kernel equivalent to 'ordinary' PCA; at the moment it is not clear what the advantage is. For instance, the first kernel in the current example says "groups are distinguishable using the first component only", but (to a layperson) this also seems to be true for the second kernel in the current example. This should also be clarified.
It would also be interesting to know (at least broadly) how the technique is implemented conceptually, and whether it is supported in standard software packages.
—DIV (128.250.247.158 (talk) 07:19, 29 July 2009 (UTC))[reply]
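Regarding software support: as one illustration (not necessarily how the article's figures were made), scikit-learn provides a KernelPCA estimator. A minimal, hedged usage sketch with placeholder data:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    X = np.random.rand(100, 2)        # placeholder data, not the article's example
    # RBF kernel exp(-gamma * ||x - y||^2); gamma plays the role of 1 / (2 * sigma^2)
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
    X_kpca = kpca.fit_transform(X)    # rows are the points projected onto the top 2 components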

Centralization

What is the source and justification for this "centralization" technique? Can anyone expand on this? — Preceding unsigned comment added by Chrisalvino (talkcontribs) 21:34, 31 August 2011 (UTC)[reply]
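For reference, the centering step that typically appears in the kernel PCA literature looks like the following (a sketch only; 1_N denotes the N x N matrix with every entry equal to 1/N). The justification is that the mapped points generally do not have zero mean in the feature space, so the mean has to be subtracted implicitly, through the kernel matrix:

    % Replace each mapped point by its centered version
    % \tilde{\Phi}(\mathbf{x}_i) = \Phi(\mathbf{x}_i) - \frac{1}{N}\sum_{j=1}^{N} \Phi(\mathbf{x}_j),
    % which in terms of the kernel matrix amounts to
    K' = K - \mathbf{1}_N K - K \mathbf{1}_N + \mathbf{1}_N K \mathbf{1}_N,
    \qquad (\mathbf{1}_N)_{ij} = \tfrac{1}{N}.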


The mystery of Y

Maybe it is just me, but I can't find where y, in k(x,y), is defined, so I have no idea what y is. — Preceding unsigned comment added by Eep1mp (talkcontribs) 14:22, 13 October 2011 (UTC)[reply]

Normalization of eigenvectors

Should the normalization condition on the eigenvectors include a transpose on the first eigenvector?

1 = (a^k)^T K a^k instead of 1 = a^k K a^k.
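Strictly speaking, yes: if a^k is read as a column vector, the first factor needs a transpose for the expression to be a scalar, although many texts simply write the dot product a^k · (K a^k). A hedged sketch of where the condition comes from:

    % With \mathbf{v}^k = \sum_i a_i^k \Phi(\mathbf{x}_i), unit length of the
    % feature-space eigenvector means
    1 = \mathbf{v}^k \cdot \mathbf{v}^k
      = \sum_{i,j} a_i^k\, a_j^k\, k(\mathbf{x}_i, \mathbf{x}_j)
      = (\mathbf{a}^k)^{T} K\, \mathbf{a}^k
      = \mu_k\, (\mathbf{a}^k)^{T} \mathbf{a}^k,
    % where \mu_k is the eigenvalue of the (centered) kernel matrix belonging to
    % \mathbf{a}^k; the condition just fixes the overall scale of \mathbf{a}^k.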