Talk:Autoencoder

WikiProject Robotics (Stub-class, Mid-importance)
This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Stub-class on Wikipedia's content assessment scale.
This article has been rated as Mid-importance on the project's importance scale.

WikiProject Computer science (Unassessed)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.

Training section

Much of the training section does not seem to relate to auto-encoders in particular, but to neural networks in general. No? BrokenSegue 09:17, 29 August 2011 (UTC)

Clarification of "An output layer, where each neuron has the same meaning as in the input layer"

I don't understand what "has the same meaning as in the input layer" means in the output layer definition in the article. Can someone explain, or clarify it in the article, please? Many thanks, p.r.newman (talk) 09:53, 10 October 2012 (UTC)

Answer: The outputs are the same as the inputs, i.e. y_i = x_i; the autoencoder tries to learn the identity function. Although it might seem that if the number of hidden units >= the number of input units (= the number of output units) the learned weights would reduce to the trivial identity mapping, in practice this does not turn out to be the case, probably because training starts from small weights. Sparse autoencoders, where only a limited number of hidden units can be activated at once, avoid this problem even in theory. 216.169.216.1 (talk) 16:47, 17 September 2013 (UTC) Dave Rimshnick
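
A minimal sketch of the point above (not from the article; the names, sizes, and training details are all illustrative assumptions): a one-hidden-layer autoencoder in Python/NumPy with as many hidden units as inputs, initialized with small weights and trained to reproduce its own input. Even though the trivial identity mapping would fit perfectly, gradient descent from small weights settles on a non-trivial encoding.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 8                      # hidden layer as wide as the input
X = rng.standard_normal((100, n_in))       # toy data

# Small random initialization -- the key point: training does not
# recover the trivial identity weights even though they exist.
W1 = 0.01 * rng.standard_normal((n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.01 * rng.standard_normal((n_hidden, n_in)); b2 = np.zeros(n_in)

lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)               # encoder
    Y = H @ W2 + b2                        # decoder; the target is X itself (y_i = x_i)
    err = Y - X                            # reconstruction error
    dW2 = H.T @ err / len(X);  db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)         # backprop through tanh
    dW1 = X.T @ dH / len(X);   db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("reconstruction MSE:", float((err**2).mean()))

A sparse variant would add a penalty on the hidden activations to this loss (for instance, a KL-divergence term pushing each unit's average activation toward a small target), so that only a few hidden units can be active for any given input.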