Talk:Autoencoder
WikiProject Robotics (Start-class, Mid-importance)
WikiProject Computer science (Start-class)
Training section
Much of the training section does not seem to relate to autoencoders in particular, but to neural networks in general. No? BrokenSegue 09:17, 29 August 2011 (UTC)
Clarification of "An output layer, where each neuron has the same meaning as in the input layer"
I don't understand what "has the same meaning as in the input layer" means in the output layer definition in the article. Can someone explain, or clarify it in the article, please? Many thanks, p.r.newman (talk) 09:53, 10 October 2012 (UTC)
Answer: The outputs are the same as the inputs, i.e. y_i = x_i. The autoencoder tries to learn the identity function. Although it might seem that if the number of hidden units >= the number of input units (/output units) the resulting weights would be the trivial identity, in practice this does not turn out to be the case (probably because the weights start so small). Sparse autoencoders, where only a limited number of hidden units can be activated at once, avoid this problem even in theory. 216.169.216.1 (talk) 16:47, 17 September 2013 (UTC) Dave Rimshnick
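To make the answer above concrete, here is a minimal sketch (not from the article) of a one-hidden-layer autoencoder trained so that its output approximates its input, y_i ≈ x_i. All names, sizes, and the learning rate are illustrative choices; note the small initial weights, which is why training does not simply collapse to the exact identity map.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 4                 # hidden size equal to input size
X = rng.normal(size=(200, n_in))      # toy data, one row per example

# Small random initial weights, as mentioned above.
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)                             # encoder bias
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
b2 = np.zeros(n_in)                                 # decoder bias

lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)          # encode
    Y = H @ W2 + b2                   # decode (linear output layer)
    err = Y - X                       # reconstruction error, target is the input itself
    # Backpropagation for the mean squared reconstruction loss
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)    # tanh'(a) = 1 - tanh(a)^2
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

recon = np.tanh(X @ W1 + b1) @ W2 + b2
mse = np.mean((recon - X) ** 2)
print(mse)                            # small after training; ~1.0 before
```

The learned W1 @ W2 ends up close to (a scaled version of) the identity, rather than starting there, which matches the observation that the trivial solution is not what gradient descent finds from a near-zero initialization.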
Where is the structure section taken from?
I was wondering if there are any book sources that could be added as references where a similar approach to describing the autoencoder is taken. Also, what are the W and b terms? It's not very clear what role W and b play in the encoding and decoding process.
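The article doesn't spell it out here, but in the usual formulation W is the encoder's weight matrix and b its bias vector: the code is z = s(Wx + b) for some activation s, and the decoder applies its own weights and bias (often written W' and b') to map z back to a reconstruction. A minimal sketch with made-up dimensions:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(1)
x = rng.normal(size=3)            # one input vector, dimension 3

W  = rng.normal(size=(2, 3))      # encoder weights: map input (3) to code (2)
b  = np.zeros(2)                  # encoder bias, one entry per hidden unit
Wp = rng.normal(size=(3, 2))      # decoder weights (W'): map code back to 3
bp = np.zeros(3)                  # decoder bias (b')

z = sigmoid(W @ x + b)            # encoding: hidden code z = s(Wx + b)
x_hat = sigmoid(Wp @ z + bp)      # decoding: reconstruction x' = s(W'z + b')

print(z.shape, x_hat.shape)       # (2,) (3,)
```

So W determines how input components are mixed into each hidden unit, and b shifts each unit's pre-activation; the decoder's W' and b' play the mirror-image role when reconstructing.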
Hi there, for anyone struggling to find the correct scientific source for the autoencoder and the argmin formulation: the source you're looking for is "Threaded Ensembles of Supervised and Unsupervised Neural Networks for Stream Learning". Anyone who, unlike me, cares enough could add that citation to the article. glhf