Learning rule

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Menschenreads at 08:45, 17 August 2020 (Hebb's Rule).

An artificial neural network's learning rule or learning process is a method, mathematical logic or algorithm which improves the network's performance, its training time, or both. Usually, this rule is applied repeatedly over the network. Learning is done by updating the weights and bias levels of a network when the network is simulated in a specific data environment.[1] A learning rule may accept the existing conditions (weights and biases) of the network, and will compare the expected result and the actual result of the network to produce new and improved values for the weights and biases.[2] Depending on the complexity of the actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.

The learning rule is one of the factors which decides how fast and how accurately the artificial network can be developed. Depending upon the process used to develop the network, there are three main models of machine learning:

  1. Unsupervised learning
  2. Supervised learning
  3. Reinforcement learning

Learning Rule Types

Though these learning rules might appear to be based on similar ideas, they do have subtle differences: each is a generalisation or application of the previous rule. It therefore makes sense to study them separately, based on their origins and intents.

Hebb's Rule

Developed by Donald Hebb in 1949 to describe the firing of biological neurons. It defines Hebbian learning with respect to biological neurons; in the mid-1950s it was also applied to computer simulations of neural networks in artificial neural networks.
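
Hebb's rule strengthens a weight when the units it connects activate together. A minimal sketch of one update step, Δw = η·x·y, follows; the learning rate, input vector, and postsynaptic activation below are illustrative assumptions, not values from the text.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """One Hebbian step: grow each weight in proportion to the
    product of presynaptic activation x and postsynaptic activation y."""
    return w + eta * x * y

w = np.zeros(3)                  # initial weights
x = np.array([1.0, 0.0, 1.0])    # presynaptic activations
y = 1.0                          # postsynaptic activation
w = hebbian_update(w, x, y)
print(w)                         # weights grow only where x and y fire together
```

Note that only the first and third weights change, since the second input was silent: "neurons that fire together, wire together."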

Perceptron Learning Rule (PLR)

The perceptron learning rule originates from the Hebbian assumption, and was used by Frank Rosenblatt in his perceptron in 1958. The net (weighted sum of inputs) is passed to the activation (transfer) function, and the function's output is used for adjusting the weights.
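
The rule above can be sketched as follows: the net is thresholded to produce an output, and the weights move by the difference between target and output. The AND-gate training data and learning rate are illustrative assumptions.

```python
import numpy as np

def perceptron_step(w, b, x, target, eta=1.0):
    """One perceptron update: threshold the net, then adjust weights
    by the (target - output) error, which is always 0, 1, or -1."""
    net = np.dot(w, x) + b
    output = 1 if net >= 0 else 0      # threshold (step) activation
    error = target - output
    return w + eta * error * x, b + eta * error

# Train on AND-gate data (linearly separable, so the rule converges)
X = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
T = [0, 0, 0, 1]
w, b = np.zeros(2), 0.0
for _ in range(10):
    for x, t in zip(X, T):
        w, b = perceptron_step(w, b, x, t)

preds = [1 if np.dot(w, x) + b >= 0 else 0 for x in X]
print(preds)  # → [0, 0, 0, 1]
```

Because the error term only takes the values 0, 1, or −1, the perceptron rule makes discrete corrections, unlike the real-valued updates of the delta rule described next.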

Widrow-Hoff Learning (Delta Learning Rule)

Similar to the perceptron learning rule but with a different origin. It was developed for use in the ADALINE network. The weights are adjusted according to the weighted sum of the inputs (the net), which makes ADALINE different from the normal perceptron. When Widrow-Hoff is applied to binary targets specifically, it is sometimes referred to as the delta rule. It is considered to be a special case of the more general back-propagation algorithm.

Delta rule (DR) is similar to the Perceptron Learning Rule (PLR), with some differences:

  1. Error (δ) in DR is not restricted to the values 0, 1, or -1 (as in PLR), but may have any real value
  2. DR can be derived for any differentiable output/activation function f, whereas PLR works only for the threshold output function
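
Both differences show up in a sketch of one delta-rule step for a linear (identity-activation) unit, Δw = η·(target − net)·x; the toy regression data and learning rate are assumptions for illustration.

```python
import numpy as np

def delta_rule_step(w, x, target, eta=0.05):
    """One Widrow-Hoff update on a linear unit: the error is the
    real-valued difference between target and net, not just 0 or ±1."""
    net = np.dot(w, x)          # weighted sum of inputs (the net)
    error = target - net        # δ may take any real value
    return w + eta * error * x

# Fit y = 2*x1 + 1, with the bias folded in as a constant input of 1
X = [np.array([x1, 1.0]) for x1 in (0.0, 1.0, 2.0, 3.0)]
T = [1.0, 3.0, 5.0, 7.0]
w = np.zeros(2)
for _ in range(500):
    for x, t in zip(X, T):
        w = delta_rule_step(w, x, t)
print(np.round(w, 2))  # converges toward [2., 1.]
```

Because the update minimises squared error by gradient descent, the weights settle on the least-squares solution rather than merely a separating boundary.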

The delta rule also closely resembles the Rescorla-Wagner model, under which Pavlovian conditioning occurs.[3]

Back-propagation

Seppo Linnainmaa is said to have developed the back-propagation algorithm in 1970,[4] but the origins of the algorithm go back to the 1960s, with many contributors. It is a generalisation of the least mean squares algorithm in the linear perceptron and of the delta learning rule.
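
The generalisation can be sketched on a one-hidden-layer network: the delta rule's error term is propagated backwards through the chain rule to produce deltas for the hidden layer. The architecture, XOR data, sigmoid activation, and learning rate below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # input -> hidden
W2, b2 = rng.normal(size=3), 0.0                # hidden -> output

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])              # XOR targets

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return h, sigmoid(W2 @ h + b2)

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, T))

loss_before = total_loss()
eta = 0.5
for _ in range(2000):
    for x, t in zip(X, T):
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)           # delta at the output unit
        d_hid = d_out * W2 * h * (1 - h)        # deltas propagated backwards
        W2 -= eta * d_out * h; b2 -= eta * d_out
        W1 -= eta * np.outer(d_hid, x); b1 -= eta * d_hid
loss_after = total_loss()
print(loss_before, loss_after)  # squared error shrinks as training proceeds
```

With a single layer and identity activation, the hidden-layer step disappears and the update reduces to the Widrow-Hoff rule, which is why the delta rule is regarded as back-propagation's special case.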

References

  1. ^ Simon Haykin (16 July 1998). "Chapter 2: Learning Processes". Neural Networks: A comprehensive foundation (2nd ed.). Prentice Hall. pp. 50–104. ISBN 978-8178083001. Retrieved 2 May 2012.
  2. ^ S Russell, P Norvig (1995). "Chapter 18: Learning from Examples". Artificial Intelligence: A Modern Approach (3rd ed.). Prentice Hall. pp. 693–859. ISBN 0-13-103805-2. Retrieved 20 Nov 2013.
  3. ^ Rescorla, Robert (2008-03-31). "Rescorla-Wagner model". Scholarpedia. 3 (3): 2237. doi:10.4249/scholarpedia.2237. ISSN 1941-6016.
  4. ^ Schmidhuber, Juergen (January 2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. doi:10.1016/j.neunet.2014.09.003.