User:Xperroni/Weightless neural network

Weightless neural networks (WNNs) are a form of content-addressable memory loosely inspired by the excitatory/inhibitory decoding performed by the dendritic trees of biological neurons.[1] In contrast to traditional artificial neural network models, weightless neurons do not store sets of adaptive weights. Instead, they record sample input/output pairs: when a test input is presented to a neuron, it searches its list of stored inputs for a matching entry and returns the output associated with the matched (stored) input. WNN architectures differ in the possible output types, in how inputs are matched, and in whether a test input can fail to match and what the response is in that case, but they generally agree in restricting inputs to bit strings of a fixed length.[2]

Although there is a large degree of overlap between weightless neural networks and Boolean neural networks (BNNs), the two approaches are distinct. BNNs are neural networks that handle only Boolean values at their inputs and outputs, but this definition does not exclude architectures whose neurons store sets of adaptive weights: for example, Hopfield networks are BNNs but not WNNs.[2] Conversely, some WNNs can output non-Boolean values.[3]

Weightless neural networks have been successfully applied to a variety of supervised and unsupervised learning problems. Examples include face recognition,[3] data clustering,[4] stock return prediction[5][6] and multi-label text categorization.[7] Since training is performed simply by writing sample cases to neurons, a weightless neural network can usually be trained completely in a single, unordered pass through the data. Input matching can be implemented efficiently by defining the mismatch between two bit strings as the Hamming weight (population count) of their bitwise XOR. WNNs essentially implement nearest-neighbor search over the space of inputs, so individual neurons can model complex functions without the need for multi-layer arrangements.
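
The matching scheme just described can be sketched in a few lines of Python. The class and method names below are illustrative only and do not come from any particular published WNN architecture; the sketch assumes that inputs are fixed-length bit strings represented as integers.

    # Minimal sketch of weightless-neuron matching by Hamming distance (illustrative names).
    class WeightlessNeuron:
        def __init__(self):
            self.memory = []  # recorded (input pattern, output) pairs

        def train(self, pattern, output):
            # Training is simply recording the sample input/output pair.
            self.memory.append((pattern, output))

        def respond(self, pattern):
            # Mismatch = Hamming weight (popcount) of the XOR of the two bit strings;
            # the output stored with the nearest recorded input is returned.
            nearest = min(self.memory, key=lambda entry: bin(pattern ^ entry[0]).count("1"))
            return nearest[1]

    # Usage: patterns are integers read as fixed-length bit strings.
    neuron = WeightlessNeuron()
    neuron.train(0b10110010, 1)
    neuron.train(0b01001101, 0)
    print(neuron.respond(0b10110000))  # nearest stored pattern is 0b10110010, so prints 1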

Weightless neural networks were first proposed by Igor Aleksander in the 1960s. Aleksander was interested in applying N-tuple sampling machines to learning problems, and developed the RAM node as a universal logic circuit for hardware-based machine learning.[8] Continuous improvements in integrated circuit technology led to custom RAM-node circuits being dropped in favor of standard RAM memories by the 1980s, and by the early 1990s WNN architectures were typically realized in software running on desktop computers and other general-purpose hardware.[2] Currently there is increased interest in parallel implementations, particularly on GPU architectures.[9]

Architectures

Weightless neural network architectures differ in a number of features, including:

  • The training process;
  • How test inputs are matched to recorded entries;
  • What the response is when no match can be found for a test input;
  • The extent to which the "undefined" value u (which is neither 0 nor 1) can be used as a placeholder for bit values;
  • Which values can be returned as output;
  • The overall network layout.

Some of the WNN architectures proposed over the years are described below. A more complete review is available in the literature.[2]

WISARD

The Wilkie, Stonham and Aleksander Recognition Device (WISARD) was a general-purpose pattern recognition machine developed in the 1980s. It was based on the principle of RAM nodes invented by Aleksander, but was actually implemented using conventional RAM banks.[10] A RAM node is a look-up table with 2^N entries addressed by a binary address bus of length N. Each entry holds a single binary value (that is, either 0 or 1). RAM nodes are trained by first setting all entries to 0, and then setting selected entries to 1. When a test input is presented to a RAM node, it returns 1 if that value was written to the corresponding entry, and 0 otherwise.
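
A RAM node as described above can be sketched as follows; the class and method names are illustrative and are not part of the original hardware design.

    # Illustrative sketch of a RAM node: a 2^N-entry look-up table addressed by an N-bit input.
    class RAMNode:
        def __init__(self, address_bits):
            self.table = [0] * (2 ** address_bits)  # all entries start at 0

        def train(self, address):
            # Training writes 1 to the entry addressed by the input pattern.
            self.table[address] = 1

        def respond(self, address):
            # Returns 1 only if this exact address was written during training, 0 otherwise.
            return self.table[address]

    node = RAMNode(address_bits=4)
    node.train(0b1010)
    print(node.respond(0b1010))  # 1: this pattern was written during training
    print(node.respond(0b1011))  # 0: this pattern was not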

In the WISARD architecture, RAM nodes are grouped into units called discriminators, which sample a common input area through distinct connection patterns – that is, no two discriminators read exactly the same input regions. During training, each discriminator assigns 1 to the patterns (that is, memory addresses) representing the class it is supposed to recognize, and 0 to all others. During testing, every discriminator samples the input area and returns the sum of the responses of its RAM nodes: the highest output indicates the discriminator (and therefore the class) that best matches the input. Connection patterns are chosen randomly when the network is set up and are preserved from that point on as a network parameter.
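
Building on the RAMNode sketch above, a WISARD-style discriminator might be sketched as follows. The splitting of the input into N-bit tuples through a fixed random mapping and the summation of node responses follow the description above; the class layout and parameter names are illustrative.

    # Illustrative WISARD-style discriminator (one per class), reusing the RAMNode sketch above.
    import random

    class Discriminator:
        def __init__(self, input_size, tuple_size, seed=0):
            # The random connection pattern is chosen at setup time and kept as a parameter.
            self.mapping = list(range(input_size))
            random.Random(seed).shuffle(self.mapping)
            self.tuple_size = tuple_size
            self.nodes = [RAMNode(tuple_size) for _ in range(input_size // tuple_size)]

        def _addresses(self, bits):
            # bits: a sequence of 0s and 1s covering the common input area.
            for i, node in enumerate(self.nodes):
                positions = self.mapping[i * self.tuple_size:(i + 1) * self.tuple_size]
                address = 0
                for p in positions:
                    address = (address << 1) | bits[p]
                yield node, address

        def train(self, bits):
            for node, address in self._addresses(bits):
                node.train(address)

        def respond(self, bits):
            # Sum of RAM-node responses; the discriminator with the highest sum wins.
            return sum(node.respond(address) for node, address in self._addresses(bits))

One discriminator would be trained per class, each with its own seed (and therefore its own connection pattern); at test time the same input is presented to all of them and the class of the highest-scoring discriminator is returned.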

Probabilistic Logic Node (PLN)

When a RAM node returns 1, this is an unambiguous indication that the input was recognized as a member of the trained class. When the response is 0, however, there is no way to tell a negative example (that is, a pattern that was marked as not belonging to the class) apart from a previously unseen pattern. The Probabilistic Logic Node (PLN) solves this problem by allowing each memory location to store one of three possible values: 0, 1 or u (undefined). PLNs are trained by first setting all entries to u, and then setting selected entries to either 1 or 0, depending on whether or not they belong to the trained class. When a test input addresses an entry with an undefined value, either 0 or 1 may be returned – the output of a random bit generator is used.
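
A single PLN node, with its three-valued entries, could be sketched as follows; the representation of u and the interface are illustrative choices.

    # Illustrative sketch of a Probabilistic Logic Node: entries hold 0, 1 or u (undefined).
    import random

    U = "u"  # placeholder for the undefined value

    class PLNNode:
        def __init__(self, address_bits, seed=0):
            self.table = [U] * (2 ** address_bits)  # all entries start undefined
            self.rng = random.Random(seed)

        def train(self, address, value):
            # value is 1 for patterns of the trained class, 0 for negative examples.
            self.table[address] = value

        def respond(self, address):
            entry = self.table[address]
            if entry == U:
                return self.rng.randint(0, 1)  # undefined entry: return a random bit
            return entry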

In a PLN network, nodes are grouped into three-layered tree structures called pyramids, where each node has a limited number of inputs (low fan-in) and is connected to at most one output node (fan-out of 1). A network is composed of several pyramids whose input terminals sample a common input area; their number and internal connection patterns constitute the network's parameters.

Several algorithms have been devised for training PLN networks.[11][12][13][14] One of the simplest is as follows[14] (a code sketch is given after the list):

  1. Set all memory entries in all nodes to u;
  2. Load a training pattern to the common input area;
  3. As nodes in the input layer fire in response to the presented input, activity propagates forward through the network until it reaches the output layer, producing the network's output;
  4. Compare the network output to the desired response for the given training input;
  5. For each node in the output layer:
    1. If the desired value matches the node's output, any addressed u entries are written with the last value returned;
    2. Otherwise, try again; if a match cannot be achieved after β tries, any addressed defined entries have their values reverted to u;
  6. Repeat steps 2–5 for all training cases, until all patterns produce correct responses without any (re)writing of memory entries.
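
A simplified sketch of this procedure is given below for a single pyramid with one hidden layer of two-input nodes feeding a single two-input output node. The pyramid layout, the value of β (beta) and the helper names are illustrative choices, not fixed by the published algorithm.

    # Simplified sketch of the PLN training procedure above, for one small pyramid.
    import random

    U = "u"                      # undefined value
    rng = random.Random(0)

    def make_node(address_bits):
        return [U] * (2 ** address_bits)      # step 1: every entry starts undefined

    def respond(node, address):
        entry = node[address]
        return rng.randint(0, 1) if entry == U else entry

    def address_of(bits):
        value = 0
        for b in bits:
            value = (value << 1) | b
        return value

    def forward(pyramid, pattern):
        # Steps 2-3: propagate activity from the input layer to the output node,
        # recording which address each node saw and the value it returned.
        leaves, root = pyramid
        trace, hidden = [], []
        for i, node in enumerate(leaves):
            addr = address_of(pattern[2 * i:2 * i + 2])
            out = respond(node, addr)
            trace.append((node, addr, out))
            hidden.append(out)
        root_addr = address_of(hidden)
        root_out = respond(root, root_addr)
        trace.append((root, root_addr, root_out))
        return root_out, trace

    def train(pyramid, samples, beta=5, max_epochs=100):
        for _ in range(max_epochs):
            converged = True
            for pattern, desired in samples:           # step 6: sweep all training cases
                matched = False
                for _ in range(beta):                  # step 5.2: at most beta tries
                    output, trace = forward(pyramid, pattern)
                    if output == desired:              # step 4: compare with desired response
                        matched = True
                        break
                if matched:
                    for node, addr, out in trace:      # step 5.1: freeze addressed u entries
                        if node[addr] == U:
                            node[addr] = out
                            converged = False
                else:
                    converged = False
                    for node, addr, _ in trace:        # step 5.2: revert addressed entries to u
                        node[addr] = U
            if converged:
                return True                            # correct responses, no (re)writing
        return False

    # Usage (illustrative task): learn the parity of a 4-bit input.
    pyramid = ([make_node(2), make_node(2)], make_node(2))
    samples = [([a, b, c, d], a ^ b ^ c ^ d)
               for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)]
    train(pyramid, samples)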

Virtual Generalizing RAM (VG-RAM)

Comparison to Weighted Models

Current Issues

References

  1. ^ Aleksander, Igor. "A brief introduction to Weightless Neural Systems" (PDF). ESANN. Retrieved 18 March 2014.
  2. ^ a b c d Ludermir, Teresa B. (1998). "Weightless Neural Models: A Review of Current and Past Works" (PDF). Neural Computing Surveys. 2: 41–61. Retrieved 18 March 2014.
  3. ^ a b De Souza, Alberto F.; Badue, Claudine; Pedroni, Felipe; Oliveira, Elias; Dias, Stiven Schwanz; Oliveira, Hallysson; De Souza, Soterio Ferreira (2008). "Face Recognition with VG-RAM Weightless Neural Networks" (PDF). Lecture Notes in Computer Science. 5163: 951–960. doi:10.1007/978-3-540-87536-9_97. ISBN 978-3-540-87535-2. Retrieved 18 March 2014.
  4. ^ Wickert, Iuri; França, Felipe M. G. (2001). "AUTOWISARD: Unsupervised Modes for the WISARD". Lecture Notes in Computer Science. 2084: 435–441. CiteSeerX 10.1.1.21.5091. doi:10.1007/3-540-45720-8_51. ISBN 978-3-540-42235-8. Retrieved 18 March 2014.
  5. ^ De Souza, Alberto Ferreira; Freitas, Fabio Daros; De Almeida, Andre Gustavo Coelho (2010). "High performance prediction of stock returns with VG-RAM weightless neural networks". IEEE Workshop on High Performance Computational Finance (WHPCF): 1–8. doi:10.1109/WHPCF.2010.5671832. ISBN 978-1-4244-9062-2.
  6. ^ Coelho de Almeida, André G. "Preditor de Alto Desempenho para Retornos de Ações Baseado em Redes Neurais sem Peso [High-Performance Predictor of Stock Returns Based on Weightless Neural Networks]" (PDF) (Master's thesis). Universidade Federal do Espírito Santo (UFES). Retrieved 18 March 2014.
  7. ^ De Souza, Alberto F.; Pedroni, Felipe; Oliveira, Elias; Ciarelli, Patrick M.; Henrique, Wallace Favoreto; Veronese, Lucas; Badue, Claudine (June 2009). "Automated multi-label text categorization with VG-RAM weightless neural networks". Neurocomputing. 72 (10–12): 2209–2217. doi:10.1016/j.neucom.2008.06.028. Retrieved 18 March 2014.
  8. ^ Aleksander, Igor (August 1966). "Self-adaptive universal logic circuits". IEEE Electronics Letters. 2 (8): 321–322. doi:10.1049/el:19660270.
  9. ^ Farias, Ricardo. "CUDA Research Center Program" (PDF). Universidade Federal do Rio de Janeiro (UFRJ). Retrieved 18 March 2014.
  10. ^ Aleksander, I.; Thomas, W.V.; Bowden, P.A. (1984). "WISARD: a radical step forward in image recognition". Sensor Review. 4 (3): 120–124. doi:10.1108/eb007637. ISSN 0260-2288.
  11. ^ Myers, C. E. (Oct 1989). "Output functions for probabilistic logic nodes". First IEE International Conference on Artificial Neural Networks (Conf. Publ. No. 313). pp. 310–314.
  12. ^ Al-Alawi, R.; Stonham, T.J. (1992). "A Training Strategy and Functionality Analysis of Digital Multi-Layer Neural Networks". Journal of Intelligent Systems. 2 (1–4). doi:10.1515/JISYS.1992.2.1-4.53. ISSN 2191-026X.
  13. ^ Zhang, Bo; Zhang, Ling; Zhang, Huai (1995). "The complexity of learning in PLN networks". Neural Networks. 8 (2): 221–228. doi:10.1016/0893-6080(94)00082-W. ISSN 0893-6080.
  14. ^ a b Ludermir, Teresa B; de Oliveira, Wilson R (1994). "Weightless neural models". Computer Standards & Interfaces. 16 (3): 253–263. doi:10.1016/0920-5489(94)90016-7. ISSN 0920-5489.
External links

  • Clarus: a library for machine learning, geared towards computer vision problems, that includes an API for working with VG-RAM Weightless Neural Networks.