M-theory (learning framework)

In machine learning and computer vision, M-Theory is a learning framework inspired by the functioning of the visual cortex and originally developed for the recognition and classification of objects in visual scenes. M-Theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on M-Theory have achieved human-level performance.[1]

The core principle of M-Theory is the use of image representations that are invariant to various transformations of the image (such as translation, scaling and rotation). In contrast with other approaches that use invariant representations, in M-Theory these representations are not hardcoded into the algorithms but are learned from data. Like the visual cortex, the learning architecture suggested by M-Theory consists of several layers. In contrast with some other models that exploit similar ideas (such as the Memory-prediction framework), the M-Theory architecture is purely feedforward: it does not model the feedback flow of information from higher levels of the cortical hierarchy.
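
As an illustration of this principle, a translation-invariant representation can be obtained by projecting an image onto a set of stored templates under all translations and pooling the responses over those translations. The sketch below is an assumption-laden example rather than code from the literature: the random templates, the use of circular shifts as the transformation set, and max pooling are choices made here only to show how pooled template responses become insensitive to shifts of the input.

    import numpy as np

    def translations(patch, shifts):
        """Yield circularly shifted copies of a 2-D template (example transformation set)."""
        for dy, dx in shifts:
            yield np.roll(np.roll(patch, dy, axis=0), dx, axis=1)

    def signature(image, templates, shifts):
        """Pool template responses over shifts to obtain a shift-invariant feature vector."""
        sig = []
        for t in templates:
            responses = [np.vdot(image, g_t) for g_t in translations(t, shifts)]
            sig.append(max(responses))  # pooling (here: max) removes the dependence on the shift
        return np.array(sig)

    # Toy usage: the signature of an image and of a shifted copy are (nearly) identical.
    rng = np.random.default_rng(0)
    image = rng.standard_normal((16, 16))
    templates = [rng.standard_normal((16, 16)) for _ in range(8)]
    shifts = [(dy, dx) for dy in range(16) for dx in range(16)]

    s1 = signature(image, templates, shifts)
    s2 = signature(np.roll(image, 3, axis=1), templates, shifts)
    print(np.allclose(s1, s2))  # True: pooling over all shifts yields invariance

Because the pooling is taken over the full set of shifts, the resulting vector does not change when the input image is shifted; analogous pooling over other transformation sets can be used for invariances such as scaling and rotation.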

Citations

  1. ^ Serre T., Oliva A., Poggio T. (2007). "A feedforward architecture accounts for rapid categorization". PNAS 104 (15): 6424–6429.