M-theory (learning framework)
In machine learning and computer vision, M-Theory is a learning framework inspired by the functioning of the visual cortex and originally developed for the recognition and classification of objects in visual scenes. M-Theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on M-Theory have achieved human-level performance.[1]
The core principle of M-Theory is the use of representations that are invariant to various transformations of images, such as rotation, translation and scaling. In contrast with other approaches that use invariant representations, in M-Theory these representations are not hardcoded into the algorithms but learned. Like the visual cortex, the learning architecture suggested by M-Theory consists of several layers. In contrast with some other models exploiting similar ideas (such as the Memory-prediction framework), the M-Theory architecture is purely feedforward: it does not model feedback flow of information from higher levels of the cortical hierarchy.
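The invariance principle can be illustrated with a templates-and-pooling computation: an image (or signal) is compared against stored templates under a set of transformations, and the responses are pooled over the transformations to yield a signature that does not change when the input itself is transformed. The following is a minimal sketch of this idea, assuming one-dimensional signals and circular shifts as a stand-in for translation; the function names and parameters are illustrative and not taken from the cited source.

```python
import numpy as np

rng = np.random.default_rng(0)

def signature(x, templates, n_shifts):
    """Return a shift-invariant signature of signal x.

    For each template t, compute <x, shift_g(t)> for every shift g,
    then pool (max) over the shifts. Pooling over the transformation
    group is what makes the output invariant to shifts of x.
    """
    sig = []
    for t in templates:
        responses = [np.dot(x, np.roll(t, g)) for g in range(n_shifts)]
        sig.append(max(responses))          # pooling step
    return np.array(sig)

n = 32
templates = [rng.standard_normal(n) for _ in range(8)]

x = rng.standard_normal(n)
x_shifted = np.roll(x, 5)                   # translated copy of x

s1 = signature(x, templates, n)
s2 = signature(x_shifted, templates, n)
print(np.allclose(s1, s2))                  # True: signature is shift-invariant
```

Because a circular shift of the input only permutes the set of template responses, the pooled value is unchanged, which is the sense in which the signature is invariant.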
Citations
1. Serre, T., Oliva, A., Poggio, T. (2007). A feedforward architecture accounts for rapid categorization. PNAS, vol. 104, no. 15, pp. 6424–6429.