Talk:Computational learning theory

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 109.58.44.105 (talk) at 15:28, 8 January 2013 (Evolvability problems of neurological applications.: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
WikiProject Robotics (Start-class, High-importance)
This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as High-importance on the project's importance scale.


Untitled

This article looks like it overlaps learning theory (statistics) and should be combined in some way. -- hike395 05:29, 30 Jan 2004 (UTC)

Computational learning theory is more properly a sub-discipline of computational complexity theory. Although a number of results in learning theory use results from statistics, the contribution of (modern) learning theory (and learning theorists) to statistics is not all that significant. --krishnanp 15:23, 11 Nov 2004 (UTC)

Bayesian statistics is not a subdiscipline of computational learning theory. It is a rigorous approach to statistics coupled with a different philosophy about probability. It would make no sense for the Bayesian approach to statistics to be a subdiscipline of CLT while classical frequentist statistics was not — unless you are claiming that statistics as a whole is a subdiscipline of CLT, which is even less justified.

Blaise 17:50, 30 Apr 2005 (UTC)

Good point. I changed the list to "approaches" rather than subdisciplines. The theoretical underpinnings for many of these approaches arose outside of CLT. -- hike395 23:11, 30 Apr 2005 (UTC)

Category

The Category:Machine learning is a bit overfull. Anyone here up for organizing the relevant terms from computational learning theory in an appropriate subcategory? Thank you. --Chire (talk) 16:07, 27 October 2011 (UTC)

Evolvability problems of neurological applications.

Specific mechanisms theory predicts that at least three specific modules are necessary to get anything done at all: one for perceptual cognition, one for emotional motivation, and one executive. None of them is of any use unless the other two are already there. This raises a severe evolvability paradox for psychological nativism/evolutionary psychology/computational theory of mind.

There are also specific evolvability paradoxes, such as redundant phonemes (no reason why a vast range of innate phonetic potential should have evolved when far fewer phonemes are evidently enough for a complex language, as shown by Polynesian languages), the first moral evolvability paradox (that a single moral individual would not survive in a group where everyone else was amoral), and first-individual evolvability paradoxes in regard to many sexual behaviors (especially species recognition and sexual characteristic recognition).

Then there is evidence, especially from domestication research, that evolution can go very fast. This means that nativist theory predicts that different human groups should have evolved big racial differences in psychology by natural selection working on individual hereditary psychiatry. That prediction is falsified by studies showing that supposed racial differences disappear when social factors are taken into account.

These evolvability paradoxes are described in greater detail on the pages "Brain" and "Self-organization" on Pure science Wiki, a wiki for the scientific method uncorrupted by academic pursuit of prestige. 109.58.44.105 (talk) 15:28, 8 January 2013 (UTC) Martin J Sallberg