
Computational theory of mind


In philosophy, the computational theory of mind is the view that the human mind and/or human brain is an information-processing system and that thinking is a form of computing. The theory was proposed in its modern form by Hilary Putnam in 1961, and developed by the MIT philosopher and cognitive scientist (and Putnam's PhD student) Jerry Fodor in the 1960s, 1970s and 1980s.[1][2] Despite being largely repudiated in analytic philosophy in the 1990s (due to work by Putnam himself, John Searle, and others), the view is common in modern cognitive psychology and is presumed by many theorists of evolutionary psychology; in the 2000s and 2010s the view has resurfaced in analytic philosophy (Scheutz 2003, Edelman 2008).[full citation needed]

The computational theory of mind holds that the mind/brain is a computer. The theory can be elaborated in many ways; the most popular is that the brain is a computer and the mind is the program that the brain runs.[3] An algorithm is an effective procedure: a step-by-step set of instructions that always yields the same output for a given input, based only on the form of the input and not on what it means. Algorithms terminate in a finite number of steps, and they work for any admissible input. A program is the instantiation of an algorithm in a particular computer language. So the computational theory of mind is the claim that the mind is a machine that derives output representations of the world from input representations in a deterministic (non-random) and formal (non-semantic) way.
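
A minimal sketch in Python (an illustration, not drawn from the theory's literature) of what "deterministic" and "formal" mean here: the procedure below derives an output representation from input representations using only the shape of the symbols, never their meaning.

    # Apply the rule: from "P" and "if P then Q", derive "Q".
    # The rule fires on the syntactic shape of the strings alone;
    # the procedure has no idea what "P" or "Q" stand for.
    def modus_ponens(premises):
        facts = {p for p in premises if not p.startswith("if ")}
        derived = set()
        for p in premises:
            if p.startswith("if ") and " then " in p:
                antecedent, consequent = p[3:].split(" then ", 1)
                if antecedent in facts:
                    derived.add(consequent)
        return derived

    # The same input always yields the same output, in finitely many steps:
    print(modus_ponens({"it is raining", "if it is raining then the ground is wet"}))
    # -> {'the ground is wet'}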

Computational theories of mind are often said to require mental representation, because the 'input' into a computation comes in the form of symbols or representations of other objects. A computer cannot compute an actual object; it must interpret and represent the object in some form and then compute the representation. The computational theory of mind is related to the representational theory of mind in that both require mental states to be representations. However, the two theories differ: the representational theory claims that all mental states are representations, while the computational theory leaves open that certain mental states, such as pain or depression, may not be representational and may therefore be unsuited to computational treatment. These non-representational mental states are known as qualia. On Fodor's original view, the computational theory of mind is also tied to the language of thought. The language of thought theory allows the mind to process more complex representations with the help of semantics (see Semantics of mental states below).

"Computer metaphor"

The computational theory of mind is not the same as the computer metaphor, which compares the mind to a modern-day digital computer.[4] Computational theory just uses some of the same principles as those found in digital computing.[4]

'Computer' here does not mean a modern-day electronic computer. Rather, a computer is a symbol manipulator that follows step-by-step rules to transform input into output. Alan Turing described this type of computer in his concept of a Turing machine.
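
A minimal sketch of such a machine in Python follows; the rule table, which flips every bit on the tape and then halts, is an invented example, not one of Turing's.

    # rules maps (state, symbol) -> (new_state, new_symbol, move)
    def run_turing_machine(tape, rules, state="scan", blank="_"):
        tape = list(tape)
        head = 0
        while state != "halt":
            symbol = tape[head] if 0 <= head < len(tape) else blank
            state, new_symbol, move = rules[(state, symbol)]
            if 0 <= head < len(tape):
                tape[head] = new_symbol
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Flip bits while moving right; halt at the first blank.
    rules = {
        ("scan", "0"): ("scan", "1", "R"),
        ("scan", "1"): ("scan", "0", "R"),
        ("scan", "_"): ("halt", "_", "R"),
    }
    print(run_turing_machine("1011", rules))  # -> "0100"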

Early proponents

One of the earliest proponents of the computational theory of mind was Thomas Hobbes, who said, "by reasoning, I understand computation. And to compute is to collect the sum of many things added together at the same time, or to know the remainder when one thing has been taken from another. To reason therefore is the same as to add or to subtract."[5] Since Hobbes lived before the contemporary identification of computing with instantiating effective procedures, however, he cannot be interpreted as explicitly endorsing the computational theory of mind in the contemporary sense.

Causal picture of thoughts

At the heart of the computational theory of mind is the idea that thoughts are a form of computation, and a computation is by definition a systematic set of laws for the relations among representations. On this view, a mental state represents something if and only if there is some causal correlation between the mental state and that particular thing. An example would be seeing dark clouds and thinking “clouds mean rain”: there is a correlation between the thought of the clouds and rain, because clouds cause rain. This is sometimes known as natural meaning. Conversely, there is another side to the causality of thoughts: the non-natural representation of thoughts. An example would be seeing a red traffic light and thinking “red means stop”. Nothing about the color red itself indicates that it represents stopping; the link is an invented convention, much like languages and their conventions for forming representations.

Semantics of mental states

The computational theory of mind states that the mind functions as a symbolic operator and that mental representations are symbolic representations: just as the semantics of language are the features of words and sentences that relate to their meaning, the semantics of mental states are the meanings of representations, the definitions of the ‘words’ of the language of thought. If these basic mental states can have particular meanings just as words in a language do, then more complex mental states (thoughts) can be composed from them, even ones never entertained before, just as new sentences can be understood on first reading so long as their basic components are understood and they are syntactically correct. For example: “I have eaten plum pudding every day of this fortnight.” While few readers will have seen this particular configuration of words before, most can nonetheless understand the sentence, because it is syntactically correct and its constituent parts are understood.
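
The compositionality claim can be sketched in a few lines of Python (the lexicon and combination rule below are invented for illustration): once the atomic symbols have meanings and the combination rules are fixed, a system can evaluate representations it has never encountered before.

    # Atomic "lexicon": the meanings of the basic symbols.
    lexicon = {
        "two": 2, "three": 3, "five": 5,
        "plus": lambda a, b: a + b,
        "times": lambda a, b: a * b,
    }

    # Evaluate a nested (operator, argument, argument) representation.
    def interpret(expr):
        if isinstance(expr, str):
            return lexicon[expr]
        op, left, right = expr
        return lexicon[op](interpret(left), interpret(right))

    # A "sentence" never seen before is still understood, because its
    # parts and its syntax are:
    print(interpret(("plus", "two", ("times", "three", "five"))))  # -> 17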

Criticism

A range of arguments has been proposed against computational theories of mind.

John Searle has criticized the computational theory of mind with a thought experiment known as the Chinese room. Imagine a man in a room with no way of communicating with anyone or anything outside of the room except for pieces of paper passed under the door. With each paper, he is to use a series of provided books to “answer” what is written on it. The symbols are all in Chinese, and all the man knows is where to look in the books, which then tell him what to write in response. It just so happens that this generates a conversation that a Chinese speaker outside the room can actually understand, but can the man in the room really be said to understand it? This is essentially what the computational theory of mind presents us with: a model in which the mind simply decodes symbols and outputs more symbols. It is argued that perhaps this is not real understanding or thinking at all. However, it can be argued in response that it is the man and the books together, taken as a system, that understand Chinese.
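
A crude caricature of the room's rule books can be written in Python (the entries are invented, and real conversation would need vastly larger books): the responder maps input strings to output strings by lookup alone, with no access to what any symbol means.

    # The entries are opaque tokens to the system that uses them.
    RULE_BOOK = {
        "你好吗?": "我很好, 谢谢。",
        "今天天气好吗?": "今天天气很好。",
    }

    # Pure shape-matching: no translation, no comprehension.
    def room(message):
        return RULE_BOOK.get(message, "请再说一遍。")

    print(room("你好吗?"))  # a sensible-looking reply, produced without understanding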

Searle has further raised questions about what exactly constitutes a computation:

the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar, if it is a big enough wall it is implementing any program, including any program implemented in the brain.[6]

Putnam himself (see in particular Representation and Reality and the first part of Renewing Philosophy) became a prominent critic of computationalism for a variety of reasons, including ones related to Searle's Chinese room arguments, questions about word-world reference relations, and thoughts about the mind-body relationship. Regarding functionalism in particular, Putnam has claimed, along lines similar to but more general than Searle's arguments, that the question of whether the human mind can implement computational states is not relevant to the question of the nature of mind, because "every ordinary open system realizes every abstract finite automaton."[7] Computationalists have responded by aiming to develop criteria describing what exactly counts as an implementation.[8][9][10]

Roger Penrose has proposed that the human mind does not use a knowably sound calculation procedure to understand and discover mathematical intricacies. This would mean that an ordinary Turing-complete computer would be unable to ascertain certain mathematical truths that human minds can.[11]

Quantitative and experimental arguments

A principal problem for the computational theory of mind has been the lack of quantitative arguments linking computation to cognition. In January 2013, however, Minds and Machines published work arguing that the perceptual system behaves as a purely physical communication channel (in the Shannon–Hartley sense) converging to the Landauer thermodynamic limit of computation.[12] This result may open the possibility of experimentally testing the computational theory of mind by means of the Landauer limit.
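
The Landauer limit itself is standard physics: erasing one bit of information dissipates at least k_B · T · ln 2 joules. A quick computation at roughly body temperature (an illustration, not taken from the cited paper):

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 310.0            # approximate human body temperature, K

    energy_per_bit = k_B * T * math.log(2)
    print(f"Landauer limit at {T} K: {energy_per_bit:.3e} J per bit erased")
    # -> about 2.97e-21 J per bit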

Prominent scholars

  • Daniel Dennett proposed the Multiple Drafts Model, in which consciousness seems linear but is actually blurry and gappy, distributed over space and time in the brain. Consciousness is the computation; there is no extra step, no "Cartesian theater", in which one becomes conscious of the computation.
  • Jerry Fodor argues that mental states, such as beliefs and desires, are relations between individuals and mental representations. He maintains that these representations can only be correctly explained in terms of a language of thought (LOT) in the mind. Further, this language of thought itself is codified in the brain; it is not just a useful explanatory tool. Fodor adheres to a species of functionalism, maintaining that thinking and other mental processes consist primarily of computations operating on the syntax of the representations that make up the language of thought. In later work (Concepts and The Elm and the Expert), Fodor has refined and even questioned some of his original computationalist views, and adopted a highly modified version of LOT (see LOT2).
  • David Marr proposed that cognitive processes have three levels of description: the computational level (which describes the computational problem, i.e. the input/output mapping, computed by the cognitive process); the algorithmic level (which presents the algorithm used for computing the problem postulated at the computational level); and the implementational level (which describes the physical implementation of the algorithm postulated at the algorithmic level in biological matter, e.g. the brain). (Marr 1981) A toy illustration of the three levels follows this list.
  • Ulric Neisser coined the term 'cognitive psychology' in his 1967 book Cognitive Psychology, wherein he characterizes people as dynamic information-processing systems whose mental operations might be described in computational terms.
  • Steven Pinker described a "language instinct," an evolved, built-in capacity to learn speech (if not writing).
  • Hilary Putnam proposed functionalism to describe consciousness, asserting that it is the computation that equates to consciousness, regardless of whether that computation is operating in a brain, in a computer, or in a "brain in a vat."
  • Georges Rey, professor at the University of Maryland, builds on Jerry Fodor's representational theory of mind to produce his own version of a Computational/Representational Theory of Thought.
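
Below is a toy illustration of Marr's three levels (the example is invented, not Marr's own) applied to one trivial task, adding two numbers:

    # Computational level: WHAT is computed -- the input/output
    # mapping f(x, y) = x + y.

    # Algorithmic level: HOW it is computed -- here by repeated
    # increment, one of many algorithms realizing the same mapping.
    def add(x, y):
        total = x
        for _ in range(y):
            total += 1
        return total

    # Implementational level: WHAT PHYSICAL SYSTEM runs the algorithm --
    # for this script, transistors in a CPU; for Marr, neurons in a brain.
    print(add(3, 4))  # -> 7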


Notes

  1. ^ Putnam, Hilary (1961), "Brains and Behavior", originally read as part of the program of the American Association for the Advancement of Science, Section L (History and Philosophy of Science), December 27, 1961; reprinted in Block (1983), and also, along with other papers on the topic, in Putnam, Mathematics, Matter and Method (1979).
  2. ^ Horst, Steven (2005), "The Computational Theory of Mind", in The Stanford Encyclopedia of Philosophy.
  3. ^ Block, Ned, "The Mind as the Software of the Brain".
  4. ^ a b Pinker, Steven (2002), The Blank Slate, New York: Penguin.
  5. ^ Hobbes, Thomas, De Corpore.
  6. ^ Searle, J.R. (1992), The Rediscovery of the Mind
  7. ^ Putnam, H. (1988), Representation and Reality
  8. ^ Chalmers, D.J. (1996), "Does a rock implement every finite-state automaton?", Synthese, 108 (3): 309–333, doi:10.1007/BF00413692, retrieved 2009-05-27
  9. ^ Edelman, Shimon (2008), "On the Nature of Minds, or: Truth and Consequences" (PDF), Journal of Experimental and Theoretical AI, 20: 181–196, retrieved 2009-06-12.
  10. ^ Blackmon, James (2012), "Searle's Wall", Erkenntnis, ISSN 0165-0106, doi:10.1007/s10670-012-9405-4.
  11. ^ Penrose, Roger (1994), "Mathematical Intelligence", in Jean Khalfa (ed.), What is Intelligence?, chapter 5, pp. 107–136, Cambridge: Cambridge University Press.
  12. ^ de Castro, Alexandre (2013), "The Thermodynamic Cost of Fast Thought", Minds and Machines, doi:10.1007/s11023-013-9302-x.
