Thermostats Don't Think

Hilary Putnam introduced functionalism, which led to the computational theory of mind that is now the philosophical foundation of cognitive science. Functionalism can be summed up by the slogan, "the mind is to the brain as the program is to the computer running it". Typically, functionalists assume that the computer-program relation is so well understood that it can be ignored. I will argue that the functionalist slogan is not true. Further, I will argue that even if it were true, it would not be particularly helpful: the computer-program relation is just as mysterious as the mind-brain relation. Ultimately, the computer-program relation is the same as the form-meaning relation. I hold that all three of these gaps are unbridgeable and that dualism must follow.

I. Functionalism Gives Rise to the Computational Theory of Mind

Before Hilary Putnam introduced functionalism in his 1960 essay "Minds and Machines," there were two general ways of dealing with the relationship between the mind and the physical world. The first position is dualism, which holds that the mental and the physical are two distinct kinds of things. While one may act on the other, in no way is one composed of the other. The second position is reductive materialism, which holds that there is no separate mental world; for a reductive materialist, mental states simply are brain states. Both positions have significant weaknesses. Functionalism tries to dissolve the issues that lead to these weaknesses.

Functionalism describes any system with a mind on two levels: physical and functional. However, these are only two different levels of description; they are not descriptions of two different things. The physical description of the system is generally taken for granted. Although I do not think this is obviously justified, it is inconsequential at the moment. For now, let the physical description of the system be given. The functional description of the system is given in terms of its functional states (also called logical states and computational states). Functional states are in principle unobservable. They are defined by their relation to input and output and to other functional states.

Putnam uses a description of a Turing machine as an illustration of how functional states can be defined. A Turing machine is a device with a few simple parts. First, there is a tape, divided into squares, each of which can have a symbol printed on it. Second, there is a device for reading the symbols on the tape; it reads only one symbol at a time. Finally, there is a device for erasing symbols and printing new symbols in their places. A complete description of a Turing machine is given by its machine table, a table that specifies the output for any given combination of state and input. I reproduce Putnam's example machine table here:

Fig 1.       A      B      C      D
(s1) 1       s1RA   s1LB   s3LD   s1CD
(s2) +       s1LB   s2CD   s2LD   s2CD
(s3) blank   s3CD   s3RC   s3LD   s3CD

This machine has three symbols ("1", "+", and a blank space) and four functional states (A, B, C, and D). The instructions given in each cell of the table are to be understood as follows: We look to the first cell (A, s1) to ascertain how the machine will behave if it is in state A and reading "1". The instruction "s1RA" should be read as follows: "s1" tells us that the machine will print symbol s1 ("1"); "R" indicates that the machine will next scan the square to the right on the tape (likewise, "L" means it will scan the square to the left, and "C" means it will stay centered on the same square); "A" means the machine will remain in state A.

This machine is designed to work out a sum given in unary notation. Unary notation is a way of representing numbers using only one symbol ("1"); the sequence "1, 2, 3" is written in unary as "1, 11, 111". The machine begins in state A. Given a tape reading "111+11", it will leave the tape reading "11111". It accomplishes this by reading along the sum until it encounters the "+", replacing the "+" with a "1", and then returning to the beginning of the tape, where it erases the initial "1". It is very instructive to actually work through a sum as the machine, writing and erasing symbols one at a time on a piece of paper and consulting the machine table for instructions. Really, do it. It will make things more understandable in the end.
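
If you would rather let a few lines of code do the bookkeeping, here is a minimal sketch of the same machine in Python. The table is transcribed from figure 1 using the cell notation explained above; the run helper, the blank-square convention, and the halting check are my own scaffolding, not part of Putnam's description.

    # A minimal simulator for the adding machine of figure 1.
    # A cell like "s1RA" means: print symbol s1, move Right (L = left,
    # C = stay on the same square), and go into state A.
    SYMBOLS = {"s1": "1", "s2": "+", "s3": " "}   # " " stands for a blank square

    FIG_1 = {
        ("A", "1"): "s1RA", ("B", "1"): "s1LB", ("C", "1"): "s3LD", ("D", "1"): "s1CD",
        ("A", "+"): "s1LB", ("B", "+"): "s2CD", ("C", "+"): "s2LD", ("D", "+"): "s2CD",
        ("A", " "): "s3CD", ("B", " "): "s3RC", ("C", " "): "s3LD", ("D", " "): "s3CD",
    }

    def run(table, tape, state="A", max_steps=1000):
        """Run a machine table on a tape; return the final tape contents and the
        list of observable actions, i.e. (symbol printed, move) pairs, one per step."""
        cells, pos, actions = dict(enumerate(tape)), 0, []
        for _ in range(max_steps):
            symbol = cells.get(pos, " ")
            cell = table[(state, symbol)]
            printed, move, next_state = SYMBOLS[cell[:2]], cell[2], cell[3]
            if (printed, move, next_state) == (symbol, "C", state):
                break                      # nothing on the tape will ever change again: halt
            actions.append((printed, move))
            cells[pos] = printed
            pos += {"L": -1, "R": 1, "C": 0}[move]
            state = next_state
        final_tape = "".join(cells[i] for i in sorted(cells)).strip()
        return final_tape, actions

    print(run(FIG_1, "111+11")[0])   # prints "11111", i.e. 3 + 2 = 5 in unary

Stepping through this and watching the actions list grow is the same exercise as the paper-and-pencil one, just with the bookkeeping automated.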

Importantly, the machine table, which at least at one level is a complete description of the Turing machine, makes no mention of its physical structure. The same Turing machine can be implemented in many different physical forms. All that is necessary is that the physical implementation behaves as described by the machine table. In fact, if you worked through a sum on paper as suggested, you were an implementation of this Turing machine, at least briefly.

Functionalism gave birth to the computational theory of mind. Because of the powerful analogy Putnam drew between mental states and computational states, many people came to believe that mental states are computational states (interestingly, Putnam never accepted this view). The brain is a computer. This metaphor underlies cognitive science. While it is a useful metaphor, it is not true. It can be used as a foundation for very good psychology, but it leads to very bad philosophy. The rest of this paper explains why.

II. Computational States Are Not Mental States

John Searle used his now famous "Chinese Room Argument" to show that mental states are not computational states. The argument runs as follows: Imagine that Dr. Searle, who is fluent in English but knows absolutely no Chinese, is given a set of Chinese characters. Additionally, he is given rules, written in English, instructing him to associate certain symbols with certain other symbols. He is then placed in a closed room with a mail slot. A second set of Chinese symbols is fed into the mail slot. John follows the English rules and matches these symbols up with other symbols. Then a third set of Chinese symbols is fed into the slot. Again, John follows the English rules, which this time tell him to pass certain Chinese symbols back out through the slot. Unbeknownst to poor John, the first symbols fed through the slot constituted a story in Chinese. The second set fed through the slot constituted questions about that story. The symbols he fed out of the slot constituted appropriate answers to those questions.

John went through all the computational states that someone reading a Chinese story and responding to questions about it would go through. He manipulated the symbols correctly. However, no one would claim that John was in the mental state of reading a Chinese story, that John understood the story, or anything else of the sort about John's mental states. Clearly, then, cognition must be something more than computation, and mental states must be something more than computational states.

One response to this, called the systems response, is that John does not know Chinese, but that he is part of a system that does, namely, him and his sets of characters and rules. To many people this may seem ridiculous. Can we seriously accept that the system of John, sets of Chinese characters, and rules written in English knows Chinese? Further, the response can be dissolved by internalizing the system. Let John memorize the symbols and the rules. Now John can carry out all the computational aspects of reading a Chinese story and answering questions about it. Still, he clearly has not read a Chinese story and answered questions about it. Computation is not sufficient for mental activity. The computational theory of mind is a failure.

Searle asks what would ever have made anyone think that mental states are computational states. Yes, computers have come a long way in simulating intelligent behavior, but computers have come a long way in simulating many things. Searle points out that we do not worry about a computer simulation of a hurricane getting us wet. Why then do we even consider that a computer simulation of a mind might think? Presumably, the error originates in our conception of computers as information processors, and of the brain as the same kind of thing. However, what the above argument is meant to show is that computers do not do information processing at all. Computers only manipulate formal symbols, which to the computer have no meaning, no aboutness. How could this be considered information? Critically, information is always information about something; it always involves some meaning.

Further, while the mind certainly does process information, that by no means exhausts its function. There is a difference between seeing something, say a chair, in front of you and being told that there is a chair in front of you while your eyes are closed. Even if you are given a rich description of the chair, one that includes all the information you could get from looking at it, the experience is very different. This is because seeing something involves more than processing visual information. It involves having an experience.

III. Physical States Are Not Computational States

Remember that a functionalist description of a system comes in two parts: physical and functional. What is the relationship between these two descriptions? We have already stated that the functional description does not determine the physical description. Does a complete physical description determine the functional description? No. When you followed the machine table in figure 1 to compute a sum, you were also following the machine table below:

Fig 2.       A      B      C
(s1) 1       s1RA   s1LB   s3LC
(s2) +       s1LB   s2CC   s2CC
(s3) blank   s3CC   s3RC   s3CC

Work through a sum following this table and carefully monitor your actions. They will be identical to the actions performed while computing the same sum following the previous table.
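
If you built the little simulator sketched earlier, you can also let it check this for you. The sketch below (again my own scaffolding, not part of the essay's argument) transcribes figure 2 in the same notation and confirms that the two tables prescribe exactly the same sequence of writes and moves on the same input; it assumes SYMBOLS, FIG_1, and run from the earlier sketch are still in scope.

    # Figure 2, transcribed in the same cell notation as figure 1.
    FIG_2 = {
        ("A", "1"): "s1RA", ("B", "1"): "s1LB", ("C", "1"): "s3LC",
        ("A", "+"): "s1LB", ("B", "+"): "s2CC", ("C", "+"): "s2CC",
        ("A", " "): "s3CC", ("B", " "): "s3RC", ("C", " "): "s3CC",
    }

    tape_1, actions_1 = run(FIG_1, "111+11")   # four-state table from figure 1
    tape_2, actions_2 = run(FIG_2, "111+11")   # three-state table from figure 2
    print(tape_1 == tape_2 and actions_1 == actions_2)   # True: same writes, same moves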

Any system that is accurately described by the table in figure 1 is also accurately described by the table in figure 2, given that both are always started in state A. Not only do the two tables describe identical behavior for the machine when it calculates a sum, they also describe identical behavior given degenerate input, that is, input not in the form of a sum. I believe that examples such as this destroy the entire functionalist program. There are no objective computational states. I have not simply relabeled the original states; there is not even the same number of states. How can we talk objectively about computational states when we cannot specify what they are, or even how many there are, for a given physical system?

Why is there no unique functional description of a given system? Because functional states have no independent existence. They exist only insofar as we assign them to a physical system; they are not properties of the physical system itself. I think this illustrates the hopelessness of explaining mental states via computational states. Computational states exist only in virtue of the mind that assigns them. Because a mind is needed to explain computational states, they can play no role in an explanation of mind. The result is circular and worthless.

Not everyone accepts that computational states do not exist independent of an observer. I know of no way to hammer this point home other than to refer to the above example. Two different computational descriptions are given for the same physical system. There is no objective reason to choose between the two, that is, neither is more correct.

It is also worth noting that any physical system can be given a computational description. After all, any finite physical system can, in principle, be modeled on a computer. The computer models the system by carrying out some computation. But if the model is accurate then the physical system being modeled is also accurately described as carrying out the computation. Why then do we not usually think of planets as busy computing their orbits? Because the planets were not designed to do so. Even better, why do we not usually think of the motion of the planets as computing the time? This is a commonly assigned interpretation. The simplest device to accomplish this assignation is a sundial. Still, we recognize that the planets are not computing the time. We compute the time by observing them. Ultimately, the same is true of any so-called computing system. When I type a sum into an adding machine, the adding machine does not compute the sum. The adding machine just acts in accordance with physical law. It just is. I compute the sum using the adding machine. Because of the adding machine's design, I am able to compute the sum by pressing buttons and reading a display rather than by doing math.

IV. Syntax Is Not Semantics

There is only a little to say in this section that was not said in sections II and III. In carrying out all the syntactic operations involved in reading a Chinese story and answering questions about it, John did not understand the story. This shows us that mental states are not computational states. Further, when John passed out a list of Chinese characters, he did not mean what they said. Knowing all the formal relations of a system of symbols tells you nothing about what those symbols might mean. This is proven mathematically by the Löwenheim-Skolem theorem, which says that any sentence that has a model in any domain has a model over the natural numbers. That is, given a string of symbols, and supposing there exists some consistent interpretation of those symbols as meaning something, it follows that we can come up with an interpretation according to which the string of symbols is about the natural numbers. Thus, we cannot assign a unique meaning to any physical system based on its physical characteristics alone.
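
For reference, here is one standard statement of the (downward) Löwenheim-Skolem theorem, put slightly more carefully than the gloss above; the formulation is the usual textbook one, not anything taken from Putnam's essay.

    % One standard (downward) form of the Löwenheim-Skolem theorem, for a theory T
    % in a countable first-order language (LaTeX snippet; assumes amsmath and amssymb):
    \[
      T \ \text{has a model}
      \quad\Longrightarrow\quad
      T \ \text{has a model } \mathcal{M} \ \text{with } |\mathcal{M}| \le \aleph_0,
    \]
    % i.e. a model whose domain can be taken to be a subset of the natural numbers.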

V. Mind Meets Language

I believe much of the confusion regarding the relation of mind and body is intimately tied up with the confusion regarding the relation of meaning and form. I think the program-computer relation reveals this link. Although mental states cannot simply be computational states, as Searle has shown, there certainly is some useful analogy, as the successes of cognitive science have shown. But the problem of assigning a computational interpretation to a physical system seems more like the problem of assigning meaning to some physical representation. When we assign a part of a computer to count as a one or a zero, we are likely to start talking about meaning, saying something like, "This voltage means zero and this voltage means one."

While I hold that both meaning and mind are inaccessible to physical explanations, some philosophers have argued otherwise. One is Daniel Dennett. I think it is critical that the two are addressed together. Here, Dennett agrees. That is as far as our agreement goes. Dennett holds that intentionality, that is, meaning as it is assigned to minds, can objectively be assigned to physical systems. He tries to explain how this can be done in his book The Intentional Stance. Basically, he argues that we should assign intentionality to all those systems that can be fruitfully explained from an intentional point of view. That is, because I can explain certain facets of my friend Phil's behavior by talking about things like Phil's beliefs and desires, I am justified in attributing beliefs and desires to Phil.

I think this method of attributing intentionality is totally bogus. One problem lies with determining just how fruitful an intentional explanation need be to warrant intentionality for the explained system. Aristotle's physics, a famous failure, assigns something like intentionality to the natural elements. We know now that Aristotle's physics is painfully wrong. However, it was accepted for more than a millennium, apparently because some people thought it was fruitful. Many psychologists would argue that explanations of human behavior in intentional terms are not fruitful; they are mere folk psychology. Fruitful explanations in psychology, they might say, can only be achieved through a discussion of conditioning, brain chemical levels, synaptic connections, or some other means. The fact that we are still searching for a scientific explanation of human behavior shows that the intentional stance has not been wholly successful. Despite the intentional stance's near-universal acceptance for all but the most recent slice of human history, humans remain unpredictable.

A more serious problem is that intentional explanations can be given for all sorts of systems that do not seem to have mental lives. A popular example in the literature is a thermostat. We can explain the behavior of a thermostat by supposing that it knows certain things: the temperature of the room, whether the AC is on or off, how the AC being on or off will affect the temperature of the room over time, and so on; and by supposing that it desires that the room be a certain temperature. This explanation will predict all the behavior of the thermostat. Maybe you know how a thermostat works. I know how one type of thermostat works. However, the type of thermostat I know about, the old bi-metal coil thermostat, is surely not the type of thermostat in the room I am now in. The thermostat in this room (the ILC) is probably electronic. Understanding its circuitry is probably far beyond my knowledge of physics. Yet I can explain its behavior perfectly using an intentional explanation. But I refuse to believe that the thermostat has a mental life. I think this is a knockdown counterexample to Dennett's view. He addresses this objection, but quickly moves on. I can find no persuasive argument against it. Where he claims to be reconciling this counterexample with his theory, I find only fast talk and a quick move to far more complicated situations.
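
To make the point concrete, here is a toy sketch (mine, not Dennett's, in the same Python used above) in which an "intentional" description of a thermostat, in terms of beliefs and desires, predicts exactly the same on/off behavior as a bare mechanical rule. The class names and the numbers are invented for illustration.

    # An "intentional" description of a thermostat versus a bare mechanical rule.
    # Both predict exactly the same observable behavior.
    class IntentionalThermostat:
        def __init__(self, desired_temp=20.0):
            self.desires = {"room_temp": desired_temp}   # "it wants the room at 20 C"
            self.beliefs = {"room_temp": None}

        def act(self, sensed_temp):
            self.beliefs["room_temp"] = sensed_temp      # "it knows the room temperature"
            too_warm = self.beliefs["room_temp"] > self.desires["room_temp"]
            return "AC on" if too_warm else "AC off"     # "it wants the room cooler"

    def mechanical_thermostat(sensed_temp, setpoint=20.0):
        return "AC on" if sensed_temp > setpoint else "AC off"   # just a switch and a sensor

    readings = [18.5, 19.9, 20.1, 23.0, 20.0]
    intentional = IntentionalThermostat()
    print(all(intentional.act(t) == mechanical_thermostat(t) for t in readings))   # True

The belief-desire vocabulary does all the predicting one could ask of it, and yet nothing about the second function tempts us to credit it with a mental life.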

Even if Dennett's program were successful and gave us empirical justification for assigning mental states to certain systems, it is important to note that it would not give us any real access to mental states. It would not allow us to observe them or measure them in any way. Mental states remain in principle unobservable. They exist only insofar as we assign them to a system. Computational states and meanings meet the same fate: their existence depends on a mind assigning them to a physical system. I hold that this is in fact the role of mind. The mental interprets the physical. The mental gives meaning to a physical system that previously had none.

I believe this stance can be fruitful for future research. As psychology progresses in telling us which physical states are correlated with which mental states, we are building up a wealth of data. This data is like a Rosetta stone. By observing the completed translations, from physical system to interpreted system, we can gain insight into how the mental accomplishes these translations. What do these interpretations have in common? What sets them apart from the other possible interpretations (given that there are others)? Are they always the best interpretations? Since interpreting the physical is the function of the mind, answering these questions gives us insight into the nature of the mental.

I must close with a caveat. I have stated that the nature of the mental is to assign meaning to the physical. While I believe this is true, I do not believe it is the whole story. I am also a firm believer in free will. Thus, the nature of the mental is twofold. The mental both interprets the physical world and acts on it. Unfortunately, exploring this aspect of mind is a job for another treatise. The problems of interaction are complex and require an understanding of quantum mechanics that I currently lack and an understanding of the physical function of the brain that humanity currently lacks. Ultimately, our understanding of physics and the brain may not leave any doors open for interaction. For now, I can only cross my fingers and continue studying.

Bibliography

Davidson, Donald. 1973. "The Material Mind" originally in Logic, Methodology and Philosophy of Science IV ed. Pat Suppes, et al., reprinted in Haugeland 1981.

Dennett, Daniel C. 1971. "Intentional Systems" originally in Journal of Philosophy 68, reprinted in Haugeland 1981.

-------- 1987. The Intentional Stance.

Hales, Steven D., ed. 2001. Analytic Philosophy.

Haugeland, John, ed. 1981. Mind Design.

Marr, David. 1977. "Artificial Intelligence: A Personal View" originally in Artificial Intelligence 9, reprinted in Haugeland 1981.

Putnam, Hilary. 1960. "Minds and Machines" originally in Dimensions of Mind ed. Sidney Hook, reprinted in Hales 2001.

-------- 1973. "Reductionism and the Nature of Psychology" originally in Cognition 2, reprinted in Haugeland 1981.

-------- 1988. "Much Ado About Not Very Much" in The Artificial Intelligence Debate ed. Stephen R. Graubard.

Searle, John R. 1980. "Minds, Brains, and Programs" originally in The Behavioral and Brain Sciences 3, reprinted in Haugeland 1981.

-------- 1992. The Rediscovery of the Mind.

Weizenbaum, Joseph. 1976. Computer Power and Human Reason.