Functionalism.

             Functionalism is the doctrine that minds exist because, and only because, some systems, usually but not necessarily human brains, perform certain functions. The functions in question are of course all information processing functions. According to this theory, you have a mind because, and only because, your brain processes certain information in certain ways, and you have the particular mind you do because your particular brain processes information in its own particular way. Your personality, your preferences, desires, hopes, fears and dreams are all encoded in the structure of your brain. You exist as the person you are because your brain works the way it does.
             Functionalism is actually a pretty obvious implication of the mind brain identity theory. Mind brain identity theory holds that minds only exist because brains do what they do. From this it follows that whenever a brain does the kinds of things that make a mind exist, a mind will exist. A healthy, normally functioning human brain will produce a conscious, thinking mind whenever it performs those normal functions. But if minds exist whenever brains perform those brain-functions, minds will also exist whenever anything performs those functions, even if the thing doing the functions is not itself a human brain. So, if mind brain identity theory is true, functionalism is necessarily also true. (Or so I think. But of course, I could be wrong.)
             Functionalism has several interesting implications. This is "interesting" in the sense of either "really cool" or "really scary," depending on your attitudes about personal identity. Or maybe it's just interesting in the sense of "weird." In any case, all of the following ideas follow logically from functionalism, and so if functionalism is true, each one of them is at least in principle possible.
             The first cool implication is that, if functionalism is true, then free-willed conscious computers are possible. If you've ever read or seen a science-fiction story in which a computer or a mechanical humanoid interacts with human beings in the same intelligent, mindful, and free willed fashion as the human beings, then you know what I'm talking about. Functionalism says that such machines are not logically or physically impossible. (And modern cognitive and computer sciences at least appear to say that such machines are technologically feasible, even if we won't be able to make them in the immediate future.)
             For the other, and scarier implications of functionalism, I would like you to follow me through a certain thought experiment. Imagine that medical science creates both a fully-functional artificial neuron and microscopic "nanobots" capable of assembling and programming such artificial neurons in parallel with dead or dying real neurons in the human brain. Regular injections of such nanobots (together with the appropriate neuronal subassemblies) would thus constitute a kind of "immunization" against mental deterioration.
             The way the process would work would be as follows. The nanobots, together with the microscopic parts they would use to assemble artificial neurons, would be injected into your bloodstream, find their way into your brain, and then lie dormant until needed. At some point, a neuron in your brain will begin to degenerate. This will be detected by the nanobots, and they will construct an identical artificial neuron alongside the degenerating natural one, and program it to behave exactly as the natural one would in all possible circumstances. Not only will this artificial neuron behave exactly the same as the natural one would at the time it was constructed, it will grow and change in response to inputs exactly the same way the natural one would have if it had remained healthy. In fact, whatever happens in the future, this artificial neuron will do exactly what the natural one would have done. It is functionally identical to the natural neuron that it replaced.
             The first question to ask is, would anyone notice a difference? The second question to ask is, would there be any actual difference in the way your mind functioned?
             It is hard to see how you could notice any difference. Your brain, now slightly less than 100% natural, would process information in exactly the same way that it would have if it had remained 100% natural. All of your thoughts, all of your feelings, all of your mental everything would be processed in exactly the same way, and thus would be experienced in exactly the same way, as they would have if the natural neuron had remained in place, and remained healthy. It sometimes happens that people experience a change in brain function without noticing any difference, but in such cases their intimate friends and family often notice that something has changed. Let us assume that your friends are such close observers of your behavior that they would notice absolutely any change in the way you thought or acted. Would there be any visible change for them to notice? The answer is no. You behave the way you behave because of the way your brain functions. If the brain changes its physical composition without changing its function, there could not possibly be any change in behavior for anyone to notice. Similarly, if the only difference in your brain is that one of your neurons was artificially constructed rather than having grown inside your brain, and that neuron functions exactly as the original would have, then it follows that there will be absolutely no difference in the way your mind works.
             Now imagine that more time passes. You get older. More of your brain cells begin to deteriorate. As each neuron begins to fail, the nanobots spring into action, replacing it with an artificial neuron that will continue to function in exactly the same way the failing one would have functioned if it had remained healthy. At a certain point, 10% of your neurons will have been replaced. Then it's 50%, 60%, 75% and eventually absolutely every one of your biological neurons has been replaced by an artificial one that works exactly the same way as a biological neuron. Your brain is now a machine. The question is, would there be any difference beyond the fact that you do not suffer the mental degeneration you would have if those biological neurons had been allowed to die without being replaced?
             If functionalism is not true, then you would no longer exist if the information processing in your brain was done by machines rather than biological cells. Conversely, if your mind still exists under these circumstances, then functionalism is true. Given that everything your natural neurons did that contributed to you having a mind is now being done by your artificial neurons, there is no reason to say that you do not have a mind. Given that your mind is created by the information processing functions of your neurons, it seems to follow that functionalism is true.
             An even scarier implication of functionalism is that, if we had the technology to do so, we could copy you. Imagine that the nanobots that build your artificial neurons contain communication devices that can transmit the exact specifications of every neuron they construct together with precise information on how that neuron connects with other neurons. Now imagine that that information is received and acted upon by other nanobots sitting in a soup of fuel elements and subassemblies in a big saucepan on your kitchen stove. As the neurons in your head are replaced, exact replicas of those neurons are constructed in the saucepan and hooked up to each other in exactly the same way as the neurons in your head. At the end of the process, a mechanical device exists in that saucepan that is an exact mechanical replica of your brain, right down to the individual programming of each individual neuron. Let us say that at this point this artificial brain is placed in the artificial brain cavity of an artificial humanoid body, and hooked up to the nervous system in exactly the same way that your brain is hooked up to your nervous system. Given appropriate hormones and so on, this brain will wake up in this body and function exactly the same way that your brain functions in your body.
             The scary part of this is that the new brain will think she's you. She will have all of your memories in exactly the same way as you have those memories. She will therefore remember everything you remember in exactly the same way as you remember it. Since there is no difference in information processing, there will be no difference in experience. There will be absolutely nothing in the new brain's experience of its own thoughts to tell her that she is not you. This will probably lead to some very awkward conversations.
             It is currently possible to create a computer model of a human neuron that processes information exactly the same way as a human neuron. These virtual neurons can be hooked up to each other in exactly the same way that human neurons connect to each other in your brain. The only things holding us back are lack of computer power, and the fact that we don't yet know exactly how all your neurons are connected to each other. Still, if we did have that information, we could program it into a computer and get exactly the same result, with the same awkward conversations, that we would get if we created an artificial copy of your brain.
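             To make the talk of "a computer model of a neuron" a little more concrete, here is a minimal sketch of a toy simulated neuron in Python. It is only an illustration of the general idea: real neurons, and serious simulations of them, are vastly more complicated, and every name and number in this sketch is invented for the example.

# A toy "leaky integrate-and-fire" neuron: it accumulates input, leaks a little
# of its charge each time step, and "fires" when a threshold is crossed.
# All numbers here are arbitrary, chosen only for demonstration.

class SimpleNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0        # current "membrane potential"
        self.threshold = threshold  # level at which the neuron fires
        self.leak = leak            # fraction of potential retained each step

    def step(self, input_signal):
        """Add input to the (leaky) potential; return True if the neuron fires."""
        self.potential = self.potential * self.leak + input_signal
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False

# Two toy neurons wired in sequence: the first one's firing drives the second.
a, b = SimpleNeuron(), SimpleNeuron(threshold=1.5)
for t, stimulus in enumerate([0.5, 0.6, 0.7, 0.2, 0.9]):
    fired_a = a.step(stimulus)
    fired_b = b.step(1.0 if fired_a else 0.0)
    print(t, fired_a, fired_b)

             Hooking enormous numbers of such units together, with the right connections and the right parameters, is exactly the (very hard) engineering problem described above.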

Turing Tests My Chinese Roomba

             An important objection to functionalism was articulated by the philosopher John Searle. Now, I don't think that this objection succeeds in undermining functionalism, but I don't think it's stupid. In fact, I think that it is one of the most important and provocative arguments in philosophy of mind.
             Searle points out that there is a difference between computation and semantics. To see this difference, think about the difference between the speech recognition software I am using to write my materials for this class, and a human stenographer who transcribes dictation. Both systems take in a stream of words, but where the mechanical system could not possibly understand anything that I say, the human stenographer will understand absolutely everything. This is a vitally important difference. The only reason the stenographer can understand what I dictate is because the stenographer has a mind, and understanding is one of the functions of the mind. The mechanical system cannot possibly understand the words it is hearing, and so it cannot possibly have a mind.
             Searle's objection is that computers and computer-like systems will only ever be able to do the kinds of things that speech recognition systems do, which is mechanically convert strings of symbols into other strings of symbols without understanding anything. They will never be able to do what human beings do, which is convert strings of symbols into forms of conscious understanding. There is a story about the emperor Caligula, and how he died. Supposedly, Caligula would periodically call for the captain of his guard and give him a list of people to be executed that day. One day, Caligula decided that the captain of the guard should himself be executed. Unfortunately for him, he issued the execution order in his usual manner. The captain of the guard read the list, saw his own name on it, and decided to kill Caligula instead of having himself executed. (It doesn't really matter how Caligula died, as long as it was painful.) The point here is that, if Searle is correct, a robot captain of the guard would not have recognized that his own name on the list meant that he was to be executed, and so would have passed on the order without understanding its meaning.
             Another way to understand Searle's objection is to consider the issue of what is called the "Turing Test." The Turing test is basically a failed attempt to determine the appropriate criterion for deciding when a computer is producing a mind. Alan Turing, a founder of modern computational theory whose work laid the theoretical groundwork for the device we now call a "computer," famously considered the question of when computers would become conscious. ("Consciousness" is not the same as "mind," but that doesn't matter here.) Turing's answer, in effect, was that computers would be conscious when human beings could not tell the difference between a computer and another human being purely on the basis of verbal output. Imagine that you are in a room with nothing but a computer terminal. The screen suddenly displays the word "hello," and you reply by typing in a response. A conversation follows. Now imagine that there are two possibilities. The first is that the computer is connected to another terminal being operated by a human being. The second is that the computer is connected to another computer that is running some piece of software designed to make you think that you are interacting with a human being. Turing's answer to the question of when computers would be conscious was to say that computers would be conscious when you couldn't tell the difference between a computer and a human being on the other end of a communication system like this.
             The problem with the Turing test is that it is possible to write programs that people cannot distinguish from human beings without those programs being conscious. A program called PC Therapist II regularly fools people into thinking that it is a real human being. It does so by taking its input and repackaging it in the form of "active listening" questions. Although it sounds like it understands, all it is doing is taking keywords from its input and reshuffling them into thoughtful-looking sentences. Although some people, especially those who know how this program works, can distinguish between the software and a human being, it seems clear that the software could be expanded to avoid the giveaways, and to supply more and more outputs of the type that characterize real human thinking. And so I, at least, think it is possible to write a sophisticated program that takes input that it does not understand, and reshuffles and processes it in various ways to create mindless output that is indistinguishable from the kind of output produced by real thinking human beings. I think that if the programmers are bound and determined to create a program that absolutely no one will be able to distinguish from a real person, they will eventually be able to do it, and they will be able to do it without making the program actually able to understand any of its inputs. In other words, I think the Turing test is too easy.
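             To illustrate the keyword-reshuffling trick being described, here is a toy "active listening" program in Python. It is not the actual PC Therapist II, and its handful of rules are invented for this example; the point is only that matching keywords and reshuffling the user's own words can produce conversational-sounding replies with nothing remotely like understanding behind them.

import random
import re

# A toy "active listening" chatbot. It understands nothing: it just matches
# keywords and recycles the user's own words into question templates.
# The rules below are invented for illustration.

RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bI am (.+)",   ["What makes you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"\bmy (.+)",     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
]
DEFAULTS = ["Please go on.", "I see. What else?", "How does that make you feel?"]

def reply(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            # Reuse the user's own words without any grasp of their meaning.
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(DEFAULTS)

print(reply("I feel nobody listens to me"))   # e.g. "Why do you feel nobody listens to me?"
print(reply("My job is exhausting"))          # e.g. "Tell me more about your job is exhausting."

             Notice that the second reply comes out slightly garbled; the program has no way of noticing that, because it has no idea what any of the words mean.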
             John Searle believes that no computer program will ever be able to produce a mind because, he thinks, computers cannot ever understand the symbols they manipulate. He bases his argument on the following thought experiment. Imagine there is a man who does not understand written Chinese, but who is very good at recognizing and remembering symbols, and at looking things up based on symbols that are incomprehensible to him. This man is placed in a room with several thousand numbered books. This room is separated from the outside world by a door with a small slit in it. Messages are passed in through this slit. These messages are written in Chinese. The man does not read Chinese, but he is able to look up symbols. When he gets a message he looks up the first symbol in book number one. When he finds that symbol, he also finds next to it the number of another book in which to look up the second symbol. This next book directs him to another book, and so on until finally some book or combination of books directs him to write down a new sequence of Chinese characters, none of which he understands. He then passes this response out through the slit.
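             Here is one crude way to mechanize the man's procedure in Python. This is my own toy rendering, not anything Searle specifies: the "books" are tiny lookup tables, their contents are invented, and the single exchange they encode (a simple greeting) is chosen only to keep the example short. What matters is that the procedure consults only the shapes of the symbols, never their meanings.

# A crude mechanization of the man's lookup procedure. The "books" below are
# invented placeholders; a real version would need vastly more of them.
# book number -> {symbol: (next book to consult, symbols to add to the reply)}
BOOKS = {
    1: {"你": (2, "")},
    2: {"好": (3, "")},
    3: {"吗": (0, "我很好")},
}

def chinese_room(message: str) -> str:
    """Follow the books symbol by symbol and return whatever reply they dictate."""
    book, reply = 1, ""
    for symbol in message:
        if book == 0:                       # the books say the reply is complete
            break
        book, output = BOOKS.get(book, {}).get(symbol, (0, ""))
        reply += output
    return reply

# "你好吗" asks "how are you?"; the books dictate the reply "我很好" ("I am well"),
# though neither the man nor this function grasps either sentence.
print(chinese_room("你好吗"))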
             Outside of the room there is a woman who does understand Chinese. The messages she writes and passes into the room are general knowledge questions, written in Chinese. Because of the way information is encoded in the enormous library of books in the room, the messages written but not understood by the man in the room constitute answers to those general knowledge questions. Searle's argument rests on the claim that it is obvious that the Chinese room does absolutely everything that a computer ever could do in the way of processing symbols according to rules, and that it is obvious that the room does not understand Chinese. And, it may even be true that the room passes the Turing test, because the woman outside might come to believe that the room contains a human being who actually understands Chinese. But this is not so, because the man in the room would not know the difference between a set of translation rules that gave meaningful answers to serious questions, and a set of translation rules that returned absolute gibberish. We can imagine someone sneaking in and replacing a couple of the books so that the room starts producing nonsense. The woman outside would notice the difference because she understands Chinese, but the man in the room would have absolutely no idea that anything was amiss. From this, Searle draws the conclusion that mere computations can never produce a mind because mere computations can never constitute understanding in the sense that the woman outside understands Chinese, and the man inside doesn't.
             Some people have replied to this argument with the claim that it is the room, not the man that understands Chinese. Searle has replied that rooms don't understand things, and that there is no understanding going on anywhere in the room. All kinds of computations are going on, but there is no understanding anywhere in the room.
             Searle's argument is certainly a good illustration of the idea that computation cannot produce understanding, but I do not find it convincing. I don't find it convincing because I don't believe that the Chinese room represents everything that computers can be programmed to do in the way of understanding. I also think that Searle basically mistakes the nature of understanding. His Chinese room is too simple, and it ignores the fact that strings of symbols can also be translated into behaviors.
             Imagine that we mount a version of the Chinese room into one of those Roomba self-directed vacuum cleaners. You know, the things that look like big frisbees and scuttle about like helpful cockroaches. Say we put in a Chinese room, perhaps staffed by a hyperactive hamster, that takes in strings of sound and, by a set of rules, converts simple English instructions into strings of symbols representing steering directions. Then we put another Chinese room in next to it with another hamster and a set of rules that convert strings of symbols into sequences of button pushes, lever pullings and wheel turnings. This is the control room of the Roomba, and the hamster in it directs the Roomba according to the instructions passed in from the next room. Neither hamster understands what he is doing, but together their Chinese-room computations have turned our Roomba into a voice-operated device. Let's add a room where another hamster unknowingly converts inputs from pressure sensors around the body of the Roomba into more symbols, and another room where these symbols are unknowingly converted into corrections to driving directions. Then we put a third new room between the original two rooms that uncomprehendingly takes the original driving instructions and creates new instructions based on the way the Roomba has bumped into or rubbed against things in the room. Now we have a Roomba that is much more capable of executing our voice commands. Now imagine adding other Chinese rooms to our Roomba. Maybe we add one that has the effect of storing information about the shape of the room. Maybe one that can compare new information to what is expected from old information. We can also add a room that stores new information. Maybe even one that stores general instructions and contains rulebooks that allow it to work out long-term plans to execute those instructions in the most efficient way. Let us say we add rooms that process visual information, and rooms that help process temperature information, and add extra capacity to the processing of aural information. For any cognitive capacity you can think of, we can think of a way to add that capacity to our Roomba by adding another little Chinese hamster room. And let's also add manipulator arms and other kinds of equipment. Eventually, we will have a Roomba with a very high level of cognitive functioning that is supplied by a large number of very hard-working hamsters, none of whom has the slightest idea what is going on.
             Imagine that, somehow, the wastepaper basket in the room catches fire. You say "oh no, the wastepaper basket has caught fire." This string of sounds is picked up by the external microphones on the Chinese Roomba. Without understanding anything, the hamster in the verbal reception room turns this sound string into a string of characters and passes it on to the hamster in the next room. This room has a set of rules for recognizing emergencies. These rules just look for certain strings of symbols. The hamster applying the rules doesn't know the difference, but the symbols for "on fire" cause this room to produce instructions that cause other instructions to be passed to a large number of other rooms where a large number of uncomprehending hamsters start working very hard indeed. Because of the way the simple translation rules are written in the various rooms, your statement "oh no, the wastepaper basket has caught fire" causes the Chinese Roomba to go to the closet, open the closet, take out a fire extinguisher, take the fire extinguisher to the wastepaper basket, hold the fire extinguisher in the correct way, and squirt extinguishing compounds at the fire until the fire goes out. Once the emergency is over, the complicated symbol processing systems inside the Chinese Roomba result in it deciding to go around the room with the fire extinguisher looking for more fires, and then put the fire extinguisher away, and close the closet.
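             For what it is worth, here is a toy Python sketch of the kind of pipeline just described, with each "room" reduced to a function that blindly maps symbol strings to other symbol strings or to primitive motor commands. Every rule, command name and instruction here is invented for the illustration; the point is only that a purely pattern-matching chain can go from the spoken warning to a sensible-looking sequence of actions without any component knowing what fire is.

# A toy "Chinese Roomba" pipeline. Each "room" is a function that blindly maps
# strings of symbols to strings of symbols (or to primitive motor commands)
# by rule, with no access to what any symbol means.

def reception_room(sound):
    # Blindly normalize the incoming sound string into a symbol string.
    return sound.lower().strip(".!? ")

def emergency_room(symbols):
    # Purely pattern-based: look for certain substrings and emit a canned
    # high-level instruction sequence. No notion of "fire" is involved.
    if "on fire" in symbols or "caught fire" in symbols:
        return ["GOTO closet", "OPEN closet", "TAKE extinguisher",
                "GOTO fire", "AIM extinguisher", "SQUIRT until-out",
                "PATROL room", "RETURN extinguisher", "CLOSE closet"]
    return []

def control_room(instructions):
    # Translate each high-level instruction into a primitive motor command.
    table = {"GOTO": "drive-to", "OPEN": "pull", "CLOSE": "push",
             "TAKE": "grip", "AIM": "point", "SQUIRT": "press-trigger",
             "PATROL": "circle", "RETURN": "release-at"}
    return [f"{table[i.split()[0]]}({i.split()[1]})" for i in instructions]

# The whole pipeline, triggered by the spoken warning:
commands = control_room(emergency_room(reception_room(
    "Oh no, the wastepaper basket has caught fire!")))
print(commands)   # a list of motor commands; no room ever "knew" what fire is

             In this toy version the fire response is canned into a single rule; the next paragraph imagines the more interesting case, where the same behavior emerges from general rules for identifying threats and matching responses to them.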
             The first thing to realize is that all this is possible even if the Chinese Roomba has not been specifically programmed to fight fires. All that is necessary is that various Chinese rooms contain rules for identifying dangerous situations, rules for matching responses to threats, rules for deciding the most appropriate action, and so on. All that is necessary to make Chinese Roomba respond in more sophisticated ways to more complicated situations is to add more Chinese rooms, or further elaborate the rules in the existing rooms. Ultimately, there is no in principle reason that we cannot construct a Roomba that has the same computational capacities linked together in the same way as a human brain. Certainly, neither John Searle nor anyone else has given us any reason to think that it is not logically possible to link together computational devices in such a way as to create something with all the information processing capacities of the human brain. Even the generalized effects and controls of various hormones can in principle be built into such a system.
             The second thing to realize is that we have good reason to think that Chinese Roomba understands lots of things. In the example above, it understood the verbalization "on fire" and, by dint of carefully constructed computational rules, knew what to do, and how to do it. This is what understanding is. If you want to believe that only biological brains can understand things, then you have to come up with something the biological brains actually do, that is necessary for understanding to exist, and which non-biological systems cannot do. No one has come up with any such criterion, and so we have no reason to think that nonbiological systems cannot understand.
             The third thing to realize is that John Searle never came up with an example of understanding. He gave us an example of a system that performs various computations and yet does not understand Chinese, but he did not specify any system that does understand Chinese. That is, he did not describe what goes on in the brain of someone when that person is understanding Chinese. If he had, the Chinese room example would be easy to answer because, for every step in the process of understanding Chinese, it is logically possible to specify a Chinese room that accomplishes that step. Searle has given us no reason to think that a specification of how people understand Chinese could not be turned into a specification of a series of Chinese rooms that accomplish the same purpose.
             As a final example, imagine a Chinese Roomba that has been developed to such a high level of complexity that it can respond effectively to a warning of danger. Imagine that the phrase "you are in danger" is processed in such a way that Roomba responds by checking its power supply, surveying its surroundings for escape routes, turning on all its sensors and looking all ways at once, and perhaps looking around for something that can be used as a weapon. Although the hamsters that process the information have no idea what is going on, their computations result in Roomba getting ready to deal with whatever comes. If Roomba, as a complete system, behaves in this way, do we have any reason to think that Roomba does not understand the concept of danger? On the contrary, we have every reason to think that Roomba understands danger, because Roomba responds to the warning in the same way that a thinking being might. Certainly, there is nothing that human beings do in terms of how they go about understanding things that computational devices cannot do.
             John Searle is wrong to think that his Chinese room proves that computers can never have semantics. His room is not equipped with the ways of responding to information that demonstrate understanding. When you modify his example to include ways of responding to information, and computational rules to decide which way is deployed in response to which information, you create a machine that does understand, because what it does is exactly what human beings do when they respond to information, and so if the machine does not understand, it follows that human beings do not understand either.

The Problem of Qualia

             In the text, Donald Palmer presents Thomas Nagel's explanation of qualia, a term that refers to how it feels to experience something. There's a difference between knowing that an object is a red cube, and seeing the color red, and seeing and feeling the cubical shape in your hands. Functionalism theorizes that qualia are caused the same way as other mental events - by physical processes in the brain - and that the difference between conscious and unconscious knowledge is caused by a difference in brain processing. Thomas Nagel points out that, no matter how we study bats from the outside, we will never be able to have the faintest idea what it is like for a bat to be a bat. When we are thinking about what it's like to be a different human being, we can analogize from our own qualia and correct for the known differences between ourselves and that person to get a rough idea of that person's qualia, but we can't do that for a bat because we have no idea how bats experience anything. This is not just because bats have different lifestyles and senses; it's also that their brains have evolved along very different paths, and they might have very different qualia even for the same kinds of experiences that humans have. This implies that no outside study can determine what anybody's internal experience is, whether they be bat, badger or human being. The only way to know what it is like to be any kind of sentient being, even a human being, is to be that kind of being and have experiences of qualia.
             This can be taken as an attack on functionalism at two levels. The first level is the crude allegation that machine brains cannot create qualia because . . . um, because . . . because only human brains can produce qualia, and that's it! This objection fails quickly because it is obvious that human brains do something to make qualia and whatever that something is, it is done by neurons, and anything that is done by neurons can (eventually) be equally well done by computers. Anyone trying this tack against functionalism has to do two things: first, they have to demonstrate how qualia are made, and second, they have to prove that machine brains cannot do whatever this thing is. Since no-one has even begun to do this, this objection fails to even begin to be a problem.
             At a second, deeper level, Nagel's analysis of qualia can be taken as an attack on the adequacy of functionalism. Palmer writes that "Nagel and his followers conclude . . . that exhaustive studies of the human brain and nervous system may be able to explain what causes human mental experiences, but not what that experience is." This is an important difference. Consider a person whose brain is open and being stimulated in exactly the place and manner that it would be stimulated if she were seeing a field full of vividly green grass. Now, a researcher could tell you that the subject is having this "green, grassy field" experience, and he could tell you why she's having the experience, in the sense that he could point out the set of neurons being stimulated, but he couldn't tell you why the "green" experience feels the way it does to the subject. Supporters of this objection think that functionalism should explain that and, because it doesn't, they think it fails. In fact, functionalism seems unable to give any account of mental experience at all, and this seems to them to be a fatal flaw.
             Even deeper, they point out that functionalism is a scientifically based theory, and scientific theories are concerned only with objective, measurable facts. The strength of a force, the mass of a particle, and so on. Subjective experiences, such as seeing a grassy field, they say, are not objective, and therefore cannot be fitted into an objective theory. In fact, these objectors might claim that, as an objective theory, functionalism does not include subjective phenomena, and therefore does not seem to allow subjective phenomena like qualia to exist at all.
             Now, here is why I think these objections are misguided. Firstly, functionalism is the theory that neuroscience will explain the existence of all mental phenomena, and neuroscience is not aiming to explain what it is like to have mental experiences. It aims to explain how it happens that people have mental experiences. In the case of the grass-green experience, all it aims to do is show the mechanisms that produce various qualia; it is not about what it feels like to have those qualia, just about how the brain goes about producing the various qualia that it does. Remember, scientific theories are deployed to explain specific sets of phenomena. If a theory is offered to explain X, it is not a flaw in that theory if it fails to explain Y.
             The deeper, "objective/subjective" objection is also misguided. In fact, it is deeply, shockingly misguided. Scientific theories are not developed to explain objective facts in the sense that the strength of a force or the mass of a particle are "objective." Force and mass are not things that we explain; rather, they are theoretical constructs we use to explain our experiences. We believe in forces because our best theories require forces to exist. We believe in mass because it allows us to explain subjective experiences like inertia and weight. What "objective" means here is shared experience. An objective fact is an experience that is presumed to be the same for everyone who experiences it. Force and mass are considered "objective" because they are thought to be the same no matter who is experiencing the objects involved. Color is considered "subjective" because it is one of the things that we think might be experienced differently by different people. (Neuroscience explains these differences quite neatly in terms of brain architecture.) But consider this: supposedly objective inertia and supposedly subjective color are in fact exactly alike in terms of how they are experienced. Push against a heavy hanging object and you will feel a sensation of resistance to your push. Look at a field of healthy Bermuda grass, and you will feel a sensation of green. Newtonian physics explains the resistance you feel; the neuroscience of kinesthesis explains the fact that you feel it, but does not explain what it feels like to be resisted. The neuroscience of vision explains the greenness you feel, but not what it feels like to see green. Ultimately, all scientific theories aim to explain experiences. Experiences of qualia are experiences to be explained, and functionalism, through neuroscience, aims to explain them. It does not matter that the theory is "objective" - concerned with physical things like light quanta and nerve cells - it still includes qualia as the experiences it is attempting to explain. Attacking functionalism for not including subjective qualia is like attacking the theory of gravity for not including the subjective fact that some objects feel heavier than others.

Copyright © 2010 by Martin C. Young
