
Can a Machine Think?

Introduction

In his book Minds, Brains and Science, John Searle presents arguments which purport to show that it is impossible for a machine to think. Recognizing that in a relatively trivial sense one can assert that a human is a `biological' machine which is obviously able to think, Searle restricts his argument to `digital computers.'

Searle enunciates the core of his argument in a sequence of four premises and four conclusions as follows:

PREMISE 1.
Brains cause minds.

PREMISE 2.
Syntax is not sufficient for semantics.

PREMISE 3.
Computer programs are entirely defined by their formal, or syntactical, structure.

PREMISE 4.
Minds have mental contents; specifically, they have semantic contents.

CONCLUSION 1.
No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.

CONCLUSION 2.
The way that brain functions cause minds cannot be solely in virtue of running a computer program.

CONCLUSION 3.
Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.

CONCLUSION 4.
For any artefact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather the artefact would have to have powers equivalent to the powers of the human brain.

Searle observes that these statements are simply put, `perhaps with excessive crudeness.' He also claims that `the argument has a very simple logical structure, so you can see whether it is valid or invalid.'

Let us take Searle up on his invitation to see whether any of it is valid. Perhaps we should begin by exploring Searle's notion of `simple logical structure,' so that we can understand what sort of argumentation he uses and see what sorts of objections might defuse his claims. Specifically, let us look at Searle's third `conclusion,' and his argument to support it, which begins:

And this third conclusion is a trivial consequence of our first premise. It is a bit like saying that if my petrol engine drives my car at seventy-five miles an hour, then any diesel engine that was capable of doing that would have to have a power output at least equivalent to that of my petrol engine.

Although Searle hedges his bets by using the phrase `it is a bit like saying,' it is clear that he intends the analogy to embody the form of the argument. However, the argument is insufficient. Let us look at the situation from a slightly different perspective. Suppose that I see two (apparently) identical cars, each driving at 75 miles per hour. The cars stop, I look under the hood of one, and see an engine that can generate 200 horsepower. What can I conclude about the engine in the other car? Can I conclude that it must be able to generate at least 200 horsepower? No! All I can logically conclude is that it doesn't need to generate any more than 200 horsepower. In other words, I know that 200 horsepower is enough to drive the car at 75 miles per hour, but it could well be that less horsepower is sufficient. Of course, perhaps the other engine generates exactly the same power, or more, but it could also generate less! (If you don't quite see what is happening here, imagine that the two engines are not the same, and imagine two people each opening the hood of one of the cars; think about what conclusions they each can draw about the other engine, if any.) Searle has hidden a vital unstated assumption, perhaps to the effect that the engine in his car is working as hard as it can when it drives at 75 miles per hour, or, more precisely, that his engine is the smallest engine that could possibly drive his car that fast. But this is exactly the conclusion he is claiming to draw! From his premise that brains cause minds, the most that Searle can logically conclude is that to generate a mind, there is no absolute need for anything more powerful than a brain.

I am not about to suggest that this flawed argument by itself renders all of Searle's discussion impotent. Indeed, it is peripheral to the main thrust of his ideas. However, it points out that Searle depends heavily on persuasion to carry his points, invoking logic (and technical definitions, as we shall see) as rhetorical devices. He leaves fundamental assumptions unstated, or lets them appear in the guise of conclusions. This means that when we try to evaluate his arguments, we must not rely solely on finding more or less trivial logical flaws; we need to look behind the forms and examine the meaning he intends to convey. Of course, Searle is not alone in this approach to difficult ideas - we all rely on a vast array of devices and tools to express ourselves, formal logic and rigorous definitions being just two among a host of others.

Definitions, Digital Computers and Chinese Rooms

The notion which Searle places at the center of his argument is the distinction between syntax and semantics. Any language, whether a natural human language or a programming language to be used in specifying the operations to be performed by a digital computer, has a syntax, that is, a set of formal rules. Determining whether a sentence or a program is syntactically correct is a purely formal process. Determining the semantics, the meaning, of a sentence or program is not a purely formal process. It is necessary to look at context to find out what a sentence or program is about.

Searle might object here that I have already slipped in an illicit assumption, that programs can be `about' something, or have meaning or content or semantics. He adamantly asserts that programs are purely formal. He says ``It is essential to our conception of a digital computer that its operations can be specified purely formally.'' He emphasizes the fact that a computer program is expressed in terms of abstract symbols, zeroes and ones, which have no meaning in themselves, which are not about anything. We get to the heart of the matter when he says:

...the argument rests on a very simple logical truth, namely, syntax alone is not sufficient for semantics, and digital computers insofar as they are computers have, by definition, a syntax alone.

Searle introduces this sentence in the middle of the discussion of his thought-experiment of the Chinese room, to which we will return. It is essential to recognize what Searle has done here, and why and how he has done it. Searle's fundamental argument, baldly expressed, is that a `mind,' or `thinking,' requires meaning (semantics, intentionality) which digital computers cannot have. But notice how he guarantees that his argument cannot fail: he has included the lack of semantics as part of his definition of a digital computer! As Searle observes, his argument rests on this very point. Notice also how he allows the ambiguous nature of the English conjunction `and' to encourage us to feel that his definition is really just an expression of a `simple logical truth.' Imagine that Searle had been slightly more explicit in his argument and had expressed it as:

DEFINITION:
A digital computer is a machine without the possibility of semantics, capable of executing purely syntactic formal programs.

PREMISE:
Thinking, or the existence of a `mind', requires both syntax and semantics.

CONCLUSION:
A digital computer cannot think - that is, cannot have a mind.

PROOF:
The theorem is true by the definition of a digital computer, combined with the premise, because of the lack of semantics.

Had Searle expressed things this directly, we would most likely reject his discussion out of hand as lacking any content. Given a carefully chosen set of abstract definitions, we can `prove' essentially anything we want. The value of the proof rests on the applicability of the definitions. If there is nothing which satisfies the definitions, we have then effectively proven nothing. Again, though, we ought to look more deeply to see what Searle really intends to say. We need to explore Searle's putative definition and see whether it is perhaps reasonable, something we should accept.

Is a digital computer by its nature lacking semantics? We could begin with the trivial observation that without electricity to make it run, a computer is just an inert lump of silicon, iron and copper, to which we would almost certainly not want to attribute semantics. Let us then restrict our attention to running computers. A further observation is that without a program to execute, we hardly want to consider the hardware to be acting in its capacity as a computer. In fact, we should probably modify our original question `can a machine think?' to something more like `can the activity of a running computer (artefact) actually be thinking?' The relative subtlety of how the precise phrasing of the question affects the form of our possible responses is evidenced in the approach taken by Searle in exploring the issues. For the time being, let us bypass the question of what `thinking' might be, and focus on the approach taken by Searle to understanding the activity of a computer.

In order to free himself of trivial objections of the form ``Oh, you mean today's computer can't think, but we'll just build a more powerful machine,'' Searle prefers to discuss an abstract digital computer, capable of executing any of a variety of precisely specified programs. This is essentially a sound approach, because it frees us from the distractions of thinking about some particular chunk of hardware and allows us to focus on the fundamental questions. We do, however, have to be at least a little careful about how we interpret things. For example, we would not want to end up making the trivial observation that ``Of course an abstract machine can't think, since abstract things can't do anything at all, much less think!''

With the notion, then, of an abstract computer, we are immediately led to concentrate on the program which might run on some particular hardware implementation of a computer. Searle does precisely this, as have most computer scientists investigating the area. Searle thus is led to the question ``can a program embody thinking?'', or, in slightly more traditional phrasing, ``is a mind just a program running on the hardware of a brain?'' He sets out to refute this notion, and uses a variety of approaches in his attempt. The primary concept he employs is the distinction between syntax and semantics. His fundamental argument is that since a program is a formal object (that is, since it is defined purely formally or syntactically) it cannot have `content' or `semantics' and hence cannot embody thinking. We are apparently, though, treading somewhat slippery ground here ...

Let us investigate briefly the notion that a program is a purely formal object which cannot have meaning. Certainly in order that our hypothetical program be appropriate for (that is, able to run on) any particular implementation of our abstract digital computer, we must not make any special assumptions about the hardware, and must thus use abstract symbols (typically thought of as zeroes and ones) to specify the program. These abstract symbols are not by themselves `about' anything. Perhaps a better name than `symbol' is `token.' Note that in the same way, we use the letters of the alphabet when we write an essay in English. The letter `r' in Searle's name is an abstract symbol, a token, which is not `about' anything by itself. Thus, one can assert that both an abstract program and a page of English text are made up of abstract, meaningless tokens. (We would, perhaps, look a little foolish if we were to further assert that any page of English text is therefore meaningless ...)

A computer program has another purely formal aspect. Our hypothetical abstract digital computer will have to be sufficiently precisely specified that it could actually be realized in hardware; there must therefore also be a sufficiently precise specification of the language to be used in writing programs it can execute. Thus, a string of abstract symbols must satisfy the set of purely formal syntactic rules of the language in order to qualify as a valid program for our computer. This is the origin of Searle's notion that programs are defined purely formally or syntactically. In a similar sense, if we are handed a string of letters from the alphabet, we can determine in a purely formal way if that string is an English sentence or not, by seeing if it is made up of English words, if there is a subject, a verb, etc. Searle might object here that when we write English, we don't just follow the syntactic rules - following the rules is a more or less trivial part of what we do when we write. Computer programmers would raise the same objection - certainly they follow the syntactic rules, to make sure the program will run, but that is the least part of writing a useful program.
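To make the purely formal character of such a check concrete, here is a minimal sketch in Python (the language and the function name are my own choices, purely for illustration, and not anything Searle or Turing discusses). The check consults only the grammar of the language; it neither knows nor cares what the program might be about.

import ast

def is_syntactically_valid(source):
    # Return True if the string parses under Python's grammar.
    # The decision is made from the form of the string alone;
    # no meaning, context or intention is consulted.
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(is_syntactically_valid("total = total + 1"))    # True
print(is_syntactically_valid("total = = total + 1"))  # False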

Thus, though it is clear that Searle's assertion that programs are defined purely formally or syntactically is in some sense true (in the sense that in order to be a program, as opposed to just a string of zeroes and ones, it must satisfy certain formal rules, have certain purely syntactic properties), he has ignored other, non-syntactic, properties that a computer program may have. It is as though he argued that English writings are meaningless since they are made up of `letters' and follow the syntactic rules of English! This confusion on Searle's part about the syntactic nature of programs renders much of his discussion pointless and irrelevant, but we need to be careful about concluding that Searle is left with no defenses.

Suppose Searle granted us that a program can `be about something' or have meaning or content in essentially the same sense that a book can be about something. Would he also have to grant us therefore that a machine can think? Almost certainly not! Would we want to say that since a book is about something, that it thinks? There must be some difference between the way a book is about something, and the way our thoughts are about something, mustn't there? (Or perhaps that's not really the problem, and there is something else which keeps a book from thinking ...) The notion of `being about something,' or `having content' or `intentionality' is certainly fundamental for Searle, and we can't leave this discussion without engaging it, but let us approach things from a different direction, and try to return to the topic later.

A computer program written on a piece of paper, sitting on a shelf, certainly is not thinking, does not have a mind. In that sense, the mind is not `just the program which runs on the hardware of the brain.' But this is a trivial observation, not worth much further discussion. The real question is ``can a system, consisting of a physical computer running a particular program, be thinking?'' Searle, in his thought experiment of the `Chinese room,' focuses on the particular question ``can a physical computer, running a particular program, understand Chinese?'' Certainly if in principle it can't even do that, then it would be pointless to assert that it could really think in the sense we do, or that it had a mind.

Searle establishes the context of his discussion as follows:

Imagine that a bunch of computer programmers have written a program that will enable a computer to simulate the understanding of Chinese. So, for example, if the computer is given a question in Chinese, it will match the question against its memory, or data base, and produce appropriate answers to the questions in Chinese. ...does it literally understand Chinese, in the way that Chinese speakers understand Chinese?

To make the discussion more immediate, he suggests that the reader

...imagine that you are locked in a room, ...you are given a rule book in English for manipulating these Chinese symbols. The rules specify the manipulations of the symbols purely formally, in terms of their syntax, not their semantics.

It is worth noting several things about the way he introduces various aspects of the situation. He slyly slips the word `simulate' into his description of what the computer does, thus implicitly bolstering a later argument to the effect that although clever programmers might get a machine to simulate the activity of a human mind, that would not constitute duplicating a human mind. He also implicitly specifies the way the programmers would write the program (``match[ing] the question against its memory, or data base''). He invites his readers to participate as a component of the system, so that they can `feel' the difference between what the machine does in its `simulation' and what we do when we `literally understand.' Indeed, he is begging the question when he inserts the adjective `literally' before `understand'! Notice also that he has shifted the `formal' and `syntactic' emphasis to the Chinese symbols, specifying that they will be manipulated without regard to semantics. Again, he has effectively defined the system to be non-thinking by requiring that the system work without regard to meaning (although Searle would probably object that it is not his requirement, but rather the nature of machines).

Implicitly, Searle seems to be arguing as follows: Suppose you were inside the room, following the rules. You wouldn't feel like you understood Chinese, just by following the rules, even though you might convince people outside that you did understand Chinese. Now certainly `literally understanding' Chinese must include `feeling like you understand' Chinese. Since you wouldn't feel like you understood Chinese, neither could the machine, and therefore the machine couldn't `literally understand,' and, by implication, couldn't be thinking. As Searle puts it,

You understand the questions in English because they are expressed in symbols whose meanings are known to you. Similarly, when you give the answers in English you are producing symbols which are meaningful to you. But in the case of Chinese, you have none of that. In the case of the Chinese, you simply manipulate formal symbols according to a computer program, and you attach no meaning to any of the elements.

On the face of it, this argument seems quite convincing. In fact, one almost wonders why Searle has spent so much time talking about the `formal' nature of computer programs, and has been so adamant about their supposed lack of semantics. The answer comes in Searle's attempts to refute various replies to his argument. He begins with the disingenuous comment that ``Various replies have been suggested ...They all have something in common;'' (we wait for Searle to explain what flaw they all have, but he continues vacuously ...) ``they are all inadequate.'' What Searle doesn't dare say is that in particular they all include assumptions that there is a way to get from `interaction with the environment' to `attaching meaning to symbols,' that there doesn't have to be a secret `I' inside to do the `understanding,' and that with a reasonable interpretation of the words meaning and thinking and without an unreasonable definition of a digital computer, Searle's argument carries no force, and we have no good reason to doubt that a machine could think.

Searle's comments on these issues include:

There is no way that the system can get from the syntax to the semantics. I, as the central processing unit have no way of figuring out what any of these symbols means; but then neither does the whole system.

and

As long as all I have is the symbol with no knowledge of its causes or how it got there, I have no way of knowing what it means. The causal interactions between the robot and the rest of the world are irrelevant unless those causal interactions are represented in some mind or other.

and

If it really is a computer, its operations have to be defined syntactically, whereas consciousness, thoughts, feelings, emotions, and all the rest of it involve more than syntax. Those features, by definition, the computer is unable to duplicate however powerful may be its ability to simulate.

The failures of Searle's arguments are here evident. First, he appeals to a fundamental misconception about formal properties, in particular of computer programs. He seems somehow to believe that since a program has formal syntactic properties, that it therefore cannot have other (e.g., semantic) properties. But consider how, for example, human language seems to work. A word such as `transistor' was a purely formal object consisting of purely formal tokens (letters) until a meaning was attached to it through its use. In a similar sense, a symbol manipulated (used) by a running program can acquire meaning through its use. ``Aha,'' replies Searle, ``but a thinking person used the word `transistor' and thereby attached meaning.'' Thus we see Searle's second failure - he seems secretly to be (or perhaps, since he gives so much evidence for it, not so secretly ...) a dualist, demanding a homunculus, an `I' to do the understanding and attaching of meaning, `some mind or other' to do the representing. He refuses to allow the possibility that a running program could do the understanding, the `attaching of meaning,' and he goes so far as to build the impossibility into his `definition' of a computer. But this is exactly the question at hand! It is certainly no answer arbitrarily to define the question away.

The fundamental question is ``Suppose that we write a particular program and let it run on a particular machine. Suppose further that the resulting system interacts causally with its environment, responds coherently and appropriately to questions and asks questions of its own, talks about poetry, sometimes complains about being confused, claims that it thinks about where it came from, says that it understands some things but not others, and discusses cogently the nature of meaning and intention. Do we have any philosophical or logical grounds for concluding that it is not `really' thinking?''

At the end of Searle's discussion, before his summary, he comes right down to the core of his argument. The point on which his whole feeling about the matter rests is the notion that ``the mind is just a natural biological phenomenon in the world like any other.'' Stated this way, it appears to be a simple incontestable truth. But let us state it in a slightly different way: ``the minds that we know now are just natural biological phenomena in the world.'' Stated this way, we see that the `simple truth' tells us essentially nothing about the possibility of other minds, machine minds, and that Searle's argument is at its core just an appeal of one biological mind to others to have faith in their uniqueness, to disbelieve in the possibility of non-biological minds.

Let us review Searle's four premises and conclusions. His first premise, that brains cause minds, still seems right to me. The second, that syntax is not sufficient for semantics, also seems right. For example, we can have syntactically valid sentences like ``The sky emotes furiously'' which have no meaning, no semantics.

Premise three, however, that computer programs are entirely defined by their formal, or syntactical, structure, seems to me to be at best misguided and at worst nonsense. When I sit down to write a computer program, I don't just string together the symbols of the programming language according to the syntactic rules; in exactly the same sense, when I talk or write, I don't just string together words according to the syntactic rules of English. Granted, I have to be more careful about syntax when I write a computer program, because the computer is more finicky about syntax than people are, but that doesn't mean that syntax is all there is to a computer program. I can point to the part of a program which is about opening a file, the part about reading data, the part about someone pushing a particular key on the keyboard. I can observe that this variable represents the number of times that key has been pushed. This must not be the sort of `aboutness' that Searle is concerned with.
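A small, hypothetical example may make this concrete; the variable and function names below are inventions of my own, not anything drawn from Searle. The point is simply that we naturally describe the parts of a program in terms of what they are about.

key_press_count = 0    # this variable represents how many times the key has been pushed

def handle_key_press(key):
    # This part is about someone pushing a particular key on the keyboard.
    global key_press_count
    if key == "q":
        key_press_count = key_press_count + 1

def read_data(path):
    # This part is about opening a file and reading its data.
    with open(path) as f:
        return f.read()

for key in ["a", "q", "q", "b"]:   # simulated keystrokes
    handle_key_press(key)
print(key_press_count)             # prints 2

Nothing in the syntactic rules of the language distinguishes key_press_count from any other name; whatever `aboutness' it has comes from the way the running program uses it.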

What, then, can Searle be claiming? Perhaps he is suggesting that the activity of a computer system under the control of a particular program is determined only by the form of the program, and by nothing else. But for a real computer, with input and output capabilities, this is clearly wrong. The particular activity of the system depends on the specific details of the input, not just on its program. That is, if the environment in which the system is running provides different stimuli, the system will react differently. We would typically say that the program responds to its input, meaning of course that the activity of the combined computer/program system depends on its environment. The particular response depends on the input `symbol,' the program, and the current state of the system. Of course, the current state of the system depends on the past input. The system may or may not react in the same way to two occurrences of some particular input symbol. Ultimately, the most sensible interpretation I can make of Searle's claim is that he has confused the notions of `having formal properties' and `being a purely formal object.'
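The point about state and past input can be put in a few lines of code; the class and its behaviour below are illustrative inventions of mine, not a claim about any particular system. The same input symbol draws different responses, because the state built up from past input differs.

class Counter:
    def __init__(self):
        self.state = 0                     # state accumulated from past input

    def respond(self, symbol):
        if symbol == "tick":
            self.state = self.state + 1   # past input changes the state ...
        # ... so the same symbol can draw a different response later
        return "seen " + str(self.state) + " tick(s)"

c = Counter()
print(c.respond("tick"))   # seen 1 tick(s)
print(c.respond("tick"))   # seen 2 tick(s) -- same input, different response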

Searle's fourth premise, that `minds have mental contents; specifically they have semantic contents' I find relatively mystifying. He says of this premise:

And that, I take it, is just an obvious fact about how our minds work. My thoughts, and beliefs, and desires are about something, or they refer to something, or they concern states of affairs in the world; and they do that because their content directs them at these states of affairs in the world.

In what sense, though, can a mind have `contents'? Is Searle suggesting that a mind is some sort of container, a `bag' in which these contents are kept? And what are these contents? Are they some sort of `non-physical objects' which people can carry around? In fact, this seems to be more or less what Searle is suggesting, and he seems almost to be suggesting further that these `non-physical objects' are generated or `secreted' by the human brain through some biological process. This may seem like a malicious misinterpretation of Searle's concept of intentionality, but let us listen to him in his discussion purporting to explain why a computer might simulate but never duplicate thinking:

After all, we can do computer simulations of any process whatever that can be given a formal description. So, we can do a computer simulation of the flow of money in the British economy, or the pattern of power distribution in the Labour party. We can do computer simulation of rain storms in the home counties, or warehouse fires in East London. Now, in each of these cases, nobody supposes that the computer simulation is actually the real thing; no one supposes that a computer simulation of a storm will leave us all wet, or a computer simulation of a fire is likely to burn the house down.

The implication is that Searle somehow believes that a thought or an intentional mental state (one of these mysterious contents that minds have) is like a raindrop generated by the `wetware' of the brain. Searle attributes some mysterious biological power to the brain, and says ``that the computational properties of the brain are simply not enough to explain its functioning to produce mental states.'' Indeed, he concludes the chapter with the assertion that:

...names, mental states are biological phenomena. Consciousness, intentionality, subjectivity and mental causation are all a part of our biological life history, along with growth, reproduction, the secretion of bile, and digestion.

This notion that `mental states' are somehow `biological products of the brain' (like bile?) leaves Searle in a precarious position. If they are a physical product, a secretion of some sort, we should be able to find them, extract them, analyse them in a chemical laboratory. If they are not physical then Searle can only deny being a `dualist' by the dubious dodge of claiming that he doesn't believe in a non-physical mind, that it is just the contents which are non-physical! Well, let us leave this for a while, and look at Searle's `conclusions.'

His first conclusion asserts that programs are not minds, and further that programs by themselves are not sufficient for having minds. As I observed above, I agree that a program written on paper, sitting on a shelf, is not thinking, does not have a mind. In Searle's terms, if I consider only the formal, purely syntactic aspects of a program, and never let it run on a piece of physical hardware, then it can't be a mind or have a mind or cause a mind. If, however, we go beyond the purely formal aspects and actually allow the program to run, and allow the combined system of hardware and software to interact with its environment, then we are outside the realm of Searle's supposed `definition' of a computer program as a purely syntactical object. In essence, Searle has fallen into the trap of asserting that an abstract object can't cause thinking, since an abstract object can't do anything. To the extent that Searle restricts himself to claiming that a program by itself isn't a mind, I agree with him completely. We must take care, however. We can go further, and say that ``the mind is not just a program which might run on the hardware of a brain.'' We would be saying something very different if we were to say that ``the mind is the activity of a system consisting of a program running on the hardware of a brain.'' The point is that there is a fundamental difference between a program itself, and the activity of a system running that program! As far as the `logical entailment' of Searle's arguments relating to syntax and semantics is concerned, he seems not to recognize this distinction.

Searle's other three `conclusions' revolve around his mysterious notion of the brain's `biological power' to produce `semantic contents.' If we set aside Searle's arbitrary definition of a digital computer as a purely syntactical system without the possibility of semantics, and if we accept Searle's rejection of dualism (even the bizarre form I attributed to Searle above), then we are left with no compelling reason to believe that a machine/program combination cannot think, cannot have a mind.

Something a bit more subtle ...

Searle's arguments are flimsy and often seem quite contrived. He doesn't seem to have taken particular care with logic, or the relevance or applicability of his definitions. In my discussion, I have pointed out flaws in his arguments and tried to expose misconceptions. But on the other hand, I certainly haven't proven that Searle's conclusion (that machines can't think) is false. Could Searle patch up his arguments? Are there really sound philosophical or logical reasons for abandoning the search for artificial intelligence? On the other hand, why is it so important to Searle to try to prove that a machine can't think?

In a profound sense, I think that the discussion above misses the crucial points. Someone could go through my arguments as I have gone through Searle's, and explain how I have misunderstood Searle, or how my points are irrelevant, or that my analogies are just that, analogies. Apparently missing is meaningful discussion of what thinking really is, what mind really is.

Although Searle doesn't mention it, his `Chinese room' thought-experiment is really just a variant of the classic Turing test. Alan Turing, in his paper ``Computing Machinery and Intelligence'' proposed to consider the question ``Can machines think?'' With his mathematical background, he was immediately led to the question of definitions of the terms `machine' and `think.' He had ready at hand a very precise formal definition of an abstract machine, his own Turing machine. This is today taken to be the appropriate definition, so much so that his name is no longer capitalized, and we call it the turing machine. But Turing didn't immediately state the definition and proceed with his discussion. Instead, he begins:

The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ``machine'' and ``think'' are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ``Can machines think?'' is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

Turing then describes an ``imitation game'' wherein a machine and a human each try to convince an interrogator that they are really the human. His implicit suggestion is that if the machine can win the game half the time, we might as well consider it to be thinking. There are a variety of difficult questions lurking in the reformulation. Indeed, to forestall various implementation questions, he does use the `digital computer' as his definition of a `machine.' Hence, it is clear that his digression and reformulation end up being some sort of subterfuge to avoid defining `thinking.'
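For readers who want the shape of the game in front of them, here is a minimal, hypothetical sketch of its structure; the respondents below are of course stand-ins of my own devising, not implementations of a human or of a thinking machine, and the toy interrogator merely guesses.

import random

def human(question):
    return "a human-style answer to: " + question

def machine(question):
    return "a machine's attempt at a human-style answer to: " + question

def interrogator(answer_a, answer_b):
    # A real interrogator would probe and cross-examine; this toy one just guesses.
    return random.choice(["A", "B"])

def play_round():
    # The machine is hidden behind label A or B; return True if it fools the interrogator.
    machine_is_a = random.random() < 0.5
    a, b = (machine, human) if machine_is_a else (human, machine)
    question = "Describe a summer morning."
    guess = interrogator(a(question), b(question))
    return (guess == "A") != machine_is_a   # interrogator guessed wrong

wins = sum(play_round() for _ in range(1000))
print(wins / 1000.0)   # roughly 0.5 against this trivially guessing interrogator

Turing's suggestion, roughly, is that if a serious interrogator over many rounds can do no better than this toy one, we might as well say the machine is thinking.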

Why does he work so hard to come up with a formulation which avoids the direct question of what constitutes thinking? If we look carefully, we see that Searle has done a similar thing. He tries to be relatively careful in his definitions of digital computer and program but he is much less careful in his definition of thinking and mind. One might suppose that the reason is just that most people probably don't have at hand a good definition of computer or program, but know perfectly well, and at first hand by virtue of direct experience, what thinking and mind are. The alternative, though, is worth considering, however briefly - that Searle doesn't have a clear idea of what thinking is.

If we embark on an exploration of what thinking `really' is, there are a variety of byways we should avoid. One semi-classical notion equates thinking with sequences of physical brain states. There is an implicit association between particular states of the brain and particular thoughts or feelings or ideas or emotions. This, however, leads us to some very awkward suggestions. For example, we might imagine a super-scientist, with access to appropriate testing equipment, being able to tell us that ``no, you are not actually feeling pain, since I can see that your brain is not in one of the `pain' states.'' We would also discover that two people can never have the same thought, since their brain states would be different. We would acquire a trivial, but in some sense effective, test for whether a machine was thinking: just see if it was in one of the `thought' states of our `reference human brain'. However, we would also acquire the associated difficulty that we would probably also be led to the conclusion that other people don't think, since their brain states would be different from those of our `reference human' also.

A partial remedy might be to define a `thought' to be any one of some general class of brain states. We simply take as our class the collection of all brain states of all humans past, present or future. We could hypothetically more or less arbitrarily divide the class into subclasses for `feelings,' `desires,' `pain,' and so on. Unfortunately, we are at this point left with a fundamentally meaningless definition - we are essentially reduced to saying that `thinking is what brains do, anything and everything ...', so that for example we would be thinking when we were sleeping or otherwise unconscious.

If we refuse to accept all brain states into our collection, then we must provide some other criterion for deciding which states to include and which to exclude. If we refuse to provide some other criterion, we have in essence abandoned our attempt at definition. Imagine that we are asked to define `red', and we refer to some hypothetical collection of all `red' objects. When asked how we know which objects are in the collection, we reply that to determine if something is in the collection, all we do is check to see if it is red! On the other hand, if we provide some other criterion for determining which brain states actually correspond to `thinking', then we really have no need for our supposed set of brain states - we simply use the other criterion.

Already we are going somewhat far afield, and I see no straightforward (or even complicated) way to rescue the `brain state' notion of thinking. However, I will return to the topic later, from a different perspective.

A second generic approach to the thinking question might be labelled the `behaviorist' approach. We could simply deny that anyone ever really thinks. We simply react, via conditioning and genetic predisposition, to stimuli in our environment. The restless activity of our brains is just that - restless activity, which is unrelated in any causal way to our actions. With this `un-defining' of thinking, we would not need to bother with the question `can machines think,' since people wouldn't be thinking either. We would all be automata, never acting, just reacting.

Neither of these simplistic approaches seems at all satisfactory to me. Certainly Searle also would not accept them. But then what alternatives do we have? On first reading, Turing's approach seems to slip easily into the `behaviorist' model: if the machine can act like a human, then we might as well say it is thinking. However, I think there is more to Turing's approach than that. In particular, I don't think he meant to suggest that people don't really think.





Tom Carter
Mon Feb 24 12:59:35 PST 1997