This text deals with arguments against the possibility of so-called strong artificial intelligence, with a particular focus on the Chinese Room Argument devised by the philosopher John Searle. We start with a description of the thesis that Searle wants to disprove. Then we describe Searle’s arguments. Subsequently, we look at some objections to Searle raised by other influential philosophers. Finally, I conclude with my own objection, which introduces a more precise definition of strong artificial intelligence to which, I argue, Searle’s arguments do not apply. Along the way, we will clear up some common misconceptions about artificial intelligence.
Searle’s Argument
The Semantic Argument
In his essay Can Computers Think? [11], Searle gives his own definition of strong artificial intelligence, which he subsequently tries to refute. His definition is as follows:
One could summarise this view […] by saying that the mind is to the brain, as the program is to the computer hardware.
Searle’s first attempt at refuting the possibility of strong artificial intelligence is based on the observation that mental states, by definition, have a certain semantic content or meaning. Programs, on the other hand, are purely formal and syntactic, i.e. sequences of symbols that have no meaning in themselves. Therefore, a program cannot be equivalent to a mind. A formal reconstruction of this argument looks as follows (a schematic logical rendering is given after the list):
- Syntax is not sufficient for semantics
- Programs are completely characterized by their formal, syntactical structure
- Human minds have semantic contents
- Therefore, programs are not sufficient for creating a mind
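One way to render this reconstruction schematically (my own shorthand in first-order notation, not Searle’s; note that the step to the conclusion also relies on the implicit assumption that whatever suffices for a mind must suffice for semantics):

```latex
% Schematic rendering of the semantic argument.
% Predicate names are my own shorthand, not Searle's.
\begin{align*}
\text{(P1)}\quad & \forall x\,\bigl(\mathrm{Syntactic}(x) \rightarrow \neg\,\mathrm{SufficientForSemantics}(x)\bigr)\\
\text{(P2)}\quad & \forall x\,\bigl(\mathrm{Program}(x) \rightarrow \mathrm{Syntactic}(x)\bigr)\\
\text{(P3)}\quad & \forall x\,\bigl(\mathrm{Mind}(x) \rightarrow \mathrm{HasSemantics}(x)\bigr)\\
\text{(implicit)}\quad & \forall x\,\bigl(\mathrm{SufficientForMind}(x) \rightarrow \mathrm{SufficientForSemantics}(x)\bigr)\\
\text{(C)}\quad & \forall x\,\bigl(\mathrm{Program}(x) \rightarrow \neg\,\mathrm{SufficientForMind}(x)\bigr)
\end{align*}
```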
Searle emphasizes that his argument rests solely on the fact that programs are defined formally, regardless of which physical system runs them. His claim is therefore not merely that we are unable to create a strong artificial intelligence today, but that this is impossible in principle for any conceivable machine, no matter how fast it is or what other properties it might have.
The Chinese Room Argument
In order to make his first premise (“Syntax is not sufficient for semantics”) more plausible, Searle describes a thought experiment: the Chinese Room. Assume there were a program capable of answering questions posed in Chinese. No matter which question you ask in Chinese, it gives an appropriate answer that a human Chinese speaker might also give. Searle then argues that a computer running this program does not actually understand Chinese in the sense in which a human Chinese speaker does.
To this end, he assumes that the formal instructions of the program are carried out by a person who does not understand Chinese. This person is locked in a room, and the Chinese questions are passed into the room as sequences of symbols. The room contains baskets with many other Chinese symbols, along with a list of formal instructions: purely syntactic rules that tell the person how to produce an answer by assembling symbols from the baskets. The answers generated by these instructions are then passed out of the room by the person. The person is not aware that the symbols passed into the room are questions and that the symbols passed out are answers to those questions; he just carries out the instructions blindly, strictly and correctly. Yet these instructions generate meaningful Chinese sentences that answer the questions and that could not be distinguished from the answers a real Chinese speaker would give.
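To make the purely syntactic character of these instructions concrete, here is a deliberately trivial sketch of such a rule-following procedure (a toy lookup table, not a serious model of the program Searle has in mind; the rules and symbols are invented for illustration):

```python
# A toy "Chinese Room" rule book: purely syntactic pattern -> response rules.
# To the rule follower, the strings are opaque symbol sequences; no step in
# the procedure consults their meaning. (Rules invented for illustration.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "你叫什么名字？": "我叫小明。",
}

def follow_rules(symbols: str) -> str:
    """Mechanically match the incoming symbol string against the rule book
    and return the prescribed output string."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")

if __name__ == "__main__":
    print(follow_rules("你好吗？"))  # produces a canned Chinese reply
```

The person in the room plays the role of the interpreter of such rules; nothing in executing them requires knowing what any of the symbols mean.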
Searle now points out that the person in the room does not come to understand Chinese simply by following formal instructions for generating answers. He goes on to argue that a computer running a program that generates Chinese answers to Chinese questions therefore does not understand Chinese either. Since the experiment can be generalized to arbitrary tasks, Searle concludes that computers are inherently incapable of understanding anything.
Replies to the Chinese Room Argument
There are numerous objections to the Chinese Room Argument by various authors, many of them similar in nature. In the following, I present the most common ones, together with Searle’s own answers to them.
The Systems Reply
One of the most commonly raised objections is that even though the person in the Chinese Room does not understand Chinese, the system as a whole does: the room with all its constituents, including the person. This objection is usually called the Systems Reply, and there are various versions of it.
For example, artificial intelligence researcher, entrepreneur and author Ray Kurzweil says in [5] that the person is only an executive unit and that its properties are not to be confused with the properties of the system. If one looks at the room as an overall system, the fact that the person does not understand Chinese doesn’t entail that this also holds for the room.
Cognitive scientist Margaret Boden argues in [1] that the human brain is not the carrier of intelligence, but rather that it causes intelligence. Analogously, the person in the room causes an understanding of Chinese to arise, even though it does not understand Chinese itself.
Searle responds to the Systems Reply with the semantic argument: even the system as a whole could not get from syntax to semantics and hence could not understand the meaning of the Chinese symbols. In [9], he adds that the person in the room could in principle memorize all the formal rules and perform all the computations in his head. Then, he argues, the person is the entire system and could answer Chinese questions without help, perhaps even hold entire conversations in Chinese, but still would not understand Chinese, since he only carries out formal rules and cannot associate any meaning with the formal symbols.
The Virtual Mind Reply
Similar to the Systems Reply, the Virtual Mind Reply states that the person does not understand Chinese, but that the running system could create new entities that differ both from the person and from the system as a whole. The understanding of Chinese could be such an entity. This standpoint is argued for by artificial intelligence researcher Marvin Minsky in [15] and philosopher Tim Maudlin in [6]. Maudlin notes that Searle has so far not provided an adequate answer to this reply.
The Robot Reply
Another reply changes the thought experiment so that the program is put into a robot that can perceive the world through sensors (such as cameras or microphones) and interact with the world via effectors (such as motors or loudspeakers). This causal interaction with the environment, the argument goes, guarantees that the robot understands Chinese, since the formal symbols are thereby endowed with semantics: they come to refer to objects in the real world. This view presupposes an externalist semantics. The reply is raised, for example, by Margaret Boden in [1].
Searle answers this argument in [17] with the semantic argument: the robot still only has a computer as its brain and could not get from syntax to semantics. He makes this more plausible by adapting the thought experiment so that the Chinese Room itself is integrated into the robot as its central processing unit. The Chinese symbols would then be generated by the sensors and passed into the room; analogously, the symbols passed out of the room would control the effectors. Even though the robot interacts with the external world in this way, the person in the room still does not understand the meaning of the symbols.
The Brain Simulator Reply
Some authors, e.g. the philosophers Patricia and Paul Churchland in [2], suggest a variant in which the computer does not manipulate Chinese symbols but instead simulates the neuronal firings in the brain of a Chinese speaker. Since the computer then operates in exactly the same way as a brain, the argument goes, it must understand Chinese.
Searle answers this argument in [10]. He argues that one could just as well simulate the neuronal structures with a system of water pipes and valves and put it into the Chinese Room. The person in the room would then have instructions on how to guide the water through the pipes so as to simulate the brain of a Chinese speaker. Still, he says, no understanding of Chinese is generated.
The Emergence Reply
Now I present my own reply, which I call the Emergence Reply.
I grant that Searle’s arguments prove that a mind cannot be equated with a computer program. This follows immediately from the semantic argument: since a mind has properties that a program does not have (namely semantic content), a program cannot be identical to a mind. Hence, the argument does refute the possibility of strong artificial intelligence as Searle defines it.
However, one can formulate another definition of strong artificial intelligence which, as I will argue, is not affected by Searle’s arguments:
A system exhibits strong artificial intelligence if it can create a mind as an emergent phenomenon by running a program.
I explicitly include any type of system, regardless of the material from which it is made: be it a computer, a Chinese Room or a gigantic hall of falling dominoes or beer cans that simulates a Turing machine.
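To illustrate the substrate independence being appealed to here: the same formal program can in principle be carried out by any mechanism that realizes a Turing machine’s transition table, whether in silicon, dominoes or beer cans. Below is a minimal sketch of such a transition table and its stepping rule (purely illustrative; the example machine simply inverts a tape of 0s and 1s):

```python
# Minimal Turing machine stepper. The transition table is the "program";
# nothing about it depends on what physically realizes the tape and head.
# Example machine (for illustration only): invert a binary tape, then halt.

TRANSITIONS = {
    # (state, read_symbol) -> (write_symbol, head_move, next_state)
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank symbol: stop
}

def run(tape: list[str], state: str = "scan", head: int = 0) -> list[str]:
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = TRANSITIONS[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

print(run(list("0110")))  # ['1', '0', '0', '1']
```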
I will not try to argue for the possibility of strong artificial intelligence according to this definition. It is doubtful whether this is even possible. However, I will argue why this definition is not affected by Searle’s arguments.
Non-Applicability of the Semantic Argument
My proposed definition does not demand any analogy between the program and the mind created by it. The semantic argument therefore no longer applies: even though a program, as a purely syntactic construct, has no semantics of its own (and therefore cannot be identical to a mind), it does not follow that a program cannot create semantic contents in the course of its execution.
Moreover, this definition does not state that the computer hardware is the carrier of the mental processes; the hardware is not thereby enabled to think. Rather, the computer creates the mental processes as an emergent phenomenon, similarly to how the brain creates mental processes as an emergent phenomenon. So if one considers the question in the title of Searle’s original essay, “Can Computers Think?”, the answer would be: “No, but they might create thinking.”
How a mind can be created through the execution of a program, and what sort of ontological existence such a mind would have, is a discussion topic of its own. To make the idea more plausible, imagine a program that exactly simulates the trajectories and interactions of the elementary particles in the brain of a Chinese speaker. Such a program would not only produce the same outputs for the same inputs as the speaker’s brain, but would proceed in a completely analogous way. There is no immediate way to exclude the possibility that this simulated brain creates a mind in exactly the same way a real brain does. The only assumption here is that the physical processes in a brain are deterministic. There are theories claiming that a mind requires non-deterministic quantum phenomena that cannot be simulated algorithmically; one such theory is presented by the physicist Sir Roger Penrose in [7], who has founded the Penrose Institute to explore this possibility. If such theories turn out to be true, this would be a strong argument against the possibility of strong artificial intelligence.
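A minimal sketch of what “proceeds in a completely analogous way” relies on, namely determinism (a toy state-update loop standing in for a physical simulation; the dynamics and state are invented for illustration and are nothing like a real brain model):

```python
import numpy as np

# Toy deterministic simulation loop: given the same initial state and inputs,
# the trajectory of states is fully reproducible. Reproducibility is the only
# property the argument above relies on. (Toy dynamics, not a brain model.)

def step(state: np.ndarray, weights: np.ndarray, inp: np.ndarray) -> np.ndarray:
    """One deterministic update of the simulated state."""
    return np.tanh(weights @ state + inp)

def simulate(state, weights, inputs):
    for inp in inputs:
        state = step(state, weights, inp)
    return state

rng = np.random.default_rng(0)            # fixed seed: identical run every time
w = rng.standard_normal((4, 4)) * 0.5
s0 = rng.standard_normal(4)
xs = [rng.standard_normal(4) for _ in range(3)]
print(simulate(s0, w, xs))                # same output on every execution
```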
Non-Applicability of the Chinese Room Argument
As regards the Chinese Room Argument, it convincingly shows that a system’s giving the impression of understanding something does not entail that it really understands it. Not every program that the person in the Chinese Room could execute in order to converse in Chinese actually creates understanding. This is an important insight that refutes some common misconceptions, such as the belief that IBM’s Deep Blue understands chess in the way a human does, or that Apple’s Siri understands spoken language. Deep Blue just calculates the payoff of certain moves, and Siri just transcribes one sequence of numbers into another (albeit in a sophisticated way). This certainly does not create understanding or a mind.
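For example, the core of a Deep-Blue-style engine is formal game-tree search over numeric position values. A minimal minimax sketch (the game interface `moves`, `apply` and `evaluate` is assumed to be supplied elsewhere; purely illustrative):

```python
# Minimal minimax: purely formal search over numeric position values.
# `moves`, `apply`, and `evaluate` are assumed to come from some game
# implementation; nothing in the search itself "understands" chess.

def minimax(position, depth, maximizing, moves, apply, evaluate):
    if depth == 0 or not moves(position):
        return evaluate(position)          # just a number, nothing more
    values = (
        minimax(apply(position, m), depth - 1, not maximizing,
                moves, apply, evaluate)
        for m in moves(position)
    )
    return max(values) if maximizing else min(values)
```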
Moreover, the Chinese Room Argument shows that the Turing Test is not a reliable indicator of strong artificial intelligence. In this test, described by Alan Turing in [12], a human subject converses with an unknown entity and has to decide, solely on the basis of the answers it gives, whether it is talking to another human or to a computer. If the computer repeatedly manages to fool the subject, we call it intelligent. This test only measures how good a computer is at giving the impression of being intelligent; it places no restrictions on how the computer achieves this internally, which, as argued above, is essential for deciding whether the computer really exhibits strong artificial intelligence.
Additionally, Searle’s argument shows that it is not the hardware itself that understands Chinese. Even if hardware running a program creates a mind that understands Chinese, the person in the Chinese Room plays the role of the hardware and does not understand Chinese.
It does not, however, refute the possibility that the hardware can create a mind that understands Chinese by executing the program. Assume there is a program that answers Chinese questions and creates mental processes that exhibit an understanding of the Chinese questions and answers. This assumption cannot be refuted by the Chinese Room Argument. If we let the person in the room execute the program with pen and paper, it is correct that the person does not understand Chinese. But the person is only the hardware in this case; his mind is not the mind created by the execution of the program.
It might seem intuitively implausible that arithmetical operations carried out with pen and paper could give rise to a mind. But this can be made more plausible by assuming, as before, that the neuronal processes in the brain are simulated in the form of these arithmetical operations. The intuition that a mind could not arise in this way may simply be false; there is no immediately obvious logical reason to exclude the possibility. The same holds for Searle’s system of water pipes, for beer-can dominoes and for other unorthodox hardware: if one grants that computer hardware can create a mind, one must grant that this is also possible for other, more exotic mechanical systems.
Whether it is indeed possible to create a mind by executing a program is still an open question. Maybe Roger Penrose will turn out to be right that consciousness is a natural phenomenon that cannot be created by the deterministic interaction of particles. Are organisms really just algorithms? How can the parallel firing of tens of billions of neurons give rise to consciousness and a mind? As of now, neuroscience does not have the slightest idea. However, I would say with some certainty that this question cannot be answered by thought experiments alone.
If you liked this article, you may also be interested in my article Gödel’s Incompleteness Theorem And Its Implications For Artificial Intelligence.
References
[1] Boden, Margaret A.: Escaping from the Chinese Room. University of Sussex, School of Cognitive Sciences, 1987.
[2] Churchland, Paul M. and Patricia Smith Churchland: Could a Machine Think? Machine Intelligence: Perspectives on the Computational Model, 1:102, 2012.
[3] Cole, David: The Chinese Room Argument. In: Zalta, Edward N. (ed.): The Stanford Encyclopedia of Philosophy. Summer 2013. http://plato.stanford.edu/archives/sum2013/entries/chinese-room/.
[4] Dennett, Daniel C.: Fast Thinking. 1987.
[5] Kurzweil, Ray: Locked in his Chinese Room. In: Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI, 2002.
[6] Maudlin, Tim: Computation and Consciousness. The Journal of Philosophy, pp. 407–432, 1989.
[7] Penrose, Roger: The Emperor’s New Mind. Vintage, London, 1990.
[8] Russell, Stuart Jonathan et al.: Artificial Intelligence: A Modern Approach. Prentice Hall, Englewood Cliffs, 1995.
[9] Searle, John: The Chinese Room Argument. Encyclopedia of Cognitive Science, 2001.
[10] Searle, John R.: Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3):417–424, 1980.
[11] Searle, John R.: Minds, Brains, and Science. Harvard University Press, 1984.
[12] Turing, Alan M.: Computing Machinery and Intelligence. Mind, pp. 433–460, 1950.
Your argument is still flawed because a computer cannot create thinking in the first place. You say “Assume there is a program that answers Chinese questions and creates mental processes that exhibit an understanding of the Chinese questions and answers,” but see the Chinese Nation argument by Block (1978), according to which said thinking [consciousness] can’t arise from itself. The suppression is deafening (and I’m sure you’ve come across this argument before, so you know you haven’t mentioned it).
Nothing arises from itself, I don’t see why you infer that.
He explicitly said in his argument that “the understanding” arises from “mental processes” created by “a program”. Not unlike how our mental processes are created from neurological connections that have been “programmed” in our brain during our development, growth and maturation.
In my opinion, Searle’s answer to the rebuttals does not really help his case. He keeps focusing on the man’s mind. The man is not the one that needs to be conscious of the semantics.
That’s analogous to trying to prove that humans are not conscious of their thoughts by showing that their mouth and ears do not understand the semantics of the words exchanged, and saying that even if the mouth was able to memorize all the correct movements to utter the right words it would still not understand them.
Whether the man memorizes the rules or not is not important. The man is just an organ; it’s the combination of man and program that shows understanding, the same way it’s not our body/brain (hardware) or its neural interconnections (program) that are capable of understanding, but the combination of both.
I think Searle in the end did understand that his logic was flawed, in that it does not prove that it’s impossible for a machine, “system”, “virtual mind” or “robot” to hold understanding, but he kept going with it because nobody was able to show how the system would have consciousness.
The thing is: even if it were a real Chinese person and not someone following rules, how do you even prove that this person understands what he is saying? How do you tell apart a Chinese speaker who has memorized the hypothetical program from one who really understands Chinese?
How do you even prove that a human (other than yourself) has consciousness?
I don’t think that would be possible. Arguing that we are from the same species and have the same organs and thus must be the same is just not proof, and assuming that everything that isn’t from the same species can’t have consciousness would be a pretty self-centered and narrow idea. If we ever find intelligent aliens with different organs, how do you know whether they are conscious?
If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.
The title is misleading, but grabs attention (was that the purpose?). Searle’s argument is not flawed, but I’m not convinced that it’s “semantics” that distinguish computation from “understanding”, in the human/mind sense.
These arguments frustrate me because 1) understanding is generally assumed to be magic, in the sense that it is never explained or reified as an epiphenomenon or, for that matter, in any sense; 2) there is never any discussion of whether understanding can be simulated adequately to, say, be a useful robot; and, most importantly, 3) there is always an unstated but urgent point missing in these arguments about the nature of simulation, reproduction, and artificiality, all of which by definition make AI axiomatically distinct from life (but so what? That’s begging the question of the purpose and efficacy of the artificial entity or reproduction.)
We always run off onto the head of the pin as if we’re trying, by definition, to reproduce my wife as a machine, implicitly because the task is either that Gödel and I have to double my love or it’s a worthless machine. Then we have to argue about it, usually from some Chinese room or gymnasium. The “Who cares” and “what AHI is useful mimicry” sets are so much broader and cover so much more interesting ground. None of these arguments even bother to assume they should touch on basic middle-ground issues that are key partialities, like the challenge of leveraging and/or mimicking emotion, optimal aspects of (necessarily variable) consistency, the ontological and epistemic differentials between a mouse and a crow and a girl (how far down do we go before we say, “Stop! Right there, dude, we can totally do the paramecium and get past Searle”?), or the mechanical process of animal thought that Millikan et al. obsess over with great success. I mean, seriously: endless talk about whether or not Gödel nixes AI, and at what level? What, for God’s sake, is a mind, given only the specific purposes at hand? Does a reproduction require anything a priori? It does not. The notion of successful reproduction of mind, however sacred or unstated one’s conception, is inherently bound up in specific contingency. The axiomatic notion that a reproduction is not the same entity as the original (whether it’s a magical entity or not) varies in importance between key and exactly the point, just as it does if I’m fucking my wife at a seedy hotel or a mistress, or if I should use my spare knife (or spare robot) so I don’t risk the main one.
The word artificial is never leveraged in these discussions; in fact, it’s the most critical and the most helpful part of the phrases AI, AHI, and AGI. It’s a kind of synonym for reproduction, which is a highly contingent concept, yet nowhere does anyone address the practical implications, like, say, synthetic mental states, synthetic (partial; effective; useful; unclouded; better in context) understanding, or, heaven forbid, the advantages of artificiality like effective immortality, or reporting from Venus, or a much more consistent, untiring artificial empathy than humans might achieve in a specific context. Instead, we assign massive importance to arguing about an ill-defined unity between machine and (usually some kind of super-) human. I fecking guarantee you that artificial intelligence will approach life asymptotically, as it’s been doing; I don’t care about a limit condition I don’t even want to achieve. One wife is already too many.
Bah, humbug- leave me, Jacob Marley, stop clanking about.
How about the Turing test? Searle’s argument goes against it in the first place, in the sense that a program that exhibits properties similar to a human mind is not necessarily a human mind.
Your argument seems circular. It takes the exhibition of properties to be similar in nature to the simulated brain.
From my point of view, as far as Searle’s argument is concerned, your argument is incomplete, if not circular.
The argument you call the Emergence Reply is quite strong and interesting, yet, although it is plausible, it remains without proof. Emergent properties remain a fact, as far as emergent entities are concerned.
Thank you for this very insightful article. I highly appreciated the quality of the literature presented and the arguments made to enrich the debate.
Here is one counter to Searle:
I propose that Searle is, in fact, a Chinese Room flesh robot walking around, giving all the answers perfectly but not understanding the semantics.
I challenge Searle to prove that he does in fact understand the semantics. Of course, no answer will be sufficient to prove this, since the Chinese Room is, according to Searle himself, programmed to perfectly simulate exactly that.
The Chinese Room is, by definition, indistinguishable from something that understands semantics. Therefore, nobody can prove they are not such a Chinese Room. Nobody can possibly prove they understand semantics.
The Chinese Room as postulated is equivalent, in the real world, to real intelligence that understands semantics. By arguing that the Chinese Room does not understand semantics, Searle is arguing that it is impossible to know whether semantics is understood by anybody, since you get the same result regardless of whether the other person understands semantics. In other words, the claim that the entire population of humanity is nothing but Chinese Room flesh robots has exactly as much logical validity as the claim that humans understand semantics.
Therefore, I revert to Turing’s conclusion – if it is impossible to differentiate the results of a program from intelligence, the program must be considered intelligent.
I am pretty sure Turing thought about arguments like Searle’s (as they are a necessary consideration in devising the Turing Test), but decided that if something passes the Turing Test, there is absolutely no way to tell the difference.
Searle’s Chinese Room is a naive logical thought experiment that, in the end, can prove nothing. It is the worst type of Philosophy – trying to convince people of something that cannot be tested. At least Zeno’s Paradoxes were testable.
It doesn’t make sense to claim that consciousness is an “emergent” phenomenon.
All emergent phenomena can be described in terms of the lower base phenomena. For example “wetness” and everything we know about how it feels to be “wet” can be described on a molecular level. You feel extra weight, because of weak attraction bonds between the water molecules and the cloth, you feel cold because of the evaporation of water in air, etc. Every aspect of the “emergent” property is describable in terms of the base phenomena.
The same is not true for consciousness. Though you can explain memory and thoughts and emotions and computations in terms of the material base, none of this even starts to explain why we are aware of each of these functions. It’s conceivable that a non-aware computer could have each of the functions of a mind but no awareness that it’s doing it. In fact, not only is it conceivable, we have no physical explanation of why it would be any other way! Further, we lack the capacity to experiment empirically on it.
A truly empirical study on consciousness would require one aware person to be able to experience another aware person’s experience. That is impossible.
Searle’s most significant flaw in his attempt to posit the Chinese Room Argument is that he refuses to define what it means to “understand.” He brushes the issue off as a straw man, using arguments by analogy in a poor attempt to establish a standard of what it means to understand something. This led me to view his paper “Minds, brains, and programs” as pseudophilosophy. It fascinates me how some people can build a reputation in academia on nonsensical publications.