Work in Artificial Intelligence (AI) has produced computer programs that can beat the world chess champion and defeat the best human players on the television quiz show Jeopardy. AI has also produced programs with which one can converse in natural language, including Apple's Siri. Our experience shows that playing chess or Jeopardy, and carrying on a conversation, are activities that require understanding and intelligence. Does computer prowess at challenging games and conversation then show that computers can understand and be intelligent? Will further development result in digital computers that fully match or even exceed human intelligence? Alan Turing (1950), one of the pioneer theoreticians of computing, believed the answer to these questions was yes. Turing proposed what is now known as the Turing Test: if a computer can pass for human in online chat, we should grant that it is intelligent. By the late 1970s some AI researchers claimed that computers already understood at least some natural language. In 1980 U.C. Berkeley philosopher John Searle introduced a short and widely discussed argument intended to show conclusively that it is impossible for digital computers to understand language or think.

Searle argues that a good way to test a theory of mind, say a theory holding that understanding can be created by doing such and such, is to imagine what it would be like to do what the theory says would create understanding. Searle (1999) summarized the Chinese Room argument concisely: imagine a native English speaker who knows no Chinese, locked in a room with a rule book (in effect, the program) for manipulating Chinese symbols. People outside the room send in Chinese questions, and by mechanically following the rules the person inside sends back appropriate Chinese answers, convincing those outside that the room understands Chinese, even though the person inside understands not a word of it.

Searle goes on to say, "The point of the argument is this: if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese, then neither does any other digital computer solely on that basis, because no computer, qua computer, has anything the man does not have."

Thirty years later, Searle (2010) describes the conclusion in terms of consciousness and intentionality: implementing a computer program is never by itself sufficient for consciousness or intentionality, since programs are defined purely syntactically, and syntax by itself is neither constitutive of nor sufficient for meaning.

Searle's shift from machine understanding to consciousness and intentionality is not directly supported by the original 1980 argument. However, the re-description of the conclusion indicates the close connection between understanding and consciousness in Searle's accounts of meaning and intentionality. Those who reject Searle's linking account might hold that running a program can create understanding without necessarily creating consciousness, and that a robot might have creature consciousness without necessarily understanding natural language.

Searle then develops the broader implications of his argument. It aims to refute the functionalist approach to understanding minds, the approach that holds that mental states are defined by their causal roles, not by the stuff (neurons, transistors) that plays those roles. The argument counts especially against that form of functionalism known as the Computational Theory of Mind, which treats minds as information processing systems. As a result of its scope, as well as Searle's clear and forceful writing style, the Chinese Room argument has probably been the most widely discussed philosophical argument in cognitive science to appear since the Turing Test. By 1991 computer scientist Pat Hayes had defined Cognitive Science as the ongoing research project of refuting Searle's argument. Cognitive psychologist Steven Pinker (1997) pointed out that by the mid-1990s well over 100 articles had been published on Searle's thought experiment, and that discussion of it was so pervasive on the Internet that Pinker found it a compelling reason to remove his name from all Internet discussion lists.

This interest has not subsided, and the range of connections with the argument has broadened. A search on Google Scholar for "Searle Chinese Room" limited to the period from 2010 through early 2014 produced over 750 results, including papers making connections between the argument and topics ranging from embodied cognition to theater to talk psychotherapy to postmodern views of truth and our post-human future, as well as discussions of group or collective minds and of the role of intuitions in philosophy. This wide range of discussion and implications is a tribute to the argument's simple clarity and centrality.

Searle's argument has three important antecedents. The first of these is an argument set out by the philosopher and mathematician Gottfried Leibniz (1646-1716). This argument, often known as Leibniz's Mill, appears as section 17 of Leibniz's Monadology. Like Searle's argument, Leibniz's argument takes the form of a thought experiment. Leibniz asks us to imagine a physical system, a machine, that behaves in such a way that it supposedly thinks and has experiences ("perception"). Suppose, he says, that the machine were enlarged while keeping the same proportions, so that one could walk about inside it as one enters a mill. Inspecting its interior, we would find only parts pushing one against another, and never anything that could explain a perception.

Notice that Leibniz's strategy here is to contrast the overt behavior of the machine, which might appear to be the product of conscious thought, with the way the machine operates internally. He points out that these internal mechanical operations are just parts moving from point to point, hence there is nothing that is conscious or that can explain thinking, feeling or perceiving. For Leibniz physical states are not sufficient for, nor constitutive of, mental states.

A second antecedent to the Chinese Room argument is the idea of a paper machine, a computer implemented by a human. This idea is found in the work of Alan Turing, for example in Intelligent Machinery (1948). Turing reports there that he wrote a program for a paper machine to play chess. A paper machine is a kind of program, a series of simple steps like a computer program, but written in natural language (e.g., English) and followed by a human. The human operator of the paper chess-playing machine need not (otherwise) know how to play chess. All the operator does is follow the instructions for generating moves on the chess board. In fact, the operator need not even know that he or she is involved in playing chess: the input and output strings, such as "N-QB7", need mean nothing to the operator of the paper machine.
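The purely syntactic character of such rule-following can be sketched in a few lines of Python. The rule table and move strings below are invented for illustration (Turing's actual 1948 chess rules were far more elaborate); the point is only that the operator, human or machine, matches symbols against rules and emits symbols, without anything in the process needing to understand chess:

```python
# A toy "paper machine": output is produced by mechanically consulting
# a rule table, exactly as a human operator would follow written
# instructions. The entries are hypothetical, not Turing's real rules.

RULE_TABLE = {
    "START": "P-K4",   # reply to an empty board: advance king's pawn
    "P-K4": "N-KB3",   # reply to opponent's P-K4
    "N-QB3": "B-B4",   # reply to opponent's N-QB3
}

def paper_machine(input_string: str) -> str:
    """Map an input string to an output string by lookup.

    Nothing here 'knows' chess: the function merely matches one
    symbol string and returns another, like the operator in Turing's
    thought experiment or the man in Searle's room.
    """
    return RULE_TABLE.get(input_string, "RESIGN")

print(paper_machine("START"))   # -> P-K4
print(paper_machine("N-QB3"))   # -> B-B4
print(paper_machine("xyzzy"))   # unrecognized input -> RESIGN
```

The strings could as easily be Chinese characters as chess notation, which is exactly the parallel Searle exploits: competence at producing the right output strings is compatible with a complete absence of understanding in the mechanism that produces them.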

Read more:
The Chinese Room Argument (Stanford Encyclopedia of ...
