AI: Thinking Outside the Chinese Room
In 1980, the philosopher John Searle published a famous thought experiment called The Chinese Room, and thirty-five years later it is still considered by some to provide an insuperable refutation of the idea of strong artificial intelligence. Here is how Searle restated the thought experiment in 1999:
“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”
Searle and his followers contend that this argument demonstrates that machines can never have any conscious understanding of the computations they perform, no matter how sophisticated those computations may be. But does it really establish that?
It just so happens that I too am a native English speaker, born in 1980, the year Searle first published this idea. In another interesting coincidence, the same year that Searle reiterated this argument – 1999 – was the year that I left to live in China as a foreign exchange student. Like the inhabitant of the Chinese Room, I knew no Chinese at the outset. However, by the end of my year there, I had acquired a good working proficiency, at least in spoken Chinese, and a modest competency in reading and writing. What’s more, unlike the inhabitant of Searle’s Chinese Room, I felt that I really understood Chinese when I spoke it with my teachers, classmates, and friends.
What was the difference? I think the most obvious difference is that, unlike the person in the Chinese Room, I learned Chinese by immersion in Chinese society and culture. I went out to eat with Chinese friends, went to the market to buy clothes or a bicycle from Chinese vendors, went out on dates with Chinese women, and took part in all the other daily activities that make up a normal human life. I think what Searle demonstrated was not the impossibility of machine intelligence, but rather the impossibility of learning a language in complete isolation from any social context.
This is exactly what Wittgenstein had said decades earlier. To speak a language is to participate in a form of life. Understanding language requires that we learn rules, but rules are learned by applying them in context. Ascertaining the nature of the context, and whether a particular rule applies to it or can be made to apply to it, calls for interpretation, sometimes creative interpretation, of a kind that Searle rules out within the confines of the Chinese Room.
So the appropriate question, I think, is whether Searle’s characterization of machine learning is fair. Is it necessarily the case that a machine would have to behave like the inhabitant of the Chinese Room, or could it acquire its competency just as I did, by trial and error and participation in a form of life? Adaptive neural networks in contemporary AI programs look a lot more like the latter, since they can change their “understanding” of the rules as they go, through their interactions with their environment, including humans, as the sketch below suggests.
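To make that contrast concrete, here is a minimal sketch in Python. Everything in it is an invented illustration, not any real AI system: the RULEBOOK, the AdaptiveLearner, and the toy environment are hypothetical names of my own. It shows, in the simplest possible terms, the difference between a fixed rulebook and a learner that revises its associations through feedback from interaction.

```python
import random

# Searle's room: a fixed lookup table. It never changes, no matter
# how often its answers succeed or fail in the world.
RULEBOOK = {"ni hao": "hello", "xie xie": "thank you"}

def room_reply(symbol):
    # No entry means no answer, and no learning ever occurs.
    return RULEBOOK.get(symbol, "???")

class AdaptiveLearner:
    """A toy learner that acquires associations from feedback
    rather than from a fixed program. (Illustrative only; real
    neural networks adjust weights, not dictionary entries.)"""

    def __init__(self):
        self.associations = {}  # symbol -> {candidate reply: strength}

    def reply(self, symbol, candidates, explore=True):
        scores = self.associations.setdefault(
            symbol, {c: 0.0 for c in candidates})
        if explore and random.random() < 0.1:
            return random.choice(candidates)  # occasional trial and error
        return max(scores, key=scores.get)    # strongest learned association

    def feedback(self, symbol, reply, reward):
        # Strengthen or weaken the association according to how the
        # "form of life" (the environment) responded.
        self.associations[symbol][reply] += reward

# Toy environment: the correct usage the learner must discover.
TRUE_USAGE = {"ni hao": "hello", "xie xie": "thank you"}
CANDIDATES = ["hello", "thank you", "goodbye"]

learner = AdaptiveLearner()
for _ in range(200):  # repeated social interaction
    symbol = random.choice(list(TRUE_USAGE))
    answer = learner.reply(symbol, CANDIDATES)
    reward = 1.0 if answer == TRUE_USAGE[symbol] else -0.5
    learner.feedback(symbol, answer, reward)

print(room_reply("zai jian"))                          # "???" forever
print(learner.reply("ni hao", CANDIDATES, explore=False))  # "hello"
```

The rulebook never improves. The learner, like a student abroad, converges on correct usage only by acting in the world and registering how the world responds.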
Ultimately, I think the Chinese Room merely demonstrates what Wittgenstein set out to prove in the Philosophical Investigations: the impossibility of a private language. But whereas Wittgenstein devoted most of his energy to the inductive problem of figuring out how to interpret and apply the rules of language, Searle showed that even when we are given all the rules, we can have no appreciation of their content if we are cut off from the real-world contexts in which they should be applied.