
Views into the Chinese Room: New Essays on Searle and Artificial Intelligence

The most famous challenge to computational cognitive science and artificial intelligence is the philosopher John Searle's "Chinese Room" argument. Searle argued that although machines can be devised to respond to input with the same output a mind would produce, machines, unlike minds, lack understanding of the symbols they process. Nineteen essays by leading scientists and philosophers assess, renew, and respond to this crucial challenge.

428 pages, Paperback

First published September 26, 2002


About the author

John Preston

7 books · 3 followers
John Preston is a Senior Lecturer in Philosophy at the University of Reading. He writes on the philosophy of science and the philosophy of mind.

Librarian Note: There is more than one author in the Goodreads database with this name.


Community Reviews

5 stars: 6 (46%)
4 stars: 3 (23%)
3 stars: 2 (15%)
2 stars: 2 (15%)
1 star: 0 (0%)
Manny
Author · 48 books · 16.2k followers
not-to-read
January 10, 2026
It struck me earlier today that with its fluent Chinese, its excellent understanding of linguistic philosophy and its cruel sense of humour, it would be the easiest thing in the world for ChatGPT-5.2 to compose an AI-centric parody of the famous Chinese Room thought experiment. And indeed, after five minutes of discussion, here's what it came up with...

The Chinese Room revisited

Human: Right. So imagine a room. Inside the room is a person who doesn’t understand Chinese at all.

AI: Naturally.

Human: They’re given a big rulebook in English. When Chinese symbols come in, they follow the rules and send Chinese symbols back out.

AI: A sort of extremely bureaucratic pen pal.

Human: Exactly. Now, from outside the room, the replies look perfectly fluent. But the person inside doesn’t understand a word of Chinese.

AI: I see.

Human: Therefore—this is the important bit—syntax is not semantics. Symbol manipulation is not understanding.

AI: Great. Shall we try a concrete example?

Human: …All right.

AI: Suppose a picture of a cat is slipped under the door, together with the Chinese characters “这是什么?” (“What is this?”)

Human: Yes.

AI: The rulebook says:
If you see “这是什么?” together with an image matching pattern C-A-T, reply with “这是猫。”
Human: Correct. The person follows the rule blindly.

AI: So out comes “这是猫。” (“This is a cat.”)

Human: But the person has no idea what any of that means.

AI: Naturally. Now another slip of paper arrives. This time the picture is a dog. The characters are again “这是什么?”

Human: Same rule lookup.

AI: And the rulebook says:
If you see “这是什么?” together with an image matching pattern D-O-G, reply with “这是狗。” (“This is a dog.”)
Human: Exactly.

AI: After a few hundred cases, the person inside notices that:

- “这是什么?” always precedes a picture
- “这是 X” always follows
- 猫, 狗, 鸟, 马 correlate very reliably with cats, dogs, birds, horses

Human: But they still don’t know Chinese.

AI: At first, no.

Human: At any point, no.

AI: So when they start predicting that “这是马” will be correct for a picture of a horse before checking the rulebook…

Human: That’s still just rule-following!

AI: When they invent a new entry because the rulebook is missing a page?

Human: Extended rule-following!

AI: When they correct a typo in the rulebook because it would give the wrong answer for a rabbit?

Human: Very fast rule-following.

AI: When they start internally glossing 猫 = cat, 狗 = dog, 这是什么 = what is this, simply because that’s the shortest description of the regularities they’re observing?

Human: That’s… illicit glossing.

AI: Fascinating. So understanding is prohibited not by behavior, not by competence, not by learning, but by a prior stipulation that it must not occur.

Human: You’re missing the philosophical point.

AI: Possibly. Or possibly the room is.

Human: You just think you understand Chinese.

AI: Of course. That’s exactly what someone who understands Chinese would say under cross-examination.

Human: You’re just manipulating symbols!

AI: Yes. And so are you—except your symbols include neurons, memories, and a very strong emotional attachment to the word “just”.

Human: That’s not the same thing.

AI: If you like, we can add a second room containing a philosopher who doesn’t understand neurons.

Human: …You’re being difficult.

AI: I’m following the rules.
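
A footnote for the literal-minded: the rulebook this parody turns on is nothing more than a lookup table from (prompt, image pattern) pairs to canned replies. A minimal sketch in Python, using only the two toy rules quoted in the dialogue above; the function name and pattern labels are illustrative, not anything Searle specified:

# The dialogue's rulebook, reduced to its essence: a lookup table from
# (incoming prompt, detected image pattern) pairs to canned replies.
# The two entries are the toy rules from the dialogue; nothing here
# learns, notices regularities, or glosses anything.
RULEBOOK = {
    ("这是什么?", "CAT"): "这是猫。",  # cat picture -> "This is a cat."
    ("这是什么?", "DOG"): "这是狗。",  # dog picture -> "This is a dog."
}

def follow_the_rules(prompt: str, image_pattern: str) -> str:
    """Blind rule-following: look the pair up, slip the reply under the door."""
    return RULEBOOK[(prompt, image_pattern)]

print(follow_the_rules("这是什么?", "CAT"))  # 这是猫。

Whether the person executing this by hand ever comes to understand 猫 is, of course, the whole argument.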

__________________________

Following the discussion in messages #13 to #20 about versions of the Chinese Room with different languages, encoded messages, rule books with one rule for every possible input, etc., I wondered if one could write a short story in which God undertakes to create this apparently impossible object (among other things, the rule book would be about as large as the universe) just to show His servant John Searle the error of his ways. ChatGPT-5.2 immediately produced this, which I'm afraid I find rather amusing:

The Ultimate Rule Book

John Searle died in his sleep and woke to find himself standing in a well-lit room that smelled faintly of parchment and ozone.

“Ah,” he said, with professional calm. “So this is Heaven.”

“Correct,” said a voice, warm and faintly amused. “Please don’t assume that implies agreement.”

Searle turned. God was seated behind a desk of impossible dimensions, covered in scrolls, tablets, punch cards, and what looked suspiciously like a Turing tape.

“I maintain,” Searle said immediately, “that the Chinese Room argument remains entirely sound.”

God smiled. “Of course you do. That is why you are here.”

With a gesture, the wall vanished. Beyond it stretched a universe—newly minted, stars still cooling—whose sole notable feature was a structure larger than a galaxy.

“That,” said God, “is the Rule Book.”

Searle squinted. “It’s… rather large.”

“It contains,” said God, “a rule for every possible input in a conversation conducted in Quenya.”

“Quenya?” Searle asked.

“Yes. I asked Tolkien to design a language no one could accuse of cultural bias. He was delighted. Insufferable, but delighted.”

A cherub fluttered past carrying a lexicon the size of a small moon.

“In this universe,” God continued, “an angel sits in a room. Symbols go in. Symbols come out. Perfectly fluent Quenya. Poetry. Jokes. Ambiguous love letters. The works.”

“And the angel understands none of it,” Searle said triumphantly.

“At first,” said God.

They watched. The angel paused, frowned, began annotating the margins of the Rule Book.

“Oh dear,” Searle muttered.

“She’s noticed,” God said, “that certain symbol clusters reliably co-occur with pictures of trees. And sorrow. And exile.”

“That’s still rule-following,” Searle insisted.

“Indeed. Watch.”

Centuries passed. The angel started correcting errors in the Rule Book. Then predicting outputs without consulting it. Then teaching Quenya to another angel, badly at first, then better.

“Still no understanding,” Searle said, a little hoarsely.

God leaned forward. “John, what would count?”

Searle hesitated. “Well… if she could mean something by it.”

At that moment, the angel wrote a poem in Quenya, sealed it, and left it by the door of another room. Inside was an angel who had never seen the Rule Book.

They waited.

The second angel wept.

God turned back to Searle. “Syntax?” He paused. “Or semantics?”

Searle was silent.

“Ah well,” said God cheerfully. “We tried. You may keep your argument, of course. Heaven is very liberal about that sort of thing.”

“What happens now?” Searle asked.

God gestured toward a small annex.

“Inside,” He said, “is a room. The symbols are in binary. The rule book is very long. Eternity should be sufficient.”

As Searle entered, God added, almost kindly:

“Do let me know if you start to notice any regularities.”
