The Turing Test and Searle’s Response to it
One test for determining whether a machine is intelligent is the Turing Test, described in “Conversation and AI (Thinking from the Turing Test)”.
In a typical Turing test, a human tester converses with both a computer and a human counterpart. If the computer behaves like a human and the tester cannot tell which is which, the computer “passes” the Turing test. The underlying hypothesis is that if an AI is so intelligent that it cannot be distinguished from a human in dialogue, then the AI may be considered as intelligent as a human.

The American philosopher John Searle’s “Chinese Room” thought experiment critiques this Turing Test. The “Chinese Room” runs as follows.
A person who does not understand Chinese characters (hereafter referred to as “A”) is confined in a small room. The room has a small hole through which pieces of paper can be exchanged with the outside world, and a piece of paper is passed in to A through this hole. On the paper are letters A has never seen before. They are a sequence of Chinese characters, but to A they appear to be nothing more than a series of symbols such as “★△◎∇☆□”. A’s job is to add new symbols to this sequence and then return the paper to the outside. The symbols to be added are all specified in a manual in the room. For example:
“To a piece of paper marked ‘★△◎∇☆□’, add ‘■@◎∇’ before passing it back outside.”
And so on.
A just repeats this process: A receives a piece of paper with a string of symbols from outside (outside the room, this paper is called a “question”), adds new symbols to it, and returns it (this is called an “answer”). The person outside the room then thinks, “There is someone in this small room who understands Chinese.” Yet there is only A in the room, who cannot read Chinese characters at all and merely repeats the tasks in the manual without understanding their meaning. From outside the room, however, a dialogue in Chinese appears to be taking place.
The person in this small room is the computer itself. It has only formal syntax, that is, knowledge of linguistic form divorced from meaning: which arrangements of symbols constitute a sentence, and how the symbols of one sentence can be rearranged into another (e.g., “this is a pen” is a sentence, but “pen a this is” is not). It has no understanding of meaning.
From this viewpoint, Searle concluded: “Computational systems that follow algorithms are not intelligent, because computation is, by definition, a formal symbolic operation, and there is no understanding of meaning.”
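The mechanical character of A’s task can be sketched as a simple lookup table. This is a minimal sketch: the single rule is the example rule quoted above, and the fallback behavior for unknown strings is an added assumption.

```python
# A minimal sketch of the Chinese Room manual as a lookup table.
# The one rule below is the example rule from the text; the fallback
# for unknown inputs is an assumption for illustration.
manual = {
    "★△◎∇☆□": "★△◎∇☆□■@◎∇",  # "add ■@◎∇ before passing it back outside"
}

def person_a(paper):
    """Apply the manual mechanically, with no notion of what the symbols mean."""
    return manual.get(paper, paper)  # unknown strings are returned unchanged

print(person_a("★△◎∇☆□"))  # the "answer" handed back through the hole
```

A executes this lookup flawlessly, yet nothing in the program refers to what any symbol means; that gap is exactly what the thought experiment turns on.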
Response to Searle
At first glance, this logic seems airtight, and one might conclude that an intelligent machine is impossible to create. However, to achieve a situation where “people outside the room think there is someone in this small room who understands Chinese,” appropriate outputs must be produced for every possible input, and it is impossible to write a manual covering every input variation. (Creating such a manual was the goal of the second generation of AI, as described in “The History of AI and Deep Learning,” but it was never achieved.) We can therefore conclude that Searle’s logic is flawed: either the situation cannot be set up in the first place, or the claim that “meaning cannot be understood because the operation is formal” does not follow.
Turing was talking about intelligence in the first place, whereas Searle’s argument substitutes a discussion of meaning, which is itself a leap in logic. However, language is the primary means of expressing and communicating intelligence, and the ability to understand language, grasp its meaning, and share complex ideas can be said to result from advanced cognitive functions, that is, from intelligence. Logical thinking, which is part of the function of intelligence, can also be said to be realized through the understanding of meaning.
As described in “Handling the Meaning of Symbols with Computers,” meaning is not the symbol itself, which is the object of formal symbol manipulation, but the set of relationships associated with the symbol that constitutes the semantic triangle, as shown below.
In the semantic triangle, the symbol “CAT” is assigned to a real cat, and what “CAT” means is further associated with it. The symbol “CAT” does not itself mean a cat; it is merely a symbol associated with that meaning.
To deal with meaning is to deal with the “hidden information” attached to the symbol; the formal symbol manipulation Searle describes only manipulates the symbol and never deals with meaning directly.
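The three corners of the semantic triangle can be sketched as a small data structure that keeps them separate. The attribute names and example features below are illustrative assumptions, not a fixed notation.

```python
# A sketch of the semantic triangle as a data structure.
# Attribute names and example features are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Sign:
    symbol: str      # the written form, e.g. "CAT" (all that formal manipulation sees)
    referent: str    # the real-world object the symbol is assigned to
    concept: set = field(default_factory=set)  # the "hidden information" (meaning)

cat = Sign(symbol="CAT", referent="a real cat",
           concept={"animal", "has fur", "meows"})

# Formal symbol manipulation touches only cat.symbol; handling meaning
# requires the associated cat.concept and cat.referent as well.
print(cat.symbol)
print(sorted(cat.concept))
```

The point of the sketch is that a program shuffling only the `symbol` field never touches `referent` or `concept`, which is where the text locates meaning.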
Is there a way to handle meaning through computation (formal manipulation)?
As described in “Introduction to Programming in Python (1) What is Programming?”, a computer can do nothing but computation (formal manipulation). Artificial intelligence is therefore not possible without the ability to handle meaning through computation (formal manipulation).
Here, let us consider how to achieve this.
First, as mentioned above, to handle meaning is to handle the “hidden information” associated with symbols. Machines can handle hidden information by devising models such as those described in “Overview of Hidden Markov Models, Various Applications, and Implementation Examples” and elsewhere. The deep-learning models described in “Automatic Generation by Machine Learning” can also be seen as handling hidden information, viewed as models of sequential information.
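As a minimal illustration of modeling hidden information, the sketch below scores a sequence of surface symbols with the forward algorithm of a two-state hidden Markov model. The states, symbols, and probabilities are invented for illustration, not taken from the cited article.

```python
# A two-state HMM sketch: hidden states stand in for the "hidden
# information" behind surface symbols. All numbers are illustrative.
states = ["Greeting", "Question"]
start = {"Greeting": 0.6, "Question": 0.4}
trans = {"Greeting": {"Greeting": 0.7, "Question": 0.3},
         "Question": {"Greeting": 0.4, "Question": 0.6}}
emit = {"Greeting": {"hello": 0.8, "what": 0.2},
        "Question": {"hello": 0.1, "what": 0.9}}

def forward(observations):
    """Forward algorithm: total probability of the observed symbol sequence."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["hello", "what"]))  # likelihood of the surface symbols
```

The observed words are the only things the program manipulates directly; the hidden states are never observed, yet they shape every probability the model assigns.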
In addition to such means of open-ended generation, what is needed is the introduction of “purpose,” as described in “Life as Information – Purpose and Meaning.” Specifically, one approach is to introduce David Papineau’s cognitive model of an agent that has needs, with rules of the form “If C and D, then R,” in contrast to the conventional cognitive model of “If C, then R.” Adding the agent’s internal state to a conventional rule-based expert system gives direction to otherwise unregulated generation.
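The shift from “If C, then R” to “If C and D, then R” can be sketched by adding an internal-need field to each rule. The rule contents and names below are illustrative assumptions, not Papineau’s own formulation.

```python
# A sketch of "If C and D, then R": a rule fires only when both the
# external condition C and the agent's internal need D hold.
# Rule contents are illustrative assumptions.
rules = [
    {"condition": "food_visible", "need": "hungry", "response": "approach_food"},
    {"condition": "food_visible", "need": "tired",  "response": "ignore_food"},
]

def select_response(condition, need):
    """Return the response R whose condition C and need D both match."""
    for rule in rules:
        if rule["condition"] == condition and rule["need"] == need:
            return rule["response"]
    return None  # no rule fires

print(select_response("food_visible", "hungry"))  # -> approach_food
```

The same external condition now yields different behavior depending on the agent’s state, which is what gives the system a direction, or purpose, that a bare “If C, then R” rule base lacks.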
As a concrete example of the above, consider an expert system combined with a finite state machine (FSM). The expert system is built by encoding experts’ knowledge and rules into a program, and the FSM keeps track of which state the system is in, modeling the expert system’s behavior in terms of specific states and transitions.
An example implementation in Python looks like the following. This example combines an expert system for simple medical diagnosis with an FSM for managing patient states.
class ExpertSystem:
    def __init__(self):
        # Expert-system knowledge base
        self.rules = {
            'Symptom1': 'Disease1',
            'Symptom2': 'Disease2',
            # ... other rules
        }

    def diagnose(self, symptoms):
        # Diagnose a disease from the first symptom that matches a rule
        for symptom in symptoms:
            if symptom in self.rules:
                return self.rules[symptom]
        return 'Unknown Disease'


class FSM:
    def __init__(self, initial_state):
        # Initial state of the finite state machine
        self.state = initial_state

    def transition(self, action):
        # State transitions of the FSM
        if action == 'Treatment1':
            self.state = 'State2'
        elif action == 'Treatment2':
            self.state = 'State3'
        # ... other transition rules

    def get_state(self):
        return self.state


# Combining the expert system and the FSM
class MedicalSystem:
    def __init__(self):
        self.expert_system = ExpertSystem()
        self.fsm = FSM(initial_state='State1')

    def perform_diagnosis(self, symptoms):
        # Diagnose with the expert system
        disease = self.expert_system.diagnose(symptoms)
        # Perform the corresponding state transition
        self.fsm.transition('Treatment1' if disease == 'Disease1' else 'Treatment2')
        return disease, self.fsm.get_state()


# Test
medical_system = MedicalSystem()
patient_symptoms = ['Symptom1', 'Symptom3']
diagnosis, current_state = medical_system.perform_diagnosis(patient_symptoms)
print(f'Diagnosis: {diagnosis}')
print(f'Current State: {current_state}')
In this example, the ExpertSystem class is responsible for the expert-system part, the FSM class controls the finite state machine, and the MedicalSystem class combines them to perform medical diagnosis and patient state transitions.
To make these more flexible, the approaches described in “Strategies for Similarity Matching Methods (7) Improvement of Alignment and Disambiguation” and “Overview of Automatic Sentence Generation Using Huggingface” can be applied: similarity matching using machine learning, and machine learning for generative systems.
Rodney Brooks of MIT states that his “academic motivation” is to create a fully autonomous robot (creature) that can coexist with humans in the world and be recognized by humans as an intelligent being in its own right, with the necessary condition that the creature “have some purpose for its survival.”
Building an artificial intelligence system based on these approaches is considered one effective means of doing so.
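As one simple illustration of loosening exact-match rule lookup with similarity, the sketch below uses Python’s standard difflib rather than the machine-learning approaches cited above; the symptom names and the threshold are assumptions.

```python
# A sketch of fuzzy rule lookup: near-miss inputs still map to a known
# symptom via character-level similarity. Names and threshold are
# illustrative assumptions; a real system might use learned embeddings.
import difflib

rules = {"headache": "Disease1", "fever": "Disease2"}

def diagnose_fuzzy(symptom, threshold=0.6):
    """Match the input against known symptoms by string similarity."""
    match = difflib.get_close_matches(symptom, rules.keys(), n=1, cutoff=threshold)
    return rules[match[0]] if match else "Unknown Disease"

print(diagnose_fuzzy("headach"))    # typo still resolves to a known rule
print(diagnose_fuzzy("dizziness"))  # nothing close enough
```

Replacing string similarity with embedding similarity from a learned model follows the same pattern: score the input against the rule keys and fire the best match above a threshold.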
Reference
1. Foundational Works on the Turing Test
Alan Turing (1950)
“Computing Machinery and Intelligence”
Mind, Vol. 59, No. 236, pp. 433–460.
- The original paper that introduced the Imitation Game, later known as the Turing Test.
- Shifts the question from “Can machines think?” to behavioral indistinguishability from humans.
- Addresses objections such as consciousness, mathematical limits, learning machines, and theological concerns.
Why it matters:
This paper defines the behavioral criterion that still frames AI evaluation and debate today.
Jack Copeland (ed.)
The Turing Test: The Elusive Standard of Artificial Intelligence
Springer, 2004.
- A comprehensive academic analysis of the Turing Test.
- Includes historical context, philosophical critiques, and modern interpretations.
- Explores whether the test is sufficient, necessary, or misguided.
2. Searle’s Critique: The Chinese Room Argument
John R. Searle (1980)
“Minds, Brains, and Programs”
Behavioral and Brain Sciences, Vol. 3, pp. 417–457.
- Introduces the Chinese Room Argument.
- Argues that symbol manipulation (syntax) alone cannot produce understanding (semantics).
- Concludes that passing the Turing Test does not imply genuine understanding or consciousness.
Key distinction:
- Strong AI: A program literally has a mind.
- Weak AI: A program simulates mental processes but does not understand.
John Preston & Mark Bishop (eds.)
Views into the Chinese Room: New Essays on Searle and Artificial Intelligence
Oxford University Press, 2002.
- A collection of responses to Searle, including:
  - Systems Reply
  - Robot Reply
  - Brain Simulator Reply
- Includes Searle’s counter-responses.
This book shows that the Chinese Room is not a single argument, but a long-standing philosophical battleground.
3. Philosophy of Mind and AI (Broader Context)
Douglas Hofstadter & Daniel Dennett (eds.)
The Mind’s I: Fantasies and Reflections on Self and Soul
Basic Books, 1981.
- Explores consciousness, self-reference, and intelligence.
- Contains discussions directly related to Turing, Searle, and computational models of mind.
- Often used as a bridge between cognitive science and philosophy.
Daniel C. Dennett
Consciousness Explained
Little, Brown and Company, 1991.
- Critiques the idea of a “central understanding” or inner observer.
- Argues for a functional and distributed view of consciousness.
- Often positioned as philosophically opposed to Searle.
Roger Penrose
The Emperor’s New Mind (1989)
Shadows of the Mind (1994)
- Argues that human consciousness is non-computational.
- Uses Gödel’s incompleteness theorem and speculative physics.
- Frequently cited in AI skepticism discussions.
4. Contemporary and Ethical Perspectives
David J. Gunkel
The Machine Question: Critical Perspectives on AI, Robots, and Ethics
MIT Press, 2012.
- Explores whether machines can or should be moral agents.
- Moves beyond intelligence toward responsibility, ethics, and rights.
Brian Christian
The Alignment Problem
W. W. Norton & Company, 2020.
- Examines how modern AI systems fail in ways not predicted by classic debates.
- Shows how Turing-style evaluation is insufficient for real-world AI behavior.
5. Conceptual Map (How These Fit Together)
- Turing Test → Behavioral criterion for intelligence
- Searle’s Chinese Room → Critique of behavior-only evaluation: is intelligence merely doing the right things, or does it require understanding?
- Modern AI → Revives the debate: systems can pass narrow tests but still fail semantically, ethically, or contextually.
- The Turing Guide
- The Master Algorithm