Conversation and AI (Thinking from the Turing Test)



In the previous article, we discussed “What is meaning?” through the lens of robots.

In this article, I will start with the Turing Test, a famous study of machine intelligence. The Turing Test is a game devised in 1950 by Alan Turing, who established the theory of computation, to determine whether a machine can have intelligence and, if so, how that should be judged. It has since come to be called the Turing Test.

In the Turing Test, a person A acts as the judge, while a person B and a machine are placed out of sight, connected to A only by a display and a keyboard. B and the machine each try to convince A that they are human, and A must judge whether the party at the other end is human, relying solely on the content of the conversation shown on the display. If A judges the machine to be human at a significant rate, the machine is deemed to have intelligence.

The question arises: can intelligence really be judged by mere “dialogue”? In fact, dialogue is a treasure trove of context-dependence and flexibility: topics jump freely from one place to another, earlier remarks are revisited, and the preceding conversation builds up a shared understanding and an implicit context. To carry on a conversation, you must also be able to ask about the meaning of an unfamiliar word the other person uses and add it to your own vocabulary. Since this is a very high level of intellectual activity, dialogue is appropriate as a test.

One artificial intelligence that performed quite well against the Turing Test is ELIZA, a natural language system developed by Weizenbaum in the 1960s. It is a chatbot-like interactive system that plays the role of a psychotherapist, and some of the judges, who took the role of patients, believed until the very end that they were talking to a human; it is even rumored that it sometimes resolved the judges’ worries and “cured” them.

The mechanism of ELIZA is a “syntactic approach”: the words input by the judge are analyzed with natural language processing, and the input sentence is rewritten by fixed rules according to the words it contains. In other words, it simply rewrites sentences based on how symbols are arranged (setting aside what the language means, it deals only with how symbols are arranged to form a sentence, and how the symbols making up one sentence can be rearranged or replaced to produce another). It is not a “semantic approach”, which would understand the input words and respond on that basis (that is, determine what the symbols mean and compose the meaning of the whole sentence from the meanings of its semantic components).
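To make the idea of a purely syntactic approach concrete, here is a minimal Python sketch of an ELIZA-style rewrite-rule chatbot. The patterns, response templates, and pronoun swaps below are hypothetical illustrations in the spirit of Weizenbaum’s DOCTOR script, not the original rules: the program matches surface patterns in the input and echoes fragments back, without any representation of meaning.

```python
import random
import re

# Hypothetical rewrite rules: each pattern captures part of the input,
# and the response template reuses it after swapping pronouns.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (re.compile(r"(.*)\?"),
     ["Why do you ask that?", "What do you think?"]),
]

FALLBACKS = ["Please go on.", "I see. Can you elaborate on that?"]

# Simple first-/second-person swaps applied to the captured fragment.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are",
                 "you": "I", "your": "my"}


def reflect(fragment: str) -> str:
    """Swap pronouns so the captured text reads naturally when echoed back."""
    words = fragment.lower().split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)


def respond(user_input: str) -> str:
    """Produce a reply by purely syntactic pattern matching and rewriting."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I need a holiday"))      # e.g. "Why do you need a holiday?"
    print(respond("My job is stressful"))   # e.g. "Tell me more about your job is stressful."
```

As the second example output shows, the program never checks whether its rewritten sentence makes sense; it only manipulates the arrangement of symbols, which is exactly the point of the syntactic approach described above.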

Even if ELIZA’s intelligence were sufficiently evolved (responding flexibly to various inputs from the judges and giving varied answers), that would not establish that it understands meaning. In other words, the Turing Test is a “test of intelligence”, but not a “test of understanding of meaning”. It also suggests that a system with practically sufficient intelligence can be built simply by using a smarter syntactic approach.

When thinking about artificial intelligence systems, it is important to distinguish carefully between this “intelligence” and this “understanding of meaning”. Creating a thinking machine and creating an intelligent machine are two completely different things.
