History of Digital Game AI (1) (Intelligent Man-Machine Interaction)


In my previous article on “Meaning”, I discussed the topic from the perspective of interaction with robots and PCs. This kind of intelligent human-machine interaction has a long history in the world of games. In this article, as a foundation for thinking about it, I summarize that history based on “Digital Game Textbook: The Latest Trends in the Game Industry You Should Know”.

The contents are as follows:

  1. History of Digital Game AI (1) (this article): Reflexive AI, FSM, GA, BT
  2. History of Digital Game AI (2): Autonomous Agents, C4, Hierarchical FSM
  3. Technologies for Digital Game AI (Spatial Recognition): Knowledge representation
  4. Technologies for Digital Game AI (Time Awareness): Past memory, future planning
  5. Behavior Tree Overview

First of all, in early digital games (1970s-1980s), AI was merely “part of the game stage mechanism”. For example, in “Space Invaders”, released in 1978, the invaders would descend at predetermined timings and make predetermined moves regardless of the player’s actions. In other words, what players did in early digital games was “read enemy patterns and attack to defeat them”.

The next generation of AI was “reflexive”. Reflexive AI is AI that moves in response to the environment and the player character’s actions. It is a combination of predetermined responses, such as moving away when the player approaches, or using a shield to guard against a sword swing. The more responses you add, the more complex the AI’s movements become (these are simple reactive agents). The strategy for the player is to learn the AI’s multiple movement patterns and defeat it.
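As a minimal sketch of such a simple reactive agent (in Python, with illustrative names like `Perception` and `decide_action` that are not from any particular game), the behavior can be expressed as a fixed list of condition-to-action rules checked every frame:

```python
# A minimal sketch of a reflexive (simple reactive) agent: a fixed list of
# condition -> action rules checked every frame. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Perception:
    player_distance: float
    player_attacking: bool

def decide_action(p: Perception) -> str:
    # Rules are checked top to bottom; the first match wins.
    if p.player_attacking and p.player_distance < 2.0:
        return "raise_shield"   # guard against a sword swing
    if p.player_distance < 5.0:
        return "retreat"        # move away when the player approaches
    return "patrol"             # default behavior when nothing triggers

# Example: one frame of sensory input
print(decide_action(Perception(player_distance=1.5, player_attacking=True)))  # raise_shield
```

Adding more rules makes the agent look more complex from the outside, but it still has no internal state or memory, which is exactly the limitation that later architectures address.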

During the same period (1980s), the “task system” was used to control in-game events through tasks. In this system, AI control, movement control of each object, and destruction control are all processed as per-frame tasks within the task system.
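The following is a rough sketch of this idea, assuming a simple `TaskSystem` that runs every registered task once per frame; the class and method names are illustrative, not those of any actual engine:

```python
# A minimal sketch of a "task system": every in-game job (AI update, movement,
# destruction) is registered as a task, and all tasks are executed once per frame.

class Task:
    def update(self) -> bool:
        """Run one frame of work; return False when the task should be removed."""
        raise NotImplementedError

class MoveTask(Task):
    def __init__(self, obj, steps):
        self.obj, self.steps = obj, steps
    def update(self) -> bool:
        self.obj["x"] += 1
        self.steps -= 1
        return self.steps > 0   # finished tasks are dropped from the list

class TaskSystem:
    def __init__(self):
        self.tasks: list[Task] = []
    def add(self, task: Task):
        self.tasks.append(task)
    def run_frame(self):
        # Execute every registered task once, pruning the finished ones.
        self.tasks = [t for t in self.tasks if t.update()]

world = {"x": 0}
system = TaskSystem()
system.add(MoveTask(world, steps=3))
for _ in range(5):
    system.run_frame()
print(world)  # {'x': 3}
```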

In the 1990s, games became 3D, and as a result the complexity of input and output information increased exponentially. To cope with this, the internal processing had to change from simple reflexive algorithms to more complex ones, and in conjunction with the mainstreaming of high-level languages (mainly C) in development environments during the same period, various complex AI algorithms came to be implemented.

One of these algorithms is the Finite State Machine (FSM), which defines multiple states and transition conditions between them, and switches states when the conditions are met. In particular, FSMs in which part of the state graph forms a loop are called cyclic FSMs, and they came to be used in many FPS (First Person Shooter) games (e.g. Quake, Halo, No One Lives Forever).
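A minimal sketch of such an FSM might look like the following, where the states (`patrol`, `chase`, `attack`) and transition conditions are assumptions for illustration; note that chase and attack transition back and forth, forming a loop as in a cyclic FSM:

```python
# A minimal sketch of a finite state machine for an enemy character.
# States, transition conditions, and sensor values are illustrative.

TRANSITIONS = {
    # state:    [(condition, next_state), ...]
    "patrol":   [(lambda s: s["sees_player"], "chase")],
    "chase":    [(lambda s: s["player_distance"] < 2.0, "attack"),
                 (lambda s: not s["sees_player"], "patrol")],
    "attack":   [(lambda s: s["player_distance"] >= 2.0, "chase")],
}

def step(state: str, sensors: dict) -> str:
    # Check each outgoing transition; take the first whose condition holds.
    for condition, next_state in TRANSITIONS[state]:
        if condition(sensors):
            return next_state
    return state  # no condition matched: stay in the current state

state = "patrol"
state = step(state, {"sees_player": True, "player_distance": 6.0})  # -> chase
state = step(state, {"sees_player": True, "player_distance": 1.5})  # -> attack
print(state)
```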

On the other hand, a one-way FSM without cyclic transition paths is called a Directed Acyclic Graph (DAG) FSM, and a Hierarchical FSM (HFSM) constructed hierarchically from such FSMs is also used.

(Figure: The HFSM used in Halo)
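As a rough sketch of the hierarchical idea (the state names below are illustrative assumptions, not taken from Halo’s actual implementation), a parent state such as `combat` can own its own child FSM:

```python
# A rough sketch of a hierarchical FSM (HFSM): a parent state owns a child FSM,
# so "combat" internally switches between "chase" and "attack" while the top
# layer only decides between "combat" and "patrol". All names are assumptions.

def top_level(state, sensors):
    if state == "patrol" and sensors["sees_player"]:
        return "combat"
    if state == "combat" and not sensors["sees_player"]:
        return "patrol"
    return state

def combat_level(sub_state, sensors):
    # Child FSM, only consulted while the parent is in "combat".
    if sub_state == "chase" and sensors["player_distance"] < 2.0:
        return "attack"
    if sub_state == "attack" and sensors["player_distance"] >= 2.0:
        return "chase"
    return sub_state

state, sub = "patrol", "chase"
sensors = {"sees_player": True, "player_distance": 1.0}
state = top_level(state, sensors)        # patrol -> combat
if state == "combat":
    sub = combat_level(sub, sensors)     # chase -> attack
print(state, sub)
```

Splitting the logic this way keeps each layer small, which is what makes the hierarchical form easier to extend than one large flat state graph.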

An FSM can be extended simply by adding states and transitions, which greatly improved extensibility compared with earlier approaches, and many GUI tools became available for building FSMs.

Another aspect of these algorithms is learning, although few learning techniques have been built into games. One of them is the Genetic Algorithm (GA), a method of statistically evolving a group of AIs in a particular direction. Specifically, (1) each AI is given parameters that customize its attributes and behavior as genes (e.g., physical strength, attack power, attack hit rate, ease of escape, etc.); (2) all of the AIs are thrown into the game world and made to actually act (simulation), and after a certain period of time they are taken out and their evaluation scores are calculated; (3) based on those scores, the parameters of the better AIs are crossed over to create the next generation of AIs, and the cycle is repeated.
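A minimal sketch of this cycle, with an assumed fitness function and illustrative parameter names (not from any particular game), might look like this:

```python
# A minimal sketch of a genetic algorithm over AI parameter "genes"
# (attack power, escape tendency, etc.). Fitness function is a stand-in
# for actually running the AIs in the game world and scoring them.

import random

def random_genes():
    return {"attack": random.random(), "caution": random.random()}

def evaluate(genes):
    # (2) Stand-in for simulating the AI in the game and computing its score.
    return genes["attack"] * 2 + (1 - abs(genes["caution"] - 0.5))

def crossover(a, b):
    # (3) Combine the parameters of two well-scoring parents.
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(genes, rate=0.1):
    return {k: min(1.0, max(0.0, v + random.uniform(-rate, rate)))
            for k, v in genes.items()}

population = [random_genes() for _ in range(20)]   # (1) initial AIs with random genes
for generation in range(10):
    population.sort(key=evaluate, reverse=True)
    parents = population[:10]                      # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children                # next generation
print(max(evaluate(g) for g in population))
```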

Although only in a few cases, AI using neural networks is also used, including NEAT (NeuroEvolution of Augmenting Topologies), an algorithm that combines neural networks with the evolutionary approach above. There are also more advanced methods that use reinforcement learning; for example, Duncan, which implements the MIT Media Lab’s C4 architecture (the C4 architecture will be discussed in the next article), is an example of such a method.

In the next article, I would like to discuss the autonomous agents that appeared in the generation after 2000.
