About Artificial Life and Agent Technologies
Artificial life is life designed and created by humans, and the field that studies life-related systems (life processes and evolution) by simulating them with biochemistry, computer models, and robots. The term “artificial life” was coined by American theoretical biologist Christopher Langton in 1986. Artificial life complements biology in that it attempts to “reproduce” biological phenomena, and it is also referred to as ALife. Depending on the means used, it is called “soft ALife” (software on a computer), “hard ALife” (robotics), or “wet ALife” (biochemistry).
In recent philosophical discussions, life is regarded as one of the triggers for the expression of intelligence, and meaning is seen as arising within relationships out of intention (the directedness of life) (xx semantics). From that point of view, it is a reasonable approach to have a machine (computer) stand in for some function of life and feed the result back into an artificial intelligence system.
In this blog, the following approaches to these ideas are discussed.
Technical Topics
Algorithmic thinking refers to the ability, or the process, of reasoning about logical procedures and approaches when solving problems and executing tasks. It is an important skill when dealing with a variety of complex challenges. “Problem partitioning” in algorithmic thinking is the process of dividing a large problem into a number of smaller sub-problems; this approach breaks complex problems into manageable units, making large tasks easier to understand and allowing each sub-problem to be solved individually and efficiently. Problem partitioning can be seen as the first step in the general problem-solving process, as in the sketch below.
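As a minimal illustration of problem partitioning, the following Python sketch uses merge sort, a classic divide-and-conquer algorithm (the example data is arbitrary): the large sorting problem is split into two smaller sub-problems, each is solved independently, and the partial results are then combined.

```python
# Problem partitioning in miniature: merge sort splits a large sorting problem
# into two smaller sub-problems, solves each, and combines the partial results.

def merge_sort(items):
    """Sort a list by recursively partitioning it into halves."""
    if len(items) <= 1:               # a sub-problem small enough to solve directly
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # solve each sub-problem individually
    right = merge_sort(items[mid:])
    return merge(left, right)         # combine the partial solutions

def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

if __name__ == "__main__":
    print(merge_sort([5, 2, 9, 1, 7, 3]))   # -> [1, 2, 3, 5, 7, 9]
```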
Multi-agent systems using graph neural networks (GNNs) are a suitable approach when multiple agents interact within a graph structure and the relationships and dependencies between agents need to be modelled.
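The following is a rough sketch of the underlying idea in plain NumPy (not any particular GNN library; the adjacency matrix, feature dimension, and untrained weights are illustrative assumptions): each agent aggregates its neighbours' states and updates its own embedding with shared weights, which is the basic message-passing step on which GNN-based multi-agent models are built.

```python
import numpy as np

# Minimal one-layer message passing over an agent graph (illustrative only).
rng = np.random.default_rng(0)

num_agents, dim = 4, 8
adjacency = np.array([[0, 1, 1, 0],      # which agents communicate (assumed)
                      [1, 0, 0, 1],
                      [1, 0, 0, 1],
                      [0, 1, 1, 0]], dtype=float)

states = rng.normal(size=(num_agents, dim))    # per-agent observations
W_self = rng.normal(size=(dim, dim)) * 0.1     # shared, untrained weights
W_msg = rng.normal(size=(dim, dim)) * 0.1

# Average the neighbours' states, then apply a ReLU update with shared weights.
degree = adjacency.sum(axis=1, keepdims=True)
messages = (adjacency @ states) / np.maximum(degree, 1.0)
updated = np.maximum(states @ W_self + messages @ W_msg, 0.0)

print(updated.shape)   # (4, 8): one updated embedding per agent
```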
Generative AI refers to artificial intelligence technologies that generate new content such as text, images, audio and video. As generative AI (e.g. image-generating AI and text-generating AI) produces new content based on given instructions (prompts), the quality and appropriateness of the prompts are key to maximising AI performance.
There are several methods for implementing multi-agent systems with deep reinforcement learning (DRL). The general methods are described below.
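One of the simplest of these methods is the independent-learners approach, in which each agent runs its own reinforcement learning update and treats the other agents as part of the environment. Below is a minimal tabular sketch of that idea (the two-agent coordination game and all hyperparameters are assumed for illustration); in an actual DRL system, the Q-table would be replaced by a neural network.

```python
import random

# Independent learners: each agent keeps its own Q-values and updates them
# from its own reward, treating the other agent as part of the environment.

ACTIONS = [0, 1]

def reward(a0, a1):
    """Toy two-agent coordination game: reward only when both pick the same action."""
    return 1.0 if a0 == a1 else 0.0

q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]   # one Q-table per agent
alpha, epsilon = 0.1, 0.2

def act(agent):
    if random.random() < epsilon:                    # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(q[agent], key=q[agent].get)

for episode in range(2000):
    a0, a1 = act(0), act(1)
    r = reward(a0, a1)
    q[0][a0] += alpha * (r - q[0][a0])               # independent updates
    q[1][a1] += alpha * (r - q[1][a1])

print(q)   # both agents typically converge on the same preferred action
```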
Unity is an integrated development environment (IDE) for game and application development, developed by Unity Technologies and widely used in fields such as games, VR, AR, and simulation. This section describes the integration of Unity with artificial intelligence systems such as CMS, chatbots, ES, machine learning, and natural language processing.
Simulation involves modeling a real-world system or process and executing it virtually on a computer. Simulations are used in a variety of domains, such as physical phenomena, economic models, traffic flows, and climate patterns, and can be built in steps that include defining the model, setting initial conditions, changing parameters, running the simulation, and analyzing the results. Simulation and machine learning are different approaches, but they can interact in various ways depending on their purpose and role.
This section describes examples of adaptations and various implementations of this combination of simulation and machine learning.
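One typical combination is to use a simulation as a data generator and then train a machine learning model as a cheap surrogate for it. The sketch below assumes a toy projectile simulation and fits a polynomial surrogate with NumPy; the dynamics and parameters are illustrative only.

```python
import numpy as np

# Simulation + machine learning: run a simulator to generate data, then fit a
# surrogate model that predicts the simulation outcome from its parameters.

def simulate_projectile(v0, angle_deg, dt=0.01, g=9.81):
    """Toy simulation: horizontal range of a projectile, integrated numerically."""
    angle = np.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
    while y >= 0.0:
        x += vx * dt
        y += vy * dt
        vy -= g * dt
    return x

# Generate training data by sweeping one simulation parameter (launch angle).
angles = np.linspace(10, 80, 30)
ranges = np.array([simulate_projectile(20.0, a) for a in angles])

# Fit a cheap surrogate (polynomial regression) that can replace the simulator.
surrogate = np.poly1d(np.polyfit(angles, ranges, deg=4))

print(round(simulate_projectile(20.0, 45), 2), round(surrogate(45), 2))
```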
An overview of reinforcement learning and an implementation of a simple MDP model in Python will be presented.
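As a taste of what such an implementation can look like (a sketch using an assumed two-state toy problem, not the implementation referenced above), an MDP can be expressed as states, actions, a transition function, and a reward function, from which episodes are sampled under a policy.

```python
import random

# A tiny MDP: the agent can "stay" or "move"; moving into state 1 gives +1.

class SimpleMDP:
    def __init__(self):
        self.states = [0, 1]
        self.actions = ["stay", "move"]

    def transition(self, state, action):
        """Return (next_state, reward) for a state-action pair."""
        if action == "move":
            next_state = 1 - state
            reward = 1.0 if next_state == 1 else 0.0
        else:
            next_state, reward = state, 0.0
        return next_state, reward

    def sample_episode(self, policy, length=5, start=0):
        """Roll out an episode under the given policy."""
        state, history = start, []
        for _ in range(length):
            action = policy(state)
            state, reward = self.transition(state, action)
            history.append((action, state, reward))
        return history

mdp = SimpleMDP()
random_policy = lambda s: random.choice(mdp.actions)
print(mdp.sample_episode(random_policy))
```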
This section describes the method of planning based on the maze environment described in the previous section. Planning requires learning “value evaluation” and “strategy”. To do this, it is first necessary to redefine “value” in a way that is consistent with the actual situation.
Here, we describe an approach using dynamic programming. This approach can be used when the transition function and reward function are known, as in a maze environment. Learning based on the transition function and reward function in this way is called “model-based” learning. The “model” here refers to the environment, whose behavior is determined by the transition function and reward function.
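The following is a minimal value-iteration sketch of this model-based idea, assuming a tiny one-dimensional corridor instead of a full maze (the goal state, step cost, and discount factor are all assumed): because the transition and reward functions are known, values can be computed by repeated Bellman backups without interacting with the environment.

```python
# Model-based planning by dynamic programming (value iteration) on a 1-D corridor.
states = [0, 1, 2, 3]
actions = [-1, +1]          # move left / move right
gamma = 0.9
GOAL = 3

def transition(s, a):
    """Deterministic transition function (the 'model')."""
    return min(max(s + a, 0), 3)

def reward(s, a, s_next):
    """+1 for reaching the goal, a small step cost otherwise."""
    return 1.0 if s_next == GOAL else -0.04

V = {s: 0.0 for s in states}
for _ in range(50):                               # repeated Bellman backups
    new_V = {}
    for s in states:
        if s == GOAL:
            new_V[s] = 0.0                        # terminal state: no future value
            continue
        new_V[s] = max(reward(s, a, transition(s, a)) + gamma * V[transition(s, a)]
                       for a in actions)
    V = new_V

policy = {s: max(actions, key=lambda a: reward(s, a, transition(s, a))
                 + gamma * V[transition(s, a)])
          for s in states if s != GOAL}
print(V)
print(policy)   # every non-goal state should point right, toward the goal
```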
A finite state machine (FSM) is an abstract model of computation that transitions between states in response to an input sequence: following a state-transition diagram, it changes its current state upon receiving an input and generates the appropriate output.
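A minimal FSM sketch in Python, using the classic turnstile example (the states, inputs, and outputs are assumed for illustration): a transition table maps each (state, input) pair to a next state and an output.

```python
# FSM as a transition table: (current state, input) -> (next state, output).
TRANSITIONS = {
    ("locked",   "coin"): ("unlocked", "unlock the arm"),
    ("locked",   "push"): ("locked",   "stay locked"),
    ("unlocked", "push"): ("locked",   "let one person through, then lock"),
    ("unlocked", "coin"): ("unlocked", "already unlocked"),
}

def run_fsm(inputs, state="locked"):
    """Feed an input sequence to the FSM and collect the outputs."""
    outputs = []
    for symbol in inputs:
        state, output = TRANSITIONS[(state, symbol)]
        outputs.append(output)
    return state, outputs

final_state, outputs = run_fsm(["coin", "push", "push"])
print(final_state)   # 'locked'
print(outputs)
```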
Automata theory is a branch of the theory of computation and one of the most important theories in computer science. By studying abstract computational models such as finite state machines (FSMs), pushdown automata, and Turing machines, automata theory is applied to problems in formal languages, formal grammars, computability, and natural language processing. This section provides an overview of automata theory, its algorithms, and various applications and implementations.
- Life from a philosophical point of view
- Mathematical Models of Life
- Artificial Intelligence Simulation and Cellular Automata
- MAS (Multi-Agent Simulation System) in Python (external link)
Due to the impact of the coronavirus pandemic, there has been an increase in studies on the spread of infectious diseases. One line of work uses a method called multi-agent simulation to reproduce the spread of infection. In ordinary predictive analysis, macro-level data is predicted directly; in multi-agent-based forecasting, by contrast, macro-level behavior emerges from the interactions of micro-level data. This method is currently used in the transportation and disaster-prevention fields, but its use on the business side is also expected to increase in the future.
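The following is a minimal agent-based sketch of that micro-to-macro idea (the population size, contact rate, and infection and recovery probabilities are all assumed): each agent is an individual in a susceptible/infected/recovered state, and the macro-level infection curve emerges from random contacts between agents rather than being modelled directly.

```python
import random

# Agent-based sketch of infectious-disease spread (all parameters assumed).
random.seed(0)
NUM_AGENTS, STEPS = 200, 50
INFECTION_PROB, RECOVERY_PROB, CONTACTS_PER_STEP = 0.05, 0.1, 5

# Each agent is a state: 'S' (susceptible), 'I' (infected), 'R' (recovered).
agents = ["I" if i < 3 else "S" for i in range(NUM_AGENTS)]

for step in range(STEPS):
    infected = [i for i, s in enumerate(agents) if s == "I"]
    for i in infected:
        # Each infected agent meets a few random others and may transmit.
        for j in random.sample(range(NUM_AGENTS), CONTACTS_PER_STEP):
            if agents[j] == "S" and random.random() < INFECTION_PROB:
                agents[j] = "I"
        if random.random() < RECOVERY_PROB:
            agents[i] = "R"
    if step % 10 == 0:
        # Macro-level counts emerge from the micro-level interactions above.
        print(step, agents.count("S"), agents.count("I"), agents.count("R"))
```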
NetLogo is a programmable modeling environment for simulating natural and social phenomena. It was created by Uri Wilensky in 1999 and has been continuously developed since then at the Center for Connected Learning and Computer-Based Modeling. NetLogo is well suited for modeling complex systems that evolve over time: the modeler can give instructions to hundreds or thousands of “agents” that operate independently, which allows exploration of the relationship between the micro-level behaviors of individuals and the macro-level patterns that emerge from their interactions.
NetLogo is also an authoring environment that allows learners, instructors, and curriculum developers to create their own models. It is simple enough for learners and instructors, yet advanced enough to be a powerful tool for researchers in a variety of fields.
Intelligent human-machine interaction has long been practiced in the world of games. In this article, we first summarize the history of game AI as a basis for thinking about it, drawing on material from the “Digital Game Textbook: The Latest Trends in the Game Industry You Should Know”.
In the most recent generation (from around 2000 onward), game AI has increasingly been organized into explicit architectures. First came the “agent” architecture, which gradually took shape from the 1980s to the 1990s. This is an AI that fulfills its purpose by being assigned some role; in the game world, it is synonymous with a character.
An “autonomous agent” is an agent that can develop its own goals while judging the surrounding environment and situation; a system built from multiple such interacting agents is a multi-agent system.
The quality of digital game character AI is determined by “how much control over time and space the AI can exercise”. This also means how much it can recognize its surroundings and how much it can construct its actions within a given time range and time scale. The corresponding technologies for each of these are described below.
First, let’s look at the recognition of space and objects. The field of a digital game is far more complex than a Go or Shogi board. To represent such a space, points that serve as markers of locations (waypoints) or triangles (a navigation mesh) are typically laid out, these elements are connected into a network, and the result is processed as a graph.
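As a small sketch of how such a waypoint network is used (the map below is an invented example), once locations become nodes and reachable pairs become edges, spatial reasoning such as path finding reduces to ordinary graph search.

```python
from collections import deque

# Waypoint graph: locations are nodes, reachable pairs are edges.
waypoints = {
    "entrance": ["hallway"],
    "hallway":  ["entrance", "armory", "stairs"],
    "armory":   ["hallway"],
    "stairs":   ["hallway", "rooftop"],
    "rooftop":  ["stairs"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search over the waypoint graph."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

print(shortest_path(waypoints, "entrance", "rooftop"))
# -> ['entrance', 'hallway', 'stairs', 'rooftop']
```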
Next, we discuss techniques for recognition along the time axis. In digital games, it is very important to view AI from the perspective of time. To give AI the ability to recognize the time axis, it first needs a past (memory), which requires securing storage space and deciding on a memory format.
For example, in the game F.E.A.R., memories are formed using a common knowledge format of “location, direction, stimulus, desire, time of information acquisition, and information reliability as seen from the AI” for the objects and targets in a stage. Duncan likewise maintains a time-stamped memory of the objects it perceives; this time-stamped memory allows Duncan to predict when a ball that disappeared behind a wall will emerge from the other side.
Next, there are various approaches to giving AI a sense of the future. The most commonly used of these is goal-oriented (goal-based) planning. Goal orientation is a behavioral principle in which the AI first determines a goal and then designs actions to achieve it. While reactive AI responds to the current environment, goal-oriented AI first sets a goal in the future and then acts toward it. When there is more than one such goal, a decision-making algorithm chooses which goal to pursue before acting.
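A rough sketch of that principle is shown below (the goals, actions, preconditions, and effects are all invented for illustration and are not taken from any particular game): the agent first selects the highest-priority goal, then chains backward through actions whose effects satisfy it.

```python
# Goal-oriented decision making: pick a goal, then plan actions to achieve it.
actions = {
    "pick_up_ammo": {"pre": set(),              "add": {"has_ammo"}},
    "load_weapon":  {"pre": {"has_ammo"},       "add": {"weapon_loaded"}},
    "attack":       {"pre": {"weapon_loaded"},  "add": {"threat_removed"}},
    "eat":          {"pre": set(),              "add": {"not_hungry"}},
}

goals = [("threat_removed", 10), ("not_hungry", 3)]   # (goal, priority)

def plan(goal, state):
    """Naive backward chaining: find an action whose effect satisfies the goal,
    then recursively plan for that action's preconditions."""
    if goal in state:
        return []
    for name, action in actions.items():
        if goal in action["add"]:
            steps, ok = [], True
            for pre in action["pre"]:
                sub = plan(pre, state)
                if sub is None:
                    ok = False
                    break
                steps += sub
            if ok:
                return steps + [name]
    return None

goal = max(goals, key=lambda g: g[1])[0]   # decision making: highest-priority goal first
print(goal, plan(goal, set()))
# -> threat_removed ['pick_up_ammo', 'load_weapon', 'attack']
```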
AlphaGo, a computer Go program developed by Google DeepMind, became the first program to defeat a human professional Go player in an even (no-handicap) game in October 2015. The victory of an artificial intelligence in Go, a field that had been considered the hardest for computers to beat humans, shocked the world, and the appearance of AlphaGo went beyond the outcome of a single match to widely publicize the usefulness of artificial intelligence, triggering a global AI boom. In this article, I discuss the relationship between AI and board games, including Go, drawing also on notes from my reading of the books “Why AlphaGo Beat Humans” and “The Strongest Go AI: AlphaGo Demystified: Its Mechanism from the Perspective of Deep Learning, Monte Carlo Trees, and Reinforcement Learning”.
Behavior Tree is a framework for building complex AI behaviors that also appears in game AI. Originally developed for robotics, it is now used as an improved alternative to hierarchical state machines for designing AI for non-player characters (NPCs) in games such as FPS titles. Its advantages are that it is easy to design and implement, reusable and portable, and able to accommodate large and complex logic. A state machine is defined as a “mathematically abstract model of behavior consisting of a finite number of states, transitions, and actions”. In contrast, a behavior tree is a tree-like structure of mutually nested states, and modularity is enhanced by restricting transitions to these nested states.
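The following is a minimal behavior-tree sketch (the node types are standard, but the NPC logic and blackboard contents are assumed examples): composite Selector and Sequence nodes combine simple condition and action leaves into modular, nested decision logic.

```python
# Minimal behavior tree: Selector/Sequence composites over condition/action leaves.
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return SUCCESS if self.fn(bb) else FAILURE

class Action:
    def __init__(self, name): self.name = name
    def tick(self, bb):
        bb["log"].append(self.name)   # stand-in for actually performing the action
        return SUCCESS

# Assumed NPC logic: attack if an enemy is visible, otherwise patrol.
tree = Selector(
    Sequence(Condition(lambda bb: bb["enemy_visible"]), Action("attack")),
    Action("patrol"),
)

blackboard = {"enemy_visible": False, "log": []}
tree.tick(blackboard)
print(blackboard["log"])   # -> ['patrol']
```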
A multi-agent system is a system composed of multiple interacting computing elements called agents. An agent is a computer system with two important capabilities. First, agents can act at least somewhat autonomously; that is, they can decide for themselves what to do to meet their design goals. Second, they can interact with other agents: they do not merely exchange data, but can engage in cooperation, coordination, negotiation, and other social activities similar to those we engage in on a daily basis.
- Agent-Based Semantic Web Service Composition
The fundamental goal of the Semantic Web is to create a layer on top of the existing Web that allows for highly automated processing of Web content, further enabling the sharing and processing of data by both humans and software. Semantic Web services can be defined as self-sufficient, reusable software components that can be used to perform specific tasks. Here, we focus primarily on agent-based Semantic Web service composition. Multi-agent-based Semantic Web service composition is based on the argument that a multi-agent system can be regarded as a service composition system, in which the different agents involved represent different individual services. Services are viewed as intelligent agent capabilities implemented as self-sufficient software components.
- Frame problem in agent systems
The frame problem in agent systems refers to the difficulty agents have in properly understanding the state of, and changes in, the environment and in making decisions when new information is acquired. Specifically, it arises in cases such as the following.
- Artificial Intelligence and TV Drama
- FSM (finite state machine)
- A Theory of Action for MultiAgent Planning
- A Robust, Qualitative Method for Robot Spatial Learning
- Estimating the Absolute Position of a Mobile Robot Using Position Probability Grids
- Learning to Coordinate Behaviors
- REACTIVE REASONING AND PLANNING
- The Interactive Museum Tour-Guide Robot
- Learning ROS for Robotics programming
Microservices and multi-agent systems
Microservices is an architectural style in which an application is functionally divided into multiple services. An architectural style here means not only the structural design (architecture) itself, but also the development and operation methods, organizational structure and management, and all other aspects of how the system is built and run.
In the following pages of this blog, we will discuss the architecture, development style, testing, maintenance, and other aspects of microservices, and then describe a concrete implementation using Clojure.