Artificial Intelligence Technology

This page summarizes artificial intelligence technology in the areas shown in the map below. Click an item in the table of contents to jump to the corresponding summary.

Theory and Algorithms of Artificial Intelligence

Artificial Intelligence Technology

Artificial Intelligence Technology. Artificial intelligence refers to devices and software designed to behave like humans; the term was coined at the Dartmouth Conference in 1956. Early artificial intelligence was not capable of the large-scale computation we know today, and instead focused on inference, search, and code breaking using conditional branching. Almost 70 years have passed since the Dartmouth Conference, and artificial intelligence technologies have developed extensively; this page focuses in particular on technologies other than machine learning.

Theory and basic algorithms of artificial intelligence technology

Theory, Mathematics and Algorithms of Artificial Intelligence Technology. Artificial intelligence technology is a collection of theory, mathematics, and algorithms that aim to mimic human intellectual activity. Underlying these technologies are mathematical frameworks such as statistics, linear algebra, probability theory, and optimization theory, which support the reasoning, search, and decision making of artificial intelligence. From an algorithmic perspective, a wide range of technologies have been developed, from classical methods such as conditional branching and search algorithms to machine learning, reinforcement learning, and deep learning. This section explores the interrelationships between the theoretical and mathematical foundations underlying artificial intelligence technologies and the algorithms that concretely implement them, focusing on techniques other than machine learning.

Graph Data Algorithm

Graph data processing algorithms and their application to machine learning and artificial intelligence tasks. Graphs are a way to represent connections between objects, and many problems can be transformed into graph problems. Relevant algorithms include search algorithms, shortest-path algorithms, minimum spanning tree algorithms, network flow algorithms, and strongly connected component decomposition. Also described are topics such as DAGs, SAT, LCA, and decision trees, as well as applications such as graph-structured knowledge data processing and Bayesian processing.
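To make the search and shortest-path discussion concrete, here is a minimal breadth-first search sketch for an unweighted graph; the graph and node labels are made up for illustration.

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: shortest path by edge count in an unweighted graph."""
    queue = deque([[start]])  # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

For weighted graphs the same skeleton generalizes to Dijkstra's algorithm by replacing the FIFO queue with a priority queue.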

Automata and state transitions/Petri nets, automatic planning and counting problems

Automata and state transitions/Petri nets, automatic planning and counting problems. Automata theory is an important branch of the theory of computation, with applications to formal languages, computational power, and natural language processing. A finite state machine (FSM) is a computational model that transitions between states in response to inputs and produces outputs. Petri nets, on the other hand, are a method for modeling concurrent systems and are used in industrial control and software development. Furthermore, automatic planning is a technique for automatically generating action sequences that lead to a goal state, studied in AI and robotics. This section details the theory, application examples, and implementation of these techniques.
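A finite state machine as described above can be sketched as a transition table plus a driver loop; the states, alphabet, and accepted language here are an illustrative example.

```python
def run_fsm(transitions, start, accepting, inputs):
    """Drive a finite state machine: apply one transition per input symbol,
    then report whether the final state is accepting."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state in accepting

# Example FSM: accepts binary strings containing an even number of 1s.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(run_fsm(transitions, "even", {"even"}, "1011"))  # False: three 1s
```

The same table-driven structure carries over to Mealy/Moore machines by attaching outputs to transitions or states.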

Hardware and AI
Hardware approaches to machine learning – FPGA, optical computing, quantum computing

Hardware approaches to machine learning – FPGA, optical and quantum computing. Modern computers primarily use electricity to perform calculations, but steam, light, magnetism, and quantum phenomena have also been used. This section first explains the basic principles of computers, then discusses machine learning on the Raspberry Pi, followed by examples of dedicated computers using FPGAs and ASICs. Finally, quantum computers and optical computers are touched upon as next-generation technologies.

Natural Language Processing and Knowledge Utilization

Natural Language Processing Technology

Natural Language Processing Technology. Language is a means of communication between humans; people acquire it naturally, but it is very difficult for computers to handle. Natural language processing is the field of research in which computers are used to handle language. Early natural language processing was rule-based, but since the late 1990s, statistical methods using actual language data have become mainstream. Here, we review the philosophical, linguistic, and mathematical aspects of natural language; describe natural language processing techniques in general, language similarity, and tools and programming implementations for use on computers; and explain how to apply them to actual tasks.

Knowledge Data and its Utilization

Knowledge Information Processing Technology. How to handle knowledge as information is a central issue in artificial intelligence technology, and various methods have been examined since the invention of the computer. This section discusses how to convert knowledge from natural language into computer-processable information, and describes the definition of knowledge, Semantic Web technologies and ontologies, predicate logic based on mathematical logic, logic programming using Prolog, and answer set programming and its applications.

Ontology Technology

Ontology Technology. An ontology is a formal model for systematizing and structuring data in information science, representing concepts, things, attributes, and relationships in a shareable format. This improves data consistency and interoperability in information retrieval, database design, knowledge management, natural language processing, artificial intelligence, and the Semantic Web. In addition, domain-specific ontologies, which are specific to a particular field, are also used for information sharing and integration. In this section, we discuss the use of ontologies from an information engineering perspective.

Semantic Web Technology

Semantic Web Technology. The Semantic Web is a project to evolve the WWW from a “web of documents” to a “web of data” by developing standards and tools that handle the meaning of web pages and make it usable for DX and AI tasks. In this section, we discuss Semantic Web technologies, ontology technologies, and papers presented at the ISWC (International Semantic Web Conference).

Reasoning Technology

Reasoning Techniques. Inference techniques, defined as methods that trace relationships between facts, include deduction and four non-deductive methods (induction, projection, analogy, and abduction). Classical approaches include forward and backward inference, while machine learning methods include relational learning, decision trees, sequential pattern mining, and probabilistic generative methods. In this section, we discuss inference techniques including expert systems, SAT problems, answer set programming, and inductive logic programming.
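Forward inference as mentioned above can be sketched as a fixed-point loop over simple if-then rules; the facts and rules below are made up for illustration.

```python
def forward_chain(facts, rules):
    """Forward inference: repeatedly fire rules (premises -> conclusion)
    until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"bird"}, "has_wings"),
    ({"has_wings", "light"}, "can_fly"),
]
print(sorted(forward_chain({"bird", "light"}, rules)))
# ['bird', 'can_fly', 'has_wings', 'light']
```

Backward inference runs the same rules in the opposite direction, starting from a goal and recursively checking whether its premises can be established.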

Agents and Autonomous AI

Artificial Life and Agent

Artificial Life and Agent Technology. Artificial Life (ALife) is a field that studies life processes and evolution by simulating human-designed life using biochemistry, computer models, and robots. Named by Christopher Langton in 1986, it is classified by methodology into “soft ALife” (software), “hard ALife” (robotics), and “wet ALife” (biochemistry). In recent philosophy, life is considered to be the occasion for the expression of intelligence, and meaning arises from intention within relationships. From this perspective, feeding artificial life back into AI systems is considered a reasonable approach, and it is discussed in detail here from philosophical, mathematical, and artificial intelligence perspectives.

Autonomous artificial intelligence and self-extending machines

Autonomous Artificial Intelligence and Self-Extending Machines. Autonomous AI is AI that can set its own goals and make decisions and take actions based on environmental information; it is characterized by autonomy, learning ability, adaptability, safety, and ethics. Applications include self-driving cars, robotics, financial trading, space exploration, and medical assistance, which utilize related technologies such as reinforcement learning and deep learning. Self-extending machines are AI systems that improve and evolve their own capabilities and structures; they are characterized by learning-based evolution, physical self-improvement, software updates, and goal adaptability. Applications include robotics, medicine, automated diagnostics, and space exploration, where evolutionary algorithms and self-healing techniques are used as related technologies.

Chatbots and Question & Answer Technology

Chatbots and Question-and-Answer Technology. Chatbot technology is used as a general-purpose user interface in various business fields, and it is an area many companies are entering. Built on question-and-answer technology, it is not just a user interface technology; it also draws on advanced artificial intelligence and machine learning technologies such as natural language processing, inference, deep learning, reinforcement learning, and online learning. At present, however, many chatbots do not make full use of these technologies and remain simple and rule-based. This section discusses a variety of topics regarding chatbots and question-and-answer technology, from origins, business aspects, and technical overviews including the latest approaches, to specific implementations that can be used in practice.
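The rule-based chatbots mentioned above amount to pattern matching over canned responses; the rule table and replies below are hypothetical examples, not any product's actual rules.

```python
import re

# Hypothetical rule table: regex pattern -> canned response.
RULES = [
    (r"\bhello\b|\bhi\b", "Hello! How can I help you?"),
    (r"\bhours?\b|\bopen\b", "We are open 9:00-18:00 on weekdays."),
    (r"\bprice\b|\bcost\b", "Pricing details are on our plans page."),
]

def reply(message):
    """Return the response of the first matching rule, with a fallback."""
    text = message.lower()
    for pattern, response in RULES:
        if re.search(pattern, text):
            return response
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("What is the price?"))  # Pricing details are on our plans page.
```

Rule order matters: the first match wins, so more specific patterns should come before generic greetings. More advanced chatbots replace this lookup with intent classification and answer retrieval learned from data.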

Data Processing and Visualization

User Interface and Data Visualization

User Interface and Data Visualization Technology. Processing data with computers ultimately means creating value by making the structure of the data visible. Data admits multiple perspectives and interpretations, and a well-designed user interface is essential for visualizing it. This section describes examples of various user interfaces, based mainly on papers presented at conferences.

Workflow & Services

Workflow & Service Technology. A compilation of papers on service platforms, workflow analysis, and real-world business applications presented at various conferences, describing Semantic Web-based service platforms in business domains such as healthcare, law, manufacturing, and science.

Image Processing Technology

Image Information Processing Technology. With the development of modern Internet technology and smartphones, a vast number of images exist on the Web, and image recognition technology is needed to create new value from them. This technology requires expertise in image-specific constraints, pattern recognition, machine learning, and further applications, and covers a wide range of areas. In addition, the success of deep learning has led to a rapid increase in research on image recognition, making it difficult to grasp the whole picture. Here, we describe the theory and algorithms of image information processing techniques, the practice of deep learning using Python/Keras, and approaches using sparse and stochastic generative models.

Speech Recognition

Speech Recognition Technology. Machine learning techniques play an important role in signal processing, especially as applied to sensor data and speech signals, which are one-dimensional data that vary over time. Various machine learning techniques, including deep learning, are used for speech signal recognition. In this section, we discuss speech recognition techniques based on A/D conversion, the Fourier transform, dynamic programming (DP), hidden Markov models (HMM), and deep learning, along with applications such as the mechanisms of natural language and speech, speaker adaptation, speaker recognition, and noise-robust speech recognition.
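Dynamic programming appears in classical speech recognition as dynamic time warping (DTW), which aligns two sequences that vary in speed; this is a minimal sketch with made-up feature sequences, not a full recognizer.

```python
def dtw_distance(a, b):
    """Dynamic time warping: minimum-cost alignment of two 1-D sequences,
    computed by dynamic programming over an (n+1) x (m+1) cost table."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match step
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Same shape spoken at different speeds aligns with zero cost.
print(dtw_distance([1, 2, 3, 3, 2], [1, 2, 2, 3, 2]))  # 0.0
```

In practice the sequences would be frames of acoustic features (e.g. MFCCs) rather than scalars, and HMMs or neural networks replace raw template matching.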

Geospatial Information Processing

Geospatial Information Processing Technology. Geospatial information refers to location information, or information linked to location; it is estimated that 80% of the information handled by government is related to location. By utilizing location information, it is possible to plot information on a map to see its distribution, guide people to their destinations using GPS data, and track their trajectories. This information can also be used to provide services based on past, present, and future events. Utilizing geospatial information will advance scientific discovery, business development, and the solution of social problems. Specific uses are described in combination with QGIS, R, machine learning tools, and Bayesian models.
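A basic building block when working with GPS coordinates is great-circle distance; here is a haversine sketch (the coordinates are approximate values chosen for illustration).

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (latitude, longitude) points in km,
    using the haversine formula on a spherical Earth model."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Approximate coordinates of Tokyo Station and Osaka Station.
print(round(haversine_km(35.681, 139.767, 34.702, 135.495), 1))
```

The spherical model is accurate to within about 0.5%; geodesic libraries use an ellipsoidal Earth when higher precision is needed.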

Sensor Data and IoT

Sensor Data & IoT Technology. The use of sensor information is central to IoT technology, and the focus here is on time-varying, one-dimensional data. IoT approaches include detailed analysis of measurement targets using individual sensors and anomaly detection using multiple sensors. Specific technologies discussed include IoT standards (such as WoT), statistical processing of time-series data, probabilistic approaches using hidden Markov models, sensor placement using submodular optimization, hardware control such as BLE, and smart-city-related findings.

Anomaly Detection and Change Detection

Anomaly Detection and Change Detection Techniques. In machine learning, anomaly detection is a technique for detecting anomalies that deviate from normal conditions, while change detection is a technique for detecting changes in conditions. They are used to detect anomalous behavior such as manufacturing line failures, network attacks, and fraudulent financial transactions. Techniques for anomaly and change detection include Hotelling's T² method, Bayesian methods, neighborhood methods, mixture distribution models, support vector machines, Gaussian process regression, and sparse structure learning; these approaches are described here.
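As a minimal illustration of Hotelling's T² method, here is the univariate case: the score is the squared deviation from the training mean scaled by the training variance. The data and threshold are made up; in practice the threshold is taken from a chi-squared quantile, and the multivariate version uses the Mahalanobis distance.

```python
from statistics import mean, pvariance

def hotelling_t2(train, x):
    """Univariate Hotelling T^2 anomaly score for observation x,
    given training data assumed to represent normal behavior."""
    mu = mean(train)
    var = pvariance(train)
    return (x - mu) ** 2 / var

normal = [10.1, 9.8, 10.0, 10.2, 9.9, 10.0]  # hypothetical sensor readings
threshold = 9.0  # roughly a 3-sigma rule; chosen here for illustration
for x in [10.1, 13.0]:
    score = hotelling_t2(normal, x)
    print(x, round(score, 2), "anomaly" if score > threshold else "normal")
```

Because the score grows quadratically with the deviation, a reading far outside the normal band (13.0 here) produces a very large score, while a typical reading stays near zero.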

Stream Data Technology

Machine learning and system architecture for data streams (time-series data). Modern society is full of dynamic data, with huge amounts generated in factories, plants, transportation, economics, social networks, and other areas. For example, factory sensors, mobile data, and social networks produce tens of thousands of observations per minute, requiring real-time data analysis in a variety of use cases. Concretely, solutions are needed to detailed problems such as predicting turbine failures, locating public transportation, and tracking people's discussions. This section describes real-time distributed processing frameworks for handling such stream data, machine learning processing of time-series data, and examples of smart city and Industry 4.0 applications that make use of them.
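Stream processing hinges on updating statistics in constant memory as observations arrive. A standard building block is Welford's online algorithm for running mean and variance, sketched generically here (not tied to any particular streaming framework's API).

```python
class RunningStats:
    """Welford's online algorithm: mean and variance of a stream,
    updated one observation at a time in O(1) memory."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        """Population variance of all observations seen so far."""
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.mean, stats.variance())  # ~5.0, ~4.0
```

Unlike the naive sum-of-squares formula, this update is numerically stable, which matters when a stream runs for millions of observations.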

AI Research and Latest Trends

Collected AI Conference Papers

Collection of AI conference papers. Here is a collection of conference papers and preliminary proceedings from AAAI, ISWC, ILP, RW, and other AI-related conferences.
