Artificial Intelligence Technology

About Artificial Intelligence Technology

According to Wikipedia, artificial intelligence (AI) is "a branch of computer science that studies 'intelligence' using the concept of 'computation' and the tools of 'computers'." It is also described as "a technology that allows computers to perform intelligent actions such as language understanding, reasoning, and problem solving on behalf of humans," or as "a field of research on the design and realization of intelligent information processing systems using computers." AI education and research at universities is conducted by organizations such as departments of information engineering and departments of computer science within schools of information science and technology.

In simpler terms, artificial intelligence can be defined as "a device (or software) that is artificially designed to behave in a human-like manner."

The term artificial intelligence was coined at the Dartmouth Conference in 1956. Early artificial intelligence could not handle the kind of large-scale computation that consumes large amounts of memory and CPU power, as today's machine learning does, due in part to a severe lack of hardware power. A famous earlier achievement from this era of computing was Alan Turing's deciphering of the German Enigma cipher during World War II.

Introduction to Turing's Theory of Computation: From Turing Machines to Computers

Turing considered the question "Can computers think?" and devised the Turing Test, which is discussed in "Conversation and AI (Thinking from the Turing Test)." He also developed the theory of computation, the basic principle of computers, which is covered in "Introduction to Turing's Theory of Computation Reading Memo."
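As a rough illustration of the theory of computation mentioned above, the following is a minimal sketch of a Turing machine simulator in Python. The machine, its transition table, and the bit-flipping task are illustrative assumptions, not something from the original text:

```python
# Minimal Turing machine simulator (illustrative sketch).
# Transition table: (state, symbol) -> (new_symbol, move, new_state)

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    # Read back the tape contents left to right, dropping blanks at the ends
    cells = sorted(tape.items())
    return "".join(sym for _, sym in cells).strip(blank)

# Example machine: flip every bit of a binary string, then halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("1011", flip))  # -> "0100"
```

Despite its simplicity, this table-driven read/write/move loop is the whole mechanism Turing showed to be sufficient, in principle, for any computation a modern computer can perform.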

As time went on, semiconductor technology advanced and computer power increased, making it possible to fully exploit memory and CPU resources, and various machine learning techniques, such as those described in "Machine Learning Techniques," came to be explored.

Nearly 70 years have passed since the Dartmouth Conference, and the scope of artificial intelligence technologies has become very broad. For example, the map of AI research compiled by the Japanese Society for Artificial Intelligence in 2020 is organized as follows.

In addition to the technical domains covered under machine learning technologies, technologies in various domains such as "inference, knowledge and language," "discovery, exploration and creation," "evolution, life and growth," "people, dialogue and emotion," and "body, robotics and motion" are being explored.

This section comprehensively describes the theory and practice of these artificial intelligence technologies, especially those other than machine learning technologies, within the following scope.

Overview of Artificial Intelligence

Strong AI, or Artificial General Intelligence (AGI), refers to AI with broad intelligence that is not limited to a specific problem domain. AGI aims for AI that can learn and understand knowledge like humans and respond flexibly in different environments and situations. Beyond mere pattern recognition and optimisation, it needs to be capable of "intelligent reasoning" and "deliberate action selection." The ability to perform causal reasoning is said to be essential for a strong AI to have the same intellectual capacity as humans.

IA (Intelligence Augmentation) is a term that refers to the use of computers and other technologies to augment human intelligence. In other words, IA can be described as using computers to supplement and extend human intelligence, providing analysis and decision-making support to improve human capabilities, and combining human and computer power to create more powerful intellectual capabilities. Depending on how the meaning is taken, this term corresponds to the whole area called DX (digital transformation).

In contrast, Artificial Intelligence (AI) refers to the technology and concept of using computers and other machines to realize human intelligence and behavior. AI is evolving in areas such as machine learning, deep learning, natural language processing, and computer vision; whether or not it has yet been fully realized, AI can be defined as the ability of machines to solve problems autonomously.

First, a definition of artificial intelligence (AI) is in order. Artificial intelligence emerged in the 1950s, when a handful of pioneers in the then-new field of computer science pondered whether it was possible to make computers think. While that question remains unanswered, a simple definition of the field can be summarized as "the effort to automate intellectual tasks that would normally be performed by humans." This concept encompasses many approaches that have nothing to do with learning. Early chess programs, for example, incorporated rules hard-coded by programmers and could not be called machine learning.

Heuristics are expedient, rule-of-thumb methods used when a problem must be solved or a decision made about an uncertain matter, but no clear clues exist for doing so. Contrasted with heuristics are algorithms. For example, the formula for the area of a triangle is a good example of an algorithm: the area can always be found by applying the formula (base x height) / 2. Heuristics, on the other hand, include proverbs and aphorisms such as "hurry up and get going" or "just do it" that capture an aspect of truth useful in everyday life.
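The contrast above can be sketched in code. The triangle-area function below is an algorithm, as it always yields the exact answer, while the greedy change-making function is a heuristic, as it is fast and usually good but not guaranteed optimal for every coin system. Both functions are illustrative assumptions, not something from the original text:

```python
def triangle_area(base, height):
    """Algorithm: always returns the exact area, (base x height) / 2."""
    return base * height / 2

def greedy_change(amount, coins):
    """Heuristic: repeatedly take the largest coin that still fits.
    Fast and often optimal, but not guaranteed for every coin system."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

print(triangle_area(6, 4))             # -> 12.0 (always exact)
print(greedy_change(30, [25, 10, 1]))  # -> [25, 1, 1, 1, 1, 1], 6 coins,
                                       #    though [10, 10, 10] uses only 3
```

The greedy result is a reasonable answer reached quickly without exhaustive search, which is exactly the trade-off heuristics make: usefulness in practice rather than a guarantee of correctness.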

By studying "cognitive science," which has developed alongside artificial intelligence, we can understand how thinking works and how the brain is used. How do machines understand logic and reasoning? How does the human brain differ from machines? The author, who studied cutting-edge cognitive science at an American graduate school, explains these questions in an accessible way, even touching on his own new theory, the "Super Information Field Hypothesis."

  • History of Artificial Intelligence Technology
  • A Weasel That Doesn’t Want to Work and a Robot That Can Understand Its Own Language
  • Castle of the automaton
