Integration of logic and rules with probability/machine learning

Machine Learning Technology  Artificial Intelligence Technology  Digital Transformation Technology  Reinforcement Learning  Intelligent Information Technology  Probabilistic Generative Model  Explainable Machine Learning  Mathematical Logic  Natural Language Processing

Various approaches have been taken to knowledge representation, that is, how to represent, acquire, and use knowledge, which is the fundamental problem of artificial intelligence. These include machine learning technologies such as deep learning, recognition technologies such as speech recognition and image recognition, and inference technologies such as expert systems.

Today, symbolic knowledge is available over the Internet in large volumes and largely unstructured form, in sources such as academic journals, dictionaries, Wikipedia, social media, and news articles.

This knowledge can be classified in various ways; one major division is between logical knowledge and probabilistic knowledge.

In this blog, I will discuss the flow of modeling complex reality on computers using probabilistic models that combine logical knowledge and probabilistic knowledge, the two major categories of knowledge, and the connections among probability, logic, computation, and machine learning that lie behind this modeling.

The contents cover knowledge-based model construction (KBMC) and statistical relational learning (SRL), fields that move from probability (Bayesian networks) toward logic, as well as probabilistic logic learning (PLL), a field that moves from logic toward probability.

Bayesian nets were proposed in the late 1980s in the uncertainty in AI (UAI) community, a field of artificial intelligence, to model probabilistic dependencies among variables. They are characterized by the existence of excellent probability calculation algorithms, such as belief propagation, with which marginal distributions and conditional probabilities can be computed efficiently.

In terms of learning, not only parameter learning but also structure learning, in which the graph structure itself is learned from large amounts of data, has been well researched. Bayesian nets have become one of the standard probabilistic modeling techniques supporting data mining, because they encompass naïve Bayes and hidden Markov models, which are often used in machine learning, and because they remain usable even when the given data is incomplete.
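As a concrete illustration of these ideas, here is a minimal sketch of a small Bayesian network with exact inference by enumeration, written in plain Python around the classic sprinkler example. The structure, probabilities, and function names are illustrative assumptions; practical systems use dedicated libraries and faster algorithms such as belief propagation.

```python
# Minimal Bayesian network with exact inference by enumeration (a toy sketch).
# The network is the classic sprinkler example; all numbers are illustrative.
import itertools

# Structure: Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
parents = {
    "Cloudy": (),
    "Sprinkler": ("Cloudy",),
    "Rain": ("Cloudy",),
    "WetGrass": ("Sprinkler", "Rain"),
}
# Each CPT maps a tuple of parent values to P(variable = True | parents).
cpt = {
    "Cloudy": {(): 0.5},
    "Sprinkler": {(True,): 0.1, (False,): 0.5},
    "Rain": {(True,): 0.8, (False,): 0.2},
    "WetGrass": {(True, True): 0.99, (True, False): 0.90,
                 (False, True): 0.90, (False, False): 0.0},
}

def joint(assignment):
    """P(assignment) as the product of the local conditional probabilities."""
    p = 1.0
    for var, pa in parents.items():
        p_true = cpt[var][tuple(assignment[x] for x in pa)]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

def query(var, evidence):
    """P(var = True | evidence), summing the joint over all assignments."""
    variables = list(parents)
    totals = {True: 0.0, False: 0.0}
    for values in itertools.product([True, False], repeat=len(variables)):
        a = dict(zip(variables, values))
        if all(a[e] == v for e, v in evidence.items()):
            totals[a[var]] += joint(a)
    return totals[True] / (totals[True] + totals[False])

# Wet grass raises the belief in rain: P(Rain | WetGrass) > P(Rain).
print(query("Rain", {"WetGrass": True}))
```

Enumeration is exponential in the number of variables; the point of algorithms such as belief propagation is to exploit the graph structure so that the same marginals come out in far less time.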

Technical Topics

    Large vocabulary continuous speech recognition (LVCSR) is a technology that recognizes unrestricted general speech. The current mainstream of LVCSR is based on HMMs.

    First, we discuss learning and recognition using subword recognition units. Subword units are used almost exclusively in large-vocabulary continuous speech recognition. In actual large-vocabulary continuous recognition, the size of the recognition dictionary is usually around 30,000 words, but most of those words appear only rarely, making it difficult to collect a sufficient amount of training data for each of them.
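    The sketch below illustrates the point in plain Python: word models are composed from a small, shared inventory of phoneme models, so every occurrence of a phoneme in any word contributes training data. The pronunciations and the HMM placeholders are invented for illustration.

```python
# Why subword (phoneme) units help in large-vocabulary speech recognition:
# word models are built by concatenating a small inventory of phoneme models,
# so training data is pooled per phoneme rather than per word.
# The dictionary entries below are illustrative, not a real lexicon.
pron_dict = {
    "data":  ["d", "ey", "t", "ax"],
    "date":  ["d", "ey", "t"],
    "today": ["t", "ax", "d", "ey"],
}

# Each phoneme gets its own (here: placeholder) model; in a real system each
# would be a small left-to-right HMM with Gaussian-mixture outputs.
phoneme_hmms = {p: f"HMM({p})" for seq in pron_dict.values() for p in seq}

def word_model(word):
    """Compose a word model as the concatenation of its phoneme HMMs."""
    return [phoneme_hmms[p] for p in pron_dict[word]]

# A few dozen phoneme models can cover a 30,000-word dictionary, and every
# occurrence of "d" or "ey" in any word contributes training data.
print(word_model("today"))
print(f"{len(phoneme_hmms)} phoneme models shared across {len(pron_dict)} words")
```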

    Bayesian nets, an application of the Bayesian estimation mentioned earlier, are a modeling method that expresses causal relationships (strictly speaking, probabilistic dependencies) among various events in a graph structure. They are used and studied in various fields such as failure diagnosis, weather prediction, medical decision support, marketing, and recommendation systems.

    Bayesian nets offer no way to describe similar substructures collectively; a separate network must be created for each set of variables, which makes it difficult to describe complex and large models. To solve this problem, research on automatic Bayesian net generation, called knowledge-based model building (KBMC), has been conducted.
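    As a toy illustration of the KBMC idea, the following sketch instantiates first-order rule templates over a set of individuals to generate the nodes and edges of a ground Bayesian network. The predicates and individuals are invented; a real KBMC system would also attach conditional probability tables and build only the part of the network relevant to a query.

```python
# Toy knowledge-based model construction (KBMC): first-order edge templates
# are instantiated for given individuals, yielding a ground Bayesian network.
# Predicates and individuals are made up for illustration.
templates = [
    # (parent predicate, child predicate): an edge template over one variable X
    ("flu", "fever"),
    ("flu", "cough"),
]
individuals = ["alice", "bob"]

nodes, edges = set(), set()
for parent_pred, child_pred in templates:
    for x in individuals:
        parent, child = f"{parent_pred}({x})", f"{child_pred}({x})"
        nodes |= {parent, child}
        edges.add((parent, child))

# One compact rule base yields a sub-network per individual, instead of a
# hand-built Bayesian net for every new case.
print(sorted(nodes))
print(sorted(edges))
```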

    In the previous article, we discussed SRL (statistical relational learning), which was developed in North America. This time, we will discuss PLL (probabilistic logic learning), which is the European counterpart.

    SRL approaches such as PRM (probabilistic relational model) and MLN (Markov logic network) are based on the idea of enriching probabilistic models with relations and logic formulas. Although SRL uses relational and logical expressions, it does not directly aim to enrich predicate logic with probability. On the other hand, knowledge representation by predicate logic has long been studied in the field of artificial intelligence, and attempts to incorporate probability into it, in order to represent not only logical knowledge, which always holds, but also probabilistic knowledge, date from before the statistical machine learning boom.
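    The following toy sketch shows the core mechanics of a Markov logic network: each world (a truth assignment to the ground atoms) receives an unnormalized score exp(sum of the weights of the ground formulas it satisfies), and probabilities follow by normalization. The atoms, formulas, and weights are illustrative assumptions, not taken from any particular system.

```python
# Toy Markov logic network: weighted ground formulas over Boolean atoms,
# with P(world) proportional to exp(total weight of satisfied formulas).
import itertools, math

atoms = ["smokes_a", "smokes_b", "friends_ab", "cancer_a"]

def satisfied_weight(w):
    """Total weight of the ground formulas satisfied in world w (a dict)."""
    total = 0.0
    # weight 1.5: friends_ab => (smokes_a <=> smokes_b)
    if (not w["friends_ab"]) or (w["smokes_a"] == w["smokes_b"]):
        total += 1.5
    # weight 2.0: smokes_a => cancer_a
    if (not w["smokes_a"]) or w["cancer_a"]:
        total += 2.0
    return total

worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]
Z = sum(math.exp(satisfied_weight(w)) for w in worlds)  # partition function

def prob(condition):
    """Probability mass of the worlds satisfying `condition`."""
    return sum(math.exp(satisfied_weight(w)) for w in worlds if condition(w)) / Z

# Soft, not hard, implication: smoking makes cancer likely but not certain.
print(prob(lambda w: w["cancer_a"] and w["smokes_a"]) /
      prob(lambda w: w["smokes_a"]))  # P(cancer_a | smokes_a) < 1
```

The weights turn hard logical rules into soft constraints: a world violating a formula is penalized, not excluded, which is exactly how probability is layered on top of logic here.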

    Advances in information extraction have enabled the automatic construction of large knowledge graphs (KGs) like DBpedia, Freebase, YAGO and Wikidata. These KGs are inevitably bound to be incomplete. To fill in the gaps, data correlations in the KG can be analyzed to infer Horn rules and to predict new facts. However, Horn rules do not take into account possible exceptions, so that predicting facts via such rules introduces errors. To overcome this problem, we present a method for effective revision of learned Horn rules by adding exceptions (i.e., negated atoms) into their bodies. This way errors are largely reduced. We apply our method to discover rules with exceptions from real-world KGs. Our experimental results demonstrate the effectiveness of the developed method and the improvements in accuracy for KG completion by rule-based fact prediction.
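    A toy rendering of the idea, with invented triples and predicates (this is not the authors' algorithm, just the shape of it): a Horn rule over a knowledge graph makes predictions, and adding a negated atom to its body suppresses the known error cases.

```python
# Rule-based KG completion with exceptions (toy sketch, invented data).
kg = {
    ("alice", "bornIn", "paris"),
    ("bob", "bornIn", "paris"),
    ("bob", "emigratedTo", "tokyo"),
}

def predict_lives_in(with_exception):
    """Apply livesIn(X,Y) <- bornIn(X,Y) [, not emigratedTo(X,_)] to the KG."""
    predictions = set()
    for s, p, o in kg:
        if p != "bornIn":
            continue
        moved = any(p2 == "emigratedTo" for s2, p2, _ in kg if s2 == s)
        if with_exception and moved:
            continue  # the negated atom blocks the known exceptional case
        predictions.add((s, "livesIn", o))
    return predictions

print(predict_lives_in(with_exception=False))  # wrongly predicts bob in paris
print(predict_lives_in(with_exception=True))   # only alice's fact remains
```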

    Conference Papers

      The 18th International Conference on Inductive Logic Programming was held in Prague, September 10-12, 2008. While the ILP community clearly continues to cherish its beloved framework of first-order logical representations, the research presented at ILP2008 showed that there is still room both for extensions of established ILP approaches and for the exploration of new logical induction frameworks such as Brave Induction, with work extending into the areas of statistical relational learning, graph mining, the Semantic Web, bioinformatics, and cognitive science.

      For almost two decades, the ILP conference series has been the premier forum for research on logic-based approaches to machine learning, and the 19th International Conference on Inductive Logic Programming, held July 2-4, 2009, in Leuven, continued this tradition. It was held together with SRL-2009, the International Workshop on Statistical Relational Learning, and MLG-2009, the 7th International Workshop on Mining and Learning with Graphs, making it a conference open to the wider community. Each of the three events has its own focus, emphasis, and traditions, but fundamentally they share the problem of learning from structured data in the form of graphs, relational descriptions, and logic. The events were therefore held concurrently to promote greater interaction among the three communities.

      In this issue, we discuss revised papers from the 20th International Conference on Inductive Logic Programming (ILP2010), held in Florence, Italy, June 27-30, 2010.

      The ILP conference series began in 1991 and is a major international event on logic-based approaches to machine learning. In recent years, the scope of research has expanded significantly, with the integration of statistical learning and other probabilistic approaches being explored.

      ILP2011 was held at Cumberland Lodge in the UK from July 31 to August 3, 2011, under the auspices of the Department of Computing at Imperial College London.

      The 31 proceedings papers represent the diversity and vitality of current ILP research, including ILP theory, implementation, probabilistic ILP, biological applications, subgroup discovery, grammatical inference, relational kernels, Petri net learning, spatial learning, graph-based learning, and learning behavioral models.

      Describes the 22nd International Conference on Inductive Logic Programming, ILP 2012, held in Dubrovnik on September 17-19, 2012. The ILP conference series began in 1991 and is the leading international forum on learning from structured data. Initially focused on induction in logic programming, it has expanded its scope in recent years and attracted a great deal of attention and interest. It now covers all aspects of learning from structured data, including logic learning, multi-relational learning, data mining, statistical relational learning, graph and tree structure mining, and relational reinforcement learning.

      The papers in ILP2012 provide a good representation of the breadth of current ILP research, including propositionalization, logical foundations, implementation, probabilistic ILP, applications to robotics and biology, grammatical inference, spatial learning, and graph-based learning.

      ILP 2016 took place at the Warren House Conference Centre in London from September 4-6, 2016. Since its first edition in 1991, the annual ILP conference has functioned as the premier international forum for learning from structured relational data. Initially focused on induction in logic programs, over the years it has greatly expanded its research horizons to include learning in logic, multi-relational data mining, statistical relational learning, graph and tree mining, learning in other (non-propositional) logic-based knowledge representation frameworks, and the exploration of the intersections with statistical learning and other probabilistic approaches. Theoretical advances in these areas have been accompanied by challenging applications of the techniques to important problems in areas such as bioinformatics, medicine, and text mining.

      We describe the 27th International Conference on Inductive Logic Programming, ILP2017, held in Orléans, France, in September 2017. Contents include robot control, knowledge bases and medicine, statistical machine learning in image recognition, relational learning, logic-based event recognition systems, the problem of learning Boltzmann machine classifiers from relational data, parallel inductive logic programming, learning from interpretation transitions (LFIT), Lifted Relational Neural Networks (LRNN), and improvements to Word2Vec.

      Inductive logic programming (ILP) is a subfield of machine learning that relies on logic programming as a unified expression language for representing examples, background knowledge, and hypotheses. With its powerful expressive form based on first-order predicate logic, ILP provides an excellent vehicle for multi-relational learning and data mining.
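      To make the setting concrete, here is a minimal sketch of the coverage test at the heart of ILP hypothesis search, written in plain Python rather than Prolog: background facts, positive and negative examples of a target predicate, and candidate clause bodies to accept or reject. The family-relations domain is a standard textbook illustration, not tied to any specific ILP system.

```python
# Toy ILP-style coverage check: keep candidate clause bodies that cover all
# positive examples of daughter(X, Y) and no negative ones.
background = {
    ("parent", "ann", "mary"), ("parent", "ann", "tom"),
    ("female", "mary"), ("female", "ann"),
}
positives = {("mary", "ann")}   # daughter(mary, ann) holds
negatives = {("tom", "ann")}    # daughter(tom, ann) does not

def covers(body, x, y):
    """Evaluate a clause body, a list of literals over the variables X and Y."""
    bind = {"X": x, "Y": y}
    return all((pred, *[bind[a] for a in args]) in background
               for pred, *args in body)

# Candidate bodies for: daughter(X, Y) :- <body>.
candidates = [
    [("parent", "Y", "X")],                    # too general: also covers tom
    [("parent", "Y", "X"), ("female", "X")],   # the correct hypothesis
]
for body in candidates:
    ok = (all(covers(body, x, y) for x, y in positives) and
          not any(covers(body, x, y) for x, y in negatives))
    print(body, "->", "accepted" if ok else "rejected")
```

An ILP system automates the generation of such candidate bodies, typically by searching a lattice of clauses ordered by generality.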

      The ILP conference series, initiated in 1991, has been the premier international forum for learning from structured or semi-structured relational data. Originally focused on the induction of logic programs, over the years its scope has expanded significantly, and research has been reported on learning in logic, multi-relational data mining, statistical relational learning, graph and tree mining, learning in other (non-propositional) logic-based knowledge representation frameworks, and statistical learning and other probabilistic approaches.

      In this issue, we describe the 29th International Conference on Inductive Logic Programming, held in Plovdiv, Bulgaria, September 3-5, 2019.

      Inductive logic programming (ILP) is a subfield of machine learning that relies on logic programming as a unified representation language for expressing examples, background knowledge, and hypotheses. With its powerful expressive form based on first-order predicate logic, ILP provides an excellent means for multi-relational learning and data mining.

      The ILP conference series, initiated in 1991, provides the premier international forum for learning from structured or semi-structured relational data. Originally focused on the induction of logic programs, over the years the scope of research has expanded significantly to include learning in logic, multi-relational data mining, statistical relational learning, graph and tree mining, learning in other (non-propositional) logic-based knowledge representation frameworks, and the intersections with statistical learning and other probabilistic approaches.

      In this issue, we discuss ILP2021, after the conference skipped a year due to the coronavirus pandemic. Inductive logic programming (ILP) is a branch of machine learning that focuses on learning logical representations from relational data. The ILP conference series was started in 1991 and is the leading international forum on learning from structured or semi-structured relational data, multi-relational learning, and data mining. Initially focused on the induction of logic programs, over the years the scope of research has broadened considerably to include all aspects of logic learning, statistical relational learning, graph and tree mining, learning in other (non-propositional) logic-based knowledge representation frameworks, and the exploration of the intersection of statistical learning and other probabilistic approaches.
