From the Inductive Logic Programming 2016 Proceedings


ILP 2016: The 26th International Conference on Inductive Logic Programming

In the previous article, we discussed ILP 2012. This issue describes the 26th International Conference on Inductive Logic Programming (ILP 2016), held at the Warren House Conference Centre in London from September 4 to 6, 2016. Since the first edition in 1991, the annual ILP conference has served as the premier international forum for learning from structured relational data. Initially focused on the induction of logic programs, over the years it has greatly expanded its research horizons to include learning in logic, multi-relational data mining, statistical relational learning, graph and tree mining, learning in other (non-propositional) logic-based knowledge representation frameworks, and explorations of the intersections with statistical learning and other probabilistic approaches. Theoretical advances in these areas have been accompanied by challenging applications of these techniques to important problems in areas such as bioinformatics, medicine, and text mining.

Following the trend of past events, this edition of the conference solicited three types of submissions: (a) long papers describing original mature work containing appropriate experimental evaluation and/or representing a self-contained theoretical contribution; (b) short papers describing original work in progress, brief accounts of original ideas without conclusive evaluation, and other relevant work of potentially high scientific interest but not yet qualifying for the long paper category; and (c) papers relevant to the conference topics that were recently published or accepted for publication at a first-class conference such as ECML/PKDD, ICML, KDD, ICDM, AAAI, or IJCAI, or in a journal such as MLJ, DMKD, or JMLR.

The conference received 35 submissions: ten long papers, 19 short papers, and six published papers. Each of the long and short paper submissions was reviewed by three Program Committee (PC) members. Only four of the ten submitted long papers were accepted for presentation and publication. Short papers were initially evaluated on the basis of the submitted manuscript and the presentation, and authors of a subset of these papers were invited to submit an extended version. After a second review process, only six extended papers were finally accepted for publication. In summary, together with the four long papers, ten papers were accepted to be included in the present volume. The multiple-stage review process, although rather complex, has enabled the selection of high-quality papers for the proceedings. We thank the members of the PC for providing high-quality and timely reviews. Out of all the submitted papers, an additional 13 papers were accepted for publication in the CEUR workshop proceedings series.

The ILP 2016 program included five large technical sessions: Logic and Learning; Graphs and Databases; Probabilistic Logic and Learning; Algorithms, Optimisations and Implementations; and Applications. The papers in this volume represent well the current breadth of ILP research topics such as predicate invention, graph-based learning, spatial learning, logical foundations, statistical relational learning, probabilistic ILP, implementation and scalability, and applications in robotics, cyber-security, and games, providing also an excellent balance across theoretical and practical research. ILP 2016 received generous sponsorship by the Machine Learning journal for best student paper awards. The two best student paper awards of ILP 2016 were given to Yi Huang for his paper entitled “Learning Disjunctive Logic Programs from Interpretation Transition,” co-authored with Yisong Wang, Ying Zhang and Mingyi Zhang, and to Marcin Malec for his paper “Inductive Logic Programming Meets Relational Databases: An Application to Statistical Relational Learning,” co-authored with Tushar Khot, James Nagy, Erik Blasch and Sriraam Natarajan. The conference also received sponsorship from Springer for a best paper award. This award was given to the paper “Generation of Near-Optimal Solutions Using ILP-Guided Sampling” by Ashwin Srinivasan, Gautam Shroff, Lovekesh Vig and Sarmimala Saikia.

With the intent of stimulating collaboration and discussion between academia and industry, the program also featured three invited talks by distinguished academic and industrial researchers. In his talk “Inferring Causal Models of Complex Relational and Dynamic Systems,” David Jensen, from the University of Massachusetts, presented key ideas, representations, and algorithms for causal inference, and highlighted new technical frontiers. Frank Wood, from the University of Oxford, gave a talk entitled “Revolutionising Decision Making, Democratising Data Science, and Automating Machine Learning via Probabilistic Programming.” In his talk, he gave a broad overview of the emerging field of probabilistic programming, from the point of view of both the programming (modelling) language and automated inference, and introduced the most important challenges facing this field. Finally, Vijay Saraswat, senior research scientist in the Cognitive Computing Research division at the IBM T.J. Watson Research Center, discussed in his talk “Machine Learning and Logic: The Beginnings of a New Computer Science?” the open challenges of building cognitive assistants in compliance, and the need to bring together researchers in natural language understanding, machine learning, and knowledge representation/reasoning to address them.

The conference featured, for the first time, an international competition, designed and managed by Mark Law, a member of our local Organizing Committee. The competition was aimed at testing the accuracy, scalability, and versatility of the learning systems that were entered. The competition had two main tracks for probabilistic and non-probabilistic approaches. The winners of the competition were Peter Schüller, from Marmara University, for his non-probabilistic approach and jointly Riccardo Zese, Elena Bellodi, and Fabrizio Riguzzi for their probabilistic approach. Results of the competition are publicly available on http://ilp16.doc.ic.ac.uk/competition.

The contents are described below.

Probabilistic Inductive Logic Programming (PILP) systems extend ILP by allowing the world to be represented using probabilistic facts and rules, and by learning probabilistic theories that can be used to make predictions. However, such systems can be inefficient both due to the large search space inherited from the ILP algorithm and due to the probabilistic evaluation needed whenever a new candidate theory is generated. To address the latter issue, this work introduces probability estimators aimed at improving the efficiency of PILP systems. An estimator can avoid the computational cost of probabilistic theory evaluation by providing an estimate of the value of the combination of two subtheories. Experiments are performed on three real-world datasets from different areas (biological, medical, and web-based) and show that, by reducing the number of theories to be evaluated, the estimators can significantly shorten the execution time without losing probabilistic accuracy.
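
The paper's concrete estimators are not reproduced here, but the underlying idea can be illustrated. The sketch below makes a toy assumption of independence between subtheory proofs (it is not the paper's actual estimator) and shows how an estimate for the combination of two subtheories can gate the expensive exact evaluation; all function names are hypothetical.

    def estimate_union_probability(p1: float, p2: float) -> float:
        """Estimate P(T1 or T2) for two subtheories whose proofs are
        assumed independent (inclusion-exclusion / noisy-OR). This is an
        illustrative heuristic, not one of the paper's estimators."""
        return p1 + p2 - p1 * p2

    def prune_candidates(candidates, threshold, evaluate_exactly):
        """Run the expensive probabilistic evaluation (e.g. knowledge
        compilation) only for combinations whose estimate clears the bar."""
        results = []
        for (t1, p1), (t2, p2) in candidates:
            if estimate_union_probability(p1, p2) >= threshold:
                results.append(evaluate_exactly(t1, t2))
        return results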

Statistical Relational Learning (SRL) approaches have been developed to learn in the presence of noisy relational data by combining probability theory with first-order logic. While powerful, most learning approaches for these models do not scale well to large datasets. While advances have been made on using relational databases with SRL models [14], they have not been extended to handle the more complex model-learning (structure learning) task. We present a scalable structure learning approach that combines the benefits of relational databases with search strategies that employ rich inductive bias from Inductive Logic Programming. We empirically show the benefits of our approach on boosted structure learning for Markov Logic Networks.
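
As a rough illustration of why a relational database helps here: counting the groundings of a clause body, a basic operation inside structure-learning search, maps to a SQL join and aggregate. A minimal sketch with a hypothetical two-table schema (this is not the system's actual code):

    import sqlite3

    # Hypothetical schema: each relation is a table, one row per ground fact.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE advises(prof TEXT, student TEXT);
        CREATE TABLE publishes(author TEXT, paper TEXT);
        INSERT INTO advises VALUES ('ada', 'bob'), ('ada', 'cy');
        INSERT INTO publishes VALUES ('bob', 'p1'), ('cy', 'p2'), ('cy', 'p3');
    """)

    # Counting the groundings of the clause body
    #   advises(P, S) AND publishes(S, Paper)
    # becomes a single JOIN + COUNT instead of tuple-at-a-time enumeration.
    (count,) = conn.execute("""
        SELECT COUNT(*) FROM advises a
        JOIN publishes p ON a.student = p.author
    """).fetchone()
    print(count)  # 3 satisfied groundings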

Most event recognition approaches in sensor environments are based on manually constructed patterns for detecting events, and lack the ability to learn relational structures in the presence of uncertainty. We describe the application of OSLα, an online structure learner for Markov Logic Networks that exploits Event Calculus axiomatizations, to event recognition for traffic management. Our empirical evaluation is based on large volumes of real sensor data, as well as synthetic data generated by a professional traffic micro-simulator. The experimental results demonstrate that OSLα can effectively learn traffic congestion definitions and, in some cases, outperform rules constructed by human experts.
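
For readers unfamiliar with the Event Calculus that OSLα builds on, its core law of inertia can be sketched in a few lines: a fluent holds at a time point if some event initiated it earlier and no event terminated it in between. The toy implementation and traffic data below are illustrative only:

    def holds_at(fluent, t, initiated, terminated):
        """Simplified Event Calculus inertia: `fluent` holds at time `t` if
        it was initiated at some ti < t and not terminated ("clipped") at
        any tc with ti < tc <= t. Times are integers; `initiated` and
        `terminated` map fluents to sets of time points."""
        return any(
            ti < t and not any(ti < tc <= t for tc in terminated.get(fluent, ()))
            for ti in initiated.get(fluent, ())
        )

    # Hypothetical traffic data: congestion initiated at 3, terminated at 7.
    initiated = {"congestion(road1)": {3}}
    terminated = {"congestion(road1)": {7}}
    print(holds_at("congestion(road1)", 5, initiated, terminated))  # True
    print(holds_at("congestion(road1)", 9, initiated, terminated))  # False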

Experts possess vast knowledge that is typically ignored by standard machine learning methods. This rich, relational knowledge can be utilized to learn more robust models, especially in the presence of noisy and incomplete training data. Such experts are often domain experts but not machine learning experts, so deciding what knowledge to provide is a difficult problem. Our goal is to improve the human-machine interaction by providing the expert with a machine-generated bias that can be refined by the expert as necessary. To this effect, we propose using transfer learning (described in “Overview of Transfer Learning and Examples of Algorithms and Implementations”), leveraging knowledge from alternative domains, to guide the expert to give useful advice. This knowledge is captured in the form of first-order logic Horn clauses. We demonstrate empirically the value of the transferred knowledge, as well as the contribution of the expert in providing initial knowledge and in revising and directing the use of the transferred knowledge.
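
One way to picture the transfer step: clauses learned in a source domain are scored against target-domain facts, and the best-covering ones are shown to the expert as an initial, refinable bias. The sketch below uses hypothetical predicates and handles only two-literal chain bodies; it is a simplified illustration, not the paper's algorithm:

    # A Horn clause as (head, (b1, b2)); target-domain facts as
    # (predicate, subject, object) triples. All names are hypothetical.
    TARGET_FACTS = {
        ("parent", "ann", "bob"), ("parent", "bob", "cy"),
        ("grandparent", "ann", "cy"),
    }

    def coverage(clause, facts):
        """Count instances where the chain body b1(X,Y), b2(Y,Z) holds
        and the head head(X,Z) is a known fact."""
        head, (b1, b2) = clause
        hits = 0
        for (p1, x, y) in facts:
            if p1 != b1:
                continue
            for (p2, y2, z) in facts:
                if p2 == b2 and y2 == y and (head, x, z) in facts:
                    hits += 1
        return hits

    # Clauses transferred from a source domain, ranked as an initial,
    # machine-generated bias for the expert to refine.
    transferred = [("grandparent", ("parent", "parent")),
                   ("grandparent", ("parent", "sibling"))]
    print(max(transferred, key=lambda c: coverage(c, TARGET_FACTS)))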

  • How Does Predicate Invention Affect Human Comprehensibility?

    During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility not so, later definitions in the 1990s, such as that of Mitchell, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present the results of experiments testing human comprehensibility of logic programs learned with and without predicate invention. Results indicate that comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols.

  • Distributional Learning of Regular Formal Graph System of Bounded Degree

    In this paper, we describe how distributional learning techniques can be applied to formal graph system (FGS) languages. An FGS is a logic program that deals with term graphs instead of the terms of first-order predicate logic. We show that the regular FGS languages of bounded degree with the 1-finite context property (1-FCP) and bounded treewidth property can be learned from positive data and membership queries.

  • Learning Relational Dependency Networks for Relation Extraction

    We consider the task of KBP slot filling – extracting relation information from newswire documents for knowledge base construction. We present our pipeline, which employs Relational Dependency Networks (RDNs) to learn linguistic patterns for relation extraction. Additionally, we demonstrate how several components such as weak supervision, word2vec features, joint learning and the use of human advice, can be incorporated in this relational framework. We evaluate the different components in the benchmark KBP 2015 task and show that RDNs effectively model a diverse set of features and perform competitively with current state-of-the-art relation extraction methods.

  • Towards Nonmonotonic Relational Learning from Knowledge Graphs

    Recent advances in information extraction have led to the so-called knowledge graphs (KGs), i.e., huge collections of relational factual knowledge. Since KGs are automatically constructed, they are inherently incomplete and thus naturally treated under the Open World Assumption (OWA). Rule mining techniques have been exploited to support the crucial task of KG completion. However, these techniques mine only Horn rules, which are insufficiently expressive to capture exceptions, and might thus make incorrect predictions on missing links. Recently, a rule-based method for filling this gap was proposed which, however, applies to a flattened representation of a KG with only unary facts. In this work we make the first steps towards extending this approach to KGs in their original relational form, and provide preliminary evaluation results on real-world KGs, which demonstrate the effectiveness of our method. (A toy sketch of exception-aware rule scoring appears after this list.)

  • Learning Predictive Categories Using Lifted Relational Neural Networks

    Lifted relational neural networks (LRNNs) are a flexible neural-symbolic framework based on the idea of lifted modelling. In this paper we show how LRNNs can be easily used to specify declaratively and solve learning problems in which latent categories of entities, properties and relations need to be jointly induced.

  • Generation of Near-Optimal Solutions Using ILP-Guided Sampling

    Our interest in this paper is in optimisation problems that are intractable to solve by direct numerical optimisation, but nevertheless have significant amounts of relevant domain-specific knowledge. The category of heuristic search techniques known as estimation of distribution algorithms (EDAs) seeks to incrementally sample from probability distributions in which optimal (or near-optimal) solutions have increasingly higher probabilities. Can we use domain knowledge to assist the estimation of these distributions? To answer this in the affirmative, we need: (a) a general-purpose technique for the incorporation of domain knowledge when constructing models for optimal values; and (b) a way of using these models to generate new data samples. Here we investigate a combination of the use of Inductive Logic Programming (ILP) for (a) and standard logic-programming machinery to generate new samples for (b). Specifically, on each iteration of distribution estimation, an ILP engine is used to construct a model for good solutions. The resulting theory is then used to guide the generation of new data instances, which are now restricted to those derivable using the ILP model in conjunction with the background knowledge. We demonstrate the approach on two optimisation problems (predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling). Our results are promising: (a) on each iteration of distribution estimation, samples obtained with an ILP theory have a substantially greater proportion of good solutions than samples without a theory; and (b) on termination of distribution estimation, samples obtained with an ILP theory contain more near-optimal samples than samples without a theory. Taken together, these results suggest that the use of ILP-constructed theories could be a useful technique for incorporating complex domain knowledge into estimation of distribution procedures. (A minimal sketch of this sampling loop follows below.)
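
Returning to the nonmonotonic KG paper above: the effect of adding an exception (a negated atom) to a Horn rule can be pictured as filtering the rule's support set before computing its confidence. A toy sketch over a hypothetical triple set (not the authors' system):

    # A toy KG as (subject, relation, object) triples; all data hypothetical.
    KG = {
        ("alice", "livesIn", "paris"), ("alice", "bornIn", "paris"),
        ("bob", "livesIn", "rome"), ("bob", "bornIn", "rome"),
        ("carl", "livesIn", "lima"), ("carl", "moved", "yes"),
    }
    ENTITIES = {s for (s, _, _) in KG}

    def confidence(body, head, exception=None):
        """Confidence of `body => head` over entities; if an exception
        atom is supplied, entities satisfying it are removed from the
        support set first (the nonmonotonic part)."""
        support = [e for e in ENTITIES
                   if body(e) and not (exception and exception(e))]
        return sum(head(e) for e in support) / len(support) if support else 0.0

    lives = lambda e: any(s == e and r == "livesIn" for (s, r, o) in KG)
    born_where_lives = lambda e: any((e, "bornIn", o) in KG
                                     for (s, r, o) in KG
                                     if s == e and r == "livesIn")
    moved = lambda e: (e, "moved", "yes") in KG

    # The Horn rule livesIn(X,Y) => bornIn(X,Y) has confidence 2/3 here;
    # adding the exception 'not moved(X)' repairs it on this toy data.
    print(confidence(lives, born_where_lives))         # 0.666...
    print(confidence(lives, born_where_lives, moved))  # 1.0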
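
As for the ILP-guided sampling paper, the loop it describes alternates between building a model of good solutions and generating new samples restricted to what that model accepts. The sketch below replaces the ILP engine with a trivial stand-in that "learns" which bits the elite solutions fix, and uses rejection sampling in place of logic-programming generation; it illustrates only the shape of the loop, not the authors' implementation:

    import random

    def fitness(x):                          # toy objective: count of 1s
        return sum(x)

    def learn_model(samples):
        """Stand-in for the ILP step: from the elite samples, 'learn'
        which bit positions every good solution sets. A real system
        would induce a clausal theory plus background knowledge."""
        elite = sorted(samples, key=fitness, reverse=True)[:5]
        n = len(elite[0])
        fixed = [i for i in range(n) if all(x[i] == 1 for x in elite)]
        return lambda x: all(x[i] == 1 for i in fixed)

    def sample_accepted(model, n_bits, n_samples):
        """Rejection-sample candidates the learned model accepts,
        mirroring generation of instances derivable from the theory."""
        out = []
        while len(out) < n_samples:
            x = [random.randint(0, 1) for _ in range(n_bits)]
            if model(x):
                out.append(x)
        return out

    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(12)] for _ in range(40)]
    for _ in range(5):                       # distribution-estimation rounds
        population = sample_accepted(learn_model(population), 12, 40)
    print(max(map(fitness, population)))     # climbs toward the optimum, 12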

In the next article, we will discuss ILP 2017.
