Reasoning Web 2019 Papers

In the previous article, we discussed Reasoning Web 2018. In this article, we turn to the 15th Reasoning Web Summer School, held in Bolzano, Italy, in September 2019.

This time the focus is on Explainable AI. The lectures cover the main reasoning and explanation methods for ontologies written in description logics, namely tableau procedures and axiom pinpointing algorithms; query answering over semantically rich knowledge bases; data provenance; entity-centric knowledge bases and their applications; Formal Concept Analysis, an approach to explaining data by means of lattice theory; learning interpretable models from data; constraint learning in logical problems such as propositional satisfiability, discrete problems such as constraint satisfaction, and full-fledged mathematical optimization tasks; the modelling and analysis of distributed computing systems; and explainable AI planning.

Detailed topics are discussed below.

Description Logics (DLs) are a family of languages designed to represent conceptual knowledge in a formal way as a set of ontological axioms. DLs provide a formal foundation of the ontology language OWL, which is a W3C standardized language to represent information in Web applications. The main computational problem in DLs is finding relevant consequences of the information stored in ontologies, e.g., to answer user queries. Unlike related techniques based on keyword search or machine learning, the notion of a consequence is well-defined using a formal logic-based semantics. This course provides an in-depth description and analysis of the main reasoning and explanation methods for ontologies: tableau procedures and axiom pinpointing algorithms.
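
As a rough illustration of the tableau idea, the following sketch checks satisfiability of an ALC concept (with no TBox) by applying the standard ⊓, ⊔, ∃ and ∀ rules until a clash is found or no rule applies. The concept encoding and the examples are illustrative assumptions, not the optimized procedures or the axiom pinpointing algorithms discussed in the course.

```python
# A minimal, illustrative tableau procedure for ALC *concept* satisfiability
# (no TBox), following the standard textbook rules. Concepts are tuples in
# negation normal form:
#   ("atom", "A"), ("not", "A"), ("and", C, D), ("or", C, D),
#   ("exists", "r", C), ("forall", "r", C)

def satisfiable(concepts):
    """Return True if the conjunction of `concepts` is satisfiable."""
    concepts = set(concepts)

    # clash check: A and not A in the same node label
    atoms = {c[1] for c in concepts if c[0] == "atom"}
    negs = {c[1] for c in concepts if c[0] == "not"}
    if atoms & negs:
        return False

    # ⊓-rule: replace (and C D) by C and D
    for c in concepts:
        if c[0] == "and":
            return satisfiable((concepts - {c}) | {c[1], c[2]})

    # ⊔-rule: branch on (or C D)
    for c in concepts:
        if c[0] == "or":
            rest = concepts - {c}
            return satisfiable(rest | {c[1]}) or satisfiable(rest | {c[2]})

    # ∃-rule: for each (exists r C), build an r-successor labelled with C
    # plus every D such that (forall r D) is in the current label
    for c in concepts:
        if c[0] == "exists":
            succ = {c[2]} | {d[2] for d in concepts
                             if d[0] == "forall" and d[1] == c[1]}
            if not satisfiable(succ):
                return False
    return True


# Example: ∃r.A ⊓ ∀r.¬A is unsatisfiable, ∃r.A ⊓ ∀r.B is satisfiable
print(satisfiable([("exists", "r", ("atom", "A")),
                   ("forall", "r", ("not", "A"))]))   # False
print(satisfiable([("exists", "r", ("atom", "A")),
                   ("forall", "r", ("atom", "B"))]))  # True
```

Axiom pinpointing, the second theme of the course, then asks which axioms of an ontology are responsible for a given consequence, building on procedures of this kind.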

Many tasks often regarded as requiring some form of intelligence to perform can be seen as instances of query answering over a semantically rich knowledge base. In this context, two of the main problems that arise are: (i) uncertainty, including both inherent uncertainty (such as events involving the weather) and uncertainty arising from lack of sufficient knowledge; and (ii) inconsistency, which involves dealing with conflicting knowledge. These unavoidable characteristics of real world knowledge often yield complex models of reasoning; assuming these models are mostly used by humans as decision-support systems, meaningful explainability of their results is a critical feature. These lecture notes are divided into two parts, one for each of these basic issues. In Part 1, we present basic probabilistic graphical models and discuss how they can be incorporated into powerful ontological languages; in Part 2, we discuss both classical inconsistency-tolerant semantics for ontological query answering based on the concept of repair and other semantics that aim towards more flexible yet principled ways to handle inconsistency. Finally, in both parts we ponder the issue of deriving different kinds of explanations that can be attached to query results.
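
To make the repair-based part concrete, here is a brute-force sketch that computes all repairs (subset-maximal consistent sets of facts) of a tiny knowledge base and checks which facts survive under an AR-style "true in every repair" reading. The facts, the binary conflicts, and the predicate names are illustrative assumptions, not the ontological languages of the lecture notes.

```python
from itertools import chain, combinations

# Brute-force sketch of repair-based, inconsistency-tolerant query answering
# over a tiny propositional knowledge base. Facts are plain strings and the
# "ontology" is reduced to binary conflicts; both are illustrative assumptions.

facts = {"penguin(tweety)", "flies(tweety)", "bird(tweety)"}
conflicts = [{"penguin(tweety)", "flies(tweety)"}]  # cannot hold together


def consistent(subset):
    return not any(c <= subset for c in conflicts)


def repairs(facts):
    """All subset-maximal consistent subsets of `facts`."""
    subsets = chain.from_iterable(combinations(facts, k)
                                  for k in range(len(facts), -1, -1))
    found = []
    for s in map(set, subsets):          # from largest to smallest
        if consistent(s) and not any(s < r for r in found):
            found.append(s)
    return found


reps = repairs(facts)
print(reps)
# AR semantics: entailed if true in every repair
print(all("bird(tweety)" in r for r in reps))        # True  (AR-entailed)
print(any("flies(tweety)" not in r for r in reps))   # True  (not AR-entailed)
```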

Data provenance is extra information computed during query evaluation over databases, which provides additional context about query results. Several formal frameworks for data provenance have been proposed, in particular based on provenance semirings. The provenance of a query can be computed in these frameworks for a variety of query languages. Provenance has applications in various settings, such as probabilistic databases, view maintenance, or explanation of query results. Though the theory of provenance semirings has mostly been developed in the setting of relational databases, it can also apply to other data representations, such as XML, graph, and triple-store databases.
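
The following sketch shows the semiring intuition on a single conjunctive query: each input tuple carries an abstract token, a join multiplies annotations, and projection sums them, yielding provenance-polynomial-style expressions. The relations and tokens are illustrative assumptions.

```python
# A small sketch of semiring provenance in the spirit of provenance
# polynomials. R(x, y) and S(y, z) are annotated with abstract tokens;
# the query Q(x) :- R(x, y), S(y, z) combines them.

R = {("alice", "db"): "r1", ("bob", "ai"): "r2"}
S = {("db", "2019"): "s1", ("db", "2020"): "s2"}


def query(R, S):
    """Q(x) :- R(x, y), S(y, z): joins multiply annotations, projection sums."""
    out = {}
    for (x, y), p in R.items():
        for (y2, z), q in S.items():
            if y == y2:
                monomial = f"{p}*{q}"                       # semiring product
                out[x] = out[x] + " + " + monomial if x in out else monomial
    return out


print(query(R, S))   # {'alice': 'r1*s1 + r1*s2'}
```

Interpreting the same expression in other semirings (e.g. counting, probabilities, or Boolean trust values) is what makes the framework reusable across the applications mentioned above.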

Entity-centric knowledge bases are large collections of facts about entities of public interest, such as countries, politicians, or movies. They find applications in search engines, chatbots, and semantic data mining systems. In this paper, we first discuss the knowledge representation that has emerged as a pragmatic consensus in the research community of entity-centric knowledge bases. Then, we describe how these knowledge bases can be mined for logical rules. Finally, we discuss how entities can be represented alternatively as vectors in a vector space, with the help of neural networks.
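
As a minimal example of the rule-mining step, the sketch below scores a single candidate rule over a toy triple store using support and standard confidence, in the spirit of AMIE-style measures. The triples and the rule are illustrative assumptions.

```python
# Score the candidate rule  bornIn(x, y) => livesIn(x, y)  over a toy
# knowledge base of (subject, predicate, object) triples.

triples = {
    ("ada", "bornIn", "london"),
    ("ada", "livesIn", "london"),
    ("alan", "bornIn", "london"),
    ("alan", "livesIn", "manchester"),
    ("grace", "bornIn", "newyork"),
    ("grace", "livesIn", "newyork"),
}


def rule_quality(triples, body_pred, head_pred):
    """Support = body matches that also satisfy the head; confidence = support / body matches."""
    body = {(s, o) for s, p, o in triples if p == body_pred}
    head = {(s, o) for s, p, o in triples if p == head_pred}
    support = len(body & head)
    confidence = support / len(body) if body else 0.0
    return support, confidence


support, conf = rule_quality(triples, "bornIn", "livesIn")
print(f"support={support}, confidence={conf:.2f}")   # support=2, confidence=0.67
```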

We give a brief introduction into Formal Concept Analysis, an approach to explaining data by means of lattice theory.
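
A minimal sketch of the core construction: given a formal context (objects, attributes, incidence relation), enumerate all formal concepts by applying the two derivation operators to every subset of objects. The toy context is an illustrative assumption.

```python
from itertools import chain, combinations

# Enumerate the formal concepts (extent, intent) of a small formal context.

objects = {"duck", "dog", "carp"}
attributes = {"has_legs", "can_swim", "has_feathers"}
incidence = {("duck", "has_legs"), ("duck", "can_swim"), ("duck", "has_feathers"),
             ("dog", "has_legs"),
             ("carp", "can_swim")}


def common_attributes(objs):
    return {a for a in attributes if all((o, a) in incidence for o in objs)}


def common_objects(attrs):
    return {o for o in objects if all((o, a) in incidence for a in attrs)}


def concepts():
    found = set()
    for objs in chain.from_iterable(combinations(objects, k)
                                    for k in range(len(objects) + 1)):
        intent = common_attributes(set(objs))
        extent = common_objects(intent)          # closure of the object set
        found.add((frozenset(extent), frozenset(intent)))
    return found


for extent, intent in sorted(concepts(), key=lambda c: len(c[0])):
    print(set(extent), "|", set(intent) or "{}")
```

Ordering these (extent, intent) pairs by inclusion of their extents yields the concept lattice that FCA uses to explain the data.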

Learning interpretable models from data is stated as one of the main challenges of AI. The goal of logic-based learning is to compute interpretable (logic) programs that explain labelled examples in the context of given background knowledge. This tutorial introduces recent advances of logic-based learning, specifically learning non-monotonic logic programs under the answer set semantics. We introduce several learning frameworks and algorithms, which allow for learning highly expressive programs, containing rules representing non-determinism, choice, exceptions, constraints and preferences. Throughout the tutorial, we put a strong emphasis on the expressive power of the learning systems and frameworks, explaining why some systems are incapable of learning particular classes of programs.
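
The systems covered in the tutorial learn expressive answer set programs; as a drastically simplified stand-in, the sketch below brute-forces a single propositional rule whose body covers all positive examples and none of the negatives. The atom pool and the examples are illustrative assumptions.

```python
from itertools import combinations

# Brute-force search for a rule "flies :- body" whose body (a conjunction of
# atoms from a fixed pool) is consistent with labelled examples. Real
# logic-based learning systems handle far richer programs (non-determinism,
# choice, constraints, preferences); this is only a toy illustration.

pool = {"has_wings", "lays_eggs", "has_fur"}

# each example: (atoms true for the instance, is it a positive example?)
examples = [({"has_wings", "lays_eggs"}, True),
            ({"has_wings", "lays_eggs", "tiny"}, True),
            ({"has_fur", "lays_eggs"}, False),
            ({"has_fur"}, False)]


def covers(body, atoms):
    return body <= atoms


def learn(pool, examples):
    """Return the shortest body consistent with all examples, if any."""
    for k in range(len(pool) + 1):
        for body in map(set, combinations(pool, k)):
            if all(covers(body, atoms) == label for atoms, label in examples):
                return body
    return None


print("learned rule: flies :-", learn(pool, examples))
# learned rule: flies :- {'has_wings'}
```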

Constraints are ubiquitous in artificial intelligence and operations research. They appear in logical problems like propositional satisfiability, in discrete problems like constraint satisfaction, and in full-fledged mathematical optimization tasks. Constraint learning enters the picture when the structure or the parameters of the constraint satisfaction/optimization problem to be solved are (partially) unknown and must be inferred from data. The required supervision may come from offline sources or gathered by interacting with human domain experts and decision makers. With these lecture notes, we offer a brief but self-contained introduction to the core concepts of constraint learning, while sampling from the diverse spectrum of constraint learning methods, covering classic strategies and more recent advances. We will also discuss links to other areas of AI and machine learning, including concept learning, learning from queries, structured-output prediction, (statistical) relational learning, preference elicitation, and inverse optimization.
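
A small sketch of passive constraint learning in this spirit: start from a bias of candidate binary constraints over the problem variables and keep those satisfied by every observed feasible assignment, in a simple version-space style. The variables, candidate relations, and examples are illustrative assumptions rather than any specific system from the lecture notes.

```python
from itertools import combinations
import operator

# Keep every candidate binary constraint that all feasible examples satisfy.

variables = ["x", "y", "z"]
relations = {"<": operator.lt, "==": operator.eq, "!=": operator.ne}

# feasible assignments observed for the (unknown) target problem
feasible = [{"x": 1, "y": 2, "z": 2},
            {"x": 0, "y": 3, "z": 3},
            {"x": 2, "y": 5, "z": 5}]


def learn_constraints(variables, feasible):
    candidates = [(a, name, b)
                  for a, b in combinations(variables, 2)
                  for name in relations]
    return [(a, name, b) for (a, name, b) in candidates
            if all(relations[name](ex[a], ex[b]) for ex in feasible)]


for a, name, b in learn_constraints(variables, feasible):
    print(f"{a} {name} {b}")
# x < y, x != y, x < z, x != z, y == z
```

Active variants would instead query a domain expert with carefully chosen assignments to prune the candidate set faster.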

Distributed computing systems provide many important services. To explain and understand why and how well they work, it is common practice to build, maintain, and analyse models of the systems’ behaviours. Markov models are frequently used to study operational phenomena of such systems. They are often represented with discrete state spaces, and come in various flavours, overarched by Markov automata. As such, Markov automata provide the ingredients that enable the study of a wide range of quantitative properties related to risk, cost, performance, and strategy. This tutorial paper gives an introduction to the formalism of Markov automata, to practical modelling of Markov automata in the MODEST language, and to their analysis with the MODEST TOOLSET. As case studies, we optimise an attack on Bitcoin, and evaluate the performance of a small but complex resource-sharing computing system.
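
Markov automata combine probabilistic branching, nondeterminism, and exponentially distributed delays; as a much simpler taste of the quantitative questions such tools answer, the sketch below computes a reachability probability in a plain discrete-time Markov chain by value iteration. The chain is an illustrative assumption and is not modelled in MODEST.

```python
# Probability of eventually reaching the "done" state in a small
# discrete-time Markov chain, computed by value iteration.

# transition probabilities: state -> list of (successor, probability)
chain = {
    "start":    [("working", 0.9), ("degraded", 0.1)],
    "working":  [("done", 0.8), ("degraded", 0.2)],
    "degraded": [("working", 0.5), ("failed", 0.5)],
    "done":     [("done", 1.0)],
    "failed":   [("failed", 1.0)],
}
target = "done"


def reachability(chain, target, iterations=1000):
    prob = {s: 1.0 if s == target else 0.0 for s in chain}
    for _ in range(iterations):
        prob = {s: 1.0 if s == target else
                sum(p * prob[t] for t, p in chain[s])
                for s in chain}
    return prob


print(round(reachability(chain, target)["start"], 3))   # ≈ 0.844
```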

Model-based approaches to AI are well suited to explainability in principle, given the explicit nature of their world knowledge and of the reasoning performed to take decisions. AI Planning in particular is relevant in this context as a generic approach to action-decision problems. Indeed, explainable AI Planning (XAIP) has received interest for more than a decade, and has recently been gaining momentum along with the general trend towards explainable AI. In the lecture, we provide an overview, categorizing and illustrating the different kinds of explanation relevant in AI Planning; and we outline recent works on one particular kind of XAIP, contrastive explanation. This extended abstract gives a brief summary of the lecture, with some literature pointers. We emphasize that completeness is neither claimed nor intended; the abstract may serve as a brief primer with literature entry points.
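
A toy sketch of the contrastive idea: to answer "why this route rather than that one?", compare the cost of the optimal plan with the cost of the best plan once the questioned action is disallowed. The road-map style task below is an illustrative assumption, not an XAIP system from the literature.

```python
import heapq

# Optimal plan vs. best plan with the questioned action forbidden; the cost
# difference is the core of a simple contrastive explanation.

# state graph: state -> {action_label: (successor, cost)}
graph = {
    "depot": {"via_highway": ("junctionA", 2), "via_backroad": ("junctionB", 1)},
    "junctionA": {"drive_on": ("customer", 2)},
    "junctionB": {"drive_on": ("customer", 5)},
    "customer": {},
}


def best_plan(graph, start, goal, forbidden=frozenset()):
    """Dijkstra search returning (cost, action sequence), skipping forbidden actions."""
    queue = [(0, start, [])]
    seen = set()
    while queue:
        cost, state, plan = heapq.heappop(queue)
        if state == goal:
            return cost, plan
        if state in seen:
            continue
        seen.add(state)
        for action, (succ, c) in graph[state].items():
            if action not in forbidden:
                heapq.heappush(queue, (cost + c, succ, plan + [action]))
    return float("inf"), None


cost, plan = best_plan(graph, "depot", "customer")
alt_cost, alt_plan = best_plan(graph, "depot", "customer",
                               forbidden={"via_highway"})
print("plan:", plan, "cost:", cost)
print("forcing the alternative costs", alt_cost - cost, "more:", alt_plan)
```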

In the next article, we will discuss Reasoning Web 2020.
