Reasoning Web 2020 Papers


In the previous article, we discussed Reasoning Web 2019. This time we will discuss the 16th Reasoning Web, which was held virtually in June 2020 due to the COVID-19 pandemic.

The main theme was “Declarative Artificial Intelligence”. Specifically, the lectures give an overview of high-level research directions and open problems relating to lightweight description logic (DL) ontologies, explainable AI (XAI), stream reasoning, answer set programming (ASP), Limit Datalog (a recent declarative query language for data analysis), and knowledge graphs.

Details are given below.

Introduction to Probabilistic Ontologies

Ontologies are a popular way of representing domain knowledge, in particular, knowledge in domains related to life sciences. (Semi-)automating the process of building an ontology has attracted researchers from different communities into a field called “Ontology Learning”. We provide a formal specification of the exact and the probably approximately correct learning models from computational learning theory. Then, we recall from the literature complexity results for learning lightweight description logic (DL) ontologies in these models. Finally, we highlight other approaches proposed in the literature for learning DL ontologies.
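As a rough illustration of the PAC (probably approximately correct) model mentioned above, for a finite hypothesis class and a consistent learner there is a classic sufficient sample-size bound of (1/ε)(ln|H| + ln(1/δ)). The sketch below is generic computational learning theory, not the specific complexity results for DL ontologies from the lecture:

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Samples sufficient for a consistent learner over a finite hypothesis
    class to be, with probability at least 1 - delta, approximately
    (error at most epsilon) correct."""
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# e.g. 1000 candidate axiom sets, 5% error tolerance, 95% confidence
print(pac_sample_bound(1000, 0.05, 0.05))
```

The bound grows only logarithmically in the number of hypotheses, which is why the size of the hypothesis space (here, the space of candidate ontologies) is the key parameter in the learnability results.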

As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad-hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations rise in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this tutorial we will overview recent approaches showing how these challenges can be addressed by utilising forms of machine arguing as the scaffolding underpinning explanations that are delivered to users. Machine arguing amounts to the deployment of methods from computational argumentation in AI with suitably mined argumentation frameworks, which provide abstractions of “debates”. Computational argumentation has been widely used to support applications requiring information exchange between AI systems and users, facilitated by the fact that the capability of arguing is pervasive in human affairs and arguing is core to a multitude of human activities: humans argue to explain, interact and exchange information.
Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in different ways, namely: by building explainable systems with argumentative foundations from linguistic data (focusing on reviews), or by extracting argumentative reasoning from existing systems (focusing on a recommender system).
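To make the idea of computational argumentation concrete, the standard abstraction is Dung's abstract argumentation framework: a set of arguments and an attack relation between them. A minimal sketch (not tied to any system from the lecture) computing the grounded extension, the most skeptical set of collectively acceptable arguments:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework: the least fixed point of the characteristic function
    F(S) = {a | every attacker of a is attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        # an argument is defended if each of its attackers is itself
        # attacked by some already-accepted argument
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted) for b in attackers[a])
        }
        if defended == accepted:
            return accepted
        accepted = defended

# a attacks b, b attacks c: a is unattacked, and a defends c against b
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Explanations can then be read off such frameworks as "debates": an argument is acceptable because all counter-arguments against it are themselves defeated.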

Stream Reasoning is set at the confluence of Artificial Intelligence and Stream Processing with the ambitious goal to reason on rapidly changing flows of information. The goals of the lecture are threefold: (1) Introducing students to the state-of-the-art of Stream Reasoning, (2) Deep diving into RDF Stream Processing by outlining how to design, develop and deploy a stream reasoning application, and (3) Jointly discussing the limits of the state-of-the-art and the current challenges.
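The core operator of RDF Stream Processing is a time-based window that turns an unbounded stream of timestamped triples into a finite, queryable set. A minimal sketch of that idea (class and method names are illustrative, not any specific engine's API):

```python
from collections import deque

class SlidingWindow:
    """Time-based sliding window over a stream of timestamped
    RDF-style triples."""
    def __init__(self, width):
        self.width = width     # window width in time units
        self.window = deque()  # (timestamp, triple) pairs, in arrival order

    def push(self, timestamp, triple):
        self.window.append((timestamp, triple))
        # evict triples that fell out of (timestamp - width, timestamp]
        while self.window and self.window[0][0] <= timestamp - self.width:
            self.window.popleft()

    def match(self, pattern):
        """Match a triple pattern over the window; None acts as a variable."""
        return [t for (_, t) in self.window
                if all(p is None or p == v for p, v in zip(pattern, t))]

w = SlidingWindow(width=10)
w.push(1, ("room1", "hasTemp", 21))
w.push(5, ("room2", "hasTemp", 23))
w.push(14, ("room1", "hasTemp", 22))  # evicts the reading at t=1
print(w.match(("room1", "hasTemp", None)))
```

Real engines layer continuous SPARQL-like queries and ontological reasoning on top of windows like this one; the lecture walks through designing and deploying such an application end to end.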

Aiming at ontology-based data access over temporal, in particular streaming data, we design a language of ontology-mediated queries by extending OWL 2 QL and SPARQL with temporal operators, and investigate rewritability of these queries into two-sorted first-order logic with < and PLUS over time.
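As a simplified illustration of the rewriting idea (a standard translation from temporal-logic semantics, not the paper's exact construction), a query asking whether A held at some earlier time point can be expressed in first-order logic with < over the temporal sort:

```latex
% "A held at some point strictly before t" (past diamond)
\Diamond^{-} A(x) \;\text{at time } t
  \quad\rightsquigarrow\quad
  \exists t' \,\bigl( t' < t \,\wedge\, A(x, t') \bigr)
```

Rewritability results of this kind matter in practice because the resulting first-order queries can be evaluated directly by standard (temporal) database engines.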

Answer Set Programming (ASP) is logic programming under the stable model or answer set semantics. During the last decade, this paradigm has seen several extensions by generalizing the notion of atom used in these programs. Among these, there are aggregate atoms, HEX atoms, generalized quantifiers, and abstract constraints. In this paper we refer to these constructs collectively as generalized atoms. The idea common to all of these constructs is that their satisfaction depends on the truth values of a set of (non-generalized) atoms, rather than the truth value of a single (non-generalized) atom. Motivated by several examples, we argue that for some of the more intricate generalized atoms, the previously suggested semantics provide unintuitive results and provide an alternative semantics, which we call supportedly stable or SFLP answer sets. We show that it is equivalent to the major previously proposed semantics for programs with convex generalized atoms, and that it in general admits more intended models than other semantics in the presence of non-convex generalized atoms. We show that the complexity of supportedly stable models is on the second level of the polynomial hierarchy, similar to previous proposals and to stable models of disjunctive logic programs. Given these complexity results, we provide a compilation method that compactly transforms programs with generalized atoms in disjunctive normal form to programs without generalized atoms. Variants are given for the new supportedly stable and the existing FLP semantics, for which a similar compilation technique has not been known so far.
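The stable-model semantics underlying all of the variants above can be stated in a few lines: a candidate interpretation is stable if it equals the least model of its Gelfond-Lifschitz reduct. A brute-force sketch for plain normal programs (without generalized atoms), purely to illustrate the definition:

```python
from itertools import chain, combinations

def stable_models(atoms, rules):
    """Enumerate stable models of a normal logic program by brute force.
    Each rule is (head, positive_body, negative_body)."""
    def minimal_model(positive_rules):
        # least model of a definite program via fixpoint iteration
        model, changed = set(), True
        while changed:
            changed = False
            for head, pos in positive_rules:
                if pos <= model and head not in model:
                    model.add(head)
                    changed = True
        return model

    for bits in chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1)):
        candidate = set(bits)
        # Gelfond-Lifschitz reduct: drop rules whose negative body
        # intersects the candidate, then delete the negative literals
        reduct = [(h, set(p)) for (h, p, n) in rules if not (set(n) & candidate)]
        if minimal_model(reduct) == candidate:
            yield candidate

# program: a :- not b.   b :- not a.
rules = [("a", [], ["b"]), ("b", [], ["a"])]
print(list(stable_models(["a", "b"], rules)))
```

The paper's question is precisely how to generalize the reduct-based check when rule bodies contain generalized atoms (aggregates, HEX atoms, etc.) whose truth depends on a whole set of ordinary atoms.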

Currently, data analysis tasks are often solved using code written in standard imperative programming languages such as Java and Scala. However, in recent years there has been a significant shift towards declarative solutions, where the definition of the task is clearly separated from its implementation, and users describe what the desired output is, rather than how to compute it. For example, instead of computing shortest paths in a graph by a concrete algorithm, one first describes what a path length is and then selects only paths of minimum length. Such a specification is independent of evaluation details, allowing analysts to focus on the task at hand rather than implementation details. In these notes we will give an overview of Limit Datalog, a recent declarative query language for data analysis. This language extends usual Datalog with integer arithmetic (and hence many forms of aggregation) to naturally capture data analytics tasks, but at the same time carefully restricts the interaction of recursion and arithmetic to preserve decidability of reasoning. We concentrate on the positive language, but also discuss several generalisations and fragments of positive Limit Datalog, which vary in complexity and expressivity.
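The shortest-path example above can be written in the declarative style that Limit Datalog captures: derive path lengths as facts from two rules (the source has length 0; an edge extends a length) and keep only the minimum length per node, iterating to a fixpoint. A Python sketch of that evaluation strategy, not Limit Datalog syntax itself:

```python
def shortest_paths(edges, source):
    """Declarative-style shortest path lengths: dist(source) = 0, and
    dist(y) <= dist(x) + w for each edge (x, y, w); only the least
    derived length per node is kept (a 'min' limit fact)."""
    dist = {source: 0}
    changed = True
    while changed:
        changed = False
        for x, y, w in edges:
            if x in dist and dist[x] + w < dist.get(y, float("inf")):
                dist[y] = dist[x] + w  # keep only the minimum derived length
                changed = True
    return dist

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5)]
print(shortest_paths(edges, "a"))
```

Keeping only the minimum per node is exactly the restriction that makes the rule set terminate; unrestricted recursion over integers would derive infinitely many length facts, which is the decidability issue Limit Datalog is designed around.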

In these lecture notes, we provide an overview of some of the high-level research directions and open questions relating to knowledge graphs. We discuss six high-level concepts relating to knowledge graphs: data models, queries, ontologies, rules, embeddings and graph neural networks. While traditionally these concepts have been explored by different communities in the context of graphs, more recent works have begun to look at how they relate to one another, and how they can be unified. In fact, at a more foundational level, we can find some surprising relations between the different concepts. The research questions we explore mostly involve combinations of these concepts.
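At the data-model and query level, the common denominator of these concepts is a graph of subject-predicate-object triples queried by pattern matching. A minimal sketch of matching a single triple pattern with variables, the building block of SPARQL-style graph queries (toy data, not from the lecture notes):

```python
def query(triples, pattern):
    """Match one triple pattern against a knowledge graph stored as
    subject-predicate-object triples; terms starting with '?' are
    variables, and each match is returned as a variable binding."""
    results = []
    for triple in triples:
        binding = {}
        for p, v in zip(pattern, triple):
            if p.startswith("?"):
                if binding.get(p, v) != v:  # repeated variable must agree
                    break
                binding[p] = v
            elif p != v:
                break
        else:
            results.append(binding)
    return results

kg = [
    ("Alice", "knows", "Bob"),
    ("Bob", "knows", "Carol"),
    ("Alice", "worksAt", "ACME"),
]
print(query(kg, ("Alice", "knows", "?x")))
```

Ontologies, rules, embeddings and graph neural networks can all be seen as different ways of going beyond what is explicitly matchable in such a triple set, by entailing, deriving, or predicting additional edges.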

In the next article, we will discuss Reasoning Web 2021.
