Reasoning Web 2016 Papers

Machine Learning Technology  Artificial Intelligence Technology  Natural Language Processing Technology  Semantic Web Technology  Ontology Technology  Digital Transformation Technology   AI Conference Papers   Knowledge Information Processing Technology   Collected AI Conference Papers   Reasoning Technology

In the previous article, we discussed Reasoning Web 2015. This time we describe the 12th Reasoning Web Summer School (RW2016) held in Aberdeen, UK, from September 5 to 9, 2016.

It covered knowledge graphs, linked data, semantics, fuzzy RDF, and logical foundations for building and querying OWL knowledge bases.

Details are given below.

This chapter presents some state-of-the-art techniques for understanding authors' intentions during the knowledge graph construction process. In addition, we provide the reader with an overview of the book, as well as a brief introduction to the history and the concept of the Knowledge Graph.

We will introduce the notions of explicit author intention and implicit author intention, discuss approaches for understanding each type of intention, and show how such understanding can be used in reasoning-based, test-driven knowledge graph construction and can help design guidelines for bulk editing, efficient reasoning and increased situational awareness. We will discuss extensively the implications of test-driven knowledge graph construction for ontology reasoning.
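To make the test-driven idea concrete, here is a minimal sketch, assuming a knowledge graph is just a set of triples and a test is any predicate over such a set; the names (KnowledgeGraph, no_undeclared_types, the ex: identifiers) are hypothetical illustrations, not taken from the chapter:

```python
# A minimal, hypothetical sketch of test-driven knowledge graph editing:
# a bulk edit is only committed if every registered test still passes.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()   # (subject, predicate, object) triples
        self.tests = []        # predicates over a candidate triple set

    def add_test(self, test):
        self.tests.append(test)

    def apply_edit(self, additions, removals=()):
        """Tentatively apply a bulk edit; reject it if any test fails."""
        candidate = (self.triples - set(removals)) | set(additions)
        failed = [t.__name__ for t in self.tests if not t(candidate)]
        if failed:
            raise ValueError(f"edit rejected by tests: {failed}")
        self.triples = candidate

# Example test: every rdf:type target must itself be declared as a class.
def no_undeclared_types(triples):
    classes = {s for (s, p, o) in triples
               if p == "rdf:type" and o == "owl:Class"}
    return all(o in classes or o == "owl:Class"
               for (s, p, o) in triples if p == "rdf:type")

kg = KnowledgeGraph()
kg.add_test(no_undeclared_types)
kg.apply_edit([("ex:Person", "rdf:type", "owl:Class"),
               ("ex:alice", "rdf:type", "ex:Person")])   # accepted
# kg.apply_edit([("ex:bob", "rdf:type", "ex:Robot")])    # would be rejected
```

The same pattern scales to bulk edits: the whole batch is validated against the test suite before any triple is committed, which is what makes reasoning over the tests pay off.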

The question whether an ontology can safely be replaced by another, possibly simpler, one is fundamental for many ontology engineering and maintenance tasks. It underpins, for example, ontology versioning, ontology modularization, forgetting, and knowledge exchange. What safe replacement means depends on the intended application of the ontology. If, for example, it is used to query data, then the answers to any relevant ontology-mediated query should be the same over any relevant data set; if, in contrast, the ontology is used for conceptual reasoning, then the entailed subsumptions between concept expressions should coincide. This gives rise to different notions of ontology inseparability such as query inseparability and concept inseparability, which generalize corresponding notions of conservative extensions. We survey results on various notions of inseparability in the context of description logic ontologies, discussing their applications, useful model-theoretic characterizations, algorithms for determining whether two ontologies are inseparable (and, sometimes, for computing the difference between them if they are not), and the computational complexity of this problem.
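To fix intuitions, the standard definition of Σ-concept inseparability can be stated as follows (O₁ and O₂ are ontologies and Σ a signature; this is the usual formulation in this literature, not a quote from the chapter):

```latex
% O_1 and O_2 are \Sigma-concept inseparable iff they entail exactly
% the same concept inclusions formulated over the signature \Sigma:
O_1 \equiv^{\mathsf{c}}_{\Sigma} O_2
  \quad\text{iff}\quad
  \forall\, C \sqsubseteq D \text{ over } \Sigma:\;
  \bigl(O_1 \models C \sqsubseteq D
  \;\Longleftrightarrow\;
  O_2 \models C \sqsubseteq D\bigr)
```

Query inseparability is defined analogously, with Σ-ABoxes and Σ-queries in place of concept inclusions: the certain answers over the two ontologies must coincide for every relevant data set. Conservative extensions arise as the special case where O₁ ⊆ O₂ and Σ is the signature of O₁.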

One of the key differences between graph and relational databases is that on graphs we are much more interested in navigational queries. As a consequence, graph database systems are specifically engineered to answer these queries efficiently, and there is a wide body of work on query languages that can express complex navigational patterns. The most common way to add navigation to graph queries is to start with a basic pattern matching language and augment it with navigational primitives based on regular expressions. For example, the friend-of-a-friend relationship in a social network is expressed via the primitive (friend)+, which looks for paths of nodes connected via the friend relation. This expression can then be added to graph patterns, allowing us to retrieve, for example, all nodes A, B and C that have a common friend-of-a-friend. But, in order to alleviate some of the drawbacks of isolating navigation in a set of primitives, we have recently witnessed an effort to study languages which integrate navigation and pattern matching in an intrinsic way. A natural candidate is Datalog, a well-known declarative query language that extends first-order logic with recursion, and in which pattern matching and recursion can be arbitrarily nested to provide much more expressive navigational queries. In this paper we review the most common navigational primitives for graphs and explain how these primitives can be embedded into Datalog. We then show current efforts to restrict Datalog so as to obtain a query language that is expressive enough to capture all of these primitives, but at the same time feasible to use in practice. We illustrate how this works both over the base graph model and over the more general RDF format underlying the Semantic Web.
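As a concrete taste of the Datalog embedding, the following minimal sketch (ours, not the authors' code) evaluates the (friend)+ primitive by naive bottom-up fixpoint iteration over a toy edge set, with the corresponding Datalog rules shown as comments:

```python
# Naive bottom-up evaluation of the Datalog program
#   reach(X, Y) :- friend(X, Y).
#   reach(X, Y) :- friend(X, Z), reach(Z, Y).
# which expresses the navigational primitive (friend)+.

friend = {("a", "b"), ("b", "c"), ("c", "d"), ("e", "c")}

def transitive_closure(edges):
    reach = set(edges)
    while True:
        new = {(x, y) for (x, z) in edges for (w, y) in reach if z == w}
        if new <= reach:           # fixpoint reached: nothing new derived
            return reach
        reach |= new

reach = transitive_closure(friend)

# Pairs (A, B) with a common friend-of-a-friend, as a non-recursive rule
# on top of the recursive predicate:
#   common(A, B) :- reach(A, X), reach(B, X), A != B.
common = {(a, b) for (a, x) in reach for (b, y) in reach
          if x == y and a != b}
print(sorted(common))
```

The example retrieves pairs rather than triples for brevity; the three-node version from the abstract is the same rule with one more reach atom in the body.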

With tens if not hundreds of billions of logical statements, the Linked Open Data (LOD) cloud is one of the biggest knowledge bases ever built. As such it is a gigantic source of information for applications in various domains, but, given its size, heterogeneous nature and complexity, it is also an ideal test-bed for knowledge representation and reasoning. However, making use of this unique resource has proven next to impossible in the past due to a number of problems, including data collection, quality, accessibility, scalability, availability and findability. The LOD Laundromat and LOD Lab are recent infrastructures that address these problems in a systematic way, by automatically crawling, cleaning, indexing, analysing and republishing data in a unified way. Through a family of simple tools, LOD Lab allows researchers to query, access, analyse and manipulate hundreds of thousands of data documents seamlessly, e.g. facilitating experiments (for instance on reasoning) over hundreds of thousands of (possibly integrated) datasets based on content and meta-data. This chapter provides the theoretical basis and practical skills required for making ideal use of this large-scale experimental platform. First we study the problems that make it so hard to work with Semantic Web data in its current form. We also propose generic solutions and introduce the tools the reader needs to get started with their own experiments on the LOD Cloud.
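To give a feel for working with cleaned data at this scale, here is a small sketch, assuming you have already downloaded one of the cleaned, gzipped N-Triples documents of the kind the LOD Laundromat republishes; the file name is a placeholder, and the script simply streams the file and tallies predicate usage:

```python
# A minimal sketch: stream a cleaned, gzipped N-Triples document and
# count how often each predicate occurs. The file name is a placeholder.
import gzip
from collections import Counter

def predicate_counts(path):
    counts = Counter()
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            parts = line.split(None, 2)   # subject, predicate, rest
            if len(parts) == 3:
                counts[parts[1]] += 1
    return counts

# counts = predicate_counts("some_cleaned_document.nt.gz")
# print(counts.most_common(10))
```

Because the cleaned documents follow one uniform serialization, this kind of line-by-line processing works identically across all of them, which is exactly what makes large-scale experiments over many datasets practical.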

An important issue that arises when querying description logic (DL) knowledge bases is how to handle the case in which the knowledge base is inconsistent. Indeed, while it may be reasonable to assume that the TBox (ontology) has been properly debugged, the ABox (data) will typically be very large and subject to frequent modifications, both of which make errors likely. As standard DL semantics is useless in such circumstances (everything is entailed from a contradiction), several alternative inconsistency-tolerant semantics have been proposed with the aim of providing meaningful answers to queries in the presence of such data inconsistencies. In the first part of this chapter, we present and compare these inconsistency-tolerant semantics, which can be applied to any DL (or ontology language). The second half of the chapter summarizes what is known about the computational properties of these semantics and gives an overview of the main algorithmic techniques and existing systems, focusing on DLs of the DL-Lite family.
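The flavor of these semantics can be shown on a toy example. The sketch below (ours, reduced to ground facts and disjointness constraints rather than a full DL-Lite reasoner) computes the repairs of an inconsistent ABox, i.e. its maximal consistent subsets, and answers ground queries under the brave, AR and IAR semantics:

```python
# Toy illustration of inconsistency-tolerant query answering.
# ABox facts are (individual, class) pairs; the "TBox" is a list of
# disjointness constraints: no individual may belong to both classes.
from itertools import combinations

abox = {("alice", "Student"), ("alice", "Professor"), ("bob", "Student")}
disjoint = [("Student", "Professor")]   # Student and Professor are disjoint

def consistent(facts):
    return all(not ((x, c1) in facts and (x, c2) in facts)
               for (c1, c2) in disjoint
               for x in {i for (i, _) in facts})

def repairs(facts):
    """Maximal consistent subsets of the ABox (brute force, for a toy)."""
    subsets = [set(s) for n in range(len(facts), -1, -1)
               for s in combinations(facts, n) if consistent(set(s))]
    return [s for s in subsets if not any(s < t for t in subsets)]

reps = repairs(abox)   # two repairs: drop one of alice's conflicting facts
for query in [("bob", "Student"), ("alice", "Student")]:
    print(query,
          "brave:", any(query in r for r in reps),    # holds in some repair
          "AR:", all(query in r for r in reps),       # holds in every repair
          "IAR:", query in set.intersection(*reps))   # holds in the intersection
```

Running this shows the expected separation: bob's membership survives under all three semantics, while alice's holds only bravely, since each repair keeps a different one of her two conflicting facts.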

The aim of this talk is to present a detailed, self-contained and comprehensive account of the state of the art in representing and reasoning with fuzzy knowledge in Semantic Web languages such as the triple languages RDF/RDFS, the conceptual languages of the OWL 2 family and rule languages. We further show how one may generalise them to so-called annotation domains, which also cover, e.g., temporal and provenance extensions.
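A small sketch of the core idea, using hypothetical triples (not the talk's examples) annotated with membership degrees in [0, 1] and a t-norm for conjunction, here the Gödel t-norm (min), one of the standard choices in fuzzy semantics:

```python
# Fuzzy RDF in miniature: each triple carries a degree in [0, 1], and a
# conjunction of triples is scored with a binary t-norm (default: min).
from functools import reduce

fuzzy_triples = {
    ("rome", "rdf:type", "BigCity"): 0.8,
    ("rome", "locatedNear", "sea"):  0.6,
}

def degree(triple):
    return fuzzy_triples.get(triple, 0.0)   # unknown triples have degree 0

def conjunction(triples, t_norm=min):
    """Degree of a conjunctive pattern under the chosen t-norm."""
    return reduce(t_norm, (degree(t) for t in triples), 1.0)

pattern = [("rome", "rdf:type", "BigCity"), ("rome", "locatedNear", "sea")]
print(conjunction(pattern))                           # 0.6  (Goedel: min)
print(conjunction(pattern, t_norm=lambda a, b: a * b))  # 0.48 (product t-norm)
```

Annotation domains generalize exactly this recipe: the interval [0, 1], the t-norm and the neutral element 1.0 are swapped for another domain with its own combination operators, e.g. temporal intervals or provenance sets.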

Knowledge discovery, as an area focusing on methodologies for extracting knowledge through deduction (a priori) or from data (a posteriori), has been widely studied in databases and artificial intelligence. Deductive reasoning, such as logical inference, derives knowledge from pre-established (certain) knowledge statements, while inductive inference, such as data mining or learning, discovers knowledge by generalising from initial information. While deductive reasoning and inductive learning conceptually address knowledge discovery from different perspectives, they are inference techniques that complement each other nicely in real-world applications. In this chapter we will present how techniques from machine learning and reasoning can be reconciled and integrated to address large-scale problems in the context of (i) transportation in the cities of Bologna, Dublin, Miami and Rio, and (ii) spend optimisation in finance.

In the next article, we will discuss Reasoning Web 2017.
