Reasoning Web 2011 Papers

Machine Learning Technology / Artificial Intelligence Technology / Natural Language Processing Technology / Semantic Web Technology / Ontology Technology / Digital Transformation Technology / AI Conference Papers / Knowledge Information Processing Technology / Collected AI Conference Papers / Reasoning Technology

In the previous article, we discussed Reasoning Web 2010. In this article, we describe the 7th Reasoning Web Summer School, held in Galway, Ireland, on August 23-27, 2011.

The Reasoning Web Summer School is an established event in the field of applying reasoning techniques on the Web, aimed both at scientific exchange among established researchers and at attracting young researchers to this field. After successful events in Malta (2005), Lisbon (2006), Dresden (2007 and 2010), Venice (2008), and Bressanone-Brixen (2009), the 2011 edition was hosted by the Digital Enterprise Research Institute (DERI) at the National University of Ireland Galway, in the west of Ireland. By co-locating this year's Summer School with the 5th International Conference on Web Reasoning and Rule Systems (RR2011), the organizers were able to further promote interaction among researchers, practitioners, and students.

The 2011 Summer School included 12 lectures, focusing on the application of reasoning to the "Web of Data". The chapters of the accompanying book provide educational material and references for each lecture, complemented by the lecture slides made available on the Summer School website. Beyond giving the summer school students an excellent set of overview articles written by the lecturers, the book also offers the general reader an entry point into a variety of topics related to reasoning over Web data.

The first four chapters provide the necessary background: the principles of the Resource Description Framework (RDF) and Linked Data (Chapter 1), the description logics underlying the Web Ontology Language (OWL) (Chapter 2), the use of the query language SPARQL with OWL (Chapter 3), and the database infrastructure required for efficient and scalable RDF processing (Chapter 4).

Building on these foundations, Chapter 5 presents an approach to scalable OWL reasoning over Linked Data, and the following two chapters introduce rule and logic programming techniques relevant to Web reasoning (Chapter 6) and the combination of rule-based reasoning with OWL (Chapter 7).

Chapter 8 examines models of the Web of Data in detail, and Chapter 9 discusses key issues in trust management on the Web. The last two chapters turn to non-standard reasoning methods for the Semantic Web: Chapter 10 discusses the application of inductive reasoning (machine learning) methods to Semantic Web data, and Chapter 11 describes an approach that combines logical and probabilistic reasoning. A further lecture on constraint programming and combinatorial optimization was also included.

The details are given below.

With Linked Data, a very pragmatic approach towards achieving the vision of the Semantic Web has recently gained much traction. The term Linked Data refers to a set of best practices for publishing and interlinking structured data on the Web. While many standards, methods, and technologies developed by the Semantic Web community are applicable to Linked Data, there are also a number of specific characteristics of Linked Data that have to be considered. In this article we introduce the main concepts of Linked Data. We present an overview of the Linked Data lifecycle and discuss individual approaches as well as the state of the art with regard to extraction, authoring, linking, enrichment, and evolution of Linked Data. We conclude the chapter with a discussion of issues, limitations, and further research and development challenges of Linked Data.
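As a minimal, hand-made illustration of the publishing and interlinking practice described above (the URIs and properties are invented and are not taken from the chapter), the following Python sketch uses rdflib to build a small RDF description and link it to an external Linked Data resource:

```python
# Minimal Linked Data sketch (illustrative example data, not from the chapter):
# publish structured data about a resource and interlink it with an
# external Linked Data URI via owl:sameAs.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import FOAF, OWL, RDF

EX = Namespace("http://example.org/people/")   # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("owl", OWL)

alice = EX["alice"]
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
# Interlinking: state that our local resource denotes the same thing
# as a resource published elsewhere on the Web of Data.
g.add((alice, OWL.sameAs, URIRef("http://dbpedia.org/resource/Alice_(given_name)")))

print(g.serialize(format="turtle"))
```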

This chapter accompanies the foundational lecture on Description Logics (DLs) at the 7th Reasoning Web Summer School in Galway, Ireland, 2011. It introduces basic notions and facts about this family of logics which has significantly gained in importance over the recent years as these logics constitute the formal basis for today’s most expressive ontology languages, the OWL (Web Ontology Language) family.

We start out from some general remarks and examples demonstrating the modeling capabilities of description logics as well as their relation to first-order predicate logic. Then we begin our formal treatment by introducing the syntax of DL knowledge bases which comes in three parts: RBox, TBox and ABox. Thereafter, we provide the corresponding standard model-theoretic semantics and give a glimpse of the alternative way of defining the semantics via an embedding into first-order logic with equality.

We continue with an overview of the naming conventions for DLs before we delve into considerations about different notions of semantic alikeness (concept and knowledge base equivalence as well as emulation). These are crucial for investigating the expressivity of DLs and performing normalization. We move on by reviewing knowledge representation capabilities brought about by different DL features and their combinations as well as some model-theoretic properties associated thereto.

Subsequently, we consider typical reasoning tasks occurring in the context of DL knowledge bases. We show how some of these tasks can be reduced to each other, and have a look at different algorithmic approaches to realize automated reasoning in DLs.

Finally, we establish connections between DLs and OWL. We show how DL knowledge bases can be expressed in OWL and, conversely, how OWL modeling features can be translated into DLs.

In our considerations, we focus on the description logic SROIQ which underlies the most recent and most expressive yet decidable version of OWL called OWL 2 DL. We concentrate on the logical aspects and omit data types as well as extralogical features from our treatise. Examples and exercises are provided throughout the chapter.
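To make the RBox/TBox/ABox split mentioned above concrete, here is a tiny hand-made knowledge base in standard DL syntax (the vocabulary is the usual textbook family example, not one taken from the chapter):

```latex
% RBox (role axioms): hasParent is a sub-role of hasAncestor, which is transitive
\mathit{hasParent} \sqsubseteq \mathit{hasAncestor}, \qquad
\mathit{hasAncestor} \circ \mathit{hasAncestor} \sqsubseteq \mathit{hasAncestor}

% TBox (terminological axioms)
\mathit{Mother} \equiv \mathit{Woman} \sqcap \exists \mathit{hasChild}.\top, \qquad
\mathit{Woman} \sqsubseteq \mathit{Person}

% ABox (assertional axioms)
\mathit{Woman}(\mathit{mary}), \qquad \mathit{hasChild}(\mathit{mary}, \mathit{john})
```

Under the standard model-theoretic semantics this knowledge base entails, for example, Mother(mary) and Person(mary).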

This chapter accompanies the lecture on SPARQL with entailment regimes at the 7th Reasoning Web Summer School in Galway, Ireland, 2011. SPARQL is a query language and protocol for data specified in the Resource Description Framework (RDF). The basic evaluation mechanism for SPARQL queries is based on subgraph matching. The query criteria are given in the form of RDF triples, possibly with variables in place of the subject, object, or predicate of a triple, called basic graph patterns. Each instantiation of the variables that yields a subgraph of the queried RDF graph constitutes a solution. The query language further contains capabilities for querying for optional basic graph patterns, alternative graph patterns, etc. We first introduce the main features of SPARQL as a query language. In order to define the semantics of a query, we show how a query can be translated to an abstract query, which can then be evaluated according to SPARQL's query evaluation mechanism. Apart from the features of SPARQL 1.0, we also briefly introduce the new features of SPARQL 1.1, which is currently being developed by the Data Access Working Group of the World Wide Web Consortium.
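As a rough sketch of basic graph pattern matching (the data and query are invented for illustration and are not from the lecture notes), the following Python snippet evaluates a SPARQL query with two triple patterns over a small in-memory graph using rdflib:

```python
# Basic graph pattern matching with SPARQL over a tiny in-memory RDF graph.
# Data and query are illustrative only (assumed vocabulary).
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

ex:alice foaf:knows ex:bob .
ex:bob   foaf:knows ex:carol .
ex:alice foaf:name  "Alice" .
""", format="turtle")

# One basic graph pattern with two triple patterns sharing the variable ?x.
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?friend WHERE {
  ?x foaf:name  ?name .
  ?x foaf:knows ?friend .
}
"""
for name, friend in g.query(q):
    print(name, friend)   # each row is one instantiation of the variables
```

Each result row is one instantiation of the variables that makes both triple patterns a subgraph of the queried graph.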

In the second part of these notes, we introduce SPARQL’s extension point for basic graph pattern matching. We illustrate how this extension point can be used to define a semantics for basic graph pattern evaluation based on more elaborate semantics such as RDF Schema (RDFS) entailment or OWL entailment. This allows for solutions to a query that implicitly follow from an RDF graph, but which are not necessarily explicitly present. We illustrate what constitutes an extension point and how problems that arise from using a semantic entailment relation can be addressed. We first introduce SPARQL in combination with the RDFS entailment relation and then move on to the more expressive Web Ontology Language OWL. We cover OWL’s Direct Semantics, which is based on Description Logics, and the RDF-Based Semantics, which is an extension of the RDFS semantics. For the RDF-Based Semantics we mainly focus on the OWL 2 RL profile, which allows for an efficient implementation using rule engines.
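The effect of an entailment regime can be sketched with a small example (the vocabulary is invented, and the sketch assumes the owlrl package, which materializes RDFS/OWL-RL closures for rdflib, rather than a query engine with native entailment support):

```python
# Illustration of query answers that only follow under RDFS entailment.
# Uses rdflib plus the owlrl package to materialize the RDFS closure first.
from rdflib import Graph
import owlrl

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Dog rdfs:subClassOf ex:Animal .
ex:rex a ex:Dog .
""", format="turtle")

q = "PREFIX ex: <http://example.org/> SELECT ?x WHERE { ?x a ex:Animal . }"

print(list(g.query(q)))     # [] under plain subgraph matching

# Materialize the RDFS-entailed triples, then the implicit answer appears.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
for (x,) in g.query(q):
    print(x)                # ex:rex, which is entailed but was never asserted
```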

We assume that readers have a basic knowledge of RDF and Turtle, which we use in examples. For the OWL parts, we assume some background in OWL or Description Logics (see the lecture notes Foundations of Description Logics). The examples for the OWL part are given in Turtle, OWL’s functional-style syntax, and Description Logics syntax. Although the inferences that are relevant for the example queries are explained, a basic idea of OWL’s modeling constructs and their semantics is certainly helpful.

As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples.
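One common design behind the index structures mentioned above is a set of permutation indexes over (subject, predicate, object); the self-contained sketch below is an illustrative assumption about such a layout, not one of the systems surveyed in the lecture:

```python
# Self-contained sketch of an in-memory triple store with three permutation
# indexes (SPO, POS, OSP), a common design for RDF stores; this is an
# illustrative assumption, not a system described in the chapter.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.spo = defaultdict(lambda: defaultdict(set))
        self.pos = defaultdict(lambda: defaultdict(set))
        self.osp = defaultdict(lambda: defaultdict(set))

    def add(self, s, p, o):
        self.spo[s][p].add(o)
        self.pos[p][o].add(s)
        self.osp[o][s].add(p)

    def match(self, s=None, p=None, o=None):
        """Yield triples matching a pattern; None acts as a wildcard."""
        if s is not None and p is not None:
            for oo in self.spo[s][p]:
                if o is None or o == oo:
                    yield (s, p, oo)
        elif p is not None and o is not None:
            for ss in self.pos[p][o]:
                yield (ss, p, o)
        elif o is not None and s is not None:
            for pp in self.osp[o][s]:
                yield (s, pp, o)
        else:
            # Fewer than two bound terms: scan one index with filters.
            for ss, po in self.spo.items():
                if s is not None and s != ss:
                    continue
                for pp, objs in po.items():
                    if p is not None and p != pp:
                        continue
                    for oo in objs:
                        if o is None or o == oo:
                            yield (ss, pp, oo)

store = TripleStore()
store.add("ex:alice", "foaf:knows", "ex:bob")
store.add("ex:alice", "foaf:name", '"Alice"')
print(list(store.match(s="ex:alice", p="foaf:knows")))
```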

The goal of the Scalable OWL 2 Reasoning for Linked Data lecture is twofold: first, to introduce scalable reasoning and querying techniques to Semantic Web researchers as powerful tools to make use of Linked Data and large-scale ontologies, and second, to present interesting research problems for the Semantic Web that arise in dealing with TBox and ABox reasoning in OWL 2. The lecture consists of three parts. The first part will begin with an introduction and motivation for reasoning over Linked Data, including a survey of the use of RDFS and OWL on the Web. The second part will present a scalable, distributed reasoning service for instance data, applying a custom subset of OWL 2 RL/RDF rules (based on a tractable fragment of OWL 2). The third part will present recent work on faithful approximate reasoning for OWL 2 DL. The lecture will include our implementation of the mentioned techniques as well as their evaluations. These notes provide complementary reference material for the lecture, and follow the three-part structure and content of the lecture.
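For intuition, the sketch below forward-chains two of the standard OWL 2 RL/RDF rules (cax-sco and prp-spo1) to a fixpoint over an in-memory triple set; it is a toy, centralized illustration, not the distributed reasoning service presented in the lecture:

```python
# Naive forward chaining of two OWL 2 RL/RDF rules (cax-sco, prp-spo1)
# over an in-memory set of triples. Purely illustrative; the lecture's
# reasoner is distributed and covers a larger rule subset.
RDF_TYPE = "rdf:type"
SUBCLASS = "rdfs:subClassOf"
SUBPROP = "rdfs:subPropertyOf"

def owl_rl_closure(triples):
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in triples:
            # cax-sco: ?c1 subClassOf ?c2, ?x type ?c1  =>  ?x type ?c2
            if p == RDF_TYPE:
                for (c1, q, c2) in triples:
                    if q == SUBCLASS and c1 == o:
                        new.add((s, RDF_TYPE, c2))
            # prp-spo1: ?p1 subPropertyOf ?p2, ?x ?p1 ?y  =>  ?x ?p2 ?y
            for (p1, q, p2) in triples:
                if q == SUBPROP and p1 == p:
                    new.add((s, p2, o))
        if not new.issubset(triples):
            triples |= new
            changed = True
    return triples

data = {
    ("ex:Dog", SUBCLASS, "ex:Animal"),
    ("ex:rex", RDF_TYPE, "ex:Dog"),
    ("ex:hasMother", SUBPROP, "ex:hasParent"),
    ("ex:rex", "ex:hasMother", "ex:lassie"),
}
for t in sorted(owl_rl_closure(data) - data):
    print(t)   # derived: rex type Animal, rex hasParent lassie
```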

This lecture script gives an introduction to rule-based knowledge representation on the Web. It reviews the logical foundations of logic programming and derivation rule languages, and describes existing Web rule standard languages such as RuleML, the W3C Rule Interchange Format (RIF), and the Web rule engine Prova.
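A minimal illustration of a derivation rule and its bottom-up, least-fixpoint evaluation (part of the logic programming foundations the lecture reviews) is given below; the facts are invented, and languages such as RuleML, RIF, or Prova would express the rule declaratively rather than in Python:

```python
# Bottom-up (naive least-fixpoint) evaluation of a recursive derivation rule:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
# Illustrative only; RuleML / RIF / Prova express such rules declaratively.
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

ancestor = set(parent)           # first rule: every parent is an ancestor
while True:
    derived = {(x, z) for (x, y1) in parent for (y2, z) in ancestor if y1 == y2}
    if derived <= ancestor:      # fixpoint reached: nothing new derivable
        break
    ancestor |= derived

print(sorted(ancestor))          # includes ("alice", "dave")
```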

The relationship between the Web Ontology Language OWL and rule-based formalisms has been the subject of many discussions and research investigations, some of them controversial. From the many attempts to reconcile the two paradigms, we present some of the newest developments. More precisely, we show which kind of rules can be modeled in the current version of OWL, and we show how OWL can be extended to incorporate rules. We finally give references to a large body of work on rules and OWL.
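One frequently cited example of a rule that current OWL can capture directly is the textbook "uncle" rule, expressible as an OWL 2 property chain (complex role inclusion) axiom; this example is standard folklore rather than one taken from the chapter:

```latex
% Rule form:  hasParent(x, y) \wedge hasBrother(y, z) \rightarrow hasUncle(x, z)
% OWL 2 form (property chain / complex role inclusion axiom):
\mathit{hasParent} \circ \mathit{hasBrother} \sqsubseteq \mathit{hasUncle}
```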

These notes are meant as a companion to a lecture on the topic at the Reasoning Web Summer School 2011. The goal of this work is to present diverse and known material on modeling the Web from a data perspective, to help students to get a first overview of the subject.

Methodologically, the objective is to give pointers to the relevant topics and literature, and to present the main trends and development of a new area. The idea is to organize the existing material without claiming completeness. In many parts the notes have a speculative character, oriented more towards suggesting links and generating discussion on different points of view, rather than establishing a consolidated view of the subject.

The historical accounts and references are given with the sole objective of aiding in the contextualization of some milestones, and should not be considered as signaling intellectual priorities.

Trust, and its support with appropriate trust management methodologies and technologies, is becoming a crucial element for wider acceptance of web services. In the computing community, trust and related issues were already being addressed in the 1990s, but the approaches of that period were about security, more precisely security services and security mechanisms. These approaches were followed by more advanced ones: the first branch was based on Bayesian statistics, the second branch was based on the Dempster-Shafer theory of evidence and its successors, most notably subjective logic, and the third branch originated from game theory. It is, however, important to note that at the core of trust there are cognition and assessment processes, and they are governed by various factors. Consequently, trust management methodologies should take these factors, which may be rational, irrational, contextual, etc., into account. This research contribution will therefore provide an extensive overview of existing methodologies in the computer sciences field, followed by their evaluation in terms of their advantages and disadvantages. Further, some of the latest experimental results will be given that identify and evaluate some of the most important factors mentioned above. Finally, we will present a new trust management methodology called Qualitative Assessment Dynamics, QAD (aka Qualitative Algebra), that complements the existing methodologies mentioned above and is aligned with the results of the latest experimental findings.
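As a small, generic illustration of the Bayesian branch mentioned above (this is not the chapter's QAD methodology), the sketch below derives a beta-reputation trust estimate and a subjective-logic style opinion from counts of positive and negative interactions:

```python
# Minimal beta-reputation style trust estimate and subjective-logic style
# opinion from evidence counts; illustrative only, unrelated to QAD.
def beta_trust(positive, negative):
    """Expected value of Beta(positive + 1, negative + 1): a simple Bayesian trust score."""
    return (positive + 1) / (positive + negative + 2)

def opinion_from_evidence(r, s, W=2):
    """Map positive/negative evidence to a (belief, disbelief, uncertainty) triple."""
    total = r + s + W
    return (r / total, s / total, W / total)

# An agent observed 8 cooperative and 2 uncooperative interactions.
print(round(beta_trust(8, 2), 3))   # 0.75
print(opinion_from_evidence(8, 2))  # belief-heavy opinion with residual uncertainty
```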

Exploiting the complex structure of relational data makes it possible to build better models by taking into account the additional information provided by the links between objects. We extend this idea to the Semantic Web by introducing our novel SPARQL-ML approach to perform data mining for Semantic Web data. Our approach is based on traditional SPARQL and statistical relational learning methods, such as Relational Probability Trees and Relational Bayesian Classifiers. We analyze our approach thoroughly, conducting four sets of experiments on synthetic as well as real-world data sets. Our analytical results show that our approach can be used for almost any Semantic Web data set to perform instance-based learning and classification. A comparison to kernel methods used in Support Vector Machines even shows that our approach is superior in terms of classification accuracy.
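As a loose approximation of the underlying idea (and explicitly not the authors' SPARQL-ML system, which extends SPARQL itself), the sketch below extracts per-instance features from an RDF graph with a plain SPARQL query and trains an off-the-shelf classifier on them; the data, vocabulary, and feature choice are invented:

```python
# Loose approximation of the SPARQL-ML idea: extract instance features with
# a plain SPARQL query, then train an off-the-shelf classifier on them.
from rdflib import Graph
from sklearn.tree import DecisionTreeClassifier

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:p1 ex:papers 12 ; ex:coauthors 8 ; ex:senior true .
ex:p2 ex:papers  1 ; ex:coauthors 1 ; ex:senior false .
ex:p3 ex:papers  9 ; ex:coauthors 5 ; ex:senior true .
ex:p4 ex:papers  2 ; ex:coauthors 2 ; ex:senior false .
""", format="turtle")

rows = list(g.query("""
PREFIX ex: <http://example.org/>
SELECT ?papers ?coauthors ?senior WHERE {
  ?p ex:papers ?papers ; ex:coauthors ?coauthors ; ex:senior ?senior .
}"""))

X = [[r.papers.toPython(), r.coauthors.toPython()] for r in rows]
y = [r.senior.toPython() for r in rows]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[10, 6]]))   # predicts the class of an unseen instance
```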

The integration of both distributed schemas and data repositories is a major challenge in data and knowledge management applications. Instances of this problem range from mapping database schemas to object reconciliation in the linked open data cloud. We present a novel approach to several important data integration problems that combines logical and probabilistic reasoning. We first provide a brief overview of some of the basic formalisms such as description logics and Markov logic that are used in the framework. We then describe the representation of the different integration problems in the probabilistic-logical framework and discuss efficient inference algorithms. For each of the applications, we conducted extensive experiments on standard data integration and matching benchmarks to evaluate the efficiency and performance of the approach. The positive results of the evaluation are quite promising and the flexibility of the framework makes it easily adaptable to other real-world data integration problems.
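A toy flavour of the probabilistic-logical idea can be given by scoring an object-reconciliation hypothesis with weighted rules (the weights, features, and logistic link below are invented; the chapter's Markov-logic-based framework is substantially richer):

```python
# Toy log-linear scoring of an object-reconciliation hypothesis in the
# spirit of Markov-logic-style weighted rules. Weights and features are
# invented; the chapter's framework is substantially richer.
import math

# Weighted "rules": each feature of a candidate pair contributes its weight
# when the feature holds (a crude stand-in for ground formula satisfaction).
WEIGHTS = {"same_label": 2.0, "shared_neighbor": 1.2, "conflicting_type": -3.0}

def match_probability(features):
    score = sum(WEIGHTS[f] for f, holds in features.items() if holds)
    return 1.0 / (1.0 + math.exp(-score))   # logistic link

candidate = {"same_label": True, "shared_neighbor": True, "conflicting_type": False}
print(round(match_probability(candidate), 3))   # high probability of a match
```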

Computers play an increasingly important role in helping individuals and industries make decisions. For example, they can help individuals make decisions about which products to purchase, or industries make decisions about how best to manufacture these products. Constraint programming provides powerful support for decision-making; it is able to search quickly through an enormous space of choices and infer the implications of those choices. This tutorial will teach attendees how to develop models of combinatorial problems and solve them using constraint programming, satisfiability, and mixed integer programming techniques. The tutorial will make use of Numberjack, an open-source Python-based optimisation system developed at the Cork Constraint Computation Centre. The focus of the tutorial will be on various network design problems and optimisation challenges in the Web.
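The tutorial itself relies on Numberjack; as a library-independent sketch of the modeling-and-search idea, the following pure-Python backtracking solver colours a toy network so that adjacent nodes differ (all data invented):

```python
# Library-independent sketch of constraint-based modeling and search:
# colour a small network so adjacent nodes differ (a toy stand-in for the
# network design problems in the tutorial, which itself uses Numberjack).
def solve(variables, domains, constraints, assignment=None):
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result:
                return result
        del assignment[var]
    return None

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
domains = {n: ["red", "green", "blue"] for n in nodes}

def different(u, v):
    # Constraint is only checked once both endpoints are assigned.
    return lambda asg: u not in asg or v not in asg or asg[u] != asg[v]

constraints = [different(u, v) for u, v in edges]
print(solve(nodes, domains, constraints))
```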

In the next article, we will discuss Reasoning Web 2012.
