KI 2015: Advances in Artificial Intelligence Papers


KI 2015: Advances in Artificial Intelligence

In the previous article, I described KI 2014. In this article, I describe the proceedings of the 38th Annual German Conference on Artificial Intelligence, KI 2015, held in Dresden, Germany, September 21-25, 2015.

This annual conference, which began in 1975 as the German Workshop on AI (GWAI), traditionally brings together academic and industrial researchers from all areas of AI and provides a premier forum for the exchange of news and research results on the theory and application of intelligent systems technology.

This year's conference was preceded by the International Summer School on Reasoning, organized by the International Center for Computational Logic. Nine workshops, each with its own proceedings, were held on the first and second days of the conference. In addition, the doctoral consortium gave doctoral researchers an opportunity to receive feedback on their work; their research summaries are included in the KI proceedings.

The conference received 59 submissions from authors in 22 countries, of which 48 were research papers (RPs) and 11 were technical communications (TCs), a more concise paper format introduced for the first time this year. The TCs included reports on the researchers' own recent publications, position papers, previews of ongoing research, and other results and ideas of interest to the KI audience. After a rigorous review process, 15 research papers were selected for publication (31% of the 48 RP submissions) and an additional 14 submissions were accepted as TCs. In addition to these conference papers, this volume also includes four submissions accepted by the KI 2015 Doctoral Consortium.

At KI 2015, three distinguished scientists participated as keynote speakers: Molham Aref (LogicBlox, USA) spoke on declarative probabilistic programming, Ross D. King (University of Manchester) on robot scientists, and Francesca Rossi (University of Padova) on safety and ethical issues in systems for collective decision making.

Keynote

The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be in great need.

In this scenario, both machines and collective decision making systems should follow some form of moral values and ethical principles (appropriate to where they act but always aligned with humans'), as well as safety constraints. In fact, humans would accept and trust machines more if they behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, machines and humans will often need to make decisions together, either through consensus or by reaching a compromise. This would be facilitated by shared moral values and ethical principles.

Full Technical Papers

Automated planning is computationally hard even in its most basic form as STRIPS planning. We are interested in numeric planning with instantaneous actions, a problem that is not decidable in general. Relaxation is an approach to simplifying complex problems in order to obtain guidance in the original problem. We present a relaxation approach with intervals for numeric planning and show that plan existence can be decided in polynomial time for tasks where dependencies between numeric effects are acyclic.
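
The core idea of relaxing numeric variables to intervals can be illustrated with a small sketch. The Interval class and the toy fuel effect below are my own illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of interval-based relaxation for numeric effects,
# assuming a toy state representation (not the paper's algorithm).

class Interval:
    """Closed interval [lo, hi] over-approximating a numeric variable."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def union(self, other):
        # Relaxed semantics: keep all values reachable so far.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def add(self, other):
        # Interval addition for an effect such as  v += e.
        return Interval(self.lo + other.lo, self.hi + other.hi)


def apply_increase(state, var, delta):
    """Relaxed application of 'var += delta': the new interval accumulates
    with the old one instead of replacing it."""
    new_val = state[var].add(delta)
    state[var] = state[var].union(new_val)
    return state


# Usage: fuel starts at exactly 5, an action adds somewhere in [1, 2].
state = {"fuel": Interval(5, 5)}
state = apply_increase(state, "fuel", Interval(1, 2))
print(state["fuel"].lo, state["fuel"].hi)  # 5 7 -> fuel may now lie in [5, 7]
```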

The paper proposes an oddness measure for estimating the extent to which a new item is at odds with a class. Then a simple classification procedure based on the minimization of oddness with respect to the different classes is proposed. Experiments on standard benchmarks with Boolean or numerical data provide good results. The global oddness measure is based on the estimation of the oddness of the new item with respect to the subsets of the classes having a given size.

This paper considers solving a problem in combinatorial search: the automated arrangement of irregular-shaped objects for industrial 3D printing. The input is a set of triangulated models; the output is a set of location and orientation vectors for the objects. The proposed algorithm consists of three stages: (1) translation of the models into an octree; (2) design of an efficient test for pairwise intersection based on sphere trees; and (3) computation of an optimized placement of the objects using simulated annealing. We compare several sphere-tree construction methods and annealing parameter settings to derive valid packings.
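
Stage (3) relies on simulated annealing, whose generic accept/reject loop can be sketched as follows; the cost and neighbor functions stand in for the real packing objective and placement moves and are assumptions here.

```python
# A minimal generic simulated-annealing loop; the octree/sphere-tree
# intersection test and real placement moves are abstracted away.

import math, random

def simulated_annealing(initial, cost, neighbor, t0=1.0, cooling=0.995, steps=5000):
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        # Accept improvements always, worsenings with a temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, cost(candidate)
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Toy usage: minimize a 1-D stand-in for the packing objective.
best, c = simulated_annealing(10.0, cost=lambda x: (x - 3.0) ** 2,
                              neighbor=lambda x: x + random.uniform(-1, 1))
print(round(best, 2), round(c, 4))
```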

In this paper we provide a procedure for deciding subsumptions of the form T |= C ⊑ E, where C is an ELUμ-concept, E an ELU-concept, and T a general EL-TBox. Deciding such subsumptions can be used for computing the logical difference between general EL-TBoxes. Our procedure is based on checking for the existence of a certain simulation between hypergraph representations of the set of subsumees of C and of E w.r.t. T, respectively. With the aim of keeping the procedure implementable, we provide a detailed construction of such hypergraphs, deliberately avoiding the use of intricate automata-theoretic techniques.

While the complexity of the optimization problem to be solved when computing the Maximum Entropy distribution P_R of a knowledge base R grows dramatically when moving to the relational case, it has been shown that, with the weighted conditional impacts (WCI) of R available, P_R can be computed much faster. Computing the WCI in a straightforward manner readily becomes infeasible due to the size of the set Ω of possible worlds. In this paper, we propose a new approach for computing the WCI without considering the worlds in Ω at all. We introduce the notion of sat-pairs and show how to determine the set CSP of all possible combinations of sat-pairs by combinatorial means. Using CSP instead of Ω for computing the WCI is a significant performance gain since CSP is typically much smaller than Ω. For a start, we focus on simple knowledge bases consisting of a single conditional. First evaluation results of an implemented algorithm illustrate the benefits of our approach.

Domain independent planning systems have been used in an increasing number of applications, such as autonomous robots. However, most such systems either generate the planning goals with a domain-specific goal component or they require the user to write fully fledged PDDL goal descriptions. In this paper we present a domain independent method based on referring expressions to implement a menu-driven interface to a planning system.

The integration of wind power generation into the power grid can only succeed with precise and reliable forecast methods. With different measurements available, machine learning algorithms can yield very good predictions for short-term forecast horizons. In this paper, we compare the use of wind power and wind speed time series as well as differences of subsequent measurements with Random Forests, Support Vector Regression, and k-nearest neighbors. While both time series, wind power and speed, are well-suited to train a predictor, the best performance can be achieved by using both together. Further, we propose an ensemble approach combining RF and SVR with a cross-validated weighted average and show that the prediction performance can be substantially improved.
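
A cross-validated weighted average of two regressors can be sketched roughly as below with scikit-learn; the synthetic features and the candidate weight grid are my own assumptions, not the paper's setup.

```python
# A minimal sketch of combining Random Forest and SVR with a cross-validated
# weighted average; placeholder data stands in for the real wind measurements.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                    # placeholder features (e.g. lagged speed/power)
y = X[:, 0] * 2.0 + rng.normal(size=500) * 0.1   # placeholder target (future wind power)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
svr = SVR(C=10.0)

# Out-of-fold predictions give an unbiased basis for choosing the weight.
pred_rf = cross_val_predict(rf, X, y, cv=5)
pred_svr = cross_val_predict(svr, X, y, cv=5)

weights = np.linspace(0, 1, 21)
errors = [np.mean((w * pred_rf + (1 - w) * pred_svr - y) ** 2) for w in weights]
best_w = weights[int(np.argmin(errors))]
print("best RF weight:", best_w)

# Final ensemble: fit both models on all data, combine with the chosen weight.
rf.fit(X, y); svr.fit(X, y)
ensemble = lambda X_new: best_w * rf.predict(X_new) + (1 - best_w) * svr.predict(X_new)
```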

Recommendation algorithms typically work by suggesting items that are similar to the ones that a user likes, or items that similar users like. We propose a content-based recommendation technique with the focus on serendipity of news recommendations. Serendipitous recommendations have the characteristic of being unexpected yet fortunate and interesting to the user, and thus might yield higher user satisfaction. In our work, we explore the concept of serendipity in the area of news articles and propose a general framework that incorporates the benefits of serendipity- and similarity-based recommendation techniques. An evaluation against other baseline recommendation models is carried out in a user study.

Probabilistic interpretations consist of a set of interpretations with a shared domain and a measure assigning a probability to each interpretation. Such structures can be obtained as results of repeated experiments, e.g., in biology, psychology, medicine, etc. A translation between probabilistic and crisp description logics is introduced and then utilized to reduce the construction of a base of general concept inclusions (GCIs) of a probabilistic interpretation to the crisp case, for which a method for the axiomatization of a base of GCIs is well-known.

The approximation of kernel functions using explicit feature maps gained a lot of attention in recent years due to the tremendous speed-up in training and learning time of kernel-based algorithms, making them applicable to very large-scale problems. For example, approximations based on random Fourier features are an efficient way to create feature maps for a certain class of scale-invariant kernel functions. However, there are still many kernels for which there exists no algorithm to derive such maps. In this work we propose an efficient method to create approximate feature maps from an arbitrary distance metric using pseudo line projections, called Distance-Based Feature Map (DBFM). We show that our approximation does not depend on the input dataset size or the dimension of the input space. We experimentally evaluate our approach on two real datasets using two metric and one non-metric distance function.
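
The random Fourier feature idea mentioned as prior work can be sketched briefly; the snippet approximates an RBF kernel with an explicit map and is not the paper's DBFM construction.

```python
# A minimal sketch of random Fourier features: dot products of the mapped
# points approximate the RBF kernel exp(-gamma * ||x - y||^2).

import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Explicit feature map approximating the RBF kernel with parameter gamma."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(3, 4))
Z = random_fourier_features(X)
exact = np.exp(-1.0 * np.sum((X[0] - X[1]) ** 2))
approx = Z[0] @ Z[1]
print(exact, approx)  # the two values should be close
```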

We propose doppelkopf, a trick-taking card game with similarities to skat, as a benchmark problem for AI research. While skat has been extensively studied by the AI community in recent years, this is not true for doppelkopf. However, it has a substantially larger state space than skat and a unique key feature which distinguishes it from skat and other card games: players usually do not know with whom they play at the start of a game, figuring out the parties only in the process of playing.

Since its introduction in 2006, the UCT algorithm has been the dominating approach for solving games in AI research. It has notably achieved a playing strength comparable to good human players at playing go, but it has also shown good performance in card games like Klondike solitaire and skat. In this work, we adapt UCT to play doppelkopf and present an algorithm that generates random card assignments, used by the UCT algorithm for sampling. In our experiments, we discuss and evaluate different variants of the UCT algorithm, and we show that players based on UCT improve over simple baseline players and exhibit good card play behavior also when competing with a human player.
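
The selection step of UCT is based on the UCB1 rule, which in isolation might look roughly like the sketch below; the dictionary-based node representation and the example moves are illustrative assumptions, and the doppelkopf-specific card sampling is not shown.

```python
# A minimal sketch of UCB1 selection, the rule at the heart of UCT.

import math

def ucb1_select(children, exploration=1.4):
    """Pick the child maximizing average reward plus an exploration bonus."""
    total_visits = sum(c["visits"] for c in children)
    def score(c):
        if c["visits"] == 0:
            return float("inf")                     # try unvisited moves first
        exploit = c["reward"] / c["visits"]         # average reward so far
        explore = exploration * math.sqrt(math.log(total_visits) / c["visits"])
        return exploit + explore
    return max(children, key=score)

children = [{"move": "play ace", "visits": 10, "reward": 6.0},
            {"move": "play ten", "visits": 3, "reward": 2.5}]
print(ucb1_select(children)["move"])
```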

Symmetries provide the basis for well-established approaches to tackle the state explosion problem in state space search and in AI planning. However, although by now there are various symmetry-based techniques available, these techniques have not yet been empirically evaluated and compared to each other in a common setting. In particular, it is unclear which of them should be preferably applied, and whether there are techniques with stronger performance than others. In this paper, we shed light on this issue by providing an empirical case study. We combine and evaluate several symmetry-based techniques for cost-optimal planning as heuristic search. For our evaluation, we use state-of-the-art abstraction heuristics on a large set of benchmarks from the international planning competitions.

An agent that interacts with a nondeterministic environment can often only partially observe its surroundings. This necessitates observations via sensors that render more information about the current world state. Sensors can be expensive in many regards; therefore it can be essential to minimize the number of sensors an agent requires to solve given tasks. A limitation for sensor minimization is given by essential sensors, which are always required to solve particular problems. In this paper we present an efficient algorithm which determines a set of necessary observation variables. More specifically, we develop a bottom-up algorithm which computes a set of variables that are always necessary to observe in order to always reach a goal state. Our experimental results show that knowledge about necessary observation variables can be used to minimize the number of sensors of an agent.

Qualitative representations of spatial knowledge aim to capture the essential properties and relations of the underlying spatial domain. In addition, conceptual neighborhood has been introduced to describe how qualitative spatial relations may change over time. Current qualitative representations mainly use symbolic constraint-based languages that are detached from the underlying domain, with the downside that a well-formed sentence is not necessarily consistent. This makes it difficult to design efficient knowledge manipulation techniques that consistently advance a representation with respect to conceptual neighborhood. In this paper we argue for analogical spatial representations that inherently obey domain restrictions and, as a result, are consistent per se. We develop a graph-based analogical representation for RCC-8, the construction of which is based on neighborhood transitions realized by efficient graph transformations. The main benefit of the developed representation is an improved efficiency for neighborhood-based reasoning tasks that need to manipulate spatial knowledge under the side condition of consistency, such as planning or constraint relaxation.

Square grids are commonly used in robotics and game development as spatial models, and heuristic search algorithms well known in the AI community (such as A*, JPS, Theta*, etc.) are widely used for path planning on grids. Much research concentrates on finding the shortest (in the geometrical sense) paths, while in many applications smooth paths are preferable to shortest ones that contain sharp turns. In this paper we study the problem of generating smooth paths and concentrate on angle-constrained path planning. We state the angle-constrained path planning problem formally and present a new algorithm tailored to solve it, LIAN. We examine LIAN both theoretically and empirically. We show that it is sound and complete (under some restrictions). We also show that LIAN outperforms its analogues when solving numerous path planning tasks within urban outdoor navigation scenarios.
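
The angle constraint itself is simple to state in code: reject successors whose turn angle exceeds a bound. The sketch below only checks a finished path against a maximum turn angle and is not LIAN itself.

```python
# A minimal sketch of the angle constraint used in angle-constrained planning.

import math

def turn_angle(a, b, c):
    """Angle (degrees) between segments a->b and b->c on a grid."""
    v1 = (b[0] - a[0], b[1] - a[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def satisfies_angle_constraint(path, max_turn_deg=30.0):
    return all(turn_angle(a, b, c) <= max_turn_deg
               for a, b, c in zip(path, path[1:], path[2:]))

print(satisfies_angle_constraint([(0, 0), (5, 0), (10, 2)]))  # gentle turn -> True
print(satisfies_angle_constraint([(0, 0), (5, 0), (5, 5)]))   # 90 degree turn -> False
```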

Technical Communications

Axiom pinpointing consists in computing a set-wise minimal set of axioms that explains the reason for a subsumption relation in an ontology. Recently, an encoding of the classification of an EL+ ontology to a polynomial-size Horn propositional formula has been devised. This enables the development of a method for axiom pinpointing based on the analysis of unsatisfiable propositional formulas. Building on this earlier work, we propose a computation method, termed EL2MCS, that exploits an important relationship between minimal axiom sets and minimal unsatisfiable subformulas in the propositional domain. Experimental evaluation shows that EL2MCS achieves substantial performance gains over existing axiom pinpointing approaches for lightweight description logics.

Humans perform abductive reasoning routinely. We hypothesize about what happened in the past to explain an observation made in the present. This is frequently needed to model the present, too.

In this paper we describe an approach to equip robots with the capability to abduce hypotheses triggered by unexpected observations from sensor data. This is realized on the basis of KnowRob, which provides general knowledge about objects and actions. First we analyze the types of environment changes that a robot may encounter. Thereafter we define new reasoning methods that allow the robot to abduce past events from observed changes. By projecting the effects of these hypothetical previous events, the robot gains knowledge about the consequences it can likely expect in its present. The applicability of our reasoning methods is demonstrated in a virtual setting as well as in a real-world scenario. In these, our robot was able to abduce highly probable information not directly accessible from its sensor data.

LabSAT is a software system that, for a given abstract argumentation framework AF, can determine some or all extensions and can decide whether an argument is credulously or sceptically accepted. These tasks are solved for complete, stable, preferred, and grounded semantics. LabSAT's implementation employs recent results on the connection between argumentation and Boolean satisfiability and uses the SAT solver Lingeling. In this paper, we give an overview of LabSAT and its capabilities and compare its performance to two other computational argumentation systems.

Probabilistic belief change is an operation that takes a probability distribution representing a belief state along with an input sentence representing some information to be accommodated or removed, and maps it to a new probability distribution. In order to choose from the many such mappings possible, techniques from information theory, such as the principle of minimum cross-entropy (described in "Overview of Cross-Entropy and Related Algorithms and Implementation Examples"), have previously been used. Central to this principle is the Kullback-Leibler (KL) divergence. In this short study, we focus on the contraction of a belief state P by a belief a, which is the process of turning the belief a into a non-belief. The contracted belief state P_a can be represented as a mixture of two states: the original belief state P, and the resultant state P_¬a of revising P by ¬a. Crucial to this mixture is the mixing factor ε, which determines the proportion of P and P_¬a to be used in this process. We show that once ε is determined, the KL divergence of P_a from P is given by a function whose only argument is ε. We suggest that ε is not only a mixing factor but also captures relevant aspects of P and P_¬a required for computing the KL divergence.
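
A small numerical example may help: with toy distributions over four possible worlds, the mixture-based contraction and its KL divergence from P (taken here in the direction D(P || P_contracted)) indeed reduce to a function of ε alone, namely -log ε. The distributions below are my own assumptions.

```python
# A minimal numerical sketch of contraction as a mixture eps*P + (1-eps)*P_nota.

import numpy as np

def kl(p, q):
    """KL divergence D(p || q), summing only over worlds with positive p."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Worlds ordered (a,b), (a,~b), (~a,b), (~a,~b); P fully believes a.
P      = np.array([0.6, 0.4, 0.0, 0.0])
P_nota = np.array([0.0, 0.0, 0.7, 0.3])      # result of revising P by "not a"

eps = 0.8
P_contracted = eps * P + (1 - eps) * P_nota  # after contraction, a is no longer believed
print(kl(P, P_contracted), -np.log(eps))     # both ~0.2231: a function of eps alone
```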

Many recent papers claim that the symbol grounding problem (SGP) remains unsolved. Most AI researchers ignore this, and the autonomous agents (or robots) they design indeed do not seem to have any "problem". Still, these claims should be taken rationally, since nearly all these papers make "robots" a subject of the discussion, leaving some kind of impression that what many roboticists do in the long run has to fail because the SGP is not yet solved. Starting from Searle's Chinese room argument (CRA) and Harnad's reformulation of the problem, we take a look at proposed solutions and the concretization of the problem by Taddeo and Floridi's "Z condition". We then refer to two works which have recently shown that the Z-conditioned SGP is unsolvable. We conclude that the original, hard SGP is not relevant in the context of designing goal-directed autonomous agents.

We present PGPI+ (Partial Graph Profile Inference+), an improved algorithm for user profiling in online social networks. PGPI+ infers user profiles under the constraint of a partial social graph using rich information about users (e.g., group memberships, views, and likes) and handles nominal and numeric attributes. Experimental results with 20,000 user profiles from the Pokec social network show that PGPI+ predicts user profiles with considerably higher accuracy and by accessing a smaller part of the social graph than five state-of-the-art algorithms.

In this paper, we present an educational game that we developed in order to teach a chemistry lesson, namely drawing a Lewis diagram. We also conducted an experiment to gather data about the cognitive and emotional states of the learners as well as their behaviour throughout our game by using three types of sensors (electroencephalography, eye tracking, and facial expression recognition with an optical camera). Preliminary results show that a machine learning model (logistic regression) can predict with some success whether the learner will give a correct or a wrong answer to a task presented in the game, and this paves the way for an adaptive version of the game. The latter will challenge or assist learners based on features extracted from our data in order to provide real-time adaptation specific to the user.
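
The kind of prediction described (correct vs. wrong answer via logistic regression on sensor-derived features) can be sketched as follows; the feature names and synthetic data are assumptions, not the study's actual measurements.

```python
# A minimal sketch of predicting answer correctness with logistic regression.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder features, e.g. EEG engagement index, fixation duration, smile intensity.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy:", scores.mean())
```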

Automatic detection of special events in large data is often more interesting for data analysis than regular patterns. In particular, the processes in multivariate time series data can be better understood if a deviation from the normal behavior is found. In this work, we apply a machine learning event detection method to a new application in the marine domain. The marine long-term data from the stationary platform at Spiekeroog, called the Time Series Station, are a challenge, because noise, sensor drifts, and missing data complicate analysis of the data. We acquire labels for evaluation with the help of experts and test different approaches that include time context in the patterns. The event detection method used is the local outlier factor (LOF). To improve results, we apply dimensionality reduction to the data. The analysis of the results shows that the machine learning techniques can find special events which are of interest to experts in the field.
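
A rough sketch of LOF-based event detection on windowed multivariate series is shown below; the synthetic data and window length are assumptions in place of the Spiekeroog measurements.

```python
# A minimal sketch of local-outlier-factor event detection on sliding windows,
# where windowing adds time context to each pattern.

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
series = rng.normal(size=(1000, 3))           # placeholder multivariate series
series[500:505] += 6.0                        # injected "special event"

window = 5
X = np.stack([series[i:i + window].ravel() for i in range(len(series) - window)])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)                   # -1 marks outlying windows
print(np.where(labels == -1)[0][:10])         # indices near 500 should appear
```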

This paper addresses the problem of fitting a finite Gaussian Mixture Model (GMM) with an unknown number of components to univariate and multivariate data. The typical method for fitting a GMM is Expectation Maximization (EM), in which many challenges are involved, e.g., how to initialize the GMM, how to restrict the covariance matrix of a component from becoming singular, and how to set the number of components in advance. This paper presents a simulated annealing EM algorithm along with a systematic initialization procedure based on the principles of stochastic exploration. The experiments have demonstrated the robustness of our approach on different datasets.
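
For orientation, fitting a GMM with plain EM and selecting the number of components by BIC can be sketched with scikit-learn as below; the paper's simulated-annealing EM and its initialization procedure are not reproduced.

```python
# A minimal sketch of GMM fitting with standard EM; reg_covar guards against
# singular covariance matrices, and BIC picks the component count.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, size=(300, 2)),
                       rng.normal(+3, 1, size=(300, 2))])

best = None
for k in range(1, 6):                     # model selection over component counts
    gmm = GaussianMixture(n_components=k, reg_covar=1e-6, random_state=0).fit(data)
    bic = gmm.bic(data)
    if best is None or bic < best[0]:
        best = (bic, k, gmm)
print("selected components:", best[1])
```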

In this paper, we present the Polylingual Labeled Topic Model, a model which combines the characteristics of the existing Polylingual Topic Model and Labeled LDA. The model accounts for multiple languages with separate topic distributions for each language while restricting the permitted topics of a document to a set of predefined labels. We explore the properties of the model in a two-language setting on a dataset from the social science domain. Our experiments show that our model outperforms LDA and Labeled LDA in terms of held-out perplexity and that it produces semantically coherent topics which are well interpretable by human subjects.

The design of domain independent heuristic functions often brings up experimental evidence that different heuristics perform well in different domains. A promising approach is to monitor and reduce the error associated with a given heuristic function even as the planner solves a problem. We extend this single-step-error adaptation to heuristic functions from Partial Order Causal Link (POCL) planning. The goal is to allow a partial order planner to observe the effective average step error during search. The preliminary evaluation shows that our approach improves the informativeness of state-of-the-art heuristics. Our planner solves more problems by using the improved heuristics as compared to when it uses the current heuristics in the selected domains.

Planning with diverse knowledge, i.e., hybrid planning, is essential for robotic applications. However, powerful heuristics are needed to reason efficiently in the resulting large search spaces. HTN planning provides a means to reduce the search space; furthermore, meta-CSP search has shown promise in hybrid domains, both with respect to search and online plan adaptation. In this paper we combine the two approaches by implementing HTN-style task decomposition as a meta-constraint in a meta-CSP search, resulting in an HTN planner able to handle very rich domain knowledge. The planner produces partial-order plans, and if several goal tasks are given, subtasks can be shared, leading to shorter plans. We demonstrate the straightforward integration of different kinds of knowledge, covering causal, temporal, and resource knowledge as well as knowledge provided by an external path planner. The resulting online planner, CHIMP, is integrated in a plan-based robot control system and is demonstrated to physically guide a PR2 robot.

Humans forget events or certain types of information, sometimes even when the information is important. There are patients who suffer from diseases such as dementia that result in memory loss. Using an eye tracker and image analysis modules, we develop a system that recognizes everyday objects, faces, and text that the user looks at. This system can be effectively used to support an individual user's memory by logging certain types of everyday information that the user perceives and tries to organize in his or her memory. We present a technical implementation of an episodic memory support.

Support vector machines (SVMs) have become a very popular machine learning method for text classification. One reason for their popularity is the possibility of using different types of kernels, which makes them very flexible and allows for dividing data sets that are not linearly separable. This paper shows how the primal form of the quadratic power kernel can be derived using complex numbers. The paper also describes an application of this kernel to text categorization following the Dewey Document Classification (DDC) scheme. In our evaluation, the power kernel (PK) led to an f-measure competitive with those of the compared kernels and was faster to compute than all but the linear kernel.
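
Plugging a (quadratic) power kernel into an SVM as a custom kernel might look roughly like this; the complex-number derivation of its primal form is not shown, and the dataset and parameters are assumptions.

```python
# A minimal sketch of using the power kernel K(x, y) = -||x - y||^beta as a
# custom SVM kernel (it is only conditionally positive definite).

import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

def power_kernel(A, B, beta=2.0):
    """Gram matrix of -||a - b||^beta for all pairs of rows of A and B."""
    sq = (np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return -np.maximum(sq, 0.0) ** (beta / 2.0)

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
clf = SVC(kernel=power_kernel)
print(cross_val_score(clf, X, y, cv=5).mean())
```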

Doctoral Consortium Contributions

Differential diagnoses in the field of neurology are very complex, whereas fast and precise treatment is absolutely necessary for a positive patient outcome. Support through expert systems has been shown to improve practitioners' performance in certain areas. In this paper an approach for an expert system for the field of neurology based on fuzzy logic is presented. The client- and server-side applications, as well as the knowledge base, are explained. Furthermore, an overview of remaining challenges and issues is given.

The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In order to apply the rationality results of belief dynamics theory to various practical problems, it should be generalized in two respects: first, it should allow a certain part of belief to be declared immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented in this paper, along with the concept of a generalized revision algorithm for knowledge bases (Horn or Horn logic with stratified negation). We show that knowledge base dynamics has an interesting connection with kernel change via hitting sets and abduction. In this paper, we show how techniques from disjunctive logic programming can be used for efficient (deductive) database updates. The key idea is to transform the given database together with the update request into a disjunctive (datalog) logic program and apply disjunctive techniques (such as minimal model reasoning) to solve the original update problem. The approach extends and integrates standard techniques for efficient query answering and integrity checking. The generation of a hitting set is carried out through a hyper tableaux calculus and magic sets focused on the goal of minimality. The present paper also provides a comparative study of view update algorithms in a rational approach. To clarify the basic concepts of abduction, we provide an abductive framework for knowledge base dynamics. Finally, we demonstrate how belief base dynamics can provide an axiomatic characterization for inserting a view atom into the database. We give a quick overview of the main operators for belief change, in particular belief updates versus database updates.

Humans have become extremely sophisticated in their use of tools compared to their animal counterparts. It is not just the dexterity but also the diversity in tool use that makes humans the alpha-males of tool use. Consider the following typical scenarios:

Scenario 1: George's 2-year-old son, James, could not reach the water tap of the wash basin. Since George was already using the stool, he quickly grabbed a suitable plastic container box and placed it in front of the wash basin. Now James could easily reach the basin by standing on the box.

Scenario 2: Fred wanted to hammer a nail into a wall to hang a painting he bought recently. Since he had lost his hammer the other day, he grabbed a suitable shoe and hammered the nail into the wall using the heel of the shoe.

Image interpretation is a dynamic research domain involving not only the detection of objects in a scene but also the semantic description that considers context information in the whole scene. The image interpretation problem can be formalized as an abductive reasoning problem, i.e., an inference to the best explanation using background knowledge. In this work, we present a framework using a tableau method for generating and selecting potential explanations of a given image when the background knowledge is encoded using a description logic that is able to handle spatial relations.

In the next article, I will discuss KI 2016.
