KI 2014: Advances in Artificial Intelligence
In the previous article, we discussed KI 2013. This time, I will describe the 37th German Conference on Artificial Intelligence (KI 2014), held in Stuttgart, Germany, on September 22-26, 2014.
The conference is the premier forum in Germany where academic and industry researchers from all areas of artificial intelligence gather to exchange the latest results on the theory and application of intelligent systems technology.
At KI 2014, Wolfram Burgard (University of Freiburg, Germany) gave an overview of probabilistic techniques for mobile robot navigation, Hans van Ditmarsch (LORIA Nancy, France) gave a talk on dynamic epistemic logic and artificial intelligence, and Toby Walsh (NICTA and UNSW Sydney, Australia) gave a keynote lecture on allocation in practice.
The 8th Workshop on Emotion and Computing – Current Research and Future Impact, the 28th Workshop on Planen/Scheduling und Konfigurieren/Entwerfen (PuK), and the Higher Level Workshop on Cognition and Computation were also held, along with tutorials on Probabilistic Programming (Angelika Kimmig, KU Leuven) and Human Computation (François Bry, LMU Munich).
Other main topics included Cognitive Modeling; Computer Vision; Constraint Satisfaction, Search, and Optimization; Knowledge Representation and Reasoning; Machine Learning and Data Mining; and Planning and Scheduling.
Details are given below.
Cognitive Modeling
Induction of number series is a typical task included in intelligence tests. It measures the ability to detect regular patterns and to generalize over them, which is assumed to be crucial for general intelligence. There are several computational approaches to solving number series problems. Besides special-purpose algorithms, the applicability of general-purpose learning algorithms to number series prediction has been shown for E-generalization and artificial neural networks (ANN). We present the applicability of the analytical inductive programming system Igor2 to number series problems. An empirical comparison shows that Igor2 has performance comparable to the ANN and the E-generalization approach on the test series used to evaluate them. Based on the findings of a cognitive analysis of number series problems by Holzman et al. (1982, 1983), we conducted a detailed case study, presenting Igor2 with a set of number series problems whose complexity was varied over different dimensions identified by Holzman as sources of cognitive complexity. Our results show that the performance times of Igor2 correspond to the cognitive findings for most dimensions.
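To make the task concrete, here is a minimal, hypothetical sketch of number series prediction (it is not Igor2, E-generalization, or an ANN): a few simple recurrence hypotheses are tested against the given prefix, and the first one that fits is used to predict the next term.

```python
# A minimal sketch of the number-series task itself (not Igor2): try a few
# simple recurrence hypotheses and predict the next term with the first one
# that explains the given prefix. The rule families here are illustrative.

def predict_next(series):
    """Return a predicted next element, or None if no hypothesis fits."""
    n = len(series)
    # Hypothesis 1: arithmetic progression (constant first difference).
    d = series[1] - series[0]
    if all(series[i + 1] - series[i] == d for i in range(n - 1)):
        return series[-1] + d
    # Hypothesis 2: geometric progression (constant ratio, non-zero terms).
    if all(x != 0 for x in series):
        r = series[1] / series[0]
        if all(series[i + 1] == series[i] * r for i in range(n - 1)):
            return series[-1] * r
    # Hypothesis 3: constant second difference (e.g. squares, triangular numbers).
    diffs = [series[i + 1] - series[i] for i in range(n - 1)]
    dd = diffs[1] - diffs[0]
    if all(diffs[i + 1] - diffs[i] == dd for i in range(len(diffs) - 1)):
        return series[-1] + diffs[-1] + dd
    return None

if __name__ == "__main__":
    for s in ([2, 4, 6, 8], [3, 6, 12, 24], [1, 4, 9, 16]):
        print(s, "->", predict_next(s))
```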
Algorithmic debugging is an effective diagnosis method in intelligent tutoring systems (ITSs). Given an encoding of expert problem-solving as a logic program, it compares the program's behaviour during incremental execution with observed learner behaviour. Any deviation captures a learner error in terms of a program location. The feedback engine of the ITS can then take the program clause in question to generate help for learners to correct their error. With the error information limited to a program location, however, the feedback engine can only give remediation in terms of what is wrong with the current problem-solving step. With no access to the overall hierarchical context of a student action, it is hard to give the right dose of scaffolding help, to explain why and how a step needs to be performed, to summarize a learner's performance so far, or to prepare the learner for the problem solving still ahead. This is a pity because such scaffolding helps learning. To address this issue, we extend the meta-interpretation technique and complement it with a program annotation approach. The expert program is enriched with terms that explain the logic behind the program, much like comments explaining code blocks. The meta-interpreter is extended to collect all annotations along the program's execution path and to keep a record of the relevant parts of the program's proof tree. We obtain a framework that defines sophisticated tutorial interaction in terms of Prolog-based task definition, execution, and monitoring.
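As a rough, Python-flavoured illustration of the annotation idea (the paper itself works with annotated Prolog programs and an extended meta-interpreter; the task, steps, and function names below are hypothetical): an expert solution path can carry one annotation per step that explains the logic behind it, and feedback is generated from the annotation at the first point where the learner deviates.

```python
# Hypothetical fraction-addition task: each expert step carries an annotation
# explaining the logic behind it; feedback is generated at the first deviation.

EXPERT_PATH = [
    ("find_common_denominator", "Both fractions need the same denominator."),
    ("rescale_numerators", "Rescale each numerator to the common denominator."),
    ("add_numerators", "Add the numerators; keep the denominator."),
    ("simplify", "Reduce the result to lowest terms."),
]

def diagnose(learner_steps):
    """Compare learner steps with the annotated expert path."""
    for i, (expected, why) in enumerate(EXPERT_PATH):
        if i >= len(learner_steps):
            return f"Next, you should '{expected}': {why}"
        if learner_steps[i] != expected:
            done = [step for step, _ in EXPERT_PATH[:i]]
            return (f"Step {i + 1} deviates: expected '{expected}' "
                    f"(not '{learner_steps[i]}'). {why} "
                    f"Completed so far: {done}.")
    return "All steps match the expert solution."

if __name__ == "__main__":
    print(diagnose(["find_common_denominator", "add_numerators"]))
```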
Human-level artificial intelligence (HAI) is surely a special research endeavor in more than one way: in the first place, the very nature of intelligence is not entirely clear; there are no commonly agreed-upon criteria, necessary or sufficient, for the ascription of intelligence other than similarity to human performance (and even this criterion is open to a plethora of possible interpretations); there is a lack of clarity concerning how to properly investigate HAI and how to proceed after the very first steps of implementing an HAI system; and so on. In this note I assess the ways in which the approach of Psychometric Artificial Intelligence [1] can (and cannot) be taken as a foundation for a scientific approach to HAI.
This paper forms the final part of a short series of related articles [1,2] dedicated to highlighting a fruitful type of application of cognitively inspired analogy engines in an educational context. It complements the earlier work with an additional fully worked-out example, providing a short analysis and a detailed formal model (based on the Heuristic-Driven Theory Projection computational analogy framework) of the Number Highrise, a tool for teaching multiplication-based relations in the range of natural numbers up to 100 to children in their first years of primary school.
So far, most cognitive modeling approaches have concentrated on modeling and predicting the actions of an “average user” – a user profile that in reality often does not exist. User performance is highly dependent on psychological factors such as working memory, planning depth, and search strategy that differ between users. Therefore, we propose a combination of several AI methods to automatically identify user profiles. The proposed method assigns each user a set of cognitive agents that are controlled by several psychological factors. Finally, this method is evaluated in a case study on preliminary user data for the PSPACE-complete planning problem Rush Hour.
Computer Vision
In this paper, we propose a method to combine unsupervised and semi-supervised learning (SSL) into a system that is able to adaptively learn objects in a given environment with very little user interaction. The main idea of our approach is that clustering methods can help to reduce the number of required label queries from user interaction, and at the same time provide the potential to select useful data to learn from. In contrast to standard methods, we train our classifier only on data from the actual environment and only if the clustering gives enough evidence that the data is relevant. We apply our method to the problem of object detection in indoor environments, for which we use a region-of-interest detector before learning. In experiments we show that our adaptive SSL method can outperform the standard non-adaptive supervised approach on an indoor office data set.
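A minimal sketch of the underlying idea, assuming scikit-learn and made-up thresholds (this is not the authors' system): cluster unlabeled samples from the environment, query the user for one label per sufficiently compact cluster, propagate that label to the cluster members, and train a classifier only on this environment-specific data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def adaptive_ssl(features, ask_user, n_clusters=5, coherence_threshold=1.0):
    """Query one label per compact cluster and train only on trusted clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    X, y = [], []
    for c in range(n_clusters):
        members = features[km.labels_ == c]
        # Only query the user if the cluster is compact enough to trust
        # label propagation (mean distance to the centroid as a crude proxy).
        spread = np.linalg.norm(members - km.cluster_centers_[c], axis=1).mean()
        if spread <= coherence_threshold:
            label = ask_user(km.cluster_centers_[c])   # one query per cluster
            X.append(members)
            y.extend([label] * len(members))
    return LogisticRegression(max_iter=1000).fit(np.vstack(X), y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(m, 0.2, size=(30, 2)) for m in (0.0, 3.0)])
    oracle = lambda center: int(center[0] > 1.5)       # stands in for the user
    model = adaptive_ssl(data, oracle, n_clusters=2)
    print(model.predict([[0.1, 0.0], [2.9, 3.1]]))
```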
In semantic scene segmentation, every pixel of an image is assigned a category label. This task can be made easier by incorporating depth information, which structured light sensors provide. Depth, however, has very different properties from RGB image channels. In this paper, we present a novel method to provide depth information to convolutional neural networks. For this purpose, we apply a simplified version of the histogram of oriented depth (HOD) descriptor to the depth channel. We evaluate the network on the challenging NYU Depth V2 dataset and show that with our method we can reach competitive performance at a high frame rate.
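A simplified, hypothetical take on a histogram-of-oriented-depth style descriptor (the paper's exact preprocessing may differ): compute depth gradients, accumulate per-cell orientation histograms weighted by gradient magnitude, and feed the resulting maps to the network as additional input channels.

```python
import numpy as np

def hod_cells(depth, cell=8, bins=9):
    """depth: 2D array (meters). Returns (H/cell, W/cell, bins) histograms."""
    gy, gx = np.gradient(depth.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned, [0, pi)
    h, w = depth.shape
    out = np.zeros((h // cell, w // cell, bins), dtype=np.float32)
    bin_idx = np.minimum((orientation / np.pi * bins).astype(int), bins - 1)
    for i in range(h // cell):
        for j in range(w // cell):
            m = magnitude[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = bin_idx[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # Magnitude-weighted orientation histogram for this cell.
            out[i, j] = np.bincount(b.ravel(), weights=m.ravel(), minlength=bins)[:bins]
    return out

if __name__ == "__main__":
    fake_depth = np.fromfunction(lambda y, x: 0.01 * x + 0.02 * y, (64, 64))
    print(hod_cells(fake_depth).shape)   # (8, 8, 9)
```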
Constraint Satisfaction, Search, and Optimization
We introduce a novel variant of the problem of computing energy-efficient and quick routes in a road network. In contrast to previous route planning approaches, we do not only vary the route itself to save energy but also allow variation of driving speed along the route to achieve energy savings. Our approach is based on a simple yet fundamental insight about the optimal velocities along a fixed route and a reduction to the constrained shortest path problem.
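The reduction can be illustrated with a toy resource-constrained shortest path search (the graph, speed options, and energy values below are made up, and this is not the paper's algorithm): each road segment offers several (time, energy) options corresponding to different speeds, and we look for the minimum-energy path within a total time budget.

```python
import heapq

# edges[u] = list of (v, [(time, energy), ...]) with one option per speed level
EDGES = {
    "A": [("B", [(2, 5.0), (1, 9.0)]), ("C", [(3, 4.0)])],
    "B": [("D", [(2, 5.0), (1, 9.0)])],
    "C": [("D", [(2, 3.0)])],
    "D": [],
}

def min_energy(start, goal, time_budget):
    """Label-setting search on states (node, time_used), ordered by energy."""
    best = {}                                    # (node, time_used) -> energy
    heap = [(0.0, start, 0)]
    while heap:
        energy, node, used = heapq.heappop(heap)
        if node == goal:
            return energy
        if best.get((node, used), float("inf")) < energy:
            continue                             # dominated label
        for nxt, options in EDGES[node]:
            for dt, de in options:
                t, e = used + dt, energy + de
                if t <= time_budget and e < best.get((nxt, t), float("inf")):
                    best[(nxt, t)] = e
                    heapq.heappush(heap, (e, nxt, t))
    return None                                  # no feasible route

if __name__ == "__main__":
    for budget in (3, 4, 5):
        print(f"time budget {budget}: min energy = {min_energy('A', 'D', budget)}")
```

Tightening the time budget forces faster (more energy-hungry) speed options or different roads, which is exactly the trade-off the constrained shortest path formulation captures.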
Many applications can be tackled with modern CDCL SAT solvers. However, most of today's CDCL solvers guide their search with a simple, but very fast to compute, decision heuristic. In contrast to CDCL solvers, SAT solvers that are based on look-ahead procedures spend more time on decisions and on local reasoning. This paper proposes three lightweight additions to the CDCL algorithm, namely local look-ahead, all-unit-UIP learning, and on-the-fly probing, which allow the search to find unit clauses that are hard to find by unit propagation and clause learning alone. With the additional reasoning steps of these techniques, the resulting algorithm is able to solve SAT formulas that cannot be solved by the original algorithm.
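The probing idea can be illustrated in isolation (this standalone sketch is not the authors' CDCL integration): temporarily assume a literal, run unit propagation, and if a conflict follows, learn the negated literal as a unit clause.

```python
# CNF clauses are lists of non-zero ints, DIMACS style; a positive int is a
# variable, a negative int its negation.

def unit_propagate(clauses, assignment):
    """Extend assignment (a set of literals) by unit propagation.
    Returns (ok, assignment); ok is False on conflict."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                                   # clause satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return False, assignment                   # conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])              # unit clause
                changed = True
    return True, assignment

def probe_units(clauses, variables):
    """Return literals whose negation fails under unit propagation."""
    learned = []
    for var in variables:
        for lit in (var, -var):
            ok, _ = unit_propagate(clauses, {lit})
            if not ok:
                learned.append(-lit)                       # -lit is forced
    return learned

if __name__ == "__main__":
    cnf = [[1, 2], [1, -2], [-1, 3]]
    print(probe_units(cnf, [1, 2, 3]))   # -> [1, 3]
```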
Many real-world problems are solved with satisfiability testing (SAT). However, SAT solvers have documented bugs, and therefore the answer that a formula is unsatisfiable can be incorrect. Certifying algorithms are an attractive approach to increase the reliability of SAT solvers. For unsatisfiable formulas, an unsatisfiability proof has to be created. This paper presents certificate constructions for various formula simplification techniques, which are crucial to the success of modern SAT solvers.
In this paper we answer the open question of the existence of a more compact encoding from Pseudo-Boolean constraints into CNF that maintains generalized arc consistency by unit propagation, formalized by Bailleux et al. in [21]. In contrast to other encodings, our approach is defined in an abstract way, and we present a concrete instantiation, resulting in a space complexity of O(n² log²(n) log(w_max)) clauses, in contrast to the O(n³ log(n) log(w_max)) clauses generated by the previously best known encoding that maintains generalized arc consistency.
Knowledge Representation and Reasoning
Matching concept descriptions against concept patterns was introduced as a new inference task in Description Logics (DLs) almost 20 years ago, motivated by applications in the Classic system. For the DL EL, it was shown in 2000 that matching without a TBox is NP-complete. In this paper we show that matching in EL w.r.t. general TBoxes (i.e., finite sets of general concept inclusions, GCIs) is in NP, by introducing a goal-oriented matching algorithm that uses non-deterministic rules to transform a given matching problem into a solved form by a polynomial number of rule applications. We also investigate some tractable variants of the matching problem w.r.t. general TBoxes.
Over the past decade, automated negotiation has developed into a subject of central interest in distributed artificial intelligence. For a great part this is because of its broad application potential in different areas such as economics, e-commerce, and the political and social sciences. The complexity of practical automated negotiation – a multi-issue, incomplete-information, and continuous-time environment – poses severe challenges, and in recent years many negotiation strategies have been proposed in response to this challenge. Traditionally, the performance of such strategies is evaluated in game-theoretic settings in which each agent “globally” interacts (negotiates) with all other participating agents. This traditional evaluation, however, is not suited for negotiation settings that are primarily characterized by “local” interactions among the participating agents, that is, settings in which each of possibly many participating agents negotiates only with its local neighbors rather than with all other agents. This paper presents an approach to handle this type of local setting. Starting out from the traditional global perspective, the negotiations are also analyzed in a new fashion in which negotiation locality (and hence spatial information about the agents) is taken into consideration. It is shown how both empirical and spatial evolutionary game theory can be used to interpret bilateral negotiation results among state-of-the-art negotiating agents in these different scenarios.
Possibilistic Answer Set Programming (PASP) is an extension of the standard ASP framework that allows for attaching degrees of certainty to the rules in ASP programs. In the literature, several semantics for such PASP programs have been presented, each of them having particular strengths and weaknesses. In this work we present a new semantics that employs so-called iota-answer sets, a solution concept introduced by Gebser et al. (2009) in order to find solutions for standard ASP programs with odd cycles or auto-blocking rules. This is achieved by considering maximal subsets of a given ASP program for which answer sets exist. The main idea of our work is to integrate the iota-semantics into the possibilistic framework in such a way that degrees of certainty are not only assigned to atoms mentioned in the answer sets, but also to the answer sets themselves. Our approach gives more satisfactory solutions and avoids counter-intuitive examples arising in the other approaches. We compare our approach to existing ones and present a translation into the standard ASP framework, allowing the computation of solutions by existing tools.
In the context of Description Logics (DLs), concrete domains allow one to model concepts and facts by the use of concrete values and predicates between them. For reasoning in the DL ALC with general TBoxes, concrete domains may cause undecidability. Under certain restrictions of the concrete domains, decidability can be regained. Typically, the concrete domain predicates are crisp, which is a limitation for some applications. In this paper we investigate crisp ALC in combination with fuzzy concrete domains for general TBoxes, devise conditions for decidability, and give a tableau-based reasoning algorithm.
The paper contributes to the recent efforts on temporalizing and streamifying ontology-based data access (OBDA) by discussing aspects of rewritability, i.e., compilability of the TBox into ontology-level queries, and unfoldability, i.e., transformability of ontology-level queries into queries on the data-source level, for the new query-language framework STARQL. The distinguishing feature of STARQL is its general stream windowing and ABox sequencing strategy, which allows it to plug in well-known query languages such as unions of conjunctive queries (UCQs) in combination with TBox languages such as DL-Lite and to do temporal reasoning with a sorted first-order logic on top of them. The paper discusses safety aspects under which STARQL queries that embed UCQs over DL-Lite ontologies can be rewritten and unfolded to back-end relational stream query languages such as CQL. With these results, the adoption of description logic technology in industrially relevant application areas such as industrial monitoring is crucially fostered.
We investigate the problem of inconsistency measurement on large knowledge bases by considering stream-based inconsistency measurement, i.e., we investigate inconsistency measures that cannot consider a knowledge base as a whole but must process it within a stream. For that, we present, first, a novel inconsistency measure that is apt to be applied to the streaming case and, second, stream-based approximations for the new and some existing inconsistency measures. We conduct an extensive empirical analysis of the behavior of these inconsistency measures on large knowledge bases in terms of runtime, accuracy, and scalability. We conclude that for two of these measures, the approximation of the new inconsistency measure and an approximation of the contension inconsistency measure, large-scale inconsistency measurement is feasible.
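As a crude illustration of the streaming setting (a made-up proxy, not one of the paper's measures): facts arrive as signed atoms, and within a sliding window we count atoms that are asserted with both polarities, in the spirit of contension-style counting on atoms.

```python
from collections import deque

def stream_inconsistency(stream, window_size=1000):
    """stream yields (atom, polarity) pairs; yields a running conflict count."""
    window = deque()
    pos, neg = {}, {}
    for atom, polarity in stream:
        window.append((atom, polarity))
        counts = pos if polarity else neg
        counts[atom] = counts.get(atom, 0) + 1
        if len(window) > window_size:                 # evict the oldest fact
            old_atom, old_pol = window.popleft()
            old_counts = pos if old_pol else neg
            old_counts[old_atom] -= 1
            if old_counts[old_atom] == 0:
                del old_counts[old_atom]
        yield sum(1 for a in pos if a in neg)         # atoms with both polarities

if __name__ == "__main__":
    facts = [("bird(tweety)", True), ("flies(tweety)", True),
             ("flies(tweety)", False), ("penguin(tweety)", True)]
    print(list(stream_inconsistency(facts, window_size=3)))   # [0, 0, 1, 1]
```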
A central notion in qualitative spatial and temporal reasoning is the concept of a qualitative constraint calculus, which captures a particular paradigm of representing and reasoning about spatial and temporal knowledge. The concept, informally used in the research community for a long time, was formally defined by Ligozat and Renz in 2004 as a special kind of relation algebra, thus emphasizing a particular type of reasoning about binary constraints. Although the concept is known to be limited, it has prevailed in the community. In this paper we revisit the concept, contrast it with alternative approaches, and analyze its general properties. Our results indicate that the concept of a qualitative constraint calculus is both too narrow and too general: it disallows different approaches, but its setup already enables arbitrarily hard problems.
Intelligibility is a design principle for context-aware systems that focuses on providing information about context acquisition and interpretation to their users. In this paper we present existing approaches to providing intelligibility and identify a common shortcoming: explanations starting at the context level are insufficient to help users find and understand why their system is not working. Debuggability for context-aware systems is introduced as a means to assist users in debugging the cause of a failure. To achieve this, we adapt an information exchange approach from explanatory debugging. Furthermore, we discuss open problems of debuggability and provide a possible solution.
In this paper we present first steps towards an index-based workflow similarity function. Classical approaches to workflow similarity perform a pairwise comparison of workflows; the workflow similarity function presented in this paper can speed up the calculation because the comparison is performed on the index.
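A minimal sketch of what an index-based lookup could look like (the paper's index structure is more elaborate; the workflows below are hypothetical): treat workflows as sets of task labels, build an inverted index from tasks to workflows, and accumulate candidate similarities from the index instead of comparing every pair of workflows.

```python
from collections import defaultdict

WORKFLOWS = {   # hypothetical workflows as task-label sets
    "wf1": {"mix", "bake", "cool"},
    "wf2": {"mix", "bake", "decorate"},
    "wf3": {"chop", "fry", "season"},
}

def build_index(workflows):
    """Inverted index: task label -> set of workflow ids containing it."""
    index = defaultdict(set)
    for wid, tasks in workflows.items():
        for task in tasks:
            index[task].add(wid)
    return index

def query(index, workflows, tasks):
    """Jaccard similarity against all workflows sharing at least one task."""
    overlap = defaultdict(int)
    for task in tasks:
        for wid in index.get(task, ()):
            overlap[wid] += 1
    return {wid: hits / len(tasks | workflows[wid])     # |A n B| / |A u B|
            for wid, hits in overlap.items()}

if __name__ == "__main__":
    idx = build_index(WORKFLOWS)
    print(query(idx, WORKFLOWS, {"mix", "bake", "ice"}))
```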
Machine Learning and Data Mining
Artificial neural networks are fast in the application phase but very slow in the training phase. On the other hand, there are state-of-the-art approaches using neural networks that are very efficient in image classification tasks, such as the hybrid neural network plait (HNNP) approach for images from signal data stemming, for instance, from phonemes. We propose to accelerate HNNP for phoneme recognition by substituting the neural network with the highest computation costs, the convolutional neural network, within the HNNP with a preceding local feature extractor and a simpler and faster neural network. Hence, in this paper we propose appropriate feature extractors for this problem and investigate and compare the resulting computation costs as well as the classification performance. The results of our experiments show that HNNP with the best of our proposed feature extractors in combination with a smaller neural network is more than two times faster than HNNP with the more complex convolutional neural network and still delivers good classification performance.
To solve big data problems that occur in modern data mining applications, a comprehensive approach is required that combines a flexible model and an optimisation algorithm with fast convergence and a potential for efficient parallelisation both in the number of data points and the number of features.
In this paper we present an algorithm for fitting additive models based on the basis expansion principle. The classical backfitting algorithm that solves the underlying normal equations cannot be properly parallelised due to inherent data dependencies and leads to a limited error reduction under certain circumstances. Instead, we suggest a modified BiCGStab method adapted to suit the special block structure of the problem. The new method demonstrates superior convergence speed and promising parallel scalability.
We discuss the convergence properties of the method and further investigate its convergence and scalability on a set of benchmark problems.
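As a simplified sketch of the basis-expansion setting, assuming SciPy's stock BiCGStab rather than the authors' modified, block-structured variant: fit an additive model via hat-function bases and solve the ridge-regularised normal equations B^T B c = B^T y iteratively.

```python
import numpy as np
from scipy.sparse.linalg import bicgstab

def hat_basis(x, knots):
    """Piecewise-linear ('hat') basis functions evaluated at x."""
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x1) + x2 ** 2 + rng.normal(0, 0.1, 500)

knots = np.linspace(0, 1, 12)
B = np.hstack([hat_basis(x1, knots), hat_basis(x2, knots)])   # design matrix
A = B.T @ B + 1e-3 * np.eye(B.shape[1])    # small ridge term keeps A well conditioned
rhs = B.T @ y
coef, info = bicgstab(A, rhs, atol=1e-8)   # iterative Krylov solver
print("converged" if info == 0 else f"info={info}",
      "training RMSE:", np.sqrt(np.mean((B @ coef - y) ** 2)))
```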
Real-parameter black-box optimization using evolution strategies (ES) is often applied when the fitness function or its characteristics are not explicitly given. The evaluation of fitness and feasibility might be expensive. In the past, different surrogate model approaches have been proposed to address this issue. In our previous work, local feasibility surrogate models were proposed that are trained with already evaluated individuals. This tightly coupled interdependency with the optimization process leads to complex side effects when applied with meta-heuristics like the covariance matrix adaptation ES (CMA-ES). The objective of this paper is to propose a new type of constraint surrogate model, which uses the concept of active learning in multiple stages to estimate the constraint boundary with a stage-dependent accuracy. The underlying linear model of the constraint boundary is estimated in every stage with binary search. In the optimization process, the pre-selection scheme is employed to save constraint function calls. The surrogate model is evaluated on a simple adaptive (1+1)-ES as well as on the complex (1+1)-CMA-ES for constrained optimization. The results of both ES on a linearly constrained test bed look promising.
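One ingredient of such a scheme can be illustrated in a few lines (a toy version, not the full multi-stage active-learning procedure; the constraint below is hypothetical): locate a point on the constraint boundary by binary search between a known feasible and a known infeasible point, using only binary feasibility queries. Several such boundary points could then be used to fit a linear surrogate of the boundary.

```python
import numpy as np

def boundary_point(feasible, infeasible, is_feasible, tol=1e-6):
    """Bisect the segment [feasible, infeasible] until the gap is below tol."""
    a, b = np.asarray(feasible, float), np.asarray(infeasible, float)
    while np.linalg.norm(b - a) > tol:
        mid = 0.5 * (a + b)
        if is_feasible(mid):
            a = mid                     # boundary lies between mid and b
        else:
            b = mid                     # boundary lies between a and mid
    return 0.5 * (a + b)

if __name__ == "__main__":
    # Hypothetical linear constraint: feasible iff x0 + 2*x1 <= 1.
    constraint = lambda x: x[0] + 2 * x[1] <= 1
    p = boundary_point([0.0, 0.0], [2.0, 2.0], constraint)
    print(p, "residual:", p[0] + 2 * p[1] - 1)
```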
Statistical methods have shown great success in short-term prediction of wind power in the recent past. A preselection of turbines is presented that is based on the segmentation of the area around the target turbine within a specific radius. Small problem instances allow a rigorous comparison of different input sets employing various regression techniques and motivate the application of evolutionary algorithms for finding adequate features. The optimization problem turns out to be difficult to solve, while strongly depending on the target turbine and the prediction technique.
Planning and Scheduling
Even though there are sophisticated AI planning algorithms, many integrated, large-scale projects do not use planning. One reason seems to be the missing support by engineering tools such as syntax highlighting and visualization. We propose myPddl, a modular toolbox for efficiently creating PDDL domains and problems. To evaluate myPddl, we compare it to existing knowledge engineering tools for PDDL and experimentally assess its usefulness for novice PDDL users.
In this paper we look at packing problems that naturally arise in container loading. Given a set of 3D iso-oriented objects and a container, the task is to find a packing sequence of the input objects, consisting of the ID, location, and orientation, that minimizes the space wasted by the packing. Instead of the decision problem, we look at the packing optimization problem of minimizing the total height of a packing. Our solution uses extreme points and applies Monte-Carlo tree search with policy adaptation, a randomized search technique that has been shown to be effective for solving single-agent games and, more recently, complex traveling salesman and vehicle routing problems. The implementation is comparatively simple and conceptually different from mathematical programming, branch-and-bound, and local search approaches. Nonetheless, the results in solving 2D and 3D packing problems are promising.
Domain-independent planning in general is broadly applicable to a wide range of tasks. Many formalisms exist that allow the description of different aspects of realistic problems. Which one to use is often not an obvious choice, since a higher degree of expressiveness usually comes with increased planning time and/or decreased policy quality. Under the assumption that hard guarantees are not required, users are faced with a decision between multiple approaches. As a generic model we use a probabilistic description in the form of Markov Decision Processes (MDPs). We define abstracting translations into a classical planning formalism and into fully observable nondeterministic planning. Our goal is to give insight into how state-of-the-art systems perform on different MDP planning domains.
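As one well-known example of such an abstracting translation (used here purely as an illustration; the paper defines its own set of translations), the all-outcomes determinization of an MDP turns every probabilistic outcome of an action into a separate deterministic action and drops the probabilities.

```python
def all_outcomes_determinization(mdp_actions):
    """mdp_actions: {action: [(probability, effect), ...]} with symbolic effects.
    Returns {deterministic_action: effect}."""
    det = {}
    for action, outcomes in mdp_actions.items():
        for i, (prob, effect) in enumerate(outcomes):
            if prob > 0.0:                        # unreachable outcomes are dropped
                det[f"{action}__o{i}"] = effect   # one deterministic action per outcome
    return det

if __name__ == "__main__":
    mdp = {
        "pick-up(block)": [(0.9, "holding(block)"), (0.1, "on-floor(block)")],
        "move(a, b)":     [(1.0, "at(b)")],
    }
    for name, effect in all_outcomes_determinization(mdp).items():
        print(name, "->", effect)
```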
Autonomous agents interact with their environments via sensors and actuators. Motivated by the observation that sensors can be expensive, in this paper we are concerned with the problem of minimizing the number of sensors an agent needs in order to successfully plan and act in a partially observable nondeterministic environment. More specifically, we present a simple greedy top-down algorithm in the space of observation variables that returns an inclusion-minimal set of state variables sufficient to observe in order to find a plan. We enhance the algorithm by reusing plans from earlier iterations and by the use of functional dependencies between variables, which allow the values of some variables to be inferred from those of other variables. Our experimental evaluation on a number of benchmark problems shows promising results regarding runtime, number of sensors, and plan quality.
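A schematic version of the greedy top-down loop, where `plan_exists` stands in for the actual planner call and the variable names are hypothetical: start from the full set of observation variables, try to drop one variable at a time, and keep the reduced set whenever a plan can still be found.

```python
def minimize_observations(variables, plan_exists):
    """Return an inclusion-minimal subset of `variables` for which
    plan_exists(subset) still holds (assuming it holds for the full set)."""
    current = set(variables)
    for var in sorted(variables):                 # fixed order for determinism
        candidate = current - {var}
        if plan_exists(candidate):                # planning is the expensive step
            current = candidate                   # the variable is not needed
    return current

if __name__ == "__main__":
    # Hypothetical check: a plan exists as long as the robot's position and the
    # door state remain observable.
    required = {"robot-position", "door-open"}
    demo_planner = lambda obs: required <= set(obs)
    print(minimize_observations({"robot-position", "door-open", "light-on",
                                 "battery-level"}, demo_planner))
```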
In the next article, we will discuss KI 2015.