KI 2017: Advances in Artificial Intelligence Papers

KI2017

In the previous article we discussed KI2016. In this article, we describe KI2017, which was held at the Technical University of Dortmund in 2017.

The 40th edition of the conference, KI2017, held at the Technical University of Dortmund from September 25-29, 2017, received 73 valid submissions, of which 20 were accepted as full research papers and 16 as short technical communications.

The first official event of this kind took place in 1975 as a workshop of the KI working group of the German "Gesellschaft für Informatik" (GI, the German Informatics Society); even before that, informal meetings such as the "Fachtagung Kognitive Verfahren und Systeme" had been held in Hamburg in April 1973. Today, KI has evolved into an annual conference with international participation, attended mainly by artificial intelligence researchers from Germany and neighboring countries.

The content includes a wide range of topics such as agents, robotics, cognitive science, machine learning, planning, knowledge representation, reasoning, and ontology, as well as numerous applications in areas such as social media, psychology, and transportation systems.

In addition to the regular sessions, the program featured invited talks by Pierre Baldi (University of California, Irvine), Gerhard Brewka (University of Leipzig), and Luc De Raedt (Katholieke Universiteit Leuven), an industry session with a keynote by Wolfgang Wahlster (Saarland University), and a session of selected papers by German authors that had been presented at the international sister conferences AAAI and IJCAI in 2017.

In addition, to celebrate the 40th anniversary of the German Conference on Artificial Intelligence, the program included a historical session with a panel discussion. The session was moderated by Ulrich Furbach, with contributions from panelists Katharina Morik, Hans-Helmut Nagel, Bernd Neumann, and Jörg Siekmann. After the moderator briefly reviewed the history of computer science and artificial intelligence in Germany, the panelists commented on early AI-related activities and their relationship to the computer science community of the time. Finally, an open forum reflected on recent developments in the field and its future.

During the first two days of the conference, the workshop and tutorial chair, Christoph Beierle (University of Hagen), organized a program consisting of two workshops:

– ZooOperation Competition (Vanessa Volz, Christian Eichhorn)
– Formal and Cognitive Reasoning (Christoph Beierle, Gabriele Kern-Isberner, et al.)

During the workshops, further papers were presented, ideas were discussed, and experiences were exchanged. In addition, the program included two tutorials:

– Defeasible Reasoning in Description Logics (Ivan Varzinczak)
– Knowledge Representation and Reasoning with Nilsson Type Probabilistic Logic (Ivan Varzinczak)

The details are given below.

Full Technical Papers

In this paper, we employ aspects of machine learning, computer vision, and qualitative representations to build a classifier of plain sketches. The paper proposes a hybrid technique for accurately recognizing hand-drawn sketches, by relying on a set of qualitative relations between the strokes that compose such sketches, and by taking advantage of two major perspectives for processing images. Our implementation shows promising results for recognizing sketches that have been hand-drawn by human participants.

We adapt the relaxation heuristics h^max, h^add, and h^FF to interval-based numeric relaxation frameworks, combining them with two different relaxation techniques and with two different search techniques. In contrast to previous approaches, the heuristics presented here are not limited to a subset of numeric planning and support action costs.
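As background for readers unfamiliar with these heuristics, the following Python sketch computes the classical propositional h^max and h^add estimates via a fixpoint over the delete-relaxed task. The interval-based numeric extension that is the paper's actual contribution is not reproduced here, and the action tuple format is my own.

```python
# Minimal sketch of the classical (propositional) h_max / h_add heuristics.
# Actions are hypothetical tuples: (name, preconditions, add_effects, cost).
import math

def relaxation_heuristic(state, goal, actions, combine):
    """combine = max gives h_max, combine = sum gives h_add."""
    cost = {p: 0.0 for p in state}            # facts already true cost nothing
    changed = True
    while changed:                            # fixpoint over the relaxed task
        changed = False
        for name, pre, add, c in actions:
            if any(p not in cost for p in pre):
                continue                      # some precondition not yet reachable
            reach = c + combine(cost[p] for p in pre) if pre else c
            for p in add:
                if reach < cost.get(p, math.inf):
                    cost[p] = reach
                    changed = True
    if any(g not in cost for g in goal):
        return math.inf                       # goal unreachable even in the relaxation
    return combine(cost[g] for g in goal) if goal else 0.0

# Toy task: p -> q -> r
actions = [("a1", {"p"}, {"q"}, 1.0), ("a2", {"q"}, {"r"}, 2.0)]
print(relaxation_heuristic({"p"}, {"r"}, actions, max))   # h_max = 3.0
print(relaxation_heuristic({"p"}, {"r"}, actions, sum))   # h_add = 3.0 (single chain, values coincide)
```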

Two simple and attractive mechanisms for the fair division of indivisible goods in an online setting are LIKE and BALANCED LIKE. We study some fundamental computational problems concerning the outcomes of these mechanisms. In particular, we consider what expected outcomes are possible, what outcomes are necessary, and how to compute their exact outcomes. In general, we show that such questions are more tractable to compute for LIKE than for BALANCED LIKE. As LIKE is strategy-proof but BALANCED LIKE is not, we also consider the computational problem of how, with BALANCED LIKE, an agent can compute a strategic bid to improve their outcome. We prove that this problem is intractable in general.
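For readers unfamiliar with the two mechanisms, the sketch below follows the usual description in the online fair division literature: LIKE gives each arriving item uniformly at random to one of the agents who like (bid for) it, while BALANCED LIKE restricts that random choice to the liking agents that currently hold the fewest items. This is a simplified illustration with a hypothetical data format, not the authors' code.

```python
import random

def like(items, likes, agents):
    """LIKE: each arriving item goes uniformly at random to an agent who likes it."""
    allocation = {a: [] for a in agents}
    for item in items:
        bidders = [a for a in agents if item in likes[a]]
        if bidders:
            allocation[random.choice(bidders)].append(item)
    return allocation

def balanced_like(items, likes, agents):
    """BALANCED LIKE: restrict the random choice to liking agents with fewest items so far."""
    allocation = {a: [] for a in agents}
    for item in items:
        bidders = [a for a in agents if item in likes[a]]
        if not bidders:
            continue
        fewest = min(len(allocation[a]) for a in bidders)
        tied = [a for a in bidders if len(allocation[a]) == fewest]
        allocation[random.choice(tied)].append(item)
    return allocation

# Toy run: two agents, three items arriving online
agents = ["alice", "bob"]
likes = {"alice": {"i1", "i2", "i3"}, "bob": {"i2", "i3"}}
print(balanced_like(["i1", "i2", "i3"], likes, agents))
```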

This paper combines two key ingredients for online algorithms – competitive analysis (e.g. the competitive ratio) and advice complexity (e.g. the number of advice bits needed to improve online decisions) – in the context of a simple online fair division model where items arrive one by one and are allocated to agents via some mechanism. We consider four such online mechanisms: the popular Ranking matching mechanism adapted from online bipartite matching and the Like, Balanced Like and Maximum Like allocation mechanisms first introduced for online fair division problems. Our first contribution is that we perform a competitive analysis of these mechanisms with respect to the expected size of the matching, the utilitarian welfare, and the egalitarian welfare. We also suppose that an oracle can give a number of advice bits to the mechanisms. Our second contribution is to give several impossibility results; e.g. no mechanism can achieve the egalitarian outcome of the optimal offline mechanism supposing they receive partial advice from the oracle. Our third contribution is that we quantify the competitive performance of these four mechanisms w.r.t. the number of oracle requests they can make. We thus present a most-competitive mechanism for each objective.

The highly influential framework of conceptual spaces provides a geometric way of representing knowledge. Instances are represented by points in a high-dimensional space and concepts are represented by convex regions in this space. After pointing out a problem with the convexity requirement, we propose a formalization of conceptual spaces based on fuzzy star-shaped sets. Our formalization uses a parametric definition of concepts and extends the original framework by adding means to represent correlations between different domains in a geometric way. Moreover, we define computationally efficient operations on concepts (intersection, union, and projection onto a subspace) and show that these operations can support both learning and reasoning processes.
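The full fuzzy star-shaped formalization is beyond a short example, but a common simplification in this line of work lets membership decay exponentially with the (weighted) distance to a set of prototypical points. The sketch below implements only that idea and should be read as a rough approximation of the parametric concept definition; the dimensions, prototypes, and parameters are made up.

```python
import math

def membership(point, prototypes, mu0=1.0, c=1.0, weights=None):
    """Fuzzy membership: mu0 * exp(-c * weighted distance to the closest prototype).
    A simplified stand-in for the paper's parametric concept definition."""
    weights = weights or [1.0] * len(point)
    def dist(p, q):
        return math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, q)))
    return mu0 * math.exp(-c * min(dist(point, p) for p in prototypes))

# Toy "apple" concept in a 2-D (sweetness, size) domain
apple_prototypes = [(0.8, 0.4), (0.7, 0.5)]
print(membership((0.75, 0.45), apple_prototypes))   # near a prototype -> membership close to 1
print(membership((0.1, 0.9), apple_prototypes))     # far away -> membership close to 0
```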

Neighborhood-based approaches often fail in sparse scenarios; a direct implication for recommender systems that exploit co-occurring items is often inappropriately poor performance. As a remedy, we propose to propagate information (e.g., similarities) across the item graph to leverage sparse data. Instead of processing only directly connected items (e.g. co-occurrences), the similarity of two items is defined as the maximum capacity path interconnecting them. Our approach resembles a generalization of neighborhood-based methods that are obtained as special cases when restricting path lengths to one. We present two efficient online computation schemes and report on empirical results.
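The bottleneck idea can be sketched with a widest-path variant of Dijkstra's algorithm: the similarity of two items is the largest, over all connecting paths, of the smallest edge similarity along the path, and restricting paths to length one recovers plain co-occurrence similarity. The graph format below is hypothetical and the code only illustrates the definition, not the authors' online computation schemes.

```python
import heapq

def max_capacity_similarity(graph, source):
    """Widest-path Dijkstra: graph[u] = {v: similarity}; returns the maximum
    bottleneck similarity from `source` to every reachable item."""
    best = {source: float("inf")}
    heap = [(-best[source], source)]
    while heap:
        cap, u = heapq.heappop(heap)
        cap = -cap
        if cap < best.get(u, 0.0):
            continue                            # stale heap entry
        for v, sim in graph[u].items():
            bottleneck = min(cap, sim)          # a path is only as strong as its weakest edge
            if bottleneck > best.get(v, 0.0):
                best[v] = bottleneck
                heapq.heappush(heap, (-bottleneck, v))
    return best

# Items A and C never co-occur directly, but information propagates through B
graph = {"A": {"B": 0.6}, "B": {"A": 0.6, "C": 0.8}, "C": {"B": 0.8}}
print(max_capacity_similarity(graph, "A"))      # {'A': inf, 'B': 0.6, 'C': 0.6}
```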

For inference in probabilistic formalisms with first-order constructs, lifted variable elimination (LVE) is one of the standard approaches for single queries. To handle multiple queries efficiently, the lifted junction tree algorithm (LJT) uses a specific representation of a first-order knowledge base and LVE in its computations. Unfortunately, LJT induces unnecessary groundings in cases where the standard LVE algorithm, GC-FOVE, has a fully lifted run. Additionally, LJT does not handle evidence explicitly. We extend LJT (i) to identify and prevent unnecessary groundings and (ii) to effectively handle evidence in a lifted manner. Given multiple queries, e.g., in machine learning applications, our extension computes answers faster than LJT and GC-FOVE.

Flood-filling algorithms as used for coloring images and shadow casting show that improved locality greatly increases the cache performance and, in turn, reduces the running time of an algorithm. In this paper we look at Dijkstra's method to compute shortest paths, for example to generate pattern databases. As cache-improving contributions, we propose edge-cost factorization and flood-filling the memory layout of the graph. We conduct experiments on commercial game maps and compare the new priority queues with advanced heap implementations as well as with alternative bucket implementations.
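The cache-aware memory layout and the edge-cost factorization proposed in the paper are not reproduced here; the sketch below only shows the kind of bucket-based (Dial-style) priority queue that such Dijkstra variants are typically compared against when edge costs are small non-negative integers.

```python
from collections import defaultdict

def dijkstra_bucket(graph, source, max_cost):
    """Dijkstra with a bucket (Dial) queue for non-negative integer edge costs.
    graph[u] = [(v, cost), ...]; max_cost bounds any single edge cost."""
    dist = {source: 0}
    buckets = defaultdict(list)
    buckets[0].append(source)
    upper = max_cost * max(len(graph), 1)       # no shortest distance can exceed this
    d = 0
    while d <= upper:
        while buckets[d]:
            u = buckets[d].pop()
            if dist.get(u, float("inf")) < d:   # stale entry, already settled cheaper
                continue
            for v, c in graph.get(u, []):
                if d + c < dist.get(v, float("inf")):
                    dist[v] = d + c
                    buckets[d + c].append(v)
        d += 1
    return dist

grid = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra_bucket(grid, "s", max_cost=4))   # {'s': 0, 'a': 1, 'b': 3}
```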

A shallow semantic embedding of an intensional higher-order modal logic (IHOML) in Isabelle/HOL is presented. IHOML draws on Montague/Gallin intensional logics and has been introduced by Melvin Fitting in his textbook Types, Tableaus and Gödel’s God in order to discuss his emendation of Gödel’s ontological argument for the existence of God. Utilizing IHOML, the most interesting parts of Fitting’s textbook are formalized, automated and verified in the Isabelle/HOL proof assistant. A particular focus thereby is on three variants of the ontological argument which avoid the modal collapse, a strongly criticized side-effect in Gödel’s (and Scott’s) original work.

Situation-aware route planning is attracting increasing interest. The proliferation of various sensor technologies in smart cities allows the incorporation of real-time data and its predictions in the trip planning process. We present a system for individual multi-modal trip planning that incorporates predictions of future public transport delays in routing. Future delay times are computed by a Spatio-Temporal Random Field based on a stream of current vehicle positions. The conditioning of spatial regression on intermediate predictions of a discrete probabilistic graphical model makes it possible to incorporate historical data, streamed online data, and a rich dependency structure at the same time. We demonstrate the system with a real-world use case in the city of Warsaw, Poland.

Instead of wastefully sending entire images at fixed frame rates, neuromorphic vision sensors transmit only the local pixel-level changes caused by movement in a scene at the time they occur. This results in a stream of events with a latency in the order of microseconds. While these sensors offer tremendous advantages in terms of latency and bandwidth, they require new, adapted approaches to computer vision due to their unique event-based pixel-level output. In this contribution, we propose an online multi-target tracking system for neuromorphic vision sensors, which is the first neuromorphic vision system used in intelligent transportation systems. In order to track moving targets, a fast and simple object detection algorithm using clustering techniques is developed. To make full use of the low latency, we integrate an online tracking-by-clustering system running at a high frame rate, which far exceeds the real-time capabilities of traditional frame-based industry cameras. The performance of the system is evaluated using real-world dynamic vision sensor data of a highway bridge scenario. We hope that our attempt will motivate further research on neuromorphic vision sensors for intelligent transportation systems.
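The paper's detector is a fast, purpose-built clustering algorithm; as a rough stand-in, the sketch below clusters a short window of simulated DVS events in image space with DBSCAN and reports cluster centroids as detections. The event tuple layout and all parameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_objects(events, eps=3.0, min_events=10):
    """Cluster a short temporal window of DVS events (x, y, t, polarity) in image
    space; each dense cluster is treated as one moving-object detection."""
    xy = np.array([(e[0], e[1]) for e in events])
    labels = DBSCAN(eps=eps, min_samples=min_events).fit_predict(xy)
    detections = []
    for label in set(labels) - {-1}:            # -1 marks isolated noise events
        detections.append(xy[labels == label].mean(axis=0))   # cluster centroid
    return detections

# Synthetic window: one compact blob of events plus a little scattered noise
rng = np.random.default_rng(0)
blob = rng.normal(loc=(50, 20), scale=1.0, size=(50, 2))
noise = rng.uniform(0, 100, size=(5, 2))
events = [(x, y, 0.0, 1) for x, y in np.vstack([blob, noise])]
print(detect_objects(events))                   # roughly one centroid near (50, 20)
```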

One of the first (and key) steps in analyzing an argumentative exchange is to reconstruct complete arguments from utterances which may carry just enthymemes. In this paper, using legal argument from analogy, we argue that in this reconstruction process interpreters may have to deal with a kind of uncertainty that can be appropriately represented in Dempster-Shafer (DS) theory rather than in classical probability theory. Hence we generalize and relax existing frameworks of Probabilistic Argumentation (PAF), which are currently based on classical probability theory, into what we refer to as the DS-based Argumentation Framework (DSAF). Concretely, we first define a DSAF form and semantics by generalizing existing PAF form and semantics. We then present a method to translate existing proof procedures for standard Abstract Argumentation into DSAF inference procedures. Finally, we provide a Prolog-based implementation of the resulting DSAF inference procedures.
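Since DSAF replaces classical probabilities with Dempster-Shafer mass functions, the basic building block is Dempster's rule of combination for fusing two independent bodies of evidence. The sketch below implements that standard rule over frozenset focal elements; the argumentation-specific semantics of the paper is not shown.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions over frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to contradictory combinations
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two uncertain sources about whether an implicit premise holds
m1 = {frozenset({"holds"}): 0.6, frozenset({"holds", "fails"}): 0.4}
m2 = {frozenset({"fails"}): 0.3, frozenset({"holds", "fails"}): 0.7}
print(combine(m1, m2))
```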

This paper introduces an evolutionary tuning approach for a pipeline of preprocessing methods and kernel principal component analysis (PCA) employing evolution strategies (ES). A simple (1+1)-ES adapts the imputation method, various preprocessing steps like normalization and standardization, and optimizes the parameters of kernel PCA. A small experimental study on a benchmark data set with missing values demonstrates that the evolutionary kernel PCA pipeline can be tuned with relatively few optimization steps, which makes evolutionary tuning applicable to scenarios with very large data sets.
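A heavily simplified sketch of the idea, assuming scikit-learn: a (1+1)-ES mutates only the RBF kernel width of KernelPCA in log space and keeps the better of parent and offspring, with reconstruction error as a stand-in fitness. The paper's full pipeline additionally adapts the imputation method and the preprocessing steps, which are fixed here.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

def fitness(X, gamma):
    """Reconstruction error of a 2-component RBF kernel PCA (lower is better)."""
    kpca = KernelPCA(n_components=2, kernel="rbf", gamma=gamma,
                     fit_inverse_transform=True)
    Z = kpca.fit_transform(X)
    return float(np.mean((X - kpca.inverse_transform(Z)) ** 2))

def one_plus_one_es(X, steps=30, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    log_gamma = 0.0
    best = fitness(X, 10.0 ** log_gamma)
    for _ in range(steps):
        candidate = log_gamma + sigma * rng.normal()   # mutate gamma in log space
        f = fitness(X, 10.0 ** candidate)
        if f < best:                                   # (1+1) selection: keep the better one
            log_gamma, best = candidate, f
    return 10.0 ** log_gamma, best

# Toy data with a missing value, imputed and standardized before tuning
X = np.array([[1.0, 2.0], [2.0, np.nan], [3.0, 6.0], [4.0, 8.0], [5.0, 9.0]])
X = StandardScaler().fit_transform(SimpleImputer(strategy="mean").fit_transform(X))
print(one_plus_one_es(X))
```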

Dimensionality reduction (DR) lowers the dimensionality of a high-dimensional data set by reducing the number of features for each pattern. The importance of DR techniques for data analysis and visualization led to the development of a large diversity of DR methods. The lack of comprehensive comparative studies makes it difficult to choose the best DR methods for a particular task based on known strengths and weaknesses. To close the gap, this paper presents an extensive experimental study comparing 29 DR methods on 13 artificial and real-world data sets. The performance assessment of the study is based on six quantitative metrics. According to our benchmark and evaluation scheme, the methods mMDS, GPLVM, and PCA turn out to outperform their competitors, although exceptions are revealed for special cases.
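As a miniature version of such a benchmark, assuming scikit-learn, the sketch below embeds a small data set with PCA and metric MDS and scores both with the trustworthiness measure, one example of the kind of quantitative metric used in such comparisons.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, trustworthiness

X = load_digits().data[:300]                      # small slice keeps MDS fast

embeddings = {
    "PCA":  PCA(n_components=2).fit_transform(X),
    "mMDS": MDS(n_components=2, metric=True, random_state=0).fit_transform(X),
}
for name, Z in embeddings.items():
    # trustworthiness in [0, 1]: how well local neighborhoods are preserved
    print(name, round(trustworthiness(X, Z, n_neighbors=5), 3))
```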

Task networks are a powerful tool for AI planning. Classical approaches like forward STN planning and SHOP typically devise non-deterministic algorithms that can be operationalized using classical graph search techniques such as A*. For two reasons, however, this strategy is sometimes inefficient. First, identical tasks might be resolved several times within the search process, i.e., the same subproblem is solved repeatedly instead of being reused. Second, large parts of the search space might be redundant if some of the objects in the planning domain are substitutable.

In this paper, we present an extension of simple task networks that avoids these problems and enables a much more efficient planning process. Our main innovation is the creation of new constants during planning combined with AND-OR graph search. To demonstrate the advantages of these techniques, we present a case study in the field of automated service composition, in which search space reductions of several orders of magnitude can be achieved.

One of the biggest challenges in business process management is the creation of appropriate and efficient workflows. This calls for intelligent, knowledge-based systems that assist domain experts in this endeavor. In this paper we investigate workflow creation by applying Process-Oriented Case-Based Reasoning (POCBR). We introduce POCBR and describe how it can be applied to the experience-based generation of workflows by retrieval and adaptation of available best-practice workflow models. While existing approaches have already demonstrated their feasibility in principle, the generated workflows are not optimized with respect to complexity requirements. However, there is a high interest in workflows with low complexity, e.g., to ensure appropriate enactment as well as the understandability of the workflow. The main contribution of this paper is thus a novel approach that considers workflow complexity during workflow generation. To this end, a complexity measure for workflows is proposed and integrated into the retrieval and adaptation process. An experimental evaluation with real cooking recipes clearly demonstrates the benefits of the described approach.

Maintaining the a-posteriori distribution of categorical states given a sequence of noisy and ambiguous observations, e.g., sensor data, can lead to situations where one observation can correspond to a large number of different states. We call these states symmetrical, as they cannot be distinguished given the observation. Considering each of them during inference is computationally infeasible, even for small scenarios. However, the number of situations (called hypotheses) can be reduced by abstracting from particular ones and representing all symmetrical ones by a single abstract state. We propose a novel Bayesian filtering algorithm that performs this abstraction. The algorithm, which we call Lifted Marginal Filtering (LiMa), is inspired by Lifted Inference and combines techniques known from Computational State Space Models and Multiset Rewriting Systems to perform efficient sequential inference on a parametric multiset state description. We demonstrate that our approach works by comparing LiMa with conventional filtering.

Recently, a new account of the problem of induction has been developed [1], based on a priori advantages of regret-weighted meta-induction (RW) in online learning [2]. The claimed a priori advantages seem to contradict the no free lunch (NFL) theorem, which asserts that relative to a state-uniform prior distribution (SUPD) over possible worlds all (non-clairvoyant) prediction methods have the same expected predictive success. In this paper we propose a solution to this problem based on four novel results:

    • RW enjoys free lunches, i.e., its predictive long-run success dominates that of other prediction strategies.
    • Yet the NFL theorem applies to online prediction tasks provided the prior distribution is a SUPD.
    • The SUPD is maximally induction-hostile and assigns a probability of zero to all possible worlds in which RW enjoys free lunches. This dissolves the apparent conflict with the NFL.
    • The a priori advantages of RW can be demonstrated even under the assumption of a SUPD. Further advantages become apparent when a frequency-uniform distribution is considered.

Autonomous robots need to perceive and represent their environments and act accordingly. Using simultaneous localization and mapping (SLAM) methods, robots can build maps of the environment which are efficient for localization and path planning as long as the environment remains unchanged. However, facility logistics environments are not static because pallets and other obstacles are stored temporarily.

Within the general objective of conceiving a cognitive architecture for image interpretation able to generate outputs relevant to several target user profiles, the paper elaborates on a set of operations that should be provided by a cognitive space to guarantee the generation of relevant descriptions. First, it proposes a working definition of the contrast operation. Then, revisiting well-known results in cognitive studies, it sketches a definition of similarity based on contrast, as distinguished from the metric defined on the conceptual space.

Technical Communications

This paper reports on a research project to use Artificial Intelligence (AI) technology to reduce the alarm handling workload of control room operators in a terminal in the harbour of Antwerp. Several characteristics of this terminal preclude the use of standard methods, such as root cause analysis. Therefore, we focused attention on the process engineers and developed a system to help these engineers reduce the number of alarms that occur. It consists of two components: one to identify interesting alarms and another to analyse them. For both components, user-friendly visualisations were developed.

This paper presents an approach to learn the agents’ action model (action blueprints orchestrating transitions of the system state) from plan execution sequences. It does so by representing intra-action and inter-action dependencies in the form of a maximum satisfiability problem (MAX-SAT), and solving it with a MAX-SAT solver to reconstruct the underlying action model. Unlike previous MAX-SAT driven approaches, our chosen dependencies exploit the relationship between consecutive actions, yielding more accurately learnt models in the end.

The present paper identifies an important narrative framework that has not yet been employed to implement computational storytelling systems. Grounded in this theory, it suggests using a BDI architecture extended with a personality-based affective appraisal component to model fictional characters. A proof of concept is shown to be capable of generating the plot of a folk tale. This example is used to explore the system’s parameters and the plot-space that is spanned by them.

Clinical practice guidelines (CPGs) serve to transfer results from evidence-based medicine into clinical practice. There is growing interest in clinical decision support systems (CDSS) implementing the guideline recommendations; research on such systems typically considers combinations of workflow languages with knowledge representation formalisms. Here, we report on experience with an OWL-based proof-of-concept implementation of parts of the German S3 guideline for schizophrenia. From the information-technological point of view, the salient feature of our implementation is that it represents the CPG entirely as a logic-based ontology, without resorting, e.g., to rule-based action formalisms or hard-wired workflows to capture clinical pathways. Our current goal is to establish that such an implementation is feasible; long-range benefits we expect from the approach are modularity of CPG implementation, ease of maintenance, and logical unity.

We consider scenarios in which robots need to collaboratively perform a task. Our focus is on scenarios where every robot has the same goal and autonomously decides which next step to take to reach the shared goal. The robots can synchronize by requesting each other’s help. Our approach builds on a knowledge-based architecture in which robot goals and behaviour are encoded declaratively using logic programming (Prolog, in our case). Each robot executes the same Prolog program and requests help from other robots to cooperatively solve some subtasks. In this paper we present the system architecture and a proof-of-concept scenario of tidying up an apartment. In this scenario, a set of robots work autonomously yet collaboratively to tidy up a simulated apartment by placing the scattered objects in their proper places.

Since its introduction two decades ago, the way researchers parameterize and optimize Cartesian Genetic Programming (CGP) has remained almost unchanged. In this work we investigate non-standard parameterizations and optimization algorithms for CGP. We show that the conventional way of using CGP, i.e. configuring it as a single line optimized by a (1+4) Evolution Strategies-style search scheme, is a very good choice, but that rectangular CGP geometries and more elaborate metaheuristics, such as Simulated Annealing, can lead to faster convergence rates.
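The conventional search scheme mentioned here is easy to sketch: a single parent genotype, a handful of mutated offspring per generation, and replacement whenever an offspring is at least as fit as the parent (allowing neutral drift). The sketch below shows only that loop on a generic integer genome with a placeholder fitness; decoding and evaluating an actual CGP graph is omitted.

```python
import random

def one_plus_lambda(fitness, genome_length, alphabet, lam=4,
                    generations=100, mu_rate=0.05, seed=0):
    """CGP-style (1 + lambda) search: one parent, lam mutated offspring per
    generation; the best offspring replaces the parent if it is at least as fit."""
    rng = random.Random(seed)
    parent = [rng.choice(alphabet) for _ in range(genome_length)]
    parent_fit = fitness(parent)
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            child = [rng.choice(alphabet) if rng.random() < mu_rate else g for g in parent]
            offspring.append((fitness(child), child))
        best_fit, best_child = max(offspring, key=lambda t: t[0])
        if best_fit >= parent_fit:          # ties favour the offspring (neutral drift)
            parent, parent_fit = best_child, best_fit
    return parent, parent_fit

# Placeholder fitness: count genes equal to 1 (stands in for decoding and
# evaluating a CGP graph, which is not shown here)
best, score = one_plus_lambda(lambda g: sum(1 for x in g if x == 1),
                              genome_length=30, alphabet=[0, 1, 2, 3])
print(score)
```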

Human-Machine Interaction (HMI) is useful in sterile environments such as operating rooms (OR), where surgeons need to interact with images of organs from scanners on screens. Contamination issues may arise if the surgeon must touch a keyboard or a mouse. In order to reduce contamination and improve the interaction with the images without asking another team member, the Gesture ToolBox project, based on previous methods of Altran Research, has been proposed. Ten different signs from LSF (French Sign Language) have been chosen as a way to interact with the images. In order to detect the signs, deep learning methods have been used, based on a pre-trained Convolutional Neural Network (VGG-16). A Kinect is used to detect the positions of the hand and classify gestures. The system allows the user to select, move, zoom in, or zoom out images of organs on the screen according to the recognised sign. Results with 11 subjects are used to demonstrate this system in the laboratory. Future work will include tests in real situations in an operating room to obtain feedback from surgeons to improve the system.
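The transfer-learning pattern described here (a pre-trained VGG-16 reused as a frozen feature extractor, topped by a small classification head for the ten signs) can be sketched with Keras as follows; the layer sizes, preprocessing, and Kinect-based hand cropping are assumptions for illustration, not the project's actual configuration.

```python
import tensorflow as tf

NUM_SIGNS = 10  # ten LSF signs used to control the image viewer

# Pre-trained VGG-16 as a frozen convolutional feature extractor
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(hand_crops, sign_labels, epochs=10)   # hand_crops: Kinect-cropped images (hypothetical)
```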

In this paper, we analyze data generated by the extended Mainzer Kindertisch (ERKI) setup. The setup uses five loudspeakers to generate 32 virtual sound sources arranged in a half circle. It is a new test setup in the medical field. The task for the test candidate is to locate sounds from different, randomly chosen directions. In our analysis, we use test results from adults and from children and compare them. We show that the ERKI setup is applicable for sound localization and that the generated data contain various properties which can be explained by medical reasons. Further, we demonstrate the possibility to detect noticeable test results with data generated by the ERKI setup.

When processing information from unstructured sources, numbers have to be parsed in many cases to do useful reasoning on that information. However, since numbers can be expressed in different ways, a robust number parser that can cope with number representations in different shapes is required in those cases. In this paper, we show how to train such a parser based on Conditional Random Fields. As training data, we use pairs of Wikipedia infobox entries and numbers from public knowledge graphs. We show that it is possible to parse numbers at an accuracy of more than 90%.
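A sketch of the general approach: characters of the surface string are labelled with their role (digit, grouping separator, decimal point), and a CRF learns from context which separator plays which role. The feature set, the label scheme, and the use of sklearn-crfsuite below are illustrative assumptions; the paper's training data comes from Wikipedia infoboxes paired with knowledge-graph numbers.

```python
# Character-level sequence labelling for number parsing.  Hypothetical label scheme:
# D = digit, G = grouping separator, P = decimal point.  Two training strings are
# obviously far too few; this only shows the data shapes.
import sklearn_crfsuite

def char_features(text, i):
    ch = text[i]
    return {
        "char": ch,
        "is_digit": ch.isdigit(),
        "is_sep": ch in ".,' ",
        "prev": text[i - 1] if i > 0 else "<s>",
        "next": text[i + 1] if i < len(text) - 1 else "</s>",
    }

# Same value in two conventions: in both, the first separator groups, the second is decimal
train_texts = ["1,234.5", "1.234,5"]
train_labels = [list("DGDDDPD"), list("DGDDDPD")]
X = [[char_features(t, i) for i in range(len(t))] for t in train_texts]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_labels)
test = "2,000.75"
print(crf.predict([[char_features(test, i) for i in range(len(test))]]))
```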

Research into knowledge acquisition for robotic agents has looked at interpreting natural language instructions meant for humans into robot-executable programs; however, the ambiguities of natural language remain a challenge for such “translations”. In this paper, we look at a particular sort of ambiguity: the control flow structure of the program described by the natural language instruction. It is not always clear, when several conditional statements appear in a natural language instruction, which of the conditions are to be thought of as alternative options in the same test, and which belong to a code branch triggered by a previous conditional. We augment a system which uses probabilistic reasoning to identify the meaning of the words in a sentence with reasoning about action preconditions and effects in order to filter out nonsensical code structures. We test our system with sample instruction sheets inspired by analytical chemistry.

Emerging applications for various types of robots require them to operate autonomously in the vicinity of humans. Ensuring the safety of humans is a permanent requirement that must be upheld at all times. As tasks performed by robots are becoming more complex, adaptive planning components are required. Modern planning approaches do not contribute to a robot system’s safety capabilities, which are thus limited to means like reduction of force and speed or halt of the robot. We argue that during a plan’s execution there should be a backup plan that the robot system can fall back to as an alternative to halting. We present our approach of generating variations of plans by optional exploitation of opportunities, which extend the initially safe plan for extra value when the current safety situation allows it, while maintaining recovery policies to get back to the safe plan in the case of safety-related events.

The modern workplace is driven by the large amount of available information that can be observed in various domains, e.g., in Industry 4.0. Hence, the question arises: Which competences do actors need to build an efficient work environment? This paper proposes a simulation-based optimization approach to adapt role configurations for teamwork scenarios. The approach was tested using a multiagent-based job-shop scheduling model to simulate the effects of various role configurations.

Public Knowledge Graphs (KGs) on the Web are considered a valuable asset for developing intelligent applications. They contain general knowledge which can be used, e.g., for improving data analytics tools, text processing pipelines, or recommender systems. While the large players, e.g., DBpedia, YAGO, or Wikidata, are often considered similar in nature and coverage, there are, in fact, quite a few differences. In this paper, we quantify those differences and identify the overlapping and the complementary parts of public KGs. From those considerations, we conclude that the KGs are hardly interchangeable, and that each of them has its strengths and weaknesses when it comes to applications in different domains.
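The kind of measurement meant here can be illustrated with simple set operations over interlinked entity identifiers; the toy sets below merely stand in for the actual DBpedia, YAGO, and Wikidata dumps and their owl:sameAs links.

```python
# Toy illustration of quantifying KG overlap via shared (interlinked) entities.
# Real measurements would run over full KG dumps and their owl:sameAs links;
# these sets are placeholders.
kg_entities = {
    "DBpedia":  {"Berlin", "Dortmund", "Alan_Turing", "KI_Conference"},
    "YAGO":     {"Berlin", "Dortmund", "Alan_Turing", "Mount_Everest"},
    "Wikidata": {"Berlin", "Alan_Turing", "Mount_Everest", "Obscure_item"},
}

for a in kg_entities:
    for b in kg_entities:
        if a < b:                                     # each unordered pair once
            common = kg_entities[a] & kg_entities[b]
            jaccard = len(common) / len(kg_entities[a] | kg_entities[b])
            print(f"{a} vs {b}: {len(common)} shared entities, Jaccard {jaccard:.2f}")
```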

The classification of repositories found on GitHub can be considered a hard task. However, the solution of this task could be helpful for a lot of different applications (e.g. recommender systems). In this paper we present ClassifyHub, an algorithm based on Ensemble Learning developed for the InformatiCup 2017 competition, which is able to tackle this classification problem with high precision and recall. In addition, we provide a data set of classified repositories for further research.

Big data is a hot topic in research and industry. The availability of data has never been as high as it is now. Making good use of the data is a challenging research topic in all aspects of industry and society. The Bremen Big Data Challenge invites students to dig deep into big data. In this yearly event students are challenged to use the month of March to analyze a big dataset and use the knowledge they gained to answer a question. In this year’s Bremen Big Data Challenge students were challenged to predict the load of the university cafeteria from the load of past years. The best of 24 teams predicted the load with a root mean squared error of 8.6 receipts issued in five minutes, with a fusion system based on the top 5 entries achieving an even better result of 8.28.
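For concreteness, the reported scores are root mean squared errors over five-minute receipt counts, and the fusion system simply combines several submissions; the toy computation below, with made-up numbers, shows how averaging predictions can reduce the error.

```python
import numpy as np

def rmse(pred, truth):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(truth)) ** 2)))

# Hypothetical five-minute cafeteria loads (receipts issued) and two team predictions
truth = np.array([12, 30, 55, 70, 42, 18])
team_a = np.array([10, 33, 50, 75, 40, 20])
team_b = np.array([15, 28, 60, 65, 45, 15])

print(rmse(team_a, truth))                              # single-system error
print(rmse(np.mean([team_a, team_b], axis=0), truth))   # simple fusion: average the entries
```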

Sentiment Analysis is a Natural Language Processing-task that is relevant in a number of contexts, including the analysis of literature. We report on ongoing research towards enabling, for the first time, sentence-level Sentiment Analysis in the domain of German novels. We create a labelled dataset from sentences extracted from German novels and, by adapting existing sentiment classifiers, reach promising F1-scores of 0.67 for binary polarity classification.

In the next article, we will discuss KI2018.
