KI 2019
In the previous article, we discussed KI 2018. In this issue, we describe the 42nd German Conference on Artificial Intelligence (KI 2019), which took place in Kassel, Germany, from September 23 to 26, 2019.
The German Conference on Artificial Intelligence (abbreviated KI for “Künstliche Intelligenz”) developed from informal meetings and workshops organized by the German “Gesellschaft für Informatik” (Computer Science Association, GI). It is an annual conference series aimed at research on the theory and applications of intelligent systems technology.
The KI 2019 conference was held in Kassel, Germany, September 23-26, 2019, in conjunction with the 49th Annual Conference of the German Society for Computer Science (INFORMATIK 2019). Information about the events can be found at www.ki2019.de and informatik2019.de, respectively.
KI 2019 had a special focus theme of “AI methods for Argumentation” and specifically invited submissions that use methods from all areas of AI to understand, formalize, and generate argumentation structures in natural language. This special focus was organized in cooperation with the DFG-funded RATIO (Robust Argumentation Machines) priority program. In addition to this special focus theme, the conference also included original papers and short technical communications on all topics in AI, as well as extended abstracts of papers presented at recent major AI conferences.
KI 2019 received 82 submissions from 25 countries and accepted 29 papers: 16 full papers, 10 short papers, and 3 extended abstracts. The program also featured the following three invited talks.
Jürgen Altmann (TU Dortmund University): Autonomous Weapons Systems – Dangers and the Need for an International Ban
Michael Beetz (University of Bremen): Digital Twin Knowledge Bases – Knowledge Representation and Reasoning for Robotic Agents
Anthony Hunter (University College London): Towards Computational Persuasion for Behaviour Change Applications
More details are given below.
Advances in ICT, robotics, and sensors bring autonomous weapon systems (AWS) within reach. Shooting without control by a human operator has military advantages, but also disadvantages: human understanding of the situation and control of events would suffer. Beyond this, compliance with the law of armed conflict is in question. Would it be ethical to allow a machine to take a human life? The increased pace of battle may overburden human understanding and decision making and lead to uncontrolled escalation. An international campaign as well as IT, robotics, and AI professionals and enterprises are calling for an international ban on AWS. States have discussed limitations in the UN context, but no consensus has emerged so far. Germany has argued for a ban on fully autonomous weapons, but has not joined the countries proposing an AWS ban, and is using a problematic definition.
Computational persuasion aims to capture the human ability to persuade through argumentation, for applications such as behaviour change in healthcare (e.g. persuading people to take more exercise or eat more healthily). In this paper, we review research in computational persuasion that incorporates domain modelling (capturing arguments and counterarguments that can appear in persuasion dialogues), user modelling (capturing the beliefs and concerns of the persuadee), and dialogue strategies (choosing the best moves for the persuader to maximize the chances that the persuadee is persuaded). We discuss the evaluation of prototype systems that obtain the user’s counterarguments by allowing them to select them from a menu. Then we consider how this work might be enhanced by incorporating a natural language interface in the form of an argumentative chatbot.
Building on a specific formalization of analogical relationships of the form “A relates to B as C relates to D”, we establish a connection between two important subfields of artificial intelligence, namely analogical reasoning and kernel-based machine learning. More specifically, we show that so-called analogical proportions are closely connected to kernel functions on pairs of objects. Based on this result, we introduce the analogy kernel, which can be seen as a measure of how strongly four objects are in analogical relationship. As an application, we consider the problem of object ranking in the realm of preference learning, for which we develop a new method based on support vector machines trained with the analogy kernel. Our first experimental results for data sets from different domains (sports, education, tourism, etc.) are promising and suggest that our approach is competitive with state-of-the-art algorithms in terms of predictive accuracy.
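To give a flavour of the kernel idea, here is a minimal sketch: a toy “analogy” kernel that crudely scores “a relates to b as c relates to d” by comparing difference vectors (the paper’s actual analogical-proportion kernel is defined differently), plugged into a scikit-learn SVM via a precomputed Gram matrix over synthetic preference pairs.

```python
import numpy as np
from sklearn.svm import SVC

def analogy_kernel(pair1, pair2, gamma=1.0):
    """Toy kernel: 'a relates to b as c relates to d' is approximated by
    similarity of the difference vectors (b - a) and (d - c). The actual
    analogy kernel of the paper is defined via analogical proportions."""
    (a, b), (c, d) = pair1, pair2
    diff = (b - a) - (d - c)
    return np.exp(-gamma * np.dot(diff, diff))

def gram(pairs_x, pairs_y):
    return np.array([[analogy_kernel(p, q) for q in pairs_y] for p in pairs_x])

# Synthetic ranking data: pair (a, b) is labelled +1 if a is preferred to b.
rng = np.random.default_rng(0)
objects = rng.normal(size=(40, 5))
utility = objects @ rng.normal(size=5)          # hidden linear utility
idx = [(i, j) for i, j in rng.integers(0, 40, size=(80, 2)) if i != j][:60]
pairs = [(objects[i], objects[j]) for i, j in idx]
labels = np.sign([utility[i] - utility[j] for i, j in idx])

svm = SVC(kernel="precomputed").fit(gram(pairs[:40], pairs[:40]), labels[:40])
pred = svm.predict(gram(pairs[40:], pairs[:40]))
print("pairwise accuracy:", (pred == labels[40:]).mean())
```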
Argument search is the study of search engine technology that can retrieve arguments for potentially controversial topics or claims upon user request. The design of an argument search engine is tied to its underlying argument acquisition paradigm. More specifically, the employed paradigm controls the trade-off between retrieval precision and recall and thus determines basic search characteristics: Compiling an exhaustive argument corpus offline benefits precision at the expense of recall, whereas retrieving arguments from the web on-the-fly benefits recall at the expense of precision. This paper presents the new corpus of our argument search engine args.me, which follows the former paradigm. We freely provide the corpus to the community. With 387 606 arguments it is one of the largest argument resources available so far. In a qualitative analysis, we compare the args.me corpus acquisition paradigm to that of two other argument search engines, and we report first empirical insights into how people search with args.me.
We study a new but simple model for online fair division in which indivisible items arrive one-by-one and agents have monotone utilities over bundles of the items. We consider axiomatic properties of mechanisms for this model such as strategy-proofness, envy-freeness, and Pareto efficiency. We prove a number of impossibility results that justify why we consider relaxations of the properties, as well as why we consider restricted preference domains on which good axiomatic properties can be achieved. We propose two mechanisms that have good axiomatic fairness properties on restricted but common preference domains.
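As an illustration of the setting (not of the paper’s specific mechanisms), a simple online mechanism can be sketched in a few lines: each arriving item is allocated uniformly at random among the agents that declare positive utility for it.

```python
import random

def like_style_mechanism(items, agents, likes, rng=random.Random(0)):
    """Allocate items arriving one by one: each item goes uniformly at
    random to one of the agents that 'like' it (declare positive utility).
    Illustrative only; not necessarily one of the paper's mechanisms."""
    allocation = {a: [] for a in agents}
    for item in items:                      # items arrive online
        fans = [a for a in agents if likes(a, item)]
        if fans:
            allocation[rng.choice(fans)].append(item)
    return allocation

# Example: three agents with simple 0/1 utilities over five items.
utility = {"ann":  {"i1": 1, "i3": 1, "i5": 1},
           "bob":  {"i2": 1, "i3": 1},
           "carl": {"i4": 1, "i5": 1}}
alloc = like_style_mechanism(
    ["i1", "i2", "i3", "i4", "i5"],
    list(utility),
    likes=lambda a, i: utility[a].get(i, 0) > 0)
print(alloc)
```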
We introduce a fully automatic system, implemented in the Lean theorem prover, that solves equality problems of everyday mathematics. Our overriding priority in devising the system is that it should construct proofs of equality in a way that is similar to that of humans. A second goal is that the methods it uses should be domain independent. The basic strategy of the system is to operate with a subtask stack: whenever there is no clear way of making progress towards the task at the top of the stack, the program finds a promising subtask, such as rewriting a subterm, and places that at the top of the stack instead. Heuristics guide the choice of promising subtasks and the rewriting process. This makes proofs more human-like by breaking the problem into tasks in the way that a human would. We show that our system can prove equality theorems simply, without having to preselect or orient rewrite rules as in standard theorem provers, and without having to invoke heavy-duty tools for performing simple reasoning.
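The subtask-stack strategy can be pictured with a small, hedged Python sketch; `solve_directly`, `promising_subtasks`, and `score` are hypothetical stand-ins for the system’s heuristics, and the toy demo only mimics how solving a subtask can unlock its parent.

```python
def prove(goal, solve_directly, promising_subtasks, score, max_steps=1000):
    """Subtask-stack proof search: whenever the top task cannot be solved
    directly, push the most promising subtask and retry. All helpers are
    hypothetical stand-ins for the heuristics described in the paper."""
    stack = [goal]
    for _ in range(max_steps):
        if not stack:
            return True                      # every task discharged
        task = stack[-1]
        if solve_directly(task):
            stack.pop()                      # task closed, resume parent
            continue
        subtasks = promising_subtasks(task)
        if not subtasks:
            return False                     # stuck: no promising move
        stack.append(max(subtasks, key=score))
    return False

# Toy demo: task n becomes directly solvable once subtask n-1 is solved.
solved = set()
def solve_directly(t):
    if t == 0 or (t - 1) in solved:
        solved.add(t)
        return True
    return False

print(prove(3, solve_directly, lambda t: [t - 1], score=lambda s: -s))  # True
```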
In previous work, we have investigated privacy-preserving publishing of Description Logic (DL) ontologies in a setting where the knowledge about individuals to be published is an $\mathcal{EL}$ instance store, and both the privacy policy and the possible background knowledge of an attacker are represented by concepts of the DL $\mathcal{EL}$. We have introduced the notions of compliance of a concept with a policy and of safety of a concept for a policy, and have shown how, in the context mentioned above, optimal compliant (safe) generalizations of a given $\mathcal{EL}$ concept can be computed. In the present paper, we consider a modified setting where we assume that the background knowledge of the attacker is given by a DL different from the one in which the knowledge to be published and the safety policies are formulated. In particular, we investigate the situations where the attacker’s knowledge is given by an $\mathcal{FL}_0$ or an $\mathcal{FLE}$ concept. In both cases, we show how optimal safe generalizations can be computed. Whereas the complexity of this computation is the same (ExpTime) as in our previous results for the case of $\mathcal{FL}_0$, it turns out to be actually lower (polynomial) for the more expressive DL $\mathcal{FLE}$.
Recent research on argumentation in Artificial Intelligence investigates new methods that contribute to the vision of developing robust argumentation machines. One line of research explores ways of reasoning with natural language arguments coming from information sources on the web as a foundation for the deliberation and synthesis of arguments in specific domains. This paper builds upon arguments represented as argument graphs in the standardized Argument Interchange Format. While previous work was focused on the development of semantic similarity measures used for the case-based retrieval of argument graphs, this paper addresses the problem of clustering argument graphs to explore structures that facilitate the interpretation of arguments. We propose a k-medoid and an agglomerative clustering approach based on semantic similarity measures. We compare the clustering results based on a graph-based semantic measure that takes the structure of the argument into account with a semantic word2vec measure on the pure textual argument representation. Experiments based on the Microtext corpus show that the graph-based similarity performs best on internal evaluation measures, while the pure textual measure performs very well for identifying topic-specific clusters.
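To illustrate the clustering side, here is a compact, generic k-medoids implementation over a precomputed similarity matrix; the semantic similarity measures for argument graphs used in the paper are of course far more involved.

```python
import numpy as np

def k_medoids(sim, k, iters=100, seed=0):
    """Plain k-medoids on a symmetric similarity matrix (higher = more
    similar). Generic sketch, not the paper's implementation."""
    n = sim.shape[0]
    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmax(sim[:, medoids], axis=1)   # assign to medoid
        new = medoids.copy()
        for c in range(k):                            # re-pick medoids
            members = np.where(labels == c)[0]
            if len(members):
                within = sim[np.ix_(members, members)].sum(axis=1)
                new[c] = members[np.argmax(within)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmax(sim[:, medoids], axis=1)

# Two obvious clusters in a block-structured similarity matrix.
S = np.full((6, 6), 0.1)
S[:3, :3] = S[3:, 3:] = 0.9
np.fill_diagonal(S, 1.0)
print(k_medoids(S, k=2))
```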
Recent research regarding the reliability of Deep Neural Networks (DNN) revealed that it is easy to produce images that are completely unrecognizable to humans, but that DNNs recognize as classifiable objects with 99.99% confidence. The present study investigates the effect of search space reduction for Genetic Algorithms (GA) on their capability of purposefully fooling DNNs. To this end, we introduce a GA with corresponding modifications that is able to fool neural networks trained to classify objects from well-known benchmark image data sets like GTSRB or MNIST. The developed GA is extended and thus capable of reducing the search space without changing its general behavior. Empirical results on MNIST indicate a significantly decreased number of generations needed to satisfy the targeted confidence of an MNIST image classifier (12 instead of 228 generations). Conducted experiments on GTSRB, a more challenging object classification scenario, show similar results. Fooling DNNs thus turns out to be not only easily possible but also very fast. Our study thus substantiates an already recognized, potential danger for DNN-based computer vision or object recognition applications.
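The overall loop can be sketched as follows, with a stand-in confidence function instead of a trained MNIST/GTSRB model, and search-space reduction illustrated by restricting mutation to a fixed pixel mask; the paper’s GA modifications are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_confidence(imgs):
    """Stand-in for a trained DNN's confidence in a target class:
    a smooth function of overall brightness, for demonstration only."""
    m = imgs.mean(axis=(1, 2))
    return 1.0 / (1.0 + np.exp(-(40.0 * m - 4.0)))

def fool(confidence, shape=(28, 28), pop=50, target=0.95,
         mask_frac=0.2, max_gen=500):
    mask = rng.random(shape) < mask_frac            # search-space reduction:
    population = rng.random((pop, *shape)) * mask   # only masked pixels vary
    for gen in range(max_gen):
        fitness = confidence(population)
        if fitness.max() >= target:
            return gen, population[fitness.argmax()]
        parents = population[np.argsort(fitness)[-pop // 2:]]   # truncation
        a = parents[rng.integers(0, len(parents), pop)]
        b = parents[rng.integers(0, len(parents), pop)]
        cross = rng.random((pop, *shape)) < 0.5                 # uniform crossover
        children = np.where(cross, a, b)
        noise = rng.normal(0.0, 0.1, children.shape) * mask     # masked mutation
        population = np.clip(children + noise, 0.0, 1.0)
    return max_gen, population[confidence(population).argmax()]

gen, image = fool(dummy_confidence)
print(f"target confidence reached after {gen} generations")
```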
The vast majority of work in planning to date has focused on state-independent action costs. However, if a planning task features state-dependent costs, using a cost model with state-independent costs means either introducing a modeling error, or potentially sacrificing compactness of the model. In this paper, we investigate the conflicting priorities of modeling accuracy and compactness empirically, with a particular focus on the extent of the negative impact of reduced modeling accuracy on (a) the quality of the resulting plans, and (b) the search guidance provided by heuristics that are fed with inaccurate cost models. Our empirical results show that the plan suboptimality introduced by ignoring state-dependent costs can range, depending on the domain, from nonexistent to several orders of magnitude. Furthermore, our results show that the impact on heuristic guidance additionally depends strongly on the heuristic that is used, the specifics of how exactly the costs are represented, and whether one is interested in heuristic accuracy, node expansions, or overall runtime savings.
Argumentation frameworks with collective attacks are a prominent extension of Dung’s abstract argumentation frameworks, where an attack can be drawn from a set of arguments to another argument. These frameworks are often abbreviated as SETAFs. Although SETAFs have received increasing interest recently, the notion of strong equivalence, which is fundamental in nonmonotonic formalisms to characterize equivalent replacements, has not yet been investigated. In this paper, we study how strong equivalence between SETAFs can be decided with respect to the most important semantics, and we also consider variants of strong equivalence.
Both qualitative and quantitative research are integral parts of understanding traffic systems, yet it can be difficult to formalize and execute qualitative research results in a technical simulation system in an understandable and flexible manner. This paper presents an approach to systematically construct fuzzy inference systems from socio-scientific data for application as a decision making component in an agent-based mobility simulation. A general fuzzy inference concept is presented and subsequently applied to statements about travel mode choice and common activities from semi-structured interviews on mobility behavior. It is shown that the inference concept can be used to determine both the fuzzy rule base and the linguistic variables and terms from the interviews, and that such an inference system can be used successfully in an agent-based mobility simulation.
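A minimal Mamdani-style sketch shows the ingredients named in the abstract, with invented linguistic variables, terms, and rules (in the paper these are derived from the interview data):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with support [a, c] and peak at b
    (degenerate sides become shoulders)."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else (x >= a).astype(float)
    right = (c - x) / (c - b) if c > b else (x <= c).astype(float)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def bike_affinity(distance_km):
    """Toy Mamdani inference; variables, terms and rules are invented."""
    short = tri(distance_km, 0, 0, 5)
    medium = tri(distance_km, 2, 6, 10)
    long_ = tri(distance_km, 6, 15, 15)

    y = np.linspace(0, 1, 101)          # output universe: bike affinity
    low, high = tri(y, 0, 0, 0.5), tri(y, 0.5, 1, 1)

    # Rule base: short -> high, medium -> high (weaker), long -> low.
    clipped = np.maximum.reduce([np.minimum(short, high),
                                 np.minimum(0.6 * medium, high),
                                 np.minimum(long_, low)])
    return (y * clipped).sum() / clipped.sum()   # centroid defuzzification

for d in (1, 5, 12):
    print(f"{d:>2} km -> bike affinity {bike_affinity(d):.2f}")
```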
Logistics operations often require a robot to pick up and deliver objects from multiple locations within certain time frames. This is a challenging task-and-motion planning problem as it intertwines logical and temporal constraints about the operations with geometric and differential constraints related to obstacle avoidance and robot dynamics. To address these challenges, this paper couples vehicle routing over a discrete abstraction with sampling-based motion planning. On the one hand, vehicle routing provides plans to effectively guide sampling-based motion planning as it explores the vast space of feasible motions. On the other hand, motion planning provides feasibility estimates which vehicle routing uses to refine its plans. This coupling makes it possible to extend the state-of-the-art in multi-goal motion planning by also incorporating capacities, pickups, and deliveries in addition to time windows. When not all pickups and deliveries can be completed in time, the approach seeks to minimize the violations and maximize the profit.
Transferring human movements to robotic systems is of high interest for equipping those systems with new behaviors without expert knowledge. Typically, skills are learned only for a very specific setup and a certain robot. We propose a modular framework to learn skills that is applicable to different robotic systems without adaptations. Our work builds on the recently introduced BesMan Learning Platform, which comprises the full workflow to transfer human demonstrations to a system, including automated behavior segmentation, imitation learning, reinforcement learning for motion refinement, and methods to generalize to related tasks. For this paper, we extend this approach so that different skills can be imitated by various systems in an automated fashion with a minimal amount of configuration, e.g., the definition of the target system and environment. For this, we focus on the imitation of the demonstrated movements and show their transferability without movement refinement. We demonstrate the generality of the approach on a large dataset consisting of about 700 throwing demonstrations. Nearly all of these human demonstrations are successfully transferred to four different robot target systems, namely Universal Robots’ UR5 and UR10, KUKA LBR iiwa, and DFKI’s robot COMPI. An analysis of the quality of the imitated movement on the real UR5 robot shows that useful throws can be executed on the system, which can be used as starting points for further movement refinement.
Several methods of estimating the mutual information of random variables have been developed in recent years. They can prove valuable for novel approaches to learning statistically independent features. In this paper, we use one of these methods, a mutual information neural estimation (MINE) network, to present a proof-of-concept of how a neural network can perform linear ICA. We minimize the mutual information, as estimated by a MINE network, between the output units of a differentiable encoder network. This is done by simple alternating optimization of the two networks. The method is shown to obtain solutions qualitatively equivalent to FastICA on blind source separation of noisy sources.
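The alternating scheme is easy to state in code. Below is a hedged PyTorch sketch with invented architecture sizes, learning rates, and data; practical linear ICA would additionally constrain the encoder (e.g. by whitening) to rule out trivial solutions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two independent sources, linearly mixed.
n = 4096
sources = torch.stack([torch.rand(n) * 2 - 1,
                       torch.sign(torch.randn(n)) * torch.rand(n)], dim=1)
A = torch.tensor([[1.0, 0.6], [0.4, 1.0]])
x = sources @ A.T

encoder = nn.Linear(2, 2, bias=False)                              # linear unmixing
T = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # MINE network

opt_T = torch.optim.Adam(T.parameters(), lr=1e-3)
opt_E = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def mine_bound(y):
    """Donsker-Varadhan lower bound on I(y1; y2); marginal samples are
    obtained by shuffling the second output coordinate."""
    perm = torch.randperm(y.shape[0])
    shuffled = torch.stack([y[:, 0], y[perm, 1]], dim=1)
    return T(y).mean() - T(shuffled).exp().mean().log()

for step in range(2000):
    # (1) Improve the MI estimator T on the current encoder outputs ...
    loss_T = -mine_bound(encoder(x).detach())
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()
    # (2) ... then push the encoder to reduce the estimated MI.
    loss_E = mine_bound(encoder(x))
    opt_E.zero_grad(); loss_E.backward(); opt_E.step()

print("estimated I(y1; y2):", mine_bound(encoder(x)).item())
```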
Modern deep reinforcement learning agents are capable of achieving super-human performance in tasks like playing Atari games, solely based on visual input. However, due to their use of neural networks, the trained models lack transparency, which makes their inner workings incomprehensible for humans. A promising approach to gain insights into the opaque reasoning process of neural networks is the layer-wise relevance propagation (LRP) concept. This visualization technique creates saliency maps that highlight the areas in the input which were relevant for the agents’ decision-making process. Since such saliency maps cover every possible cause for a prediction, they often accentuate very diverse parts of the input. This makes the results difficult to understand for people without a machine-learning background. In this work, we introduce an adjustment to the LRP concept that utilizes only the most relevant neurons of each convolutional layer and thus generates more selective saliency maps. We test our approach with a dueling Deep Q-Network (DQN) agent which we trained on three different Atari games of varying complexity. Since the dueling DQN approach considerably alters the neural network architecture of the original DQN algorithm, it requires its own LRP variant, which will be presented in this paper.
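For intuition, here is a toy numpy sketch of the epsilon-rule LRP backward pass through dense layers, sparsified by keeping only the top-k most relevant neurons per layer; the paper works with convolutional layers and a dueling-DQN-specific variant, so this is only an approximation of the idea.

```python
import numpy as np

rng = np.random.default_rng(1)

def lrp_topk(weights, activations, relevance_out, k=3, eps=1e-6):
    """Epsilon-rule LRP through dense layers, sparsified by keeping only
    the k most relevant neurons in each layer. Toy illustration only."""
    R = relevance_out
    for W, a in zip(reversed(weights), reversed(activations)):
        z = a @ W
        z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilized denominators
        s = R / z                                   # relevance per output unit
        R = a * (s @ W.T)                           # redistribute to inputs
        mask = np.zeros_like(R)
        mask[np.argsort(R)[-k:]] = 1.0              # keep top-k neurons only
        R = R * mask
    return R

# Tiny two-layer ReLU network and a random input.
W1, W2 = rng.normal(size=(8, 6)), rng.normal(size=(6, 4))
x = rng.random(8)
h = np.maximum(x @ W1, 0.0)
out = h @ W2
R_out = np.zeros(4)
R_out[out.argmax()] = out.max()      # explain the 'chosen action'
print(lrp_topk([W1, W2], [x, h], R_out, k=3))
```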
The two-dimensional bin packing problem (2D-BPP) consists of packing, without overlap, a set of rectangular items of different sizes into the smallest possible number of rectangular containers, called “bins”, of identical dimensions. Depending on real-world requirements, the items may either have a fixed orientation or be rotatable by 90°, and guillotine cutting may or may not be imposed. In this article, we consider the two-dimensional bin packing problem with fixed orientation and free cutting. We propose a hybrid approach combining two bio-inspired algorithms, the crow search algorithm (CSA) and the genetic algorithm (GA), to solve the considered problem. The main idea behind this hybridization is to reach a cooperative synergy between the operators of the two combined algorithms: the CSA is discretized and adapted to the 2D-BPP context, while genetic operators are used to improve the adaptation of individuals (i.e., crows). The average performance of the proposed hybrid approach is evaluated on the standard benchmark instances of the considered problem and compared with two other bio-inspired algorithms of closely similar nature, namely a standard genetic algorithm and a binary particle swarm optimization algorithm. The obtained results are very promising.
We present an approach to the computational extraction of reasons for the sake of explaining moral judgments in the context of a hybrid ethical reasoning agent (HERA). The HERA agent employs logical representations of ethical principles to make judgments about the moral permissibility or impermissibility of actions, and uses the same logical formulae to come up with reasons for these judgments. We motivate the distinction between sufficient reasons, necessary reasons, and necessary parts of sufficient reasons yielding different types of explanations, and we provide algorithms to extract these reasons.
Recently, Lifted Marginal Filtering [5] has been proposed as an approach for efficient probabilistic inference in systems with multiple, (inter-)acting agents and objects (entities). The algorithm achieves its efficiency by performing inference jointly over groups of similar entities (i.e., entities whose properties follow the same distribution). In this paper, we explore the case where there are no entities that are directly suitable for grouping. We propose to use methods from Gaussian mixture fitting and merging to identify entity groups and keep the number of groups constant over time. Empirical results suggest that the decrease in prediction accuracy is small, while the algorithm runtime decreases significantly.
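The merging step can be illustrated with standard moment matching: two weighted Gaussian components are replaced by one Gaussian with the same overall mean and covariance. A standalone numpy sketch (the paper embeds this in Lifted Marginal Filtering):

```python
import numpy as np

def merge_gaussians(w1, mu1, S1, w2, mu2, S2):
    """Moment-matched merge of two weighted Gaussian components
    (w_i: weight, mu_i: mean, S_i: covariance)."""
    w = w1 + w2
    mu = (w1 * mu1 + w2 * mu2) / w
    d1, d2 = mu1 - mu, mu2 - mu
    S = (w1 * (S1 + np.outer(d1, d1)) + w2 * (S2 + np.outer(d2, d2))) / w
    return w, mu, S

w, mu, S = merge_gaussians(
    0.7, np.array([0.0, 0.0]), np.eye(2),
    0.3, np.array([2.0, 1.0]), 0.5 * np.eye(2))
print(w, mu, S, sep="\n")
```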
Current research on knowledge graph completion is often concerned with latent approaches that are based on the idea of embedding a knowledge graph into a low-dimensional vector space. At the same time, symbolic approaches have attracted less attention [13]. However, such approaches have a big advantage: they yield an explanation in terms of the rules that trigger a prediction. In this paper we propose a bottom-up technique for efficiently learning logical rules from large knowledge graphs, inspired by classic bottom-up rule learning approaches such as Golem [8] and Aleph [10]. Our approach is called AnyBURL (Anytime Bottom-Up Rule Learning). We report on experiments where we evaluated AnyBURL on datasets that have been labelled as hard cases for simple (rule-based) approaches. Our approach performs as well as, and sometimes better than, most models that have been proposed recently. Moreover, the required resources in terms of memory and runtime are significantly smaller compared to latent approaches. This paper is an extended abstract of an IJCAI 2019 paper.
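The bottom-up flavour can be conveyed in a toy sketch: sample a ground fact together with a parallel edge, lift the constants to variables to obtain a candidate rule, and score the rule against the graph. This is a drastic simplification of AnyBURL, not its actual algorithm or scoring.

```python
import random

rng = random.Random(0)
triples = [("anna", "livesIn", "berlin"), ("anna", "bornIn", "berlin"),
           ("bob", "livesIn", "paris"), ("bob", "bornIn", "paris"),
           ("carl", "livesIn", "rome"), ("carl", "bornIn", "london")]

def sample_rule():
    """Sample a ground fact plus a parallel fact with the same endpoints,
    then lift the constants: head(X, Y) <- body(X, Y)."""
    subj, head_rel, obj = rng.choice(triples)
    parallel = [r for s, r, o in triples
                if s == subj and o == obj and r != head_rel]
    return (head_rel, rng.choice(parallel)) if parallel else None

def confidence(rule):
    """Fraction of body groundings for which the head also holds."""
    head_rel, body_rel = rule
    grounds = [(s, o) for s, r, o in triples if r == body_rel]
    return sum((s, head_rel, o) in triples for s, o in grounds) / len(grounds)

rule = None
while rule is None:              # retry until a liftable path is sampled
    rule = sample_rule()
head, body = rule
print(f"{head}(X,Y) <- {body}(X,Y)   confidence = {confidence(rule):.2f}")
```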
Pattern databases (PDBs) are memory-based abstraction heuristics that are constructed prior to the planning process and which, if expressed symbolically, yield a very efficient representation. Recent work on the automatic generation of symbolic PDBs has established them as one of the most successful approaches for cost-optimal domain-independent planning. In this paper, we contribute two planners, both using bin packing for their pattern selection. In the second one, we introduce a greedy selection algorithm called Partial-Gamer, which complements the heuristic given by bin packing. We tested our approaches on the benchmarks of the optimal tracks of the last three International Planning Competitions and obtained very competitive results with this simple and deterministic algorithm.
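The bin-packing view of pattern selection is easy to sketch: treat each state variable as an item whose size is the logarithm of its domain size, and pack variables into patterns so that each pattern’s PDB stays within a memory limit. A hedged first-fit-decreasing sketch (the planners’ actual selection is more refined):

```python
import math

def bin_packing_patterns(domain_sizes, max_pdb_size=10**6):
    """First-fit decreasing: pack variables into patterns such that the
    product of domain sizes (the PDB size) stays below the limit."""
    limit = math.log(max_pdb_size)
    items = sorted(domain_sizes.items(), key=lambda kv: -math.log(kv[1]))
    patterns, loads = [], []                  # loads hold summed log-sizes
    for var, size in items:
        for i, load in enumerate(loads):
            if load + math.log(size) <= limit:
                patterns[i].append(var)
                loads[i] += math.log(size)
                break
        else:
            patterns.append([var])
            loads.append(math.log(size))
    return patterns

# Toy planning task: variable name -> domain size.
print(bin_packing_patterns({"truck": 50, "p1": 51, "p2": 51, "fuel": 1000},
                           max_pdb_size=100_000))
```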
We present a new version of ALICA – “A Language for Interactive Cooperative Agents”. The ALICA framework is a highly reactive multi-agent framework and comprises three components for working with multi-agent plans: a specification language, an execution engine, and a graphical modelling tool. The framework automatically coordinates teams, allocates tasks to team members, and compensates for execution failures in a fully distributed manner. In a major redesign, we extended the description language and re-implemented the execution engine and graphical modelling tool. As a result, the second version of ALICA encompasses fewer dependencies, is domain independent, and is adaptable to different environments.
Extending Modular Semantics for Bipolar Weighted Argumentation (Extended Abstract)
Weighted argumentation offers a tool for decision support and social media analysis. Arguments are evaluated by an iterative procedure that takes initial weights and attack and support relations into account. Mossakowski and Neuhaus recently unified different approaches and proved first convergence results for cyclic graphs. We build on this work, simplify and generalize the convergence results, and add runtime guarantees. As it turns out, there is a tradeoff between convergence guarantees and the ability to move strength values away from the initial weights. We demonstrate that continuizing semantics can avoid divergence without this tradeoff. Semantically, we extend the framework with a Duality property that assures a symmetric impact of attack and support. We also present a Java implementation of modular semantics and explain the practical usefulness of the theoretical ideas.
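A minimal sketch of the iterative evaluation with one simple, invented choice of aggregation (sum of supporter strengths minus sum of attacker strengths) and influence function; the modular semantics studied in the paper are more general, and the convergence caveats above apply. (The paper’s implementation is in Java; Python is used here for consistency with the other sketches.)

```python
import math

def iterate_strengths(weights, attackers, supporters, steps=1000, tol=1e-9):
    """Fixed-point iteration for bipolar weighted argumentation.
    weights: initial weight per argument in [0, 1];
    attackers/supporters: argument -> list of attacking/supporting arguments."""
    s = dict(weights)
    for _ in range(steps):
        new = {}
        for arg, w in weights.items():
            # Aggregation: supporters push up, attackers push down.
            agg = sum(s[b] for b in supporters.get(arg, [])) \
                - sum(s[b] for b in attackers.get(arg, []))
            # Influence: move from the initial weight toward 1 (or 0).
            if agg >= 0:
                new[arg] = w + (1 - w) * math.tanh(agg)
            else:
                new[arg] = w * (1 - math.tanh(-agg))
        if max(abs(new[a] - s[a]) for a in s) < tol:
            return new
        s = new
    return s

weights = {"a": 0.5, "b": 0.8, "c": 0.4}
attackers = {"a": ["b"]}
supporters = {"a": ["c"]}
print(iterate_strengths(weights, attackers, supporters))
```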
Coordination in multi-agent systems with partial and non-uniform observability is a practically challenging problem. We use Monte-Carlo tree search as the basis of an implicitly coordinated epistemic planning algorithm which is capable of using the knowledge distributed among the agents to find solutions even in problems with a large branching factor. We use Dynamic Epistemic Logic to represent the knowledge and the actual situation as a state of the Monte-Carlo tree search, and epistemic planning to formalize the goals and actions of a problem. Further, we describe the required modifications of the Monte-Carlo tree search when searching over epistemic states, and make use of the cooperative card game Hanabi to test our planner on larger problems. We find that the approach scales to games with up to eight cards while maintaining high playing strength.
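The search loop itself is ordinary UCT; what changes in the paper is that nodes carry epistemic (DEL) states. A generic skeleton with the epistemic machinery abstracted behind a hypothetical `game` interface, plus a toy game to exercise it:

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}                  # action -> Node
        self.visits, self.value = 0, 0.0

    def ucb(self, c):
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def uct(game, root_state, iters=500, c=1.4, rng=random.Random(0)):
    """Generic UCT skeleton. In the epistemic-planning setting, `state`
    would be an epistemic (DEL) state and `game.step` its product update
    with epistemic actions; all of that is hidden behind `game`."""
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while no untried action remains.
        while node.children and all(a in node.children
                                    for a in game.actions(node.state)):
            node = max(node.children.values(), key=lambda n: n.ucb(c))
        # 2. Expansion: try one new action, if any.
        untried = [a for a in game.actions(node.state)
                   if a not in node.children]
        if untried:
            a = rng.choice(untried)
            node.children[a] = Node(game.step(node.state, a), parent=node)
            node = node.children[a]
        # 3. Rollout: play randomly to the end.
        state = node.state
        while not game.terminal(state):
            state = game.step(state, rng.choice(game.actions(state)))
        reward = game.reward(state)
        # 4. Backpropagation.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

class ToyGame:
    """Reach exactly 6 by adding 1 or 2; purely to exercise the skeleton."""
    def actions(self, s): return [] if s >= 6 else [1, 2]
    def step(self, s, a): return s + a
    def terminal(self, s): return s >= 6
    def reward(self, s): return 1.0 if s == 6 else 0.0

print(uct(ToyGame(), 0))
```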
This paper describes an approach to intuitive robot programming, with the aim of enabling non-experts to generate sensor-based, structured programs. The core idea is to generate a variant of a finite state automaton (representing the program) by kinesthetic programming (physically guiding the robot). We use the structure of the automaton for control flow (loops and branching according to conditions of the environment). For programming, we forgo a visual user interface completely to determine to what extent this is viable. Our experiments show that non-expert users are indeed able to successfully program small sample tasks within reasonable time.
Argumentation Mining aims at finding components of arguments, as well as relations between them, in text. One of the largely unsolved problems is implicitness, where the text invites the reader to infer a missing component, such as the claim or a supporting statement. In the work of Wojatzki and Zesch (2016), an interesting implicitness problem is addressed on a Twitter data set. They showed that implicit stances toward a claim can be found with some success using just token and character n-grams. Using the same dataset, we show that results for this task can be improved using word and sentence embeddings, but that not all embedding variants perform alike. Specifically, we compare fastText, GloVe, and the Universal Sentence Encoder (USE), and we find that, to our knowledge, USE yields state-of-the-art results for this task.
In this short paper we introduce the notions of backbones and backdoors in the context of qualitative constraint networks. As motivation for the study of these structures, we argue that they can be used to define collaborative approaches among SAT, CP, and native tools, inspire novel decomposition and parallelization techniques, and lead to the development of adaptive constraint propagators with better insight into the particularities of real-world datasets than is possible today.
2D path planning in a static environment is a well-known problem, and one common way to solve it is to (1) represent the environment as a grid and (2) perform a heuristic search for a path on it. At the same time, a 2D grid closely resembles a digital image, which suggests an appealing idea: treat the problem as an image generation task and solve it utilizing recent advances in deep learning. In this work we make an attempt to apply a generative neural network as a path finder and report preliminary results, convincing enough to claim that this direction of research is worth further exploration.
Neural networks are used more and more in critical areas such as autonomous driving. In such cases, their limitations might cause dangerous situations. Researchers were able to show that such limitations enable attacks on systems containing neural networks, which are even possible in real-world scenarios. For example, a state-of-the-art network might misclassify modified traffic signs. Other researchers have shown that modern car assistants can easily be fooled into driving the car into the wrong lane on a street.
The InformatiCup is a collegiate computer science competition in Germany, Switzerland, and Austria, open to all students, with tasks based on real-world problems. This year’s task was based on the above-mentioned problem. To demonstrate this problem and to motivate students to experiment with neural networks, participants were asked to generate fooling images for a traffic-sign-classifying neural network without having direct access to the network. The images should not be recognisable by humans as traffic signs, but should be classified as such with high confidence by the neural network.
The automated theorem prover Leo-III for classical higher-order logic with Henkin semantics and choice is presented. Leo-III is based on extensional higher-order paramodulation and accepts every common TPTP dialect (FOF, TFF, THF), including their recent extensions to rank-1 polymorphism (TF1, TH1). In addition, the prover natively supports almost every normal higher-order modal logic. Leo-III cooperates with first-order reasoning tools using translations to many-sorted first-order logic and produces verifiable proof certificates. The prover is evaluated on heterogeneous benchmark sets.
We study pairwise preference data to model the behavior of users in online recommendation problems. We first propose a tensor kernel to model contextual transactions of a user in a joint feature space. The representation is extended to all users via hash functions that allow us to effectively store and retrieve personalized slices of data and context. In order to quickly focus on the relevant properties of the next item to display, we investigate the utility of Monte-Carlo Tree Search (MCTS) on the learned preference values. Empirically, on real-world transaction data, both the preference models as well as the search tree exhibit excellent performance over baseline approaches.