KI 2018: Advances in Artificial Intelligence Papers


The German Conference on Artificial Intelligence (abbreviated KI, for Künstliche Intelligenz) evolved from informal meetings and workshops organized by the German Informatics Society (Gesellschaft für Informatik, GI) into an annual conference series dedicated to research on the theory and applications of intelligent systems technology. KI is primarily attended by researchers from Germany and neighboring countries, but the conference is open to international participation and continues to receive contributions from the international research community.

In the previous article, we discussed KI 2017. In this article, we discuss papers presented at KI 2018, held in Berlin from September 24-28, 2018. Prominent research topics at the conference were machine learning, multi-agent systems, and belief revision. Overall, KI 2018 provided a broad overview of current research topics in AI.

As is customary in the KI conference series, awards for best paper and best student paper were presented, with the winners selected based on reviews provided by PC members. The Best Paper Award went to “Preference-Based Monte Carlo Tree Search” by Tobias Joppen, Christian Wirth, and Johannes Fürnkranz. The Best Student Paper Award went to “Model Checking for Coalition Announcement Logic” by Rustam Galimullin, Natasha Alechina, and Hans van Ditmarsch.

KI 2018 was held jointly with INFORMATIK 2018, the annual conference of the Gesellschaft für Informatik. The two conferences shared a reception event featuring a keynote address by Catrin Misselhorn on “Machine Ethics and Artificial Morality”. Other invited talks at KI 2018 included Dietmar Jannach on “Session-Based Recommendation: Challenges and Recent Advances” and Sami Haddadin on robotics.

Since KI is the premier forum for AI researchers in Germany, several co-located events took place. The conference week started with a set of workshops dedicated to diverse topics such as processing web data and formal and cognitive aspects of inference. There were also workshops on Statistical Relational AI (StarAI; Tanya Braun, Kristian Kersting, and Ralf Möller) and Real-Time Recommendations with Streamed Data (Andreas Lommatzsch, Benjamin Kille, Frank Hopfgartner, and Torben Brodt). In addition, a doctoral consortium was organized by Johannes Fähndrich to support doctoral students in the field of AI.

Keynote Talk

Contents

In recent years, sequential recommender systems (SRSs) and session-based recommender systems (SBRSs) have emerged as a new paradigm of recommender systems, capturing users’ short-term but dynamic preferences to enable more timely and accurate recommendations. Although SRSs and SBRSs have been extensively studied, the diverse descriptions, settings, assumptions, and application domains in the area have produced many inconsistencies, and no existing work provides a unified framework and problem statement to remove them. There is also a lack of work that comprehensively and systematically surveys the data characteristics, key challenges, most representative and state-of-the-art approaches, typical real-world applications, and important future research directions of the area. This work aims to fill these gaps so as to facilitate further research in this exciting and vibrant area.

Reasoning

Coalition Announcement Logic (CAL) studies how a group of agents can enforce a certain outcome by making a joint announcement, regardless of any announcements made simultaneously by the opponents. The logic is useful for modeling imperfect-information games with simultaneous moves. We propose a model checking algorithm for CAL and show that the model checking problem for CAL is PSPACE-complete. We also consider a special positive case for which the model checking problem is in P. We compare these results to those for other logics with quantification over information change.

Standard approaches for inference in probabilistic formalisms with first-order constructs include lifted variable elimination (LVE) for single queries as well as first-order knowledge compilation (FOKC) based on weighted model counting. To handle multiple queries efficiently, the lifted junction tree algorithm (LJT) uses a first-order cluster representation of a model and LVE as a subroutine in its computations. For certain inputs, the implementations of LVE and, as a result, LJT ground parts of a model where FOKC has a lifted run. The purpose of this paper is to prepare LJT as a backbone for lifted inference that can use any exact inference algorithm as a subroutine. Using FOKC in LJT allows answers to be computed faster than with LJT, LVE, or FOKC alone for certain inputs.

The lifted dynamic junction tree algorithm (LDJT) answers filtering and prediction queries efficiently for probabilistic relational temporal models by building and then reusing a first-order cluster representation of a knowledge base for multiple queries and time steps. Unfortunately, a non-ideal elimination order can lead to unnecessary groundings.

For a probabilistic extension of the description logic $\mathcal{EL}^{\bot}$, we consider the task of automatic acquisition of terminological knowledge from a given probabilistic interpretation. Basically, such a probabilistic interpretation is a family of directed graphs the vertices and edges of which are labeled, and where a discrete probability measure on this graph family is present. The goal is to derive so-called concept inclusions which are expressible in the considered probabilistic description logic and which hold true in the given probabilistic interpretation. A procedure for an appropriate axiomatization of such graph families is proposed and its soundness and completeness are justified.
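For readers unfamiliar with the notation: $\mathcal{EL}^{\bot}$ concepts are built from concept names, $\top$, $\bot$, conjunction, and existential restriction, so a derived concept inclusion has the following shape (a generic textbook-style example, not one from the paper):

```latex
\mathsf{Cat} \sqcap \exists \mathsf{preysOn}.\mathsf{Mouse} \;\sqsubseteq\; \mathsf{Predator}
```

Roughly, the probabilistic extension additionally allows concepts that constrain the probability with which individuals belong to a concept, with the probabilities induced by the measure on the graph family.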

Multi-agent Systems

We study the fair division of items among agents, supposing that agents can form groups. We thus give natural generalizations of popular concepts such as envy-freeness and Pareto efficiency to groups of fixed sizes: group envy-freeness requires that no group envies another group, and group Pareto efficiency requires that no group can be made better off without another group being made worse off. We study these new group properties from an axiomatic viewpoint and propose new fairness taxonomies that generalize existing taxonomies. We further study near versions of these group properties, as allocations satisfying some of them may not exist. We finally give three prices of group fairness between group properties for three common social welfare functions (utilitarian, egalitarian, and Nash).
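To make the group notion concrete, here is a hedged toy check for group envy-freeness under a utilitarian group valuation; all names, and the restriction to equal-size groups, are illustrative assumptions rather than the paper's exact definitions:

```python
from itertools import combinations
from typing import Dict, List, Set

def group_envy_free(groups: List[Set[int]],
                    bundle: Dict[int, Set[str]],
                    utility: Dict[int, Dict[str, float]]) -> bool:
    """True if no group would prefer another equal-size group's items."""
    def group_value(members: Set[int], items: Set[str]) -> float:
        # Utilitarian group value: sum of members' utilities for the items.
        return sum(utility[a][i] for a in members for i in items)

    for g, h in combinations(range(len(groups)), 2):
        if len(groups[g]) != len(groups[h]):
            continue  # only compare groups of the same fixed size
        items_g = set().union(*(bundle[a] for a in groups[g]))
        items_h = set().union(*(bundle[a] for a in groups[h]))
        if group_value(groups[g], items_h) > group_value(groups[g], items_g):
            return False  # group g envies group h
        if group_value(groups[h], items_g) > group_value(groups[h], items_h):
            return False  # group h envies group g
    return True
```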

Probabilistic parallel multiset rewriting systems (PPMRS) model probabilistic, dynamic systems consisting of multiple (inter-)acting agents and objects (entities), where multiple individual actions can be performed in parallel. The main computational challenge in these approaches is computing the distribution of parallel actions (compound actions), which can be formulated as a constraint satisfaction problem (CSP). Unfortunately, computing the partition function of this distribution exactly is infeasible, as it requires enumerating all solutions of the CSP, which are subject to combinatorial explosion. The central technical contribution of this paper is an efficient Markov Chain Monte Carlo (MCMC)-based algorithm to approximate the partition function, and thus the compound action distribution. The proposal function works by performing backtracking in the CSP search tree and then sampling a solution of the remaining, partially solved CSP. We demonstrate our approach on a Lotka-Volterra system with PPMRS semantics, where exact compound action computation is infeasible. Our approach makes it possible to perform simulation studies and Bayesian filtering with PPMRS semantics in scenarios where this was previously infeasible.
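Since the abstract hinges on sampling from an unnormalized distribution, a generic Metropolis-Hastings sketch may help fix ideas; note that the paper's actual proposal function performs CSP backtracking, which is not reproduced here, and `weight` and `propose` are illustrative stand-ins:

```python
import random

def metropolis(weight, propose, x0, steps=10_000):
    """Sample x with probability proportional to weight(x) > 0,
    without ever computing the partition function explicitly."""
    x, samples = x0, []
    for _ in range(steps):
        y = propose(x)  # proposal assumed symmetric: q(y|x) == q(x|y)
        if random.random() < min(1.0, weight(y) / weight(x)):
            x = y       # accept the move
        samples.append(x)
    return samples
```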

Recent advances in mobile robotics and AI promise to revolutionize industrial production. As autonomous robots are able to solve more complex tasks, the difficulty of integrating various robot skills and coordinating groups of robots increases dramatically. Domain-independent planning promises a possible solution. For single-robot systems, a number of successful demonstrations can be found in the scientific literature. However, our experiences at the RoboCup Logistics League in 2017 highlighted a severe lack of plan quality when coordinating multiple robots. In this work we demonstrate how out-of-the-box temporal planning systems can be employed to increase plan quality for temporal multi-robot tasks. An abstract plan is generated first and sub-tasks in the plan are auctioned off to robots, which in turn employ planning to solve these tasks and compute bids. We evaluate our approach on two planning domains and find significant improvements in solution coverage and plan quality.
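The auction step can be pictured with a minimal sketch; `estimate_cost` is a hypothetical stand-in for each robot's local planner computing a bid:

```python
def auction(subtasks, robots, estimate_cost):
    """Assign each sub-task to the robot with the lowest bid."""
    assignment = {r: [] for r in robots}
    for task in subtasks:
        # Every robot bids the cost of adding `task` to its current load.
        bids = {r: estimate_cost(r, task, assignment[r]) for r in robots}
        winner = min(bids, key=bids.get)
        assignment[winner].append(task)  # lowest bid wins the sub-task
    return assignment

# Example: with costs growing in load, tasks spread across robots.
print(auction(["t1", "t2", "t3"], ["r1", "r2"],
              lambda r, t, load: 1 + len(load)))
```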

Many multi-agent systems (MASs) are situated in stochastic environments. Some such systems that are based on the partially observable Markov decision process (POMDP) do not take the benevolence of other agents for granted. We propose a new POMDP-based framework which is general enough for the specification of a variety of stochastic MAS domains involving the impact of agents on each other’s reputations. A unique feature of this framework is that actions are specified as either undirected (regular) or directed (towards a particular agent), and a new directed transition function is provided for modeling the effects of reputation in interactions. Assuming that an agent must maintain a good enough reputation to survive in the network, a planning algorithm is developed for an agent to select optimal actions in stochastic MASs. Preliminary evaluation is provided via an example specification and by determining the algorithm’s complexity.

The demand for fast and reliable parcel shipping is rising globally. Conventional delivery by land requires good infrastructure and causes high costs, especially on the last mile. We present a distributed and scalable drone delivery system based on the contract net protocol for task allocation and the ROS hybrid behaviour planner (RHBP) for goal-oriented task execution. The solution is tested on a modified multi-agent systems simulation platform (MASSIM). Within this environment, the solution scales up well and is profitable across different configurations.

Robotics

Inferring ego position by recognizing previously seen places in the world is an essential capability for autonomous mobile systems. Recent advances have addressed increasingly challenging recognition problems, e.g. long-term vision-based localization despite severe appearance changes induced by changing illumination, weather, or season. Since robots typically move continuously through an environment, there is high correlation within consecutive sensory inputs and across similar trajectories. Exploiting this sequential information is a key element of some of the most successful approaches for place recognition in changing environments. We present a novel, neurally inspired approach that uses sequences for mobile robot localization. It builds upon Hierarchical Temporal Memory (HTM), an established neuroscientific model of the working principles of the human neocortex. HTM features two properties that are interesting for place recognition applications: (1) it relies on sparse distributed representations, which are known to have high representational capacity and high robustness towards noise; (2) it heavily exploits the sequential structure of incoming sensory data. In this paper, we discuss the importance of sequence information for mobile robot localization, provide an introduction to HTM, and discuss theoretical analogies between the problem of place recognition and HTM. We then present a novel approach applying a modified version of HTM’s higher order sequence memory to mobile robot localization. Finally, we demonstrate the capabilities of the proposed approach in a set of simulation-based experiments.
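To illustrate why sequences help, here is a minimal sequence-matching score in the spirit of sequence-based place recognition in general (e.g., SeqSLAM-style alignment), not of HTM itself:

```python
import numpy as np

def sequence_scores(sim: np.ndarray, window: int = 5) -> np.ndarray:
    """Score each database position by the summed similarity of an
    aligned window of consecutive frames.

    sim[i, j]: similarity between query frame i and database frame j;
    assumes at least `window` query frames.
    """
    n_query, n_db = sim.shape
    scores = np.full(n_db, -np.inf)
    for j in range(n_db - window + 1):
        # Align the last `window` query frames with db frames j..j+window-1.
        diag = sim[np.arange(n_query - window, n_query),
                   np.arange(j, j + window)]
        scores[j + window - 1] = diag.sum()
    return scores  # argmax gives the most likely current place
```

Single-frame matching corresponds to `window=1`; larger windows suppress spurious matches caused by appearance change.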

Robots are becoming ever more present in households and interact more with humans. They are able to perform tasks accurately, e.g. manipulating objects. However, this manipulation often does not follow the human way of arranging objects. Therefore, robots require semantic knowledge about the environment to execute tasks and satisfy humans’ expectations. In this paper, we introduce a breakfast table setting scenario where a robot acquires information from human demonstrations to arrange objects in a meaningful way. We show how robots can obtain the necessary amount of knowledge to autonomously perform daily tasks.

Learning

This paper addresses the problem of tuning parameters of mathematical solvers to increase their performance. We investigate how solvers can be tuned for models that undergo two types of configuration: variable configuration and constraint configuration. For each type, we investigate search algorithms for data generation that emphasize exploration or exploitation. We show the difficulties of solver tuning in constraint configuration and how data generation methods affect a training set’s learning potential.

Transfer learning supports classification in domains that differ from the learning domain. Prominent applications can be found in Wi-Fi localization, sentiment classification, and robotics. A recent study shows that approximating the training environment through test environments leads to proper performance and outdates the strategy most transfer learning approaches pursue. Additionally, sparse transfer learning models are required to address technical limitations and the demand for interpretability arising from recent privacy regulations. In this work, we propose a new transfer learning approach which approximates the learning environment, combine it with the sparse and interpretable probabilistic classification vector machine, and compare our solution with standard benchmarks in the field.

The Schema Mechanism is a general learning and concept building framework initially created in the 1980s by Gary Drescher. It was inspired by the constructivist theory of early human cognitive development by Jean Piaget and shares interesting properties with human learning. Recently, Schema Networks were proposed. They combine ideas of the original Schema Mechanism, relational MDPs, and planning based on factor graph optimization. Schema Networks demonstrated interesting properties for transfer learning, i.e. the ability of zero-shot transfer. However, there are several limitations of this approach. For example, although the Schema Network, in principle, works on an object level, the original learning and inference algorithms use individual pixels as objects. Also, all types of entities have to share the same set of attributes, and the neighborhood for each learned schema has to be of the same size. In this paper, we discuss these and other limitations of Schema Networks and propose a novel representation based on hypervectors to address some of the limitations. Hypervectors are very high dimensional vectors (e.g., 2,048-dimensional) with useful statistical properties, including high representational capacity and robustness to noise. We present a system based on a Vector Symbolic Architecture (VSA) that uses hypervectors and carefully designed operators to create representations of arbitrary objects with varying numbers and types of attributes. These representations can be used to encode schemas on this set of objects in arbitrary neighborhoods. The paper includes first results demonstrating the representational capacity and robustness to noise.
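A minimal sketch of the hypervector idea, under common VSA assumptions (bipolar vectors, binding by elementwise multiplication, bundling by superposition); the paper's concrete operators may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # dimensionality in the range mentioned above

def hv():                # fresh random bipolar hypervector
    return rng.choice([-1, 1], size=D)

def bind(a, b):          # role-filler binding (self-inverse for bipolar vectors)
    return a * b

def bundle(*vs):         # superposition of several bound pairs
    return np.sum(vs, axis=0)

def sim(a, b):           # cosine similarity
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode an object with a variable set of attribute/value pairs:
COLOR, SHAPE, red, square = hv(), hv(), hv(), hv()
obj = bundle(bind(COLOR, red), bind(SHAPE, square))

# Unbinding the COLOR role recovers something close to `red`:
print(sim(bind(obj, COLOR), red))   # clearly above chance (~0.7 vs ~0.0)
```

The same construction works for objects with more or fewer attributes, which is exactly the flexibility the abstract claims over fixed attribute sets.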

Configuration systems must be able to deal with inconsistencies, which can occur in different contexts. Especially in interactive settings, where users specify requirements and a constraint solver has to identify solutions, inconsistencies may arise more often. Therefore, diagnosis algorithms are required to find solutions for these unsolvable problems. Runtime efficiency of diagnosis is especially crucial in real-time scenarios such as production scheduling, robot control, and communication networks. For such scenarios, diagnosis algorithms should determine solutions within predefined time limits. To provide runtime performance, direct or sequential diagnosis algorithms find diagnoses without the need to calculate conflicts. In this paper, we propose a new direct diagnosis algorithm, LEARNDIAG, which uses learned heuristics. It applies supervised learning to calculate constraint ordering heuristics for the diagnostic search. Our evaluations show that LEARNDIAG improves the runtime performance of direct diagnosis while also improving diagnosis quality in terms of minimality and precision.
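For intuition, a generic sequential (direct) diagnosis loop looks roughly as follows; LEARNDIAG's contribution is learning the constraint ordering with supervised learning, which this sketch only assumes as a given key function:

```python
def direct_diagnosis(constraints, consistent, order_key):
    """Collect the constraints that must be removed to restore consistency,
    scanning in a heuristic order and never computing conflict sets."""
    diagnosis, kept = [], []
    for c in sorted(constraints, key=order_key):
        if consistent(kept + [c]):
            kept.append(c)        # c is compatible with what we keep
        else:
            diagnosis.append(c)   # c must be part of the diagnosis
    return diagnosis  # inclusion-minimal if consistency is monotone
```

A good ordering puts likely-faulty constraints first, so fewer (and cheaper) consistency checks are needed, which is where learned heuristics pay off.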

Planning

Assembly recipes can elegantly be represented in description logic theories. With such a recipe, the robot can figure out the next assembly step through logical inference. However, before performing an action, the robot needs to ensure that various spatial constraints are met, e.g. that the parts to be put together are reachable, not occluded, etc. Such inferences are very complicated to support in logic theories, but specialized algorithms exist that efficiently compute qualitative spatial relations such as whether an object is reachable. In this work, we combine a logic-based planner for assembly tasks with geometric reasoning capabilities to enable robots to perform their tasks under spatial constraints. The geometric reasoner is integrated into the logic-based reasoning through decision procedures attached to symbols in the ontology.

Recent attempts at behaviour understanding through language grounding have shown that it is possible to automatically generate planning models from instructional texts. One drawback of these approaches is that they either do not make use of the semantic structure behind the model elements identified in the text, or they manually incorporate a collection of concepts with semantic relationships between them. To use such models for behaviour understanding, however, the system should also have knowledge of the semantic structure and context behind the planning operators. To address this problem, we propose an approach that automatically generates planning operators from textual instructions. The approach is able to identify various hierarchical, spatial, directional, and causal relations between the model elements, which allows incorporating context knowledge beyond the actions being executed. We evaluated the approach in terms of correctness of the identified elements, model search complexity, model coverage, and similarity to handcrafted models. The results showed that the approach is able to generate models that explain actual task executions and that these models are comparable to handcrafted models.

Making decisions under risk is a competence human beings naturally display when confronted with new and potentially dangerous learning tasks. In an effort to replicate this ability, many approaches have been promoted in different fields of artificial learning and planning. To plan in domains with inherent risk when a simulation model is available, we propose Risk-Sensitive Online Planning (RISEON), which extends traditional online planning with an appropriate risk-aware optimization objective. The objective we use is Conditional Value at Risk (CVaR), where risk-sensitivity can be controlled by setting the quantile size to fit a given risk level. By using CVaR, the planner shifts its focus from risk-neutral sample means towards the tail of the loss distribution, thus considering an adjustable share of high costs. We evaluate RISEON in a smart grid planning scenario and in a continuous control task, where the planner has to steer a vehicle towards risky checkpoints, and empirically show that the proposed algorithm can be used to plan with respect to risk-sensitivity.
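A small sketch of the CVaR objective over sampled rollout costs; the rollout machinery is assumed, and `alpha` plays the role of the quantile size mentioned above:

```python
import numpy as np

def cvar(costs: np.ndarray, alpha: float = 0.1) -> float:
    """Expected cost in the worst alpha-share of outcomes."""
    var = np.quantile(costs, 1.0 - alpha)   # Value at Risk threshold
    tail = costs[costs >= var]              # the high-cost tail
    return float(tail.mean())

def best_action(actions, rollout_costs, alpha=0.1):
    """Pick the action whose simulated cost distribution has lowest CVaR."""
    return min(actions, key=lambda a: cvar(rollout_costs(a), alpha))
```

With `alpha` close to 1 this degenerates to the risk-neutral sample mean; small `alpha` concentrates on rare but expensive outcomes.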

Neural Networks

Many Deep Neural Networks (DNNs) are implemented with the single objective of achieving high classification scores. However, there can be additional objectives, such as the minimization of computational costs. This is especially important in mobile computing, where not only is computational power itself a limiting factor, but each computation also consumes energy, affecting battery life. Unfortunately, the determination of minimal structures is not straightforward.

In our paper, we present a new approach for determining DNNs with reduced structures. The networks are determined by an Evolutionary Algorithm (EA). After the DNN is trained, the EA starts to remove neurons from the network; the fitness function of the EA depends on the accuracy of the DNN, so the EA is able to assess the influence of each individual neuron. We introduce our new approach in detail, employing motion data recorded by the accelerometer and gyroscope sensors of a mobile device. The data are recorded while drawing Japanese characters in the air in a learning context. The experimental results show that our approach is capable of determining reduced networks with performance similar to the original ones. Additionally, we show that the reduction can improve the accuracy of a network. We analyze the reduction in detail and present the structures that arise in the reduced networks.
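A toy version of the idea, with individuals as binary keep/drop masks over neurons and a fitness that trades accuracy against size; the penalty weight and genetic operators are illustrative assumptions, not the paper's exact setup:

```python
import random

def evolve_mask(n_neurons, accuracy_of, generations=50, pop_size=20,
                size_penalty=0.001, mutation_rate=0.05):
    """Evolve a keep(1)/drop(0) mask; accuracy_of(mask) evaluates the
    trained network with the masked neurons removed."""
    pop = [[1] * n_neurons for _ in range(pop_size)]  # start from full net

    def fitness(mask):
        return accuracy_of(mask) - size_penalty * sum(mask)

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [[1 - g if random.random() < mutation_rate else g
                     for g in p]                # bit-flip mutation
                    for p in parents]
        pop = parents + children
    return max(pop, key=fitness)
```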

Finding good hyper-parameter settings to train neural networks is challenging, as the optimal settings can change during the training phase and also depend on random factors such as weight initialization or random batch sampling. Most state-of-the-art methods for adapting these settings are either static (e.g. learning rate schedulers) or dynamic (e.g. the Adam optimizer), but only change some of the hyper-parameters and do not deal with the initialization problem. In this paper, we extend population based training, an asynchronous evolutionary algorithm that modifies all given hyper-parameters during training and inherits weights. We introduce a novel knowledge distilling scheme: only the best individuals of the population are allowed to share part of their knowledge about the training data with the whole population. This embraces the idea of randomness between the models rather than avoiding it, because the resulting diversity of models is important for the population’s evolution. Our experiments on MNIST, fashionMNIST, and EMNIST (MNIST split) with two classic model architectures show significant improvements in convergence and model accuracy compared to the original algorithm. In addition, we conduct experiments on EMNIST (balanced split) employing a ResNet and a WideResNet architecture to include complex architectures and data as well.

Stochastic gradient descent is the most prevalent algorithm to train neural networks. However, other approaches such as evolutionary algorithms are also applicable to this task. Evolutionary algorithms bring unique trade-offs that are worth exploring, but computational demands have so far restricted exploration to small networks with few parameters. We implement an evolutionary algorithm that executes entirely on the GPU, which allows us to efficiently batch-evaluate a whole population of networks. Within this framework, we explore the limited evaluation evolutionary algorithm for neural network training and find that its batch evaluation idea comes with a large accuracy trade-off. In further experiments, we explore crossover operators and find that unprincipled random uniform crossover performs extremely well. Finally, we train a network with 92k parameters on MNIST using an EA and achieve 97.6% test accuracy, compared to 98% test accuracy for the same network trained with Adam. Code is available at this https URL.

Recurrent neural networks have proven useful in natural language processing. For example, they can be trained to predict, and even generate, plausible text with few or no spelling and syntax errors. However, it is not clear what grammar a network has learned or how it keeps track of the syntactic structure of its input. In this paper, we present a new method to extract a finite state machine (FSM) from a recurrent neural network. An FSM is in principle a more interpretable representation of a grammar than a neural net; however, the extracted FSMs for realistic neural networks can also be large. Therefore, we also look at ways to group the states and paths through the extracted FSM so as to get a smaller, easier-to-understand model of the neural network. To illustrate our methods, we use them to investigate how a neural network learns noun-verb agreement from a simple grammar where relative clauses may appear between noun and verb.
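One common extraction recipe, shown here as an assumption rather than the paper's exact method, quantizes the RNN's hidden states and records transitions between clusters:

```python
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

def extract_fsm(hidden_states, symbols, k=10):
    """Build a coarse FSM from an RNN run.

    hidden_states[t]: the RNN state after reading symbols[t];
    returns a map (state, symbol) -> set of possible next states.
    """
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        np.array(hidden_states))
    transitions = defaultdict(set)
    for t in range(1, len(labels)):
        transitions[(labels[t - 1], symbols[t])].add(labels[t])
    return transitions
```

Merging clusters (smaller `k`, or grouping states with identical outgoing behavior) is the kind of simplification the abstract alludes to.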

Visual search target inference subsumes methods for predicting the target object through eye tracking: a person intends to find an object in a visual scene, and we predict the target based on their fixation behavior. Knowing the search target can improve intelligent user interaction. In this work, we implement a new feature encoding, the Bag of Deep Visual Words, for search target inference using a pre-trained convolutional neural network (CNN). Our work builds on a recent approach from the literature that uses the Bag of Visual Words encoding common in computer vision applications. We evaluate our method using a gold standard dataset.
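The Bag of (Deep) Visual Words encoding can be sketched in a few lines: cluster local CNN features into a codebook, then represent an observation as a normalized codeword histogram (a standard construction; the dataset and feature extraction step are assumed):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(features: np.ndarray, n_words: int = 100) -> KMeans:
    """Cluster local feature vectors (rows) into a visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10).fit(features)

def bag_of_words(codebook: KMeans, features: np.ndarray) -> np.ndarray:
    """Encode a set of local features as a normalized word histogram."""
    words = codebook.predict(features)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()
```

The "deep" variant simply swaps hand-crafted local descriptors for activations from a pre-trained CNN before clustering.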

Recently, a strong poker-playing algorithm called DeepStack was published, which is able to find an approximate Nash equilibrium during gameplay by using heuristic values of future states predicted by deep neural networks. This paper analyzes new ways of encoding the inputs and outputs of DeepStack’s deep counterfactual value networks based on traditional abstraction techniques, as well as an unabstracted encoding, which was able to increase the networks’ accuracy.

In natural language generation, the task of Referring Expression Generation (REG) is to determine a set of features or relations which identify a target object. Referring expressions describe the target object and discriminate it from other objects in a scene. From an algorithmic point of view, REG can be posed as a search problem. Since the search space is exponential in the number of features and relations available, efficient search strategies are required. In this paper we investigate variants of Monte-Carlo Tree Search (MCTS) for application in REG. We propose a new variant, called Quasi Best-First MCTS (QBF-MCTS). In an empirical study we compare different MCTS variants to one another, and to classic REG algorithms. The results indicate that QBF-MCTS yields a significantly improved performance with respect to efficiency and quality.

Monte Carlo tree search (MCTS) is a popular choice for solving sequential anytime problems. However, it depends on a numeric feedback signal, which can be difficult to define. Real-time MCTS is a variant which may only rarely encounter states with an explicit, extrinsic reward; to deal with such cases, the experimenter has to supply an additional numeric feedback signal in the form of a heuristic which intrinsically guides the agent. Recent work has shown evidence that in different areas the underlying structure is ordinal rather than numerical, so erroneous and biased heuristics are inevitable, especially in such domains. In this paper, we propose an MCTS variant which depends only on qualitative feedback and therefore opens up new applications for MCTS. We also find indications that translating absolute into ordinal feedback may be beneficial. Using a puzzle domain, we show that our preference-based MCTS variant, which only receives qualitative feedback, is able to reach a performance level comparable to a regular MCTS baseline that obtains quantitative feedback.
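For contrast with the preference-based variant, the numeric heart of regular MCTS is the UCB selection rule, sketched below (the node fields are illustrative); the paper's variant replaces these value averages with pairwise preference statistics between actions:

```python
import math

def uct_select(children, c=1.4):
    """Standard UCB1 selection over a node's children, each a dict with
    'visits' and accumulated numeric 'value'."""
    total = sum(ch["visits"] for ch in children)

    def ucb(ch):
        if ch["visits"] == 0:
            return float("inf")          # always try unvisited actions
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore

    return max(children, key=ucb)
```

The exploit term is exactly the numeric average that becomes unavailable when only ordinal comparisons ("rollout A beat rollout B") are observed.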

Similarity among worlds plays a pivotal role in providing the semantics for different kinds of belief change. Although similarity is, intuitively, a context-sensitive concept, the accounts of similarity presently proposed are, by and large, context-blind. We propose an account of similarity that is context-sensitive, and where belief change is concerned, we take it that the epistemic input provides the required context. We accordingly develop and examine two accounts of probabilistic belief change that are based on such evidence-sensitive similarity. The first switches between two extreme behaviors depending on whether or not the evidence in question is consistent with the current knowledge. The second gracefully changes its behavior depending on the degree to which the evidence is consistent with current knowledge. Finally, we analyze these two belief change operators with respect to a select set of plausible postulates.
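As a point of reference (a textbook formula, not the paper's operator): when the evidence $E$ is consistent with the current beliefs, probabilistic belief change typically reduces to Bayesian conditioning,

```latex
P(w \mid E) =
\begin{cases}
  \dfrac{P(w)}{P(E)} & \text{if } w \models E,\\[4pt]
  0 & \text{otherwise,}
\end{cases}
\qquad\text{where } P(E) = \sum_{v \models E} P(v).
```

The interesting case is $P(E) = 0$, where conditioning is undefined and similarity among worlds must determine how probability mass moves to the worlds satisfying the evidence.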

Current trends, like digital transformation and ubiquitous computing, yield a massive increase in available data and information. In artificial intelligence (AI) systems, the capacity of knowledge bases is limited due to the computational complexity of many inference algorithms. Consequently, continuously sampling information and storing it unfiltered in knowledge bases does not seem to be a promising or even feasible strategy. In human evolution, learning and forgetting have evolved as advantageous strategies for coping with available information by adding new knowledge to, and removing irrelevant information from, human memory. Learning has been adopted in AI systems in various algorithms and applications. Forgetting, however, and especially intentional forgetting, has not been sufficiently considered yet. Thus, the objective of this paper is to discuss intentional forgetting in the context of AI systems as a first step. Starting with the new priority research program on ‘Intentional Forgetting’ (DFG-SPP 1921), definitions and interpretations of intentional forgetting in AI systems from different perspectives (knowledge representation, cognition, ontologies, reasoning, machine learning, self-organization, and distributed AI) are presented, and opportunities as well as challenges are derived.

Knowledge representation and reasoning have a long tradition in the field of artificial intelligence. More recently, the aspect of forgetting, too, has gained increasing attention. Humans have developed extremely effective ways of forgetting, e.g., outdated or currently irrelevant information, freeing them to process ever-increasing amounts of data. The purpose of this paper is to present abstract formalizations of forgetting operations in a generic axiomatic style. By illustrating, elaborating, and identifying different kinds and aspects of forgetting from a common-sense perspective, our work may be used to further develop a general view on forgetting in AI and to initiate and enhance interaction and exchange among research lines dealing with forgetting, in computer science and in cognitive psychology, among other fields.

Context Aware Systems

Foundational work on stream processing is relevant for different areas of AI, and it becomes even more relevant if the work concerns feasible and scalable stream processing. One facet of feasibility is treated under the term bounded memory. In this paper, streams are represented as finite or infinite words, and stream processing is modelled with stream functions, i.e., functions mapping one or more input streams to an output stream. Bounded-memory stream functions can process input streams using constant space only. The main result of this paper is a syntactical characterization of bounded-memory functions by a form of safe recursion.
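A concrete bounded-memory stream function in this sense: over a finite alphabet it consumes a possibly infinite stream using a constant amount of state (a minimal example of my own, not from the paper):

```python
def double_a(stream):
    """Emit 1 exactly when the two most recent symbols were both 'a'.
    The state is a single symbol, independent of how much of the
    (possibly infinite) stream has been consumed."""
    prev = None
    for x in stream:
        yield 1 if (prev == 'a' and x == 'a') else 0
        prev = x

print(list(double_a("abaab")))  # [0, 0, 0, 1, 0]
```

A counterexample would be a function that must remember the whole prefix, e.g. emitting the full reversed input so far: its state grows without bound and it falls outside the bounded-memory class.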

In smart cities, we need innovative mobility solutions. In the near future, most travelers will start their multi-modal journey through a seamlessly connected smart city with intelligent mobility services at home. Nevertheless, there is a lack of well-founded requirements for smart in-house mobility services. In our original journal publication [7] we presented a first step towards a better understanding of the situation in which travelers use digital services at home to inform themselves about their mobility options. We reported three main findings, namely (1) the lack of availability of mobility-centered information is the most pressing pain point regarding mobility-centered information at home, (2) most participants report a growing need to access vehicle-centered information at home and a growing interest in using a variety of smart home features, and (3) smart in-house mobility services should combine pragmatic (i.e., information-based) and hedonic (i.e., stimulation- and pleasure-oriented) qualities. In the present paper, we extend our previous work with an implementation and evaluation of our previously gained user insights in a smart mirror prototype. The quantitative evaluation again highlighted the importance of pragmatic and hedonic product qualities for smart in-house mobility services. Since these insights can help practitioners develop user-centered mobility services for smart homes, our results will help to maximize customer value.

Cognitive Approach

Reasoning is a core ability of humans that has been explored across disciplines over the last millennia. Investigations have often focused, however, on identifying general principles of human reasoning or correct reasoning, but less on predicting the conclusions of an individual reasoner. It is a desideratum to have artificial agents that can adapt to the individual human reasoner. We present an approach which successfully predicts individual performance across reasoning domains, for reasoning about quantified and conditional statements, using collaborative filtering techniques. Our proposed models are simple but efficient: they take some answers from a subject, build pair-wise similarities, and predict missing answers based on what similar reasoners concluded. Our approach achieves high accuracy on different data sets and maintains this accuracy even when more than half of the data is missing. These features suggest that our approach is able to generalize and account for realistic scenarios, making it an adequate tool for artificial reasoning systems predicting human inferences.
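A minimal sketch of the pairwise-similarity scheme described above (illustrative, not the paper's exact model):

```python
import numpy as np

def predict(answers: np.ndarray, person: int, task: int) -> float:
    """Predict a missing answer as the agreement-weighted vote of others.

    `answers` is a (reasoners x tasks) matrix; NaN marks unanswered tasks.
    """
    known_p = ~np.isnan(answers[person])
    sims, votes = [], []
    for other in range(answers.shape[0]):
        if other == person or np.isnan(answers[other, task]):
            continue
        shared = known_p & ~np.isnan(answers[other])
        if shared.any():
            # Similarity = fraction of shared tasks answered identically.
            sims.append(np.mean(answers[person, shared]
                                == answers[other, shared]))
            votes.append(answers[other, task])
    if not votes:
        return float("nan")  # nobody answered this task
    return float(np.average(votes, weights=np.array(sims) + 1e-9))
```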

A core method of cognitive science is to investigate cognition by approaching human behavior through model implementations. Recent literature has seen a surge of models which can broadly be classified into detailed theoretical accounts and fast-and-frugal heuristics. Being based on simple but general computational principles, these heuristics produce results independent of assumed mental processes.

This paper investigates the potential of heuristic approaches in accounting for behavioral data by adopting a perspective focused on predictive precision. Multiple heuristic accounts are combined to create a portfolio, i.e., a meta-heuristic, capable of achieving state-of-the-art performance in prediction settings. The insights gained from analyzing the portfolio are discussed with respect to the general potential of heuristic approaches.

In the next article, we will discuss KI 2019.
