KI 2020: Advances in Artificial Intelligence Papers

KI2020

This proceedings volume contains the papers presented at the 43rd German Conference on Artificial Intelligence (KI 2020), held during September 21–25, 2020, and hosted by the University of Bamberg, Germany. Due to COVID-19, KI 2020 was the first virtual edition of this conference series.

The German conference on Artificial Intelligence (abbreviated KI for Künstliche Intelligenz) has developed from a series of unofficial meetings and workshops, starting 45 years ago with the first GI-SIG AI meeting on October 7, 1975. GI-SIG AI is the Fachbereich Künstliche Intelligenz (FBKI) der Gesellschaft für Informatik (GI). As a well-established annual conference series, it is dedicated to research on theory and applications across all methods and topic areas of AI research. While KI is primarily attended by researchers from Germany and neighboring countries, it warmly welcomes international participation.

KI 2020 had a special focus on human-centered AI, with particular emphasis on AI in education and explainable machine learning. These topics were addressed in a panel discussion as well as in a workshop. The conference invited original research papers and shorter technical communications as well as descriptions of system demonstrations on all topics of AI. Further, the submission of extended abstracts summarizing papers that had recently been presented at major AI conferences was encouraged.

KI 2020 received more than 70 submissions from 13 countries, each of which was reviewed by three Program Committee members. The Program Committee, comprising 53 experts from 8 countries, decided to accept 16 submissions as full papers, 12 as technical contributions, and 4 as pre-published abstracts.

The program included six invited talks:

  • –  Anthony G. Cohn, University of Leeds, UK: Learning about Language and Action for Robots
  • –  Hector Geffner, Institució Catalana de Recerca i Estudis Avançats and Universitat Pompeu Fabra, Spain: From Model-free to Model-based AI: Representation Learning for Planning
  • –  Jana Koehler, DFKI, Germany: 10^120 and Beyond: Scalable AI Search Algorithms as a Foundation for Powerful Industrial Optimization
  • –  Nada Lavrač, Jožef Stefan Institute, Slovenia: Semantic Relational Learning
  • –  Sebastian Riedel, Facebook AI Research and University College London, UK: Open and Closed Book Machine Reading
  • –  Ulli Waltinger, Siemens Corporate Technology, Germany: The Beauty of Imperfection: From Gut Feeling to Transfer Learning to Self-Supervision

The main conference was supplemented with five workshops and seven tutorials. In addition to the annual doctoral consortium, a student day was introduced, encouraging students from high school as well as from bachelor and master programs to present their AI projects. Although COVID-19 complicated KI 2020 in several regards, it was a pleasure to organize this traditional annual event. We are grateful to our co-organizers, Matthias Thimm (workshops and tutorials chair), Tanya Braun (doctoral consortium chair), Jens Garbas (demo and exhibition chair), as well as to Johannes Rabold and the Fachschaft WIAI for organizing the student day, and to Klaus Stein for technical support. Student volunteers and support from local administrators (especially Romy Hartmann) were essential for the smooth (virtual) running of the conference. They supported us not only with generating a virtual city tour, but also with many organizational details. We also want to thank the University of Bamberg for their generous support.

We thank the Program Committee members and all additional reviewers for the effort and time they invested in the reviewing process. Our appreciation also goes to the developers of EasyChair; their conference management system provides great functionalities that helped to organize the reviewing process and generate this volume. Last but not least, we would like to thank Christine Harms and the GI Geschäftsstelle for the registration support and Springer for publishing the proceedings and sponsoring the Best Paper Award.

We hope the conference was enjoyed by all who participated.

Contents

Full Contributions

We consider a fair division model in which agents have positive, zero and negative utilities for items. For this model, we analyse one existing fairness property – EFX – and three new and related properties – EFX0, EFX3 and EF13 – in combination with Pareto-optimality. With general utilities, we give a modified version of an existing algorithm for computing an EF13 allocation. With -α/0/α utilities, this algorithm returns an EFX3 and Pareto-optimal allocation. With absolute identical utilities, we give a new algorithm for an EFX and Pareto-optimal allocation. With -α/0/β utilities, this algorithm also returns such an allocation. We report some new impossibility results as well.

In this paper we look at multi-player trick-taking card games that rely on obeying suits, which include Bridge, Hearts, Tarot, Skat, and many more. We propose mini-game solving in the suit factors of the game, and exemplify its application as a single-dummy or double-dummy analysis tool that restricts game play to either trump or non-trump suit cards. Such factored solvers are applicable to improve card selections of the declarer and the opponents, mainly in the middle game, and can be adjusted for optimizing the number of points or tricks to be made. While at first glance projecting the game onto one suit is an over-simplification, the partitioning approach into suit factors is a flexible and strong weapon, as it solves apparent problems arising in the phase transition from accessing static table information to dynamic play. Experimental results show that by using mini-game play, the strength of trick-taking Skat AIs can be improved.

Convolutional neural networks (CNNs) are getting more and more complex, needing enormous computing resources and energy. In this paper, we propose methods for conditional computation in the context of image classification that allow a CNN to dynamically use its channels and layers conditioned on the input. To this end, we combine lightweight gating modules that can make binary decisions without causing much computational overhead. We argue that combining the recently proposed channel gating mechanism with layer gating can significantly reduce the computational cost of large CNNs. Using discrete optimization algorithms, the gating modules are made aware of the context in which they are used and decide whether a particular channel and/or a particular layer will be executed. This results in neural networks that adapt their own topology conditioned on the input image. Experiments using the CIFAR10 and MNIST datasets show how competitive results in image classification with respect to accuracy can be achieved while saving up to 50% of the computational resources.
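The abstract gives no implementation details, but the core idea of input-conditioned gating can be sketched roughly as follows. This is an illustrative PyTorch fragment, not the authors' architecture; the module names and the straight-through Gumbel-softmax trick (one common way to train binary gates) are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerGate(nn.Module):
    """Tiny module that looks at the input and emits a binary keep/skip decision."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, 2)  # logits for "skip" vs. "execute"

    def forward(self, x):
        summary = x.mean(dim=(2, 3))      # global average pooling keeps the gate cheap
        logits = self.fc(summary)
        # Straight-through Gumbel-softmax: (nearly) binary at the forward pass,
        # differentiable for training.
        decision = F.gumbel_softmax(logits, tau=1.0, hard=True)
        return decision[:, 1]             # 0/1 gate value per example

class GatedBlock(nn.Module):
    """A conv block that is only executed when its gate says so."""
    def __init__(self, channels):
        super().__init__()
        self.gate = LayerGate(channels)
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        g = self.gate(x).view(-1, 1, 1, 1)
        # Residual form: when g == 0 the block reduces to the identity,
        # i.e. the layer is effectively skipped for that input.
        return x + g * self.conv(x)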

The automatic transcription of historical printings with OCR has made great progress in recent years. However, the correct segmentation of demanding page layouts is still challenging, in particular, the separation of text and non-text (e.g. pictures, but also decorated initials). Fully convolutional neural nets (FCNs) with an encoder-decoder structure are currently the method of choice, if suitable training material is available. Since the variation of non-text elements is huge, the good results of FCNs, if training and test material are similar, do not easily transfer to different layouts. We propose an approach based on dividing a page into many contours (i.e. connected components) and classifying each contour with a standard convolutional neural net (CNN) as being text or non-text. The main idea is that the CNN learns to recognize text contours, i.e. letters, and classifies everything else as non-text, thus generalizing better on the many forms of non-text. Evaluations of the contour-based segmentation in comparison to classical FCNs with varying amounts of training material and with similar and dissimilar test data show its effectiveness.
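As a rough sketch of the described contour pipeline (connected components classified as text vs. non-text), the following Python fragment uses OpenCV; cnn_predict stands in for the trained text/non-text classifier and the crop size is an arbitrary assumption, so this is illustrative rather than the authors' code.

import cv2
import numpy as np

def classify_contours(page_gray, cnn_predict, min_area=20):
    """Split a binarized page into connected components and label each one
    as text or non-text. cnn_predict is assumed to map a fixed-size grayscale
    crop to a probability of being text."""
    _, binary = cv2.threshold(page_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    text_mask = np.zeros_like(binary)
    for i in range(1, n):                 # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:
            continue                      # ignore specks / noise
        crop = cv2.resize(page_gray[y:y + h, x:x + w], (32, 32))
        if cnn_predict(crop) > 0.5:       # CNN says "this contour is a letter"
            text_mask[labels == i] = 255
    return text_mask                      # everything not marked is treated as non-text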

Algorithm selection (AS) is defined as the task of automatically selecting the most suitable algorithm from a set of candidate algorithms for a specific instance of an algorithmic problem class. While suitability may refer to different criteria, runtime is of specific practical relevance. Leveraging empirical runtime information as training data, the AS problem is commonly tackled by fitting a regression function, which can then be used to estimate the candidate algorithms’ runtimes for new problem instances. In this paper, we develop a new approach to algorithm selection that combines regression with ranking, also known as learning to rank, a problem that has recently been studied in the realm of preference learning. Since only the ranking of the algorithms is eventually needed for the purpose of selection, the precise numerical estimation of runtimes appears to be a dispensable and unnecessarily difficult problem. However, discarding the numerical runtime information completely seems to be a bad idea, as we hide potentially useful information about the algorithms’ performance margins from the learner. Extensive experimental studies confirm the potential of our hybrid approach, showing that it often performs better than pure regression and pure ranking methods.
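The paper's exact hybrid objective is not given in the abstract; a minimal sketch of how a regression term and a pairwise ranking term can be blended for runtime prediction might look as follows (PyTorch; the hinge formulation and the weighting are assumptions).

import torch

def hybrid_loss(pred, true, alpha=0.5, margin=0.1):
    """Blend a regression objective with a pairwise ranking objective for the
    runtimes of all candidate algorithms on one problem instance.
    pred, true: tensors of shape (n_algorithms,). alpha balances the two parts."""
    # Regression part: fit the (possibly log-transformed) runtimes directly.
    reg = torch.mean((pred - true) ** 2)
    # Ranking part: for each pair, the truly faster algorithm should receive the
    # smaller predicted runtime, separated by at least `margin`.
    d_true = true.unsqueeze(0) - true.unsqueeze(1)   # d_true[i, j] = true[j] - true[i]
    d_pred = pred.unsqueeze(0) - pred.unsqueeze(1)
    sign = torch.sign(d_true)                        # +1 where the row algorithm is faster
    hinge = torch.clamp(margin - sign * d_pred, min=0.0)
    rank = hinge[d_true != 0].mean()                 # ignore exact ties (assumes not all equal)
    return alpha * reg + (1.0 - alpha) * rank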

Conditional Reasoning and Relevance

To make planning feasible, planning models abstract from many details of the modeled system. When executing plans in the actual system, the model might be inaccurate in a critical point, and plan execution may fail. There are two options to handle this case: the previous solution can be modified to address the failure (plan repair), or the planning process can be re-started from the new situation (re-planning). In HTN planning, discarding the plan and generating a new one from the novel situation is not easily possible, because the HTN solution criteria make it necessary to take already executed actions into account. Therefore, all approaches to repair plans in the literature are based on specialized algorithms. In this paper, we discuss the problem in detail and introduce a novel approach that makes it possible to use unchanged, off-the-shelf HTN planning systems to repair broken HTN plans. That way, no specialized solvers are needed.

A conditional knowledge base R is a set of conditionals of the form “If A, then usually B”. Using structural information derived from the conditionals in R, we introduce the preferred structure relation on worlds. The preferred structure relation is the core ingredient of a new inference relation called system W inference that inductively completes the knowledge given explicitly in R. We show that system W exhibits desirable inference properties like satisfying system P and avoiding, in contrast to e.g. system Z, the drowning problem. It fully captures and strictly extends both system Z and skeptical c-inference. In contrast to skeptical c-inference, it does not require solving a complex constraint satisfaction problem, but is as tractable as system Z.

Free logics are a family of logics that are free of any existential assumptions. Unlike traditional classical and non-classical logics, they support an elegant modeling of nonexistent objects and partial functions as relevant for a wide range of applications in computer science, philosophy, mathematics, and natural language semantics. While free first-order logic has been addressed in the literature, free higher-order logic has not been studied thoroughly so far. The contribution of this paper includes (i) the development of a notion and definition of free higher-order logic in terms of a positive semantics (partly inspired by Farmer’s partial functions version of Church’s simple type theory), (ii) the provision of a faithful shallow semantical embedding of positive free higher-order logic into classical higher-order logic, (iii) the implementation of this embedding in the Isabelle/HOL proof-assistant, and (iv) the exemplary application of our novel reasoning framework for an automated assessment of Prior’s paradox in positive free quantified propositional logics, i.e., a fragment of positive free higher-order logic.

Current supervised learning models cannot generalize well across domain boundaries, which is a known problem in many applications, such as robotics or visual classification. Domain adaptation methods are used to improve these generalization properties. However, these techniques suffer from being restricted to a particular task, such as visual adaptation, from requiring a lot of computational time and data, which are not always available, from complex parameterization, or from expensive optimization procedures. In this work, we present an approach that requires only a well-chosen snapshot of data to find a single domain-invariant subspace. The subspace is calculated in closed form and overrides domain structures, which makes it fast and stable in parameterization. By employing low-rank techniques, we emphasize the descriptive characteristics of the data. The presented idea is evaluated on various domain adaptation tasks such as text and image classification against state-of-the-art domain adaptation approaches and achieves remarkable performance across all tasks.
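The abstract does not spell out the closed-form construction; as a related, classical point of reference, the subspace-alignment idea (which is also closed form) can be sketched as below. This is explicitly not the authors' method, only an illustration of computing a domain-invariant subspace without iterative optimization.

import numpy as np

def subspace_align(Xs, Xt, k=50):
    """Classical closed-form subspace alignment: project source data into a
    basis aligned with the target subspace. Xs, Xt: (n_samples, n_features),
    with k no larger than min(n_samples, n_features) of either domain."""
    # Top-k principal directions of each domain (columns are components).
    Ps = np.linalg.svd(Xs - Xs.mean(0), full_matrices=False)[2][:k].T
    Pt = np.linalg.svd(Xt - Xt.mean(0), full_matrices=False)[2][:k].T
    M = Ps.T @ Pt                           # alignment matrix, obtained in closed form
    Xs_aligned = (Xs - Xs.mean(0)) @ Ps @ M
    Xt_proj = (Xt - Xt.mean(0)) @ Pt
    return Xs_aligned, Xt_proj              # comparable spaces: train on source, test on target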

Explainable AI has emerged to be a key component for black-box machine learning approaches in domains with a high demand for reliability or transparency. Examples are medical assistant systems, and applications concerned with the General Data Protection Regulation of the European Union, which features transparency as a cornerstone. Such demands require the ability to audit the rationale behind a classifier’s decision. While visualizations are the de facto standard of explanations, they fall short in terms of expressiveness in many ways: They cannot distinguish between different attribute manifestations of visual features (e.g. eye open vs. closed), and they cannot accurately describe the influence of the absence of, and relations between, features. An alternative would be more expressive symbolic surrogate models. However, these require symbolic inputs, which are not readily available in most computer vision tasks. In this paper we investigate how to overcome this: We use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN). The semantics of the features are mined by a concept analysis approach trained on a set of human-understandable visual concepts. The explanation is found by an Inductive Logic Programming (ILP) method and presented as first-order rules. We show that our explanation is faithful to the original black-box model.
The code for our experiments is available at this https URL.

Many problems from industrial applications and AI can be encoded as Maximum Satisfiability (MaxSAT). Often, it is more desirable to produce practicable results in very short time compared to optimal solutions after an arbitrarily long computation time. In this paper, we propose Stable Resolving (SR), a novel randomized local search heuristic for MaxSAT with that aim. SR works for both weighted and unweighted instances. Starting from a feasible initial solution, the algorithm repeatedly performs the three steps of perturbation, improvements and solution checking. In the perturbation, the search space is explored at the cost of possibly worsening the current solution. The local improvements work by repeatedly flipping signs of variables in over-satisfied clauses. Finally, the algorithm performs a solution checking in a simulated annealing fashion. We compare our approach to state-of-the-art MaxSAT solvers and show by numerical experiments on benchmark instances from the annual MaxSAT competition that SR performs comparably on average and is even the best solver for particular problem instances.
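A very rough sketch of the described perturb / improve / check loop for weighted MaxSAT could look like the following Python fragment; the flip counts, cooling schedule and other details are assumptions and this is not the authors' algorithm.

import math, random

def sat_lits(clause, assign):
    """Return the literals of a clause that are satisfied under the assignment."""
    return [lit for lit in clause if (lit > 0) == assign[abs(lit)]]

def local_search_sketch(clauses, weights, n_vars, steps=10000, temp=2.0):
    """Rough perturb / improve / check loop for weighted MaxSAT.
    clauses: lists of signed ints (DIMACS style), weights: per-clause weights."""
    def cost(a):
        return sum(w for cl, w in zip(clauses, weights) if not sat_lits(cl, a))

    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    best, best_cost = dict(assign), cost(assign)
    for step in range(1, steps + 1):
        cand = dict(assign)
        # 1) Perturbation: a few random flips, possibly worsening the solution.
        for v in random.sample(range(1, n_vars + 1), k=min(3, n_vars)):
            cand[v] = not cand[v]
        # 2) Improvement: flip a variable of an over-satisfied clause; a clause
        #    satisfied by >= 2 literals stays satisfied, while the flip may
        #    satisfy other, currently unsatisfied clauses.
        over = [cl for cl in clauses if len(sat_lits(cl, cand)) >= 2]
        if over:
            lit = random.choice(sat_lits(random.choice(over), cand))
            cand[abs(lit)] = not cand[abs(lit)]
        # 3) Solution checking in a simulated-annealing fashion.
        old_cost, new_cost = cost(assign), cost(cand)
        t = temp / step
        if new_cost <= old_cost or random.random() < math.exp((old_cost - new_cost) / t):
            assign = cand
            if new_cost < best_cost:
                best, best_cost = dict(cand), new_cost
    return best, best_cost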

As Europe sees its population aging dramatically, Assisted Daily Living for the elderly becomes a more and more important and relevant research topic. The Movecare Project focuses on this topic by integrating a robotic platform, an IoT system, and an activity center to provide assistance, suggestions of activities and transparent monitoring to users at home. In this paper, we describe the Virtual Caregiver, a software component of the Movecare platform, that is responsible for analyzing the data from the various modules and generating suggestions tailored to the user’s state and needs. A preliminary study has been carried out over 2 months with 15 users. This study suggests that the presence of the Virtual Caregiver encourages people to use the Movecare platform more consistently, which in turn could result in better monitoring and prevention of cognitive and physical decline.

In multiagent organizations, the coordination of problem-solving capabilities builds the foundation for processing complex tasks. Roles provide a structured approach to consolidate task-processing responsibilities. However, designing roles remains a challenge since role configurations affect individual and team performance. On the one hand, roles can be specialized on certain tasks to allow for efficient problem solving. On the other hand, this reduces task processing flexibility in case of disturbances. As agents gain experience knowledge by enacting certain roles, switching roles becomes difficult and requires training. Hence, this paper explores the effects of different role designs on learning agents at runtime. We utilize an adaptive Belief-Desire-Intention agent architecture combined with a reinforcement learning approach to model experience knowledge, task-processing improvement, and decision-making in a stochastic environment. The model is evaluated using an emergency response simulation in which agents manage fire departments for which they configure and control emergency operations. The results show that specialized agents learn to process their assigned tasks more efficiently than generalized agents.

Descriptor revision by Hansson is a framework for addressing the problem of belief change. In descriptor revision, different kinds of change processes are dealt with in a joint framework. Individual change requirements are qualified by specific success conditions expressed by a belief descriptor, and belief descriptors can be combined by logical connectives. This is in contrast to the currently dominating AGM paradigm shaped by Alchourrón, Gärdenfors, and Makinson, where different kinds of changes, like a revision or a contraction, are dealt with separately. In this article, we investigate the realisation of descriptor revision for a conditional logic while restricting descriptors to the conjunction of literal descriptors. We apply the principle of conditional preservation developed by Kern-Isberner to descriptor revision for conditionals, show how descriptor revision for conditionals under these restrictions can be characterised by a constraint satisfaction problem, and implement it using constraint logic programming. Since our conditional logic subsumes propositional logic, our approach also realises descriptor revision for propositional logic.

Multi-agent path finding with continuous movements and time (denoted MAPF_R) is addressed. The task is to navigate agents that move smoothly between predefined positions to their individual goals so that they do not collide. Recently a novel solving approach for obtaining makespan-optimal solutions called SMT-CBS_R, based on satisfiability modulo theories (SMT), has been introduced. We extend the approach further towards the sum-of-costs objective, which is a more challenging case in the yes/no SMT environment due to the more complex calculation of the objective. The new algorithm combines collision resolution known from conflict-based search (CBS) with the previous generation of incomplete SAT encodings on top of a novel scheme for selecting decision variables in a potentially uncountable search space. We experimentally compare SMT-CBS_R and the previous CCBS (continuous conflict-based search) algorithm for MAPF_R.

Abstracts of Pre-published Papers

This is an extended abstract of the paper “Cone Semantics for Logics with Negation” to be published in the proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020).

The Databionic swarm (DBS) is a flexible and robust clustering framework that consists of three independent modules: swarm-based projection, high-dimensional data visualization, and representation-guided clustering. The first module is the parameter-free projection method Pswarm, which exploits concepts of self-organization and emergence, game theory, and swarm intelligence. The second module is a parameter-free high-dimensional data visualization technique called topographic map. It uses the generalized U-matrix, which makes it possible to estimate, first, whether any cluster tendency exists and, second, the number of clusters. The third module offers a clustering method that can be verified by the visualization and vice versa. Benchmarking w.r.t. conventional algorithms demonstrated that DBS can outperform them. Several applications showed that cluster structures provided by DBS are meaningful. This article is an abstract of Swarm Intelligence for Self-Organized Clustering [1].

The purpose of image restoration is to recover the original state of damaged images. To overcome the disadvantages of the traditional, manual image restoration process, like the high time consumption and required domain knowledge, automatic inpainting methods have been developed. These methods, however, can have limitations for complex images and may require a lot of input data. To mitigate these issues, we present “interactive Deep Image Prior”, a combination of manual and automated, Deep-Image-Prior-based restoration in the form of an interactive process with the human in the loop. In this process a human can iteratively embed knowledge to provide guidance and control for the automated inpainting process. For this purpose, we extended Deep Image Prior with a user interface which we subsequently analyzed in a user study. Our key question is whether the interactivity increases the restoration quality subjectively and objectively. Secondarily, we were also interested in how such a collaborative system is perceived by users.

Our evaluation shows that, even with very little human guidance, our interactive approach has a restoration performance on par or superior to other methods. Meanwhile, very positive results of our user study suggest that learning systems with the human-in-the-loop positively contribute to user satisfaction. We therefore conclude that an interactive, cooperative approach is a viable option for image restoration and potentially other ML tasks where human knowledge can be a correcting or guiding influence.

Technical Contributions

Future intelligent autonomous systems (IAS) will inevitably have to decide on moral and legal questions, e.g. in self-driving cars, health care or human-machine collaboration. As decision processes in most modern sub-symbolic IAS are hidden, the simple political plea for transparency, accountability and governance falls short. A sound ecosystem of trust requires ways for IAS to autonomously justify their actions, that is, to learn giving and taking reasons for their decisions. Building on social reasoning models in moral psychology and legal philosophy, such an idea of “Reasonable Machines” requires novel, hybrid reasoning tools, ethico-legal ontologies and associated argumentation technology. Enabling machines to engage in normative communication creates trust and opens new dimensions of AI application and human-machine interaction.

Humans are capable of recognizing intentions by solely observing another agent’s actions. Hence, in a cooperative planning task, i.e., where all agents aim for all other agents to reach their respective goals, communication or a central planning instance is, to some extent, not necessary. In epistemic planning, a recent line of research investigates multi-agent planning problems (MAPF) with goal uncertainty. In this paper, we propose and analyze a round-based variation of this problem, where each agent moves or waits in each round. We show that simple heuristics from cognition can, in some cases, outperform an adapted formal approach in computation time and solve some new instances. Implications are discussed.

In the financial sector, a reliable forecast of the future financial performance of a company is of great importance for investors’ investment decisions. In this paper we compare long short-term memory (LSTM) networks to temporal convolutional networks (TCNs) in the prediction of future earnings per share (EPS). The experimental analysis is based on quarterly financial reporting data and daily stock market returns. For a broad sample of US firms, we find that LSTMs outperform the naive persistence model with up to 30.0% more accurate predictions, while TCNs achieve an improvement of 30.8%. Both types of networks are at least as accurate as analysts and exceed them by up to 12.2% (LSTM) and 13.2% (TCN).
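As a minimal illustration of the sequence-regression setup (not the authors' exact architecture, features or hyperparameters), an LSTM-based EPS forecaster in PyTorch might look like this:

import torch
import torch.nn as nn

class EPSForecaster(nn.Module):
    """Minimal LSTM regressor: a sequence of past quarterly features
    (e.g. fundamentals plus aggregated stock returns) -> next-quarter EPS."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, quarters, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # use the last time step for the forecast

# The naive persistence baseline mentioned above simply predicts
# next_eps = last observed EPS, against which the learned models are compared.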

A crucial part of recommender systems is to model the user’s preference based on her previous interactions. Different neural networks (e.g., recurrent neural networks) that predict the next item solely based on the sequence of interactions have been successfully applied to sequential recommendation. Recently, BERT4Rec has been proposed, which adapts the BERT architecture based on the Transformer model and training methods used in the Neural Language Modeling community to this task. However, BERT4Rec still only relies on item identifiers to model the user preference, ignoring other sources of information. Therefore, as a first step to include additional information, we propose KeBERT4Rec, a modification of BERT4Rec, which utilizes keyword descriptions of items. We compare two variants for adding keywords to the model on two datasets, a Movielens dataset and a dataset of an online fashion store. First results show that both versions of our model improve the sequential recommendation task compared to BERT4Rec.
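One plausible way to fuse keyword descriptions with item identifiers, similar in spirit to what is described but not necessarily one of the paper's two variants, is to add a keyword-bag embedding to the item embedding before the Transformer encoder:

import torch
import torch.nn as nn

class ItemWithKeywordsEmbedding(nn.Module):
    """Fuse an item-ID embedding with a keyword-bag embedding for one interaction
    sequence (unbatched, for simplicity) before the Transformer encoder."""
    def __init__(self, n_items, n_keywords, dim=256):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.kw_emb = nn.EmbeddingBag(n_keywords, dim, mode="mean")

    def forward(self, item_ids, keyword_ids, keyword_offsets):
        # item_ids: (seq_len,); keyword_ids: flattened keyword indices of all items;
        # keyword_offsets: (seq_len,) start position of each item's keywords.
        e_item = self.item_emb(item_ids)
        e_kw = self.kw_emb(keyword_ids, keyword_offsets)
        return e_item + e_kw   # addition is one option; concatenation is another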

Different approaches have been investigated for the modelling of real-world situations, especially in the medical field, many of which are based on probabilities or other numerical parameters. In this paper, we show how real-world situations from the biomedical domain can be conveniently modelled with qualitative conditionals by presenting three case studies: modelling the classification of certain mammals, modelling infections with the malaria pathogen, and predicting the outcome of chronic myeloid leukaemia. We demonstrate that the knowledge to be modelled can be expressed directly and declaratively using qualitative conditional logic. For instance, it is straightforward to handle exceptions to a general rule as conditionals support nonmonotonic reasoning. Each of the knowledge bases is evaluated with example queries and with respect to different inference mechanisms that have been proposed for conditional knowledge, including p-entailment, system Z, and various inference relations based on c-representations. Comparing the obtained inference results with the answers expected from human experts demonstrates the feasibility of the modelling approach and also provides an empirical evaluation of the employed nonmonotonic inference relations in realistic application scenarios.

We advocate the use of conformal prediction (CP) to enhance rule-based multi-label classification (MLC). In particular, we highlight the mutual benefit of CP and rule learning: Rules have the ability to provide natural (non-)conformity scores, which are required by CP, while CP suggests a way to calibrate the assessment of candidate rules, thereby supporting better predictions and more elaborate decision making. We illustrate the potential usefulness of calibrated conformity scores in a case study on lazy multi-label rule learning.
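The basic inductive conformal step that such rule-derived (non-)conformity scores would feed into can be sketched in a few lines; this is the standard p-value construction, shown for a single label, and purely illustrative.

import numpy as np

def conformal_label_decision(nonconf_cal, nonconf_new, alpha=0.1):
    """Toy inductive conformal step for one label of a multi-label problem.
    nonconf_cal: nonconformity scores of calibration examples for this label
    (e.g. derived from the firing rules, as suggested above);
    nonconf_new: score of the new example. Returns True if the label is
    included in the prediction set at confidence level 1 - alpha."""
    n = len(nonconf_cal)
    p_value = (np.sum(np.asarray(nonconf_cal) >= nonconf_new) + 1) / (n + 1)
    return p_value > alpha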

The performance of solving a constraint problem can often be improved by converting a subproblem into a single regular constraint. We describe a new approach to optimize constraint satisfaction (optimization) problems using constraint transformations from different kinds of global constraints to regular constraints, and their combination. Our transformation approach has two aims: 1. to remove redundancy originating from semantically overlapping constraints over shared variables and 2. to remove origins of backtracks in the search during the solution process. Based on the case study of the Warehouse Location Problem we show that our new approach yields a significant speed-up.

Knowledge graphs, which model relationships between entities, provide a rich and structured source of information. Currently, search engines aim to enrich their search results by structured summaries, e.g., obtained from knowledge graphs, that provide further information on the entity of interest. While single entity summaries are available already, summaries on the relations between multiple entities have not been studied in detail so far. Such queries can be understood as a pathfinding problem. However, the large size of public knowledge graphs, such as Wikidata, as well as the large indegree of its major entities, and the problem of concept drift impose major challenges for standard search algorithms in this context.
In this paper, we propose a bidirectional pathfinding approach for directed knowledge graphs that uses the semantic distance between entity labels, which is approximated using word vectors, as a search heuristic in a parameterized A*-like evaluation function in order to find meaningful paths between two entities quickly. We evaluate our approach using different parameters against a set of selected within- and cross-domain queries. The results indicate that our approach generally needs to explore fewer entities compared to its uninformed counterpart and qualitatively yields more meaningful paths.
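A compact sketch of the heuristic part, shown unidirectionally for brevity and with cosine distance over pre-trained word vectors as the assumed semantic distance, could look as follows (not the authors' implementation):

import heapq
import numpy as np

def semantic_distance(label_a, label_b, vectors):
    """Cosine distance between (pre-trained) word vectors of two entity labels."""
    a, b = vectors[label_a], vectors[label_b]
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def informed_path_search(graph, start, goal, vectors, w=1.0):
    """A*-like search over a directed knowledge graph. graph maps an entity to
    its outgoing neighbours. f = g + w * semantic distance to the goal;
    with w = 0 this degrades to uninformed uniform-cost search."""
    frontier = [(0.0, 0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                h = semantic_distance(nxt, goal, vectors)
                heapq.heappush(frontier, (g + 1 + w * h, g + 1, nxt, path + [nxt]))
    return None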

In this work, we propose a new approach to automatically predict the locations of visual dermoscopic attributes for Task 2 of the ISIC 2018 Challenge. Our method is based on the Attention U-Net with multi-scale images as input. We apply a new strategy based on transfer learning, i.e., training the deep network for feature extraction by adapting the weights of the network trained for segmentation. Our tests show that, first, the proposed algorithm is on par with or outperforms the best ISIC 2018 architectures (LeHealth and NMN) in the extraction of two visual features. Secondly, it uses only 1/30 of the training parameters; we observed lower computation and memory requirements, which are particularly useful for future implementations on mobile devices. Finally, our approach generates visually explainable behaviour with uncertainty estimations to help doctors in diagnosis and treatment decisions.
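The transfer-learning step of adapting segmentation weights for feature extraction can be illustrated generically as below; AttributeNet and the checkpoint file name are hypothetical placeholders, and this is not the authors' training code.

import torch
import torch.nn as nn

class AttributeNet(nn.Module):
    """Hypothetical attribute-localization net sharing the encoder layout of a
    previously trained segmentation network; only a placeholder here."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 5, 1)   # e.g. five dermoscopic attribute maps

    def forward(self, x):
        return self.head(self.encoder(x))

model = AttributeNet()
# Weights of the (hypothetical) segmentation network trained beforehand.
seg_state = torch.load("unet_segmentation.pt", map_location="cpu")
msd = model.state_dict()
# Copy every weight whose name and shape match the shared encoder;
# the new attribute head keeps its random initialisation.
compatible = {k: v for k, v in seg_state.items()
              if k in msd and v.shape == msd[k].shape}
model.load_state_dict(compatible, strict=False)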

Deep learning is moving more and more from the cloud towards the edge. Therefore, embedded devices are needed that are reasonably cheap, energy-efficient and fast enough. In this paper we evaluate the performance and energy consumption of popular, off-the-shelf commercial devices for deep learning inference. We compare the Intel Neural Compute Stick 2, the Google Coral Edge TPU and the Nvidia Jetson Nano with the Raspberry Pi 4 for their suitability as a central controller in an autonomous vehicle for the Formula Student Driverless competition.

We consider the problem of learning to choose from a given set of objects, where each object is represented by a feature vector. Traditional approaches in choice modelling are mainly based on learning a latent, real-valued utility function, thereby inducing a linear order on choice alternatives. While this approach is suitable for discrete (top-1) choices, it is not straightforward how to use it for subset choices. Instead of mapping choice alternatives to the real number line, we propose to embed them into a higher-dimensional utility space, in which we identify choice sets with Pareto-optimal points. To this end, we propose a learning algorithm that minimizes a differentiable loss function suitable for this task. We demonstrate the feasibility of learning a Pareto-embedding on a suite of benchmark datasets.

Speech-based robot instruction is a promising field in private households and in small and medium-sized enterprises. It facilitates the use of robot systems for experts as well as non-experts, especially while the user executes other tasks. Besides possible verbal ambiguities and uncertainties, it has to be considered that the user may have no knowledge about the robot’s capabilities. This can lead to faulty performances or even damage beyond repair, which leads to a loss of trust in the robot. We present a framework which validates verbally instructed, force-based robot motions using a physics simulation. This prevents faulty performances and allows the generation of motions even with exceptional outcomes. As a proof of concept the framework is applied to a household use-case and the results are discussed.

Classifying stress in firefighters poses challenges, such as accurate personalized labeling, unobtrusive recording, and training of adequate models. Acquisition of labeled data and verification in cage mazes or during hot trainings is time-consuming. Virtual Reality (VR) and Internet of Things (IoT) wearables provide new opportunities to create better stressors for firefighter missions through an immersive simulation. In this demo, we present a VR-based setup that makes it possible to simulate firefighter missions in order to trigger and more easily record specific stress levels. The goal is to create labeled datasets for personalized multilevel stress detection models that include multiple biosignals, such as heart rate variability from electrocardiographic RR intervals. The multi-level stress setups can be configured, consisting of different levels of mental stressors. The demo shows how we established the recording of a baseline and virtual missions with varying challenge levels to create a personalized stress calibration.
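As a small, standard example of deriving a heart-rate-variability feature from RR intervals (RMSSD is one common time-domain choice; the demo may use different measures):

import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences, a common time-domain
    heart-rate-variability measure computed from RR intervals (in ms)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Example: rmssd([812, 790, 805, 830, 798]) -> variability in milliseconds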

As individual sub-fields of AI become more developed, it becomes increasingly important to study their integration into complex systems. In this paper, we provide a first look at the AI Domain Definition Language (AIDDL) as an attempt to offer a common ground for modeling problems, data, solutions, and their integration across all branches of AI in a common language. We look at three examples of how automated planning can be integrated with learning and reasoning.

Abstract of a Pre-published Paper

In this paper we combine the theory of probability aggregation with results of machine learning theory concerning the optimality of predictions under expert advice. In probability aggregation theory, several characterisation results for linear aggregation exist. However, in linear aggregation weights are not fixed, but free parameters. We show how fixing such weights by success-based scores allows for transferring the mentioned optimality results to the case of probability aggregation.
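A toy version of linear pooling with weights fixed by success-based scores (the inverse-Brier weighting used here is an illustrative assumption, not the paper's scoring rule) could look like this:

import numpy as np

def linear_pool(forecasts, scores):
    """Linear (weighted-average) probability aggregation in which the weights
    are fixed by success-based scores, e.g. inverse cumulative Brier scores
    of the individual experts."""
    weights = 1.0 / (np.asarray(scores, dtype=float) + 1e-12)
    weights /= weights.sum()                  # normalise to sum to one
    return float(np.dot(weights, forecasts))  # aggregated probability

# e.g. linear_pool([0.7, 0.4, 0.9], scores=[0.10, 0.25, 0.05])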
