KI 2013
In this article, we describe the proceedings of the 36th German Conference on Artificial Intelligence (KI 2013), held at the University of Koblenz, Germany, from September 16-20, 2013.
Started in 1975 as the German Workshop on AI (GWAI), this German Conference on Artificial Intelligence is the leading forum for artificial intelligence research in Germany and is attended by many international guests. The conference traditionally brings together academic and industrial researchers from all areas of artificial intelligence. It is organized by the Technical Committee on Artificial Intelligence of the German Informatics Society (Fachbereich Künstliche Intelligenz der Gesellschaft für Informatik e.V.). Alongside KI 2013, the 43rd German Informatics Conference (Informatik 2013), the 11th German Conference on Multi-Agent System Technologies (MATES 2013) co-located with the 4th Joint Agent Workshops in Synergy (JAWS), and five other joint conferences were held. Together, they provided an excellent basis for interesting discussions and information exchange within the AI community and with other communities.
Over the years, Artificial Intelligence has become a major field in German computer science, with numerous successful projects and applications. Artificial intelligence applications and methods have influenced many areas and research fields, including business informatics, logistics, eHuman, finance, cognitive science, and medicine. These applications have been made feasible by sophisticated theoretical and methodological efforts and successes in the German AI community. The theme of KI 2013 was therefore “From Research to Innovation to Practical Applications”.
For KI 2013, 70 submissions were received, and the International Program Committee conditionally accepted 24 as full papers and 8 as short papers (posters), for an acceptance rate of 46%. Each submission received at least three reviews, and the members of the program committee devoted a great deal of effort to discussing the submissions. The papers covered a wide range of topics: agents, robotics, cognitive science, machine learning, swarm intelligence, planning, knowledge modeling, reasoning, and ontology.
Abstracts of the invited talk and the accepted contributions are given below.
We can currently see the rapid formation of an exciting multidisciplinary field focusing on the application of biological principles and mechanisms to develop autonomous systems – software agents and robots – that act highly flexibly and robustly in the face of environmental contingency and uncertainty. In this talk I will give an overview of various aspects of this field. The state of the art will be illustrated with diverse examples of bio-inspired approaches to system adaptivity, functional and structural optimization, collective and swarm behavior, locomotion, sensor-motor control, and (co)evolution. A focus will be on representative work on biologically inspired autonomous systems done at the Swarmlab of Maastricht University, including recent research motivated by the behavior of social insects such as bees and ants.
We present a technique which allows partial-order causal-link (POCL) planning systems to use heuristics known from state-based planning to guide their search.
The technique encodes a given partially ordered partial plan as a new classical planning problem that yields the same set of solutions reachable from the given partial plan. As the heuristic estimate for the given partial plan, a state-based heuristic can then be used to estimate the goal distance of the initial state in the encoded problem. This technique also provides the first admissible heuristics for POCL planning, simply by using admissible heuristics from state-based planning. To show the potential of our technique, we conducted experiments in which we compared two of the currently strongest heuristics from state-based planning with two of the currently best-informed heuristics from POCL planning.
The problem of clustering workflows is a relatively new research area of increasing importance as the number and size of workflow repositories grow. It can be useful as a method to analyze the workflow assets accumulated in a repository in order to get an overview of its content and to ease navigation. In this paper, we investigate workflow clustering by adapting two traditional clustering algorithms (k-medoid and AGNES) for workflow clustering. Clustering is guided by a semantic similarity measure for workflows, originally developed in the context of case-based reasoning. Further, a case study is presented that evaluates the two algorithms on a repository containing cooking workflows automatically extracted from an Internet source.
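To make the general setup concrete, the following Python sketch shows a plain k-medoids loop driven by an arbitrary distance function, which is where a semantic workflow similarity could be plugged in. The toy data, the distance function, and all parameters are illustrative assumptions, not the adapted algorithm from the paper.

import random

def k_medoids(items, distance, k, iterations=100, seed=0):
    """Simple k-medoids clustering driven by an arbitrary distance function.

    distance(a, b) can wrap any (dis)similarity measure, e.g. a semantic
    workflow similarity converted into a distance. Generic sketch only.
    """
    rng = random.Random(seed)
    medoids = rng.sample(items, k)
    for _ in range(iterations):
        # assignment step: attach every item to its closest medoid
        clusters = {m: [] for m in medoids}
        for x in items:
            closest = min(medoids, key=lambda m: distance(x, m))
            clusters[closest].append(x)
        # update step: pick the member minimizing the intra-cluster distance sum
        new_medoids = []
        for m, members in clusters.items():
            best = min(members, key=lambda c: sum(distance(c, o) for o in members))
            new_medoids.append(best)
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return clusters

# toy usage with a trivial numeric distance standing in for workflow similarity
data = [1, 2, 3, 10, 11, 12, 25, 26]
print(k_medoids(data, lambda a, b: abs(a - b), k=3))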
Endowing artificial agents with the ability to empathize is believed to enhance their social behavior and to make them more likable, trustworthy, and caring. Neuropsychological findings substantiate that empathy occurs to different degrees depending on several factors including, among others, a person’s mood, personality, and social relationships with others. Although there is increasing interest in endowing artificial agents with affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such factors in influencing their empathic behavior. In this paper, we present a computational model of empathy which allows a virtual human to exhibit different degrees of empathy. The presented model is based on psychological models of empathy and is applied and evaluated in the context of a conversational agent scenario.
Responding to corresponding research calls, I experimentally investigate whether a higher level of artificial intelligence support leads to a lower user cognitive workload. Applying eye-tracking technology, I show how the user’s cognitive workload can be measured more objectively by capturing eye movements and pupillary responses. Within a laboratory environment that adequately reflects a realistic working situation, the probands use two distinct systems with similar user interfaces but very different levels of artificial intelligence support. Recording and analyzing objective eye-tracking data (i.e., pupillary diameter mean, pupillary diameter deviation, number of gaze fixations, and eye saccade speed of both left and right eyes) – all indicating cognitive workload – I found significant systematic cognitive workload differences between the two test systems. My results indicate that higher AI support leads to a lower user cognitive workload.
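As an illustration of the kind of indicators mentioned in the abstract, the following Python sketch computes pupillary diameter mean and deviation plus crude fixation and saccade statistics from a handful of synthetic gaze samples. The arrays, sampling rate, and the saccade threshold are assumptions for illustration, not the study's recording setup.

import numpy as np

SAMPLE_RATE_HZ = 60  # assumed sampling rate of the eye tracker
pupil_diameter = np.array([3.1, 3.2, 3.4, 3.3, 3.6, 3.5])       # mm, one eye
gaze_x = np.array([512, 514, 516, 700, 702, 701], dtype=float)  # px
gaze_y = np.array([384, 385, 383, 400, 401, 399], dtype=float)  # px

# workload-related indicators named in the abstract
pupil_mean = pupil_diameter.mean()        # pupillary diameter mean
pupil_std = pupil_diameter.std(ddof=1)    # pupillary diameter deviation

# crude fixation/saccade split: large gaze jumps between samples count as saccades
step = np.hypot(np.diff(gaze_x), np.diff(gaze_y))   # px per sample
saccade = step > 50                                  # threshold is an assumption
# count fixations as maximal runs of small gaze displacement
n_fixations = int(np.sum(~saccade & np.concatenate(([True], saccade[:-1]))))
saccade_speed = (step[saccade] * SAMPLE_RATE_HZ).mean() if saccade.any() else 0.0

print(pupil_mean, pupil_std, n_fixations, saccade_speed)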
Description Logics (DLs) are a family of knowledge representation formalisms that provide the theoretical basis for the standard web ontology language OWL. Generalization services like the least common subsumer (lcs) and the most specific concept (msc) are the basis of several ontology design methods and form the core of similarity measures. For the DL ELOR, which covers most of the OWL 2 EL profile, the lcs and msc need not exist in general, but they always exist if restricted to a given role-depth. We present algorithms that compute these role-depth bounded generalizations. Our method is easy to implement, as it is based on the polynomial-time completion algorithm for ELOR.
Formula simplification is important for the performance of SAT solvers. However, when applied until completion, powerful preprocessing techniques like variable elimination can be very time consuming. Therefore, these techniques are usually used with a resource limit. Although there has been much research on parallel SAT solving, no attention has been given to parallel preprocessing. In this paper we show how the preprocessing techniques subsumption, clause strengthening, and variable elimination can be parallelized. For this task either a high-level variable-graph formula partitioning or a fine-grained locking scheme can be used. By choosing the latter and enforcing clauses to be ordered, we obtain powerful parallel simplification algorithms. Especially for long preprocessing times, parallelization is beneficial and helps MINISAT to solve 11% more instances of recent competition benchmarks.
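For readers unfamiliar with two of the simplification rules, the following Python sketch shows subsumption and clause strengthening (self-subsuming resolution) on clauses represented as sets of integer literals. It illustrates the rules only and says nothing about the parallel locking scheme or the MINISAT integration.

def subsumes(c, d):
    """Clause c subsumes clause d if every literal of c occurs in d.

    Clauses are sets of integer literals in DIMACS style, with negative
    numbers denoting negated variables. Generic illustration only.
    """
    return c <= d

def strengthen(c, d):
    """Self-subsuming resolution: if c without literal l is contained in d
    and the complement of l occurs in d, that complement can be removed
    from d. Returns the (possibly) strengthened clause."""
    for lit in c:
        if -lit in d and (c - {lit}) <= d:
            return d - {-lit}
    return d

c = {1, 2}
d = {1, 2, 3}
e = {-1, 2, 3}
print(subsumes(c, d))    # True: d is redundant and can be removed
print(strengthen(c, e))  # {2, 3}: literal -1 removed from e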
The development and maintenance of traffic concepts in urban districts is expensive and leads to high investments for altering transport infrastructures or for the acquisition of new resources. We present an agent-based approach for modeling, simulation, evaluation, and optimization of public transport systems by introducing a dynamic microscopic model. Actors of varying stakeholders are represented by intelligent agents. While describing the inter-agent communication and their individual behaviors, the focus is on the implementation of information systems for traveler agents as well as on the matching between open source geographic information systems and standardized transport schedules provided by the Association of German Transport Companies. The performance, efficiency, and limitations of the system are evaluated within the public transport infrastructure of Bremen. We discuss the effects of passengers’ behaviors on the entire transport network and investigate the system’s flexibility as well as the consequences of incidents in travel plans.
Nowadays astronomical catalogs contain patterns of hundreds of millions of objects with data volumes in the terabyte range. Upcoming projects will gather such patterns for several billions of objects with peta- and exabytes of data. From a machine learning point of view, these settings often yield unsupervised, semi-supervised, or fully supervised tasks, with large training and huge test sets. Recent studies have demonstrated the effectiveness of prototype-based learning schemes such as simple nearest neighbor models. However, although they are among the most computationally efficient methods for such settings (if implemented via spatial data structures), applying these models to all remaining patterns in a given catalog can easily take hours or even days. In this work, we investigate the practical effectiveness of GPU-based approaches to accelerate such nearest neighbor queries in this context. Our experiments indicate that carefully tuned implementations of spatial search structures for such multi-core devices can significantly reduce the practical runtime. This renders the resulting frameworks an important algorithmic tool for current and upcoming data analyses in astronomy.
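The underlying operation is a nearest neighbor query accelerated by a spatial search structure. The following Python sketch shows the CPU baseline using SciPy's k-d tree on synthetic data; the GPU implementations studied in the paper are not reproduced here, and the data sizes are toy assumptions.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
train = rng.random((50_000, 3))    # labeled "training" patterns (features)
test = rng.random((200_000, 3))    # unlabeled catalog patterns to query

tree = cKDTree(train)              # spatial search structure (CPU k-d tree)
dist, idx = tree.query(test, k=1)  # nearest training pattern for each query
print(dist.mean(), idx[:5])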
The significant effect of parameter settings on the success of evolutionary optimization has led to a long history of research on parameter control, e.g., on mutation rates. However, few studies compare different tuning and control strategies under the same experimental conditions. The objective of this paper is to give a comprehensive and fundamental comparison of tuning and control techniques for mutation rates, employing the same algorithmic setting on a simple unimodal problem. After an analysis of various mutation rates for a (1+1)-EA on OneMax, we compare meta-evolution to Rechenberg’s 1/5th rule and self-adaptation.
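A minimal Python sketch of the baseline setting follows: a (1+1)-EA on OneMax whose bit-flip mutation rate is adapted with Rechenberg's 1/5th rule. The window size, adaptation factors, and bounds are illustrative assumptions rather than the paper's exact experimental configuration.

import random

def one_plus_one_ea(n=100, max_evals=20_000, seed=1):
    """(1+1)-EA on OneMax with Rechenberg's 1/5th success rule controlling
    the per-bit mutation rate. Sketch only, not the paper's setup."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)                   # OneMax: maximize the number of ones
    rate = 1.0 / n                     # initial per-bit mutation rate
    successes, window = 0, 50          # observation window for the 1/5th rule
    for evals in range(1, max_evals + 1):
        y = [1 - b if rng.random() < rate else b for b in x]  # bit-flip mutation
        fy = sum(y)
        if fy >= fitness:              # (1+1) selection: keep offspring if not worse
            if fy > fitness:
                successes += 1
            x, fitness = y, fy
        if evals % window == 0:        # 1/5th rule: adapt the mutation rate
            if successes / window > 0.2:
                rate = min(0.5, rate * 1.5)
            else:
                rate = max(1.0 / n, rate / 1.5)
            successes = 0
        if fitness == n:
            return evals               # evaluations until the optimum was found
    return max_evals

print(one_plus_one_ea())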
In this paper we provide methods for the Continuous Monitoring Problem with Inter-Depot routes (CMPID). It arises when a number of agents or vehicles have to persistently survey a set of locations. Each agent has limited energy storage (e.g., fuel tank or battery capacity) and can renew this resource at any available base station. Various real-world scenarios can be modeled with this formulation. In this paper we consider the application of this problem to disaster response management, where wide-area surveillance is performed by unmanned aerial vehicles. We propose a new method based on the Insertion Heuristic and the metaheuristic Variable Neighborhood Search. The proposed algorithm computes solutions for large real-life scenarios in a few seconds and iteratively improves them. Solutions obtained on small instances (where the optimum could be computed) are on average within 2.6% of the optimum. Furthermore, the proposed algorithm outperforms existing methods for the Continuous Monitoring Problem (CMP) in both solution quality (by a factor of three) and computational time (more than 400 times faster).
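The following Python sketch shows the cheapest-insertion building block in its generic form, repeatedly inserting an unrouted location at the position causing the smallest detour. The energy constraints, base stations, and the Variable Neighborhood Search phase of the proposed CMPID method are not modeled, and the distance matrix is a toy assumption.

def cheapest_insertion(route, candidates, dist):
    """Insertion heuristic: insert each unrouted location at the position
    with the smallest extra travel cost. Generic routing sketch only."""
    remaining = list(candidates)
    while remaining:
        best = None  # (extra_cost, location, insert_position)
        for loc in remaining:
            for i in range(len(route) - 1):
                a, b = route[i], route[i + 1]
                extra = dist[a][loc] + dist[loc][b] - dist[a][b]
                if best is None or extra < best[0]:
                    best = (extra, loc, i + 1)
        _, loc, pos = best
        route.insert(pos, loc)
        remaining.remove(loc)
    return route

# toy symmetric distance matrix: 0 is the depot, 1-3 are monitoring locations
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(cheapest_insertion([0, 0], [1, 2, 3], dist))  # closed tour from the depot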
Ontology-based query answering has to be supported w.r.t. secondary memory and very expressive ontologies to meet practical requirements in some applications. Recently, advances for the expressive DL SHI have been made in the dissertation of S. Wandelt for concept-based instance retrieval on Big Data descriptions stored in secondary memory. In this paper we extend this approach by investigating optimization algorithms for answering grounded conjunctive queries.
The use of in-vehicle information systems has increased in the past years. These systems assist the user but can also cause additional cognitive load. The study presented in this paper was carried out to enable workload estimation in order to adapt information and entertainment systems so that optimal driver performance and user experience are ensured. For this purpose, smartphone sensor data, situational factors, and basic user characteristics are taken into account. The study revealed that the driving situation, the gender of the user, and the frequency of driving significantly influence the user’s workload. Using only this information and smartphone sensor data, the current workload of the driver can be estimated with 86% accuracy.
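As a rough illustration of such a workload estimator, the following Python sketch trains a standard classifier on a synthetic feature matrix standing in for sensor data, situational factors, and user characteristics. The features, labels, and model choice are assumptions; the 86% figure refers to the study's real data, not this toy.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# synthetic stand-in: columns could represent acceleration variance,
# driving situation, gender, driving frequency, etc. (purely hypothetical)
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # stand-in low/high workload label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)         # cross-validated accuracy
print(scores.mean())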
Heuristic search is the dominant approach to classical planning. However, many realistic problems violate classical assumptions such as determinism of action outcomes or full observability. In this paper, we investigate how – and how successfully – a particular classical technique, namely informed search using an abstraction heuristic, can be transferred to nondeterministic planning under partial observability. Specifically, we explore pattern-database heuristics with automatically generated patterns in the context of informed progression search for strong cyclic planning under partial observability. To that end, we discuss projections and how belief states can be heuristically assessed either directly or by going back to the contained world states, and we empirically evaluate the resulting heuristics internally and against a delete-relaxation and a blind approach. From our experiments we conclude that, in terms of guidance, it is preferable to represent both nondeterminism and partial observability in the abstraction (instead of relaxing them), and that the resulting abstraction heuristics significantly outperform both blind search and a delete-relaxation approach where nondeterminism and partial observability are also relaxed.
Automated theorem provers (ATP) usually operate on finite input where all relevant axioms and conjectures are known at the start of the proof attempt. However, when a prover is embedded in a real-world knowledge representation application, it may have to draw upon data that is not immediately available in a local file, for example by accessing databases and online sources such as web services. This leads both to technical problems, such as latency times, and to formal problems regarding soundness and completeness. We have integrated external data sources into our ATP system E-KRHyper and into its underlying hyper tableaux calculus. In this paper we describe the modifications and discuss problems and solutions pertaining to the integration. We also present an application of this integration for the purpose of abductive query relaxation.
In many applications of constrained continuous black-box optimization, the evaluation of fitness and feasibility is expensive. Hence, reducing the number of constraint function calls remains a challenging research topic. In the past, various surrogate models have been proposed to address this issue. In this paper, a local surrogate model of feasibility for a self-adaptive evolution strategy is proposed, which is based on support vector classification and a pre-selection surrogate model management strategy. Negative side effects such as a deceleration of evolutionary convergence or feasibility stagnation are prevented with a control parameter. Additionally, self-adaptive mutation is extended by a surrogate-assisted alignment to support the evolutionary convergence. The experimental results show a significant reduction of constraint function calls and a positive effect on the convergence.
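A minimal Python sketch of a pre-selection feasibility surrogate based on support vector classification is given below. The constraint, the archive of evaluated points, and the offspring sampling are illustrative assumptions and omit the self-adaptive evolution strategy, the alignment, and the control parameter described in the paper.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def feasible(x):
    # true (expensive) constraint, here a toy unit-ball constraint
    return np.sum(x**2) <= 1.0

# archive of previously evaluated points used to train the local surrogate
X_arch = rng.normal(0.0, 1.0, size=(100, 2))
y_arch = np.array([feasible(x) for x in X_arch], dtype=int)

surrogate = SVC(kernel="rbf").fit(X_arch, y_arch)

# pre-selection: sample offspring, evaluate the real constraint only for those
# the surrogate classifies as feasible, saving constraint function calls
parent = np.array([0.2, 0.1])
offspring = parent + rng.normal(0.0, 0.3, size=(20, 2))
promising = offspring[surrogate.predict(offspring) == 1]
true_calls = [feasible(x) for x in promising]  # reduced number of real calls
print(len(offspring), len(promising), sum(true_calls))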
Coping with uncertain knowledge and changing beliefs is essential for reasoning in dynamic environments. We generalize an approach to adjust probabilistic belief states by use of the relative entropy in a propositional setting to relational languages. As a second contribution of this paper, we present a method to compute such belief changes by considering a dual problem, and we present first application and experimental results.
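In the propositional case, adjusting a belief state by relative entropy amounts to a small constrained optimization: the new distribution minimizes the relative entropy to the prior while satisfying the new probabilistic constraint. The following Python sketch solves a toy instance with SciPy; the numbers and the single constraint are assumptions, and the relational generalization of the paper is not covered.

import numpy as np
from scipy.optimize import minimize

p = np.array([0.4, 0.3, 0.2, 0.1])        # prior belief state over four worlds
A = np.array([[1.0, 1.0, 0.0, 0.0]])      # new evidence: P(world 0 or 1) = 0.9
b = np.array([0.9])

def rel_entropy(q):
    # relative entropy (KL divergence) of q with respect to the prior p
    q = np.clip(q, 1e-12, 1.0)
    return float(np.sum(q * np.log(q / p)))

constraints = [{"type": "eq", "fun": lambda q: q.sum() - 1.0},
               {"type": "eq", "fun": lambda q: A @ q - b}]
result = minimize(rel_entropy, p, bounds=[(0.0, 1.0)] * len(p),
                  constraints=constraints)
print(result.x)  # adjusted belief state closest to p in relative entropy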
In the single-agent case, general game playing and action planning are two related topics, so one might hope to use established planners to improve the handling of general single-player games. However, both come with their own description language, GDL and PDDL, respectively. In this paper we propose a way to translate single-player games described in GDL to PDDL planning tasks and provide an evaluation on a wide range of single-player games, comparing the efficiency of grounding and solving the games in the translated and in the original format.
In the next article, we will discuss KI 2014.