AAAI Classic Papers: Reviewing the Old to Learn the New in Artificial Intelligence Technology

Tags: Artificial Intelligence Technology, Semantic Web Technology, Knowledge Information Processing Technology, Reasoning Technology, Digital Transformation Technology, Machine Learning Technology, Collecting AI Conference Papers

The AAAI (Association for the Advancement of Artificial Intelligence), the organizer of one of the world's top conferences on artificial intelligence technology, has selected 30 outstanding papers from past conferences as AAAI Classic Papers. Their abstracts are translated below. These are not the machine learning papers that have been mainstream in recent years, and I think they are worth reading for new ideas that can be combined with current techniques.

1. Automatically Constructing a Dictionary for Information Extraction Tasks

Abstract: Knowledge-based NLP systems have been successful for some tasks, but they have been criticized for their reliance on domain-specific dictionaries and for the huge amount of knowledge engineering those dictionaries require. Because of this bottleneck, knowledge-based NLP systems cannot easily be scaled up or ported to new domains in real-world applications. To address this problem, we developed a system called AutoSlog that automatically constructs domain-specific dictionaries of concepts for extracting information from text. Using AutoSlog, we built a dictionary for the domain of terrorist event descriptions in just five person-hours. We then compared the AutoSlog dictionary with a hand-crafted dictionary created by two highly skilled graduate students in about 1,500 person-hours. The two dictionaries were evaluated on two blind test sets, each consisting of 100 texts. Overall, the AutoSlog dictionary achieved 98% of the performance of the hand-crafted dictionary. On the first test set, the AutoSlog dictionary achieved 96.3% of the hand-crafted dictionary's performance; on the second test set it achieved 99.7%, and in practical terms the scores were indistinguishable.

2. A filtering algorithm for constraints of difference in CSPs

Abstract: Many constraint satisfaction problems (CSPs) include constraints similar to the alldifferent constraint. Such constraints are called “constraints of difference.” They are defined on a subset of variables by a set of tuples in which the values occurring in the same tuple are all different. In this paper, we present a new filtering algorithm for these constraints. It achieves generalized arc consistency for these non-binary constraints, it is based on matching theory, and its complexity is low: for a constraint defined on a subset of p variables with domains of cardinality at most d, its space complexity is O(pd) and its time complexity is O(p²d²). This filtering algorithm has been used successfully in the RESYN system (Vismara et al. 1992) to solve the subgraph isomorphism problem.
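
The paper's algorithm builds one maximum matching and prunes values via strongly connected components; as a rough illustration of the matching idea only, here is a deliberately naive Python sketch that tests each variable-value pair for membership in some complete matching. The function names and the brute-force pruning loop are mine, not Régin's:

```python
def has_complete_matching(domains):
    """Check whether every variable can be matched to a distinct value,
    using simple augmenting-path bipartite matching (Kuhn's algorithm)."""
    match = {}  # value -> index of the variable currently holding it

    def try_assign(i, seen):
        for v in domains[i]:
            if v in seen:
                continue
            seen.add(v)
            # Value v is free, or its current owner can be re-matched elsewhere.
            if v not in match or try_assign(match[v], seen):
                match[v] = i
                return True
        return False

    return all(try_assign(i, set()) for i in range(len(domains)))

def alldifferent_prune(domains):
    """Naive GAC filtering for alldifferent: drop any value whose assignment
    cannot be extended to a complete matching. (Regin's AAAI-94 algorithm
    obtains the same pruning far more efficiently: one matching plus
    strongly connected components.)"""
    pruned = []
    for i, dom in enumerate(domains):
        keep = set()
        for v in dom:
            rest = [{v} if j == i else d - {v} for j, d in enumerate(domains)]
            if has_complete_matching(rest):
                keep.add(v)
        pruned.append(keep)
    return pruned

domains = [{1, 2}, {1, 2}, {1, 2, 3}]
print(alldifferent_prune(domains))  # the third variable keeps only {3}
```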

3. A Logic of Implicit and Explicit Belief

Abstract: As part of an ongoing project to understand the foundations of knowledge representation, we are attempting to characterize a kind of belief that forms a more adequate basis for knowledge representation systems than the usual possible-worlds formalization initiated by Hintikka. In this paper, we point out deficiencies in current semantic treatments of knowledge and belief (including the recently proposed syntactic approaches) and suggest a new analysis, in the form of a logic, that avoids these shortcomings and is computationally more viable.

4. A New Method for Solving Hard Satisfiability Problems

Abstract: In this paper, we introduce GSAT, a greedy local search procedure for solving propositional satisfiability problems. Experimental results show that GSAT can solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by traditional procedures such as Davis-Putnam or resolution. We also show that GSAT can solve structured satisfiability problems quickly; in particular, it solves encodings of graph coloring problems, N-queens, and Boolean induction. Furthermore, we discuss general strategies for applying GSAT and its limitations.
GSAT is best viewed as a model-finding procedure; its good performance suggests that it may pay off to reformulate reasoning tasks traditionally regarded as theorem-proving problems as model-finding tasks.
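
A minimal sketch of the GSAT loop as the abstract describes it: a random initial truth assignment, then greedy flips of whichever variable most increases the number of satisfied clauses, with restarts. The parameter values and clause encoding (literals as nonzero integers) are illustrative:

```python
import random

def gsat(clauses, n_vars, max_flips=10_000, max_tries=50):
    """GSAT sketch. Literal k means variable |k| is True if k > 0.
    A minimal illustration, not Selman et al.'s tuned implementation."""
    def num_satisfied(assign):
        return sum(any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_tries):
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(assign) == len(clauses):
                return assign                      # model found
            # Greedy step: flip the variable giving the most satisfied
            # clauses, breaking ties at random.
            best_vars, best_score = [], -1
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                s = num_satisfied(assign)
                assign[v] = not assign[v]
                if s > best_score:
                    best_vars, best_score = [v], s
                elif s == best_score:
                    best_vars.append(v)
            v = random.choice(best_vars)
            assign[v] = not assign[v]
    return None  # no model found within the flip/try budget

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(gsat([(1, 2), (-1, 3), (-2, -3)], 3))
```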

5. A Robust, Qualitative Method for Robot Spatial Learning

Abstract: We propose a qualitative method by which a mobile robot, exploring an unknown environment, can learn a map robustly in the face of the various errors that occur in the real world. While conventional methods represent the environment mainly in terms of geometric information, our method separately represents procedural knowledge for movement, a topological model of the structure of the environment, and geometric information for metrical accuracy. The topological model consists of distinctive places and local travel edges connecting nearby distinctive places. A distinctive place is defined as the local maximum of a distinctiveness measure over its immediate neighborhood, and is found by a hill-climbing search. A local travel edge is defined in terms of the local control strategy required to travel it. How to find a distinctive place and how to follow an edge is procedural knowledge that the robot acquires dynamically during the exploration phase and that guides it during the navigation phase. An accurate topological model is created by linking places and edges, onto which metrical information can then be attached, reducing vulnerability to metrical errors. We present simulations of NX, a robot equipped with range sensors, exploring a variety of 2D environments, and show that it succeeds under varying levels of random sensor error.

6. A Theory of Action for MultiAgent Planning

Abstract: A theory of action suitable for reasoning about events in multi-agent and dynamically changing environments is developed. A device called a process model is used to represent the observable behavior of an agent in performing an action. This model is more general than previous models of action and can represent sequencing, selection, nondeterminism, iteration, and parallelism. Using this model, we show how plans can be synthesized and how concurrency can be reasoned about. In particular, conditions are presented for determining whether concurrent actions are free from mutual interference. It is also shown that the theory provides a basis for understanding and reasoning about action sentences in both natural and programming languages.

7. A Variational Approach to Edge Detection

Abstract: The problem of detecting intensity changes in an image is a canonical one in vision. Edge detectors have previously been evaluated using criteria such as output signal-to-noise ratio and bandwidth. This paper describes an attempt to formulate a set of edge detection criteria that capture as directly as possible the desirable properties of an edge operator. Variational techniques are used to find solutions over the space of all possible functions. The first criterion is that the detector have a low probability of error; the second is that the marked points be as close as possible to the center of the true edge; the third is that there be a low probability of more than one response to a single edge. The third criterion is new, and was found to be necessary when operators designed using only the first two criteria exhibited excessive multiple responses. The edge model considered here is a one-dimensional step edge in white Gaussian noise, but the same technique is also applied to extended impulse and ridge profiles. The result is a one-dimensional operator that approximates the first derivative of a Gaussian; an extension to two dimensions is also discussed.
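
As a toy illustration of the paper's final result (not its variational derivation), the following sketch convolves a noisy 1-D signal with a first-derivative-of-Gaussian kernel and marks strong local extrema of the response; the σ and threshold values are arbitrary choices of mine:

```python
import numpy as np

def gaussian_derivative_edges(signal, sigma=2.0, threshold=0.1):
    """Mark edges as strong local extrema of the response to a
    first-derivative-of-Gaussian filter (1-D sketch)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    # d/dx of a unit-area Gaussian with standard deviation sigma.
    kernel = -x * np.exp(-x**2 / (2 * sigma**2)) / (sigma**3 * np.sqrt(2 * np.pi))
    response = np.convolve(signal, kernel, mode="same")
    edges = [i for i in range(1, len(response) - 1)
             if abs(response[i]) > threshold
             and abs(response[i]) >= abs(response[i - 1])
             and abs(response[i]) >= abs(response[i + 1])]
    return edges, response

# A noisy step edge at index 50.
sig = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
print(gaussian_derivative_edges(sig)[0])  # indices near 50
```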

8. Acting Optimally in Partially Observable Stochastic Domains

Abstract: This paper describes the partially observable Markov decision process (POMDP) approach to finding optimal or near-optimal control strategies for partially observable stochastic environments, given a complete model of the environment. The POMDP approach, originally developed in the operations research community, provides a formal basis for planning problems that are also of interest to the AI community. We found that the existing algorithms for computing optimal control strategies were highly computationally inefficient, and we developed a new algorithm that is empirically more efficient. We give an overview of this algorithm and present preliminary results on several small problems that illustrate important properties of the POMDP approach.
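
The paper's algorithmic contribution (an exact method for computing optimal policies over belief states) is too involved for a snippet, but the belief-state update at the heart of any POMDP controller is compact. A sketch with invented toy matrices:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One step of POMDP belief tracking:
    b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s).
    T[a] is the |S|x|S| transition matrix for action a;
    O[a][s'] is the observation distribution after reaching s'."""
    predicted = b @ T[a]                  # prediction from the motion model
    posterior = predicted * O[a][:, o]    # correction by the observation
    return posterior / posterior.sum()    # renormalize

# Toy 2-state problem: one action, a noisy binary sensor.
T = {0: np.array([[0.9, 0.1], [0.1, 0.9]])}
O = {0: np.array([[0.8, 0.2], [0.3, 0.7]])}   # O[a][s', o]
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1, T=T, O=O))   # belief shifts toward state 1
```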

9. Arc-Consistency and Arc-Consistency Again

Abstract: Constraint networks are an effective way to formulate problems such as design, scene labeling, temporal reasoning, and more recently, natural language analysis.
The problem of solution existence in constraint networks is NP-complete.
Hence, consistency techniques that simplify the constraint network before or during the search for a solution have been widely studied. The most used of these is arc consistency.
Mohr and Henderson proposed AC-4, an algorithm with optimal worst-case time complexity.
However, AC-4 has two drawbacks: its space complexity and its average time complexity.
For problems with many solutions, these drawbacks become so important when the constraints are large that users often replace AC-4 with AC-3 [Mac&Fre85], a non-optimal algorithm.
In this paper, we propose a new algorithm, AC-6, which keeps the optimal worst-case time complexity of AC-4 while eliminating its space complexity drawback. Moreover, the average time complexity of AC-6 is optimal for constraint networks in which nothing is known about the semantics of the constraints.
At the end of the paper, we give experimental results showing how much better AC-6 is than AC-3 and AC-4.
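
AC-6's one-support-per-value bookkeeping is intricate; for orientation, here is a sketch of the simpler AC-3 baseline that the paper compares against. The constraint representation (a predicate per directed arc) is my own illustrative choice:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 sketch. `domains` maps variable -> set of values; `constraints`
    maps an ordered pair (x, y) -> predicate saying whether (vx, vy) is
    allowed. AC-6 reaches the same fixpoint while storing only one
    support per value."""
    queue = deque(constraints)                # all directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # Remove values of x that have no support in y's domain.
        removed = {vx for vx in domains[x]
                   if not any(allowed(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False                  # domain wipeout: no solution
            # Revisit arcs pointing at x, since their supports may be gone.
            queue.extend((z, w) for (z, w) in constraints
                         if w == x and z != y)
    return True

doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
cons = {("a", "b"): lambda va, vb: va < vb,
        ("b", "a"): lambda vb, va: va < vb}
print(ac3(doms, cons), doms)   # a loses 3, b loses 1
```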

10. Bayesian Classification

Abstract: In this paper, we describe an unsupervised Bayesian classification method and its computer implementation, AutoClass. Given real-valued or discrete data, AutoClass determines the most probable number of classes present in the data, the most probable descriptions of those classes, and the probability that each object belongs to each class. The program performs as well as or better than other automatic classification systems when run on the same data, and it includes no ad hoc similarity measures or stopping criteria. AutoClass has been applied to several databases and has found classes that represent previously unsuspected phenomena.
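
AutoClass itself performs full Bayesian model comparison to choose the number of classes; as a hedged illustration of only the underlying mixture-model machinery, here is a minimal EM routine for a 1-D Gaussian mixture that produces the per-object class membership probabilities the abstract mentions:

```python
import numpy as np

def em_gaussian_mixture(x, k, iters=100, seed=0):
    """Minimal EM for a 1-D Gaussian mixture: the kind of machinery
    underlying AutoClass, which additionally compares models to pick
    the most probable number of classes. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    weights = np.full(k, 1.0 / k)
    means = rng.choice(x, size=k, replace=False)
    variances = np.full(k, x.var())
    for _ in range(iters):
        # E-step: probability that each point belongs to each class.
        dens = np.array([w * np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
                         for w, m, v in zip(weights, means, variances)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate parameters from the soft assignments.
        n = resp.sum(axis=1)
        weights = n / len(x)
        means = (resp * x).sum(axis=1) / n
        variances = (resp * (x - means[:, None]) ** 2).sum(axis=1) / n
    return weights, means, variances, resp

x = np.concatenate([np.random.normal(0, 1, 200), np.random.normal(6, 1, 100)])
w, m, v, resp = em_gaussian_mixture(x, k=2)
print(m)  # class means, near 0 and 6
```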

11. Boosting Combinatorial Search Through Randomization

Abstract: The unpredictability of the running time of complete search methods can often be explained by the phenomenon of “heavy-tailed cost distributions”: at any point in an experiment, there is a non-negligible probability of encountering a problem instance that requires exponentially more time than any instance encountered before (Gomes et al. 1998a). We present a general method for introducing controlled randomization into complete search algorithms.
The boosted search methods provably eliminate the heavy tail to the right of the median. Moreover, the heavy tail to the left of the median can be exploited to dramatically shorten the solution time, since very short runs occur with non-negligible probability. We demonstrate speedups of several orders of magnitude for state-of-the-art complete search methods on hard real-world problems.
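
The recipe itself is simple to state: run a randomized complete solver with a cutoff and restart on timeout. A sketch of the restart wrapper, where the cutoff value and the solver interface are invented stand-ins:

```python
import random

def restart_search(solve_with_cutoff, cutoff, max_restarts=1000):
    """Restart strategy sketch: run a randomized solver with a fixed
    budget; on timeout, restart with a fresh random seed. Short lucky
    runs in the heavy left tail end the search early, while the heavy
    right tail is truncated at every restart."""
    for attempt in range(max_restarts):
        result = solve_with_cutoff(seed=random.random(), budget=cutoff)
        if result is not None:           # solved within the budget
            return result, attempt
    return None, max_restarts

# Toy stand-in solver: succeeds quickly on 20% of random seeds.
def toy_solver(seed, budget):
    return "solution" if seed < 0.2 else None

print(restart_search(toy_solver, cutoff=100))
```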

12. Default Reasoning, Nonmonotonic Logics, and the Frame Problem

Abstract: Nonmonotonic formal systems have been proposed as extensions of classical first-order logic whose inference mechanisms are intended to capture the process of human “default reasoning” or “plausible inference,” just as modus ponens models deductive reasoning. Although the technical properties of these logics have been studied in detail and many examples of human default reasoning have been identified, for the most part these logics have not been applied to realistic problems to determine whether they produce the expected results. We provide axioms for a simple problem in temporal reasoning that has long been recognized as a case of default reasoning, and which can therefore be expressed in nonmonotonic logic, and we test the resulting nonmonotonic theories. It turns out that the inferences sanctioned by these logics are not what was intended when the axioms were written, but are instead much weaker. We show that the problem is independent of the particular logic used and of the particular representation of time. Analyzing this failure, we find that the nonmonotonic logics we considered are inherently incapable of representing this kind of default reasoning. Finally, we discuss two recent proposals for solving the problem.

13. Estimating the Absolute Position of a Mobile Robot Using Position Probability Grids

Abstract: To reuse existing models of the environment, a mobile robot must be able to estimate its own position and orientation within such a model. Most existing position estimation methods either are based on specialized sensors or aim to track the robot's position relative to a known starting point. In this paper, we describe the position probability grid approach to estimating the robot's absolute position and orientation in a metric model of the environment. The method is designed to work with standard sensors and does not rely on any knowledge of the starting position. It is a Bayesian approach based on certainty grids: each cell of such a grid stores the probability that the cell refers to the current position of the robot. These probabilities are obtained by integrating the likelihoods of sensor readings over time. Our results show that the technique reliably estimates the robot's position even in complex environments, and that it is robust with respect to inaccurate models of the environment and imprecise position estimates.
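
A 1-D caricature of one grid update may help: shift the belief according to a noisy motion model, then weight each cell by the likelihood of the sensor reading. The motion and sensor models below are invented, and the wrap-around boundary is a simplification:

```python
import numpy as np

def grid_localize(prior, motion, likelihood):
    """One update of a 1-D position probability grid: prediction under
    a commanded motion with some slip, then sensor correction."""
    # Prediction: robot moved `motion` cells, possibly slipping by one
    # (np.roll wraps around at the boundary, a toy simplification).
    predicted = (0.8 * np.roll(prior, motion)
                 + 0.1 * np.roll(prior, motion - 1)
                 + 0.1 * np.roll(prior, motion + 1))
    posterior = predicted * likelihood        # weight by sensor likelihood
    return posterior / posterior.sum()

n = 10
belief = np.full(n, 1.0 / n)                  # unknown start: uniform belief
# Sensor says "probably near cell 3" (e.g., a distinctive wall signature).
sensor = np.full(n, 0.05); sensor[3] = 0.6; sensor[2] = sensor[4] = 0.2
belief = grid_localize(belief, motion=1, likelihood=sensor)
print(belief.round(3))                        # mass concentrates near cell 3
```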

14. Hard and Easy Distributions of SAT Problems

Abstract: As others have observed, testing the satisfiability of random formulas often appears surprisingly easy. Here we show that by using appropriate instance distributions and parameter values, it is possible to generate random formulas that are very hard to test for satisfiability. The results provide a benchmark for evaluating satisfiability testing procedures.
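
A generator in the spirit of the paper's fixed clause-length model: each clause draws three distinct variables and negates each with probability 1/2, at the empirically hard clause-to-variable ratio (around 4.3 for random 3-SAT):

```python
import random

def random_3sat(n_vars, ratio=4.3, seed=None):
    """Generate a random 3-SAT formula near the hard clause/variable
    ratio. Literals are nonzero ints: -v means variable v negated."""
    rng = random.Random(seed)
    n_clauses = round(ratio * n_vars)
    formula = []
    for _ in range(n_clauses):
        vs = rng.sample(range(1, n_vars + 1), 3)   # 3 distinct variables
        formula.append(tuple(v if rng.random() < 0.5 else -v for v in vs))
    return formula

print(random_3sat(20, seed=1)[:3])   # first three clauses
```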

15. Learning to Coordinate Behaviors

Abstract: We describe an algorithm that allows a behavior-based robot to learn, on the basis of positive and negative feedback, when to activate its behaviors. In keeping with the philosophy of behavior-based robots, the algorithm is fully distributed: each behavior independently tries to find out (i) whether it is relevant (i.e., whether it correlates at all with positive feedback) and (ii) under which conditions it is reliable (i.e., the conditions under which it maximizes the probability of receiving positive feedback and minimizes the probability of receiving negative feedback). The algorithm was tested on a six-legged autonomous robot and proved successful.
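The abstract does not give the exact statistics the authors used; as a loose illustration of the decentralized idea, each behavior could keep its own contingency counts and compare feedback rates when it was active versus inactive. The class and feature names here are invented:

```python
class BehaviorStats:
    """Per-behavior bookkeeping sketch: correlate this behavior's own
    activity with subsequent feedback to judge its relevance."""
    def __init__(self, name):
        self.name = name
        self.counts = {(active, positive): 0
                       for active in (True, False) for positive in (True, False)}

    def record(self, was_active, got_positive_feedback):
        self.counts[(was_active, got_positive_feedback)] += 1

    def relevance(self):
        """P(positive | active) - P(positive | inactive): the behavior
        looks relevant if being active makes positive feedback likelier."""
        def rate(active):
            pos = self.counts[(active, True)]
            neg = self.counts[(active, False)]
            return pos / (pos + neg) if pos + neg else 0.0
        return rate(True) - rate(False)

swing = BehaviorStats("swing-leg-forward")
for active, positive in [(True, True)] * 8 + [(True, False)] * 2 + \
                        [(False, True)] * 3 + [(False, False)] * 7:
    swing.record(active, positive)
print(swing.relevance())   # 0.8 - 0.3 = 0.5 -> looks relevant
```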

16. Learning Interface Agents

Abstract: An interface agent is a computer program that employs artificial intelligence techniques to provide assistance to a user dealing with a particular computer application. In this paper, we discuss an interface agent modeled on the metaphor of a personal assistant. The agent learns to assist the user by (i) observing and imitating the user's behavior, (ii) receiving feedback from the user when it makes mistakes, and (iii) being trained by the user on hypothetical examples. We discuss how this learning agent was implemented using memory-based learning and reinforcement learning techniques, and we present actual results from two prototype agents built with these techniques: one for a meeting scheduling application and one for an email application. We argue that building interface agents by machine learning is a feasible approach with several advantages over other approaches: it yields customized, adaptive solutions at a lower cost and promises better user acceptance. We also discuss the advantages of the particular learning techniques used.
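
Memory-based learning here essentially means nearest-neighbor prediction over stored situation-action pairs. A sketch with invented email features; the real agents used richer features, confidence thresholds, and reinforcement from user feedback:

```python
from collections import Counter

def predict_action(memory, situation, k=3):
    """Memory-based sketch: predict the action taken in the k stored
    situations most similar to the current one (majority vote)."""
    def distance(a, b):
        # Simple mismatch count over the feature keys.
        return sum(a[f] != b[f] for f in a)
    nearest = sorted(memory, key=lambda ex: distance(ex[0], situation))[:k]
    return Counter(action for _, action in nearest).most_common(1)[0][0]

# Hypothetical (situation, user action) memory for an email agent.
memory = [
    ({"sender": "boss", "has_attachment": True,  "subject_kw": "report"}, "read-now"),
    ({"sender": "boss", "has_attachment": False, "subject_kw": "meeting"}, "read-now"),
    ({"sender": "list", "has_attachment": False, "subject_kw": "digest"}, "archive"),
    ({"sender": "list", "has_attachment": True,  "subject_kw": "digest"}, "archive"),
]
new_mail = {"sender": "boss", "has_attachment": True, "subject_kw": "meeting"}
print(predict_action(memory, new_mail))   # "read-now"
```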

17. Pengi: An Implementation of a Theory of Activity

Abstract: AI has generally interpreted the organized nature of everyday activity in terms of plan-following. There can be no doubt that people often make and follow plans, but the complexity, uncertainty, and immediacy of the real world require that moment-to-moment improvisation also play a central role: before and beneath any plan, one is continually deciding what to do now. Our study of the dynamics of everyday routine activity has revealed important regularities in the interaction between very simple machinery and its environment. We used this dynamic theory to design a program, Pengi, that engages in complex, apparently planful activity without requiring an explicit model of the world.

18. Pushing the Envelope: Planning, Propositional Logic, and Stochastic Search

Abstract: Planning is a notoriously hard combinatorial search problem. By combining a general stochastic search algorithm with an appropriate problem encoding based on propositional logic, we can solve hard planning problems many times faster than the best current planning systems. Although stochastic methods have been shown to be effective on a wide range of scheduling problems, this is the first demonstration of their power on truly challenging classical planning instances. This work also provides a new perspective on representational issues in planning.

19. Quantifying the Inductive Bias in Concept Learning

Abstract: We show that the notion of bias in inductive concept learning can be quantified in a way that directly relates to learning performance, and that this quantitative theory of bias can contribute to the design of effective learning algorithms. Applying this idea, we analyze learning in restricted hypothesis languages, including pure conjunctive concepts and conjunctive concepts with internal disjunction.

20. Reactive Reasoning and Planning

Abstract: The reasoning system described here is designed to control a robot and to exhibit the kind of behavior expected of a rational agent; it is endowed with the psychological attitudes of belief, desire, and intention. Because these attitudes are explicitly represented, they can be manipulated and reasoned about, resulting in complex goal-directed and reflective behaviors. In contrast to generative planning systems, the plans and intentions formed by the robot need only be partially elaborated before it decides on a course of action. This means the robot avoids the over-commitment common in traditional planners, such as overly strong expectations about the environment or overly constrained plans of action. Furthermore, the robot is continuously reactive and can change its goals and intentions as circumstances demand. The system has been tested in a space station scenario using SRI's autonomous robot, Flakey.

21. Real-Time Heuristic Search: First Results

Abstract: Existing heuristic search algorithms are not applicable to real-time applications because moves cannot be committed to before the entire solution is found. We present a special case of minimax look-ahead search to address this problem, together with an analogue of alpha-beta pruning that significantly improves its efficiency. In addition, we present a new algorithm, Real-Time A*, for searching when actions must actually be executed rather than merely simulated. Finally, we examine the nature of the trade-off between computation and execution cost.
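
A compact sketch of the Real-Time A* move rule: commit to the neighbor minimizing cost plus heuristic, and back up the second-best f-value as the new heuristic of the state being left, so the agent does not oscillate. The toy graph and heuristic values are invented:

```python
def rta_star(start, goal, neighbors, h):
    """Real-Time A* sketch (after Korf): commit to one move at a time.
    `neighbors(s)` yields (successor, edge_cost) pairs; `h` is an
    initial heuristic table that the agent updates as it moves."""
    h = dict(h)                      # learned heuristic table (mutated)
    path, current = [start], start
    while current != goal:
        scored = sorted((cost + h.get(nxt, 0), nxt)
                        for nxt, cost in neighbors(current))
        best_f, best_next = scored[0]
        # Back up the SECOND-best f (or best, if only one option exists):
        # returning here later will look exactly this expensive.
        h[current] = scored[1][0] if len(scored) > 1 else best_f
        current = best_next
        path.append(current)
    return path

# Tiny graph: A-B-C-D in a line, plus a tempting dead-end detour A-E.
graph = {"A": [("B", 1), ("E", 1)], "B": [("A", 1), ("C", 1)],
         "C": [("B", 1), ("D", 1)], "D": [], "E": [("A", 1)]}
h0 = {"A": 3, "B": 2, "C": 1, "D": 0, "E": 5}
print(rta_star("A", "D", lambda s: graph[s], h0))  # ['A', 'B', 'C', 'D']
```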

22. Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach

Abstract: In this paper, we generalize the Bayesian likelihood-ratio updating rule to permit asynchronous propagation of the impact of new beliefs and new evidence through a hierarchical inference structure. The proposed computational scheme specifies a set of belief parameters, communication messages, and updating rules that guarantee that the diffusion of updated beliefs is accomplished in a single pass, and that the result complies with Bayesian calculus.
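
The likelihood-ratio rule being generalized is easy to state for a single hypothesis node; Pearl's scheme propagates such quantities as messages through a tree of variables, which this one-node sketch does not attempt:

```python
def update_odds(prior_prob, likelihood_ratios):
    """Likelihood-ratio updating for one hypothesis H: with conditionally
    independent evidence e_i, posterior odds = prior odds times the
    product of the ratios lambda_i = P(e_i | H) / P(e_i | not H)."""
    odds = prior_prob / (1 - prior_prob)   # probability -> odds
    for lr in likelihood_ratios:
        odds *= lr                         # fold in each piece of evidence
    return odds / (1 + odds)               # odds -> probability

# Prior 10%; two supporting reports (lambda = 4 and 2), one against (0.5).
print(update_odds(0.10, [4.0, 2.0, 0.5]))  # ~0.31
```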

23. Syskill & Webert: Identifying Interesting Web Sites

Abstract: In this paper, we describe Syskill & Webert, a software agent that learns to rate pages on the World Wide Web (WWW) and decides which pages might interest a user. The user rates explored pages on a three-point scale, and Syskill & Webert learns a user profile by analyzing the information on each page. The profile can be used in two ways: first, to suggest which links on a page the user would be interested in exploring; second, to construct a Lycos query to find pages that might interest the user. We compare six different algorithms from machine learning and information retrieval on this task. The results show that the naive Bayes classifier offers several advantages over the other learners, and that the initial portion of a web page is sufficient for predicting its interestingness, which can substantially reduce the amount of network transmission required to make predictions.
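
A minimal naive Bayes sketch of the winning approach: bag-of-words features with Laplace smoothing. The toy documents and the "hot"/"cold" labels are invented, and Syskill & Webert's actual feature selection is omitted:

```python
import math
from collections import Counter

def train_naive_bayes(docs):
    """Train a two-class naive Bayes text classifier; returns a
    classify(words) function. Word probabilities are treated as
    independent given the class, with add-one smoothing."""
    labels = Counter(label for _, label in docs)
    word_counts = {lbl: Counter() for lbl in labels}
    for words, label in docs:
        word_counts[label].update(words)
    vocab = {w for words, _ in docs for w in words}

    def classify(words):
        def log_score(lbl):
            total = sum(word_counts[lbl].values())
            return (math.log(labels[lbl] / len(docs))
                    + sum(math.log((word_counts[lbl][w] + 1) /
                                   (total + len(vocab)))
                          for w in words if w in vocab))
        return max(labels, key=log_score)
    return classify

docs = [("ai planning search agents".split(), "hot"),
        ("machine learning agents papers".split(), "hot"),
        ("celebrity gossip fashion".split(), "cold"),
        ("sports scores gossip".split(), "cold")]
classify = train_naive_bayes(docs)
print(classify("planning agents papers".split()))   # "hot"
```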

24. Statistical Parsing with a Context-free Grammar and Word Statistics

Abstract: In this paper, we describe a parsing system based on a language model for English that assigns probabilities to possible parses of a sentence. The model is used in the parsing system by finding the parse with the highest probability. This system outperforms previous schemes. It is the third in a series of parsers by different authors that are similar enough to invite detailed comparison but differ in their levels of performance.
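
The "find the most probable parse" step is typically realized with probabilistic CKY over a grammar in Chomsky normal form; here is a sketch with an invented toy grammar (Charniak's actual model also conditions on word statistics, which is omitted here):

```python
from collections import defaultdict

def cky_parse(words, lexicon, rules):
    """Probabilistic CKY sketch: probability of the most probable parse
    rooted in S. Grammar is in Chomsky normal form: binary `rules`
    map (parent, left, right) -> probability; `lexicon` maps a word
    to its possible preterminal symbols with probabilities."""
    n = len(words)
    best = defaultdict(dict)         # best[(i, j)][symbol] = probability
    for i, w in enumerate(words):    # fill in the lexical cells
        for sym, p in lexicon.get(w, {}).items():
            best[(i, i + 1)][sym] = p
    for span in range(2, n + 1):     # combine adjacent spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (parent, left, right), p in rules.items():
                    if left in best[(i, k)] and right in best[(k, j)]:
                        prob = p * best[(i, k)][left] * best[(k, j)][right]
                        if prob > best[(i, j)].get(parent, 0.0):
                            best[(i, j)][parent] = prob
    return best[(0, n)].get("S", 0.0)

lexicon = {"dogs": {"NP": 0.5}, "cats": {"NP": 0.5}, "chase": {"V": 1.0}}
rules = {("S", "NP", "VP"): 1.0, ("VP", "V", "NP"): 1.0}
print(cky_parse("dogs chase cats".split(), lexicon, rules))  # 0.25
```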

25. Solving Large-Scale Constraint Satisfaction and Scheduling Problems Using a Heuristic Repair Method

Abstract: This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment to the variables of a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, which attempts to minimize the number of constraint violations after each step. The method is surprisingly effective: on standard problems it outperforms traditional backtracking techniques; for example, the million-queens problem can be solved quickly with our approach. We have also successfully applied the method to a practical scheduling application. A theoretical analysis explains why the method works well on certain classes of problems and indicates when it is likely to be most effective.
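
A sketch of min-conflicts repair on N-queens, the abstract's flagship example; the random initial assignment and the tie-breaking details are my simplifications:

```python
import random

def min_conflicts_queens(n, max_steps=100_000):
    """Min-conflicts sketch: start from a full assignment, then
    repeatedly move a conflicted queen to the row in its column
    that minimizes the number of conflicts."""
    cols = {c: random.randrange(n) for c in range(n)}   # column -> row

    def conflicts(col, row):
        # Queens attack along rows and diagonals (one queen per column).
        return sum(1 for c, r in cols.items() if c != col and
                   (r == row or abs(r - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c, r in cols.items() if conflicts(c, r) > 0]
        if not conflicted:
            return cols                       # solution found
        col = random.choice(conflicted)
        # Repair step: move this queen to a minimum-conflict row.
        rows = list(range(n))
        random.shuffle(rows)                  # randomize tie-breaking
        cols[col] = min(rows, key=lambda r: conflicts(col, r))
    return None

print(min_conflicts_queens(20))
```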

26. Systematic Nonlinear Planning

Abstract: In this paper, we present a simple, sound, complete, and systematic algorithm for domain-independent STRIPS planning.
Simplicity is achieved by starting with a ground procedure and then applying a general, independently verifiable lifting transformation; previous planners have been designed directly as lifted procedures. Our ground procedure is a ground version of Tate's NONLIN procedure. In Tate's procedure, it is not necessary to determine whether a precondition of a step in an unfinished plan is guaranteed to hold in all linearizations, which allows the procedure to avoid Chapman's modal truth criterion. Systematicity is the property that the same plan, or subplan, is never examined more than once; it is achieved through a simple modification of Tate's procedure.

27. The Interactive Museum Tour-Guide Robot

Abstract: In this paper, we describe the software architecture of an autonomous tour-guide and tutor robot. This robot was recently deployed in the Deutsches Museum Bonn, where it guided hundreds of visitors through the museum over a six-day period. The robot's control software integrates low-level probabilistic reasoning with high-level problem solving embedded in first-order logic. A collection of software innovations, described in this paper, enabled the robot to travel at high speed through dense crowds while reliably avoiding collisions with obstacles. The user interface, also described here, was designed for non-expert users and proved essential to the robot's success. Based on these experiences, this paper argues for the development of AI-enabled commercial service robots that assist people in their everyday lives.

28. The Logic of Representing Dependencies by Directed Graphs

Abstract: A data dependency of the type “knowing x tells us more about y, given what we already know” can be expressed in a variety of formalisms: probabilistic dependencies, embedded multivalued dependencies, undirected graphs, and directed acyclic graphs (DAGs). This paper provides an axiomatic basis, called a semi-graphoid, that captures the structure common to these four types of dependency, and it evaluates the expressive power of DAGs in representing various types of data dependency. DAGs are shown to represent a richer set of dependencies than undirected graphs, to represent their closures more completely, and to serve as the basis of an effective computational device for testing membership in those closures and for inferring new dependencies from given inputs. These properties may partly explain the widespread use of DAGs in causal reasoning and semantic nets.

29. The Tractability of Subsumption in Frame-Based Description Languages

Abstract: A knowledge representation system provides an important service to the rest of a knowledge-based system: it computes inferences over the facts it holds. Because knowledge-based systems rely on these inferences in their operation (e.g., in diagnosis or planning), it is important to understand their computational cost. Here, we present evidence for how the computational cost of one kind of inference, subsumption, is directly related to the expressive power of the representation language. The result is that this cost is very sensitive to small changes in the language: even a seemingly simple frame-based description language can pose computationally intractable obstacles.

30. Using CSP Look-Back Techniques to Solve Real-World SAT Instances

Abstract: We report on the performance of an enhanced version of the “Davis-Putnam” proof procedure (DP) for propositional satisfiability (SAT) on large instances derived from real-world problems in planning, scheduling, and circuit diagnosis and synthesis. The results show that incorporating CSP look-back techniques, in particular the relatively new technique of relevance-bounded learning, renders easy many problems that would otherwise be beyond DP's reach. Frequently these techniques make DP, a systematic algorithm, perform as well as or better than stochastic SAT algorithms such as GSAT and WSAT. We recommend that such techniques be included as options in implementations of DP, just as they are in systematic algorithms for the more general constraint satisfaction problem.
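
For context, here is a bare DPLL (“Davis-Putnam”) skeleton; the look-back enhancements the paper advocates, conflict-directed backjumping and relevance-bounded clause learning, would be added where this sketch merely backtracks chronologically:

```python
def dpll(clauses, assignment=None):
    """Bare DPLL sketch: unit propagation plus splitting, with literals
    encoded as nonzero ints (-v means variable v negated). The paper's
    look-back extensions are omitted here."""
    assignment = dict(assignment or {})
    while True:                       # unit propagation to a fixpoint
        simplified, unit = [], None
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue              # clause already satisfied
            lits = [l for l in clause if assignment.get(abs(l)) is None]
            if not lits:
                return None           # every literal falsified: conflict
            if len(lits) == 1:
                unit = lits[0]        # forced literal found
            simplified.append(lits)
        clauses = simplified
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
    if not clauses:
        return assignment             # all clauses satisfied: a model
    var = abs(clauses[0][0])          # split on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None                       # chronological backtrack

print(dpll([(1, 2), (-1, 3), (-2, -3), (-3,)]))
```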

 
