Answer Set Programming: A Brief History of Logic Programming and ASP



This article draws on "Answer Set Programming" from the special feature Trends in Logic-Based Reasoning Research, Journal of the Japanese Society for Artificial Intelligence (May 2010), a reference paper on logic programming for constructing complex knowledge and information in artificial intelligence technology.

Prolog, a logic programming language developed in the early 1970s, attracted attention as a new artificial intelligence language that combines declarative description based on predicate logic with a computation procedure based on theorem proving, and it has been widely used in expert systems, natural language processing, and deductive databases since the 1980s.

While Prolog is Turing-complete and computationally powerful, its basis, Horn clause logic programs, has limited applicability to real-world knowledge representation and problem solving because of syntactic restrictions and limited reasoning capability.

To solve these problems, many attempts to extend the expressive power of logic programming and to strengthen its reasoning capability have been proposed since the late 1980s. As a result, the concept of answer set programming (ASP), which combines ideas from logic programming and constraint programming, was established in the late 1990s, and it is now one of the core languages of logic programming.

The features and significance of ASP can be summarized as follows.

The first is (a) the realization of declarative programming: ASP has the declarativeness required of a knowledge representation language. When the concept of logic programming was first introduced, the separation of logic and control was said to be its most important feature, but in Prolog the declarativeness of logic programming was sacrificed by program-dependent execution procedures and control operators (cut). ASP, by contrast, preserves declarativeness, and the extended language is implemented on top of ASP solvers.

The second is (b) integration with constraint programming. While ASP can be viewed as a development of conventional logic programming, it can also be regarded as a kind of constraint programming and is useful as a computational language for constraint satisfaction problems. Although ASP differs from constraint logic programming (CLP) in the history of language development, syntax, and semantics, the two are ideologically close, and there have been attempts to integrate the frameworks.

The third is (c) the development of programming theory and programming environments. ASP has a rigorous mathematical semantics based on logic, and theoretical research on the properties and computational complexity of programs is progressing. At the same time, programming environments based on ASP solvers have been developed, and ASP systems have been released as freeware to users around the world via the web (clasp, DLV, DLV-Complex, Smodels, clingo, the Potassco suite, etc.).

Recent developments in ASP include language extensions for describing various constraints in programs, languages that combine ASP with description logics to describe ontologies for the Semantic Web, research that combines ASP with probabilistic inference, and research on the theory of multi-agent systems using ASP.

First, I will describe the historical background of ASP to understand the factors that led to its development.

Early logic programming was created as a procedural interpretation of Horn clause logic, a subclass of first-order predicate logic. A Horn clause is an implication of the following form.

\[A\leftarrow A_1,\dots,A_m\quad(1)\]

(A, A1, …, Am are atoms.) The left side of ← in clause (1) is called the consequent or head, and the right side the antecedent or body. Commas (,) in the body denote conjunction. A Horn clause whose head is not empty is called a definite clause, and a set of definite clauses is called a definite logic program. The declarative semantics of a definite logic program is given by the least model of the program.

van Emden and Kowalski showed that the least model of a definite logic program can be computed operationally as the least fixpoint of a continuous function defined from the program. In addition, SLD resolution, which was introduced as the procedural semantics of definite logic programs, regards a Horn clause with an empty head as a goal to be proved and computes a refutation from the program. The agreement between the declarative and procedural meanings of a definite logic program is guaranteed by the fact that the set of ground atoms in the least model coincides with the set of ground atoms provable by SLD derivation.
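For reference, the continuous function in question is the immediate consequence operator, usually written T_P; in the notation of clause (1) it can be formulated as follows, where ground(P) denotes the set of ground instances of the clauses of P:

\[T_P(I)=\{A\mid (A\leftarrow A_1,\dots,A_m)\in \mathit{ground}(P),\ \{A_1,\dots,A_m\}\subseteq I\},\qquad \mathrm{lfp}(T_P)=\bigcup_{k\geq 0}T_P^k(\emptyset)\]

and the least model of P is exactly this least fixpoint.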

The ground atoms contained in the least model of a definite logic program represent the true facts deducible from the program. The closed world assumption (CWA) is the position that interprets ground atoms not deducible from the program as false. The method of interpreting a ground atom as false when its proof by SLD derivation fails in a finite number of steps is called negation as failure (NAF).

The set of negative facts defined procedurally under NAF coincides with the set of negative facts deduced from the set of formulas obtained by applying a transformation called completion to the program. If a ground atom is interpreted as false under NAF, it is also interpreted as false under CWA, but the converse does not hold in general.
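As a small illustration (an example of mine, not from the original article), consider the program consisting of the single clause p ← q. Its completion replaces the implication by an equivalence and, since q has no defining clause, equates q with falsity:

\[comp(P)=\{p\leftrightarrow q,\ q\leftrightarrow \bot\}\]

so comp(P) entails ¬q and hence ¬p, which are exactly the atoms whose SLD proofs fail finitely.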

CWA and NAF complete a program and allow negative information to be inferred, by interpreting as false the incomplete information that cannot be proved true from the program. However, since such default reasoning is nonmonotonic and thus differs from logical inference in predicate logic, a gap arises between logic programming and predicate logic at the level of inference rules.

Once a logic for inferring negation from a definite logic program is in place, we naturally want to describe the negative information obtained in this way as knowledge in the program and use it in the course of inference. In fact, Prolog has a built-in NAF operator for describing the negation of an atom as a condition in the body of a clause; for example, the difference of two relations R and S in a deductive database can be computed by the rule

\[\mathit{difference}_{R,S}(x_1,\dots,x_n)\leftarrow R(x_1,\dots,x_n),\ not\ S(x_1,\dots,x_n)\]

The negation obtained by CWA and NAF, however, is default negation resulting from default inference, and its meaning differs from the negation defined in predicate logic.

Therefore, if default negation is written explicitly in a program, the program differs from a clause set defined under predicate logic, and a discrepancy between logic programming and predicate logic arises at the level of written form as well. A formula in which default negation appears in the condition part of a program statement,

\[A\leftarrow A_1,\dots,A_m,\ not\ A_{m+1},\dots,not\ A_n\quad(2)\]

(A, A1, …, An are atoms) is called a rule and is distinguished from a clause. Here not is called the NAF operator, and a set of rules of form (2) is called a normal logic program. This extension of the program syntax requires us to consider the semantics of programs with default negation; if At is the set of all ground atoms, it is natural to define not A to be true under an interpretation I ⊆ At if and only if the ground atom A is not contained in I.

Then, for a ground rule of form (2) (a rule containing no variables), we define I to satisfy the rule if and only if "if {A1,…,Am}⊆I and {Am+1,…,An}∩I=∅ then A∈I" (*). Condition (*) is equivalent to "{A1,…,Am}⊈I or {Am+1,…,An}∩I≠∅ or A∈I" (+).

A model of a normal logic program P is then defined as an interpretation that satisfies all the rules contained in the ground instantiation of P.

The declarative meaning of a definite logic program was defined by its least model, but a normal logic program generally has no least model. For example, the program

\[p\leftarrow not\ q\]

has two minimal models {p} and {q}, but neither is a least model. We therefore define the declarative meaning of a normal logic program using minimal models instead of the least model, and consider which of the generally multiple minimal models reflects the meaning of the program text. In the example above, p is derived under the condition that q is not proved, whereas there is no rule in the program that derives q. An atom is said to be supported when the program contains a rule that provides a basis for deriving it.

All atoms in the minimal model that reflects the meaning of the program are required to be supported by the program. To compute a model consisting of supported atoms, we can regard the atoms in the head as belonging to a higher stratum than the atoms appearing under default negation in the body, and evaluate the truth value of each atom in the interpretation starting from those in the lower strata.

In the above example, q is first interpreted as false because it cannot be proven, and p becomes true under this assumption. As a result, only {p} of the two minimal models consists of supported atoms, which reflects the meaning of this program. This {p} is called the perfect model.
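This can be checked with an ASP solver; a minimal sketch in clingo-style syntax (assuming a standard clingo installation) is:

% the program p <- not q, written in clingo syntax
p :- not q.

Running it with an option such as --models 0 should report the single answer set {p}, which is the perfect (and, as discussed below, stable) model of the program.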

In general, a program is called a stratified program if the predicates appearing in it can be partitioned into strata such that, for every rule (2), the predicate of the head A belongs to a stratum higher than the strata of the predicates of the atoms Am+1,…,An appearing under default negation.

A stratified program always has exactly one perfect model, and the perfect model coincides with the least model if the program contains no default negation. A stratified program is a program in which no predicate recursively calls its own default negation, and it has the property that its completion is consistent.

The proof procedure for normal logic programs is SLDNF derivation, which incorporates NAF into SLD derivation. There have also been proposals to relax the syntactic restrictions of stratified programs while keeping the completion of the program consistent, such as locally stratified programs, which extend the stratification of predicates to a stratification of atoms, and call-consistent programs, which allow only an even number of recursive calls through default negation.

Stratified programs form a class of programs that allow the description of default negation and whose declarative and procedural meanings are clearly defined. However, some of the knowledge and information we handle in daily life cannot necessarily be expressed with stratified negation. For example, suppose we describe with the following program P the situation of drinking tea to wake up when there is no coffee, and drinking coffee when there is no tea.

\[wake\leftarrow coffee\\wake\leftarrow tea\\tea\leftarrow not\ coffee\\coffee\leftarrow not\ tea\]

This program is not stratified because tea and coffee each recursively call their own default negation. To define the declarative meaning of such non-stratified programs, stable model semantics was introduced.

Now, given an interpretation I, consider each ground rule of form (2) in the ground instantiation of a normal logic program P. (a) If {Am+1,…,An}∩I≠∅, the default negation in the rule is not satisfied by I, so the rule is deleted. (b) For each remaining rule, {Am+1,…,An}∩I=∅ holds, so not Am+1,…,not An are satisfied by I, and these negative conditions are removed from the rule.

If the resulting program is denoted P^I, then P^I is a logic program consisting of Horn clauses without default negation. I is called a stable model of P when I coincides with the least model of P^I. Intuitively, for a given interpretation I we construct from the program P the set of rules P^I whose default negation conditions are satisfied, and I is a stable model of P when the set of facts derived from P^I coincides with I.
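This construction is the Gelfond–Lifschitz reduct and can be written compactly, using the notation introduced above, as

\[P^I=\{A\leftarrow A_1,\dots,A_m\mid (A\leftarrow A_1,\dots,A_m,not\ A_{m+1},\dots,not\ A_n)\in \mathit{ground}(P),\ \{A_{m+1},\dots,A_n\}\cap I=\emptyset\}\]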

In the above example, if I1={wake,tea}, then \(P^{I_1}\) is

\[wake\leftarrow coffee\\wake\leftarrow tea\\tea\leftarrow\]

Since I1 is the least model of \(P^{I_1}\), I1 is a stable model of P. Similarly, I2={wake,coffee} is also a stable model of P. A stable model is a minimal model, but the converse does not hold in general. In a stratified program, the stable model coincides with the perfect model.
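For readers who want to try this, here is a sketch of the same program in clingo-style syntax (the file name and solver options are my own choice):

% wake.lp: the coffee/tea program above
wake :- coffee.
wake :- tea.
tea :- not coffee.
coffee :- not tea.

Running clingo --models 0 wake.lp should report exactly the two answer sets {tea, wake} and {coffee, wake}, corresponding to I1 and I2.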

The stable model is a natural extension of the perfect model and extends the class of programs for which a declarative meaning is defined. On the other hand, normal logic programs that are not stratified may have several stable models or none at all. This fact not only introduces nondeterminism into the semantics of logic programs, but also limits the goal-driven computation procedures based on SLD derivation that had been used until then.

For example, the following program

\[p\leftarrow not\ q\\r\leftarrow not\ r\]

For this program, the goal ←p succeeds under SLDNF derivation, yet the program has no stable model. In other words, even if a goal succeeds, soundness is not guaranteed, because the existence of a stable model that makes the atoms in the goal true is not assured. In a definite logic program or a stratified program, if a goal ←G is refuted using some of the rules of the program, G is guaranteed to be a consequence of the whole program (the principle of locality), but this locality principle does not hold under the stable model semantics of normal logic programs.
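The same mismatch can be observed concretely; a minimal clingo-style sketch of the program above is:

p :- not q.
r :- not r.

A solver such as clingo reports UNSATISFIABLE (no answer set), even though an SLDNF proof of the goal ←p would succeed using only the first rule.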

Well-founded model semantics is another approach to the semantics of normal logic programs that extends the perfect model. It is defined under three-valued logic and guarantees that every normal logic program has exactly one well-founded model. Unlike stable model semantics, it also preserves the locality property mentioned above, so the conventional goal-driven proof procedures remain valid.

On the other hand, a ground atom that is interpreted as true or false under the stable model semantics may be left undefined under the well-founded model. For example, in the coffee-and-tea program above, under the stable model semantics one can conclude wake, since it holds in every stable model, whereas under the well-founded model semantics the relevant ground atoms may take the value unknown; in this sense it is said to be more skeptical than the stable model semantics (conversely, stable models are said to be credulous). Thus both the stable model semantics and the well-founded model semantics have advantages and disadvantages, and most of the semantics proposed on top of them involve complicated theories for giving intuitive meanings to syntactically special programs and are rarely used today.

A statement in a definite logic program must be a definite clause whose consequent consists of a single atom, which generally restricts the description of indefinite information. The framework of disjunctive logic programs was therefore introduced, consisting of non-Horn clauses whose heads may contain a disjunction of atoms. A disjunctive logic program is a set of clauses of the following form

\[A_1∨\dots∨A_l\leftarrow A_{l+1},\dots,A_m\quad(3)\]

(A1, …, Am are atoms). A definite logic program is the special case consisting only of clauses with l=1 in clause (3). Since a disjunctive logic program generally has no least model, its declarative meaning is defined by the (generally multiple) minimal models of the program. For example, the program consisting of the single disjunctive clause

\[p∨q\leftarrow\]

has two minimal models {p} and {q}. These minimal models are equally plausible and represent two possible interpretations of the program. On the other hand, when CWA is applied to a disjunctive logic program, the negative information obtained from CWA may contradict the program. In the program above, neither p nor q is proved by the program, so ¬p and ¬q are inferred by CWA, but these negative facts contradict the program's disjunctive clause.
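In modern solvers the disjunctive clause above can be written directly; a minimal clingo-style sketch (an illustration of mine) is:

% disjunctive fact: p or q ("; " separates head disjuncts)
p ; q.

Recent versions of clingo report the two answer sets {p} and {q}, which here coincide with the two minimal models.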

To solve this problem, Minker introduced the generalized CWA (GCWA), an extension of CWA that interprets a ground atom A as false if and only if there is no minimal model in which A is true. In the previous example there are minimal models containing p and minimal models containing q, so neither negation is inferred under GCWA. A problem with negative inference by GCWA, however, is illustrated by the following program:

\[land\_animal\ ∨\ aquatic\_animal\leftarrow\\amphibian\leftarrow land\_animal,\ aquatic\_animal\]

This program has two minimal models {land_animal} and {aquatic_animal}; since there is no minimal model containing amphibian, ¬amphibian is inferred under GCWA. Thus GCWA, defined over minimal models, interprets disjunction exclusively and may exclude the inclusive interpretation in which several atoms of a disjunction are true simultaneously.

In this sense the inference of GCWA is too strong, since disjunction in classical logic also admits the inclusive interpretation. WGCWA (weak GCWA), or equivalently DDR (disjunctive database rule), has been proposed as a way of weakening GCWA and performing negative inference based on the inclusive interpretation of disjunctions. In the example above, amphibian∨land_animal∨aquatic_animal is derivable from the program, so the negation of amphibian is not inferred under WGCWA.

WGCWA and DDR relax GCWA for negative inference, but they do not provide a model theory for programs based on the inclusive interpretation of disjunctions. Moreover, neither GCWA nor WGCWA (DDR) can handle programs that contain both exclusive and inclusive disjunctions at the same time. Possible model semantics was therefore introduced as a declarative semantics that distinguishes exclusive and inclusive disjunctions in a program. In the example above, the program has three possible models {land_animal}, {aquatic_animal}, and {land_animal, aquatic_animal, amphibian}. The first two are minimal models, while the third is not a minimal model. Under the possible model semantics, exclusive and inclusive disjunctions are distinguished by constraints written in the program.

For example, suppose there are three menu items (curry rice, ramen, and vegetable salad) and that eating curry rice and ramen together is not allowed because of excessive calories; this can be written as the following program.

\[curry\ ∨\ noodles\ ∨\ salad\leftarrow\\\leftarrow curry,noodles\]

The possible models of this program are {curry}, {noodles}, {salad}, {curry,salad}, and {noodles,salad}. Compared with minimal model semantics, possible model semantics not only has the advantage for knowledge representation that the interpretation of disjunctions can be described flexibly, but its computational complexity is also generally lower. In definite logic programs, the possible models coincide with the minimal model.

A normal disjunctive logic program is a disjunctive logic program in which default negation may appear in rule bodies; it is a set of rules of the following form

\[A_1∨\dots∨A_l\leftarrow A_{l+1},\dots,A_m,not\ A_{m+1},\dots,not\ A_n\quad(4)\]

(A1, …, An are atoms). As the semantics of normal disjunctive logic programs, extensions of both the stable model and the well-founded model have been proposed; among them, the extension of stable model semantics developed into answer set semantics.

Because the head of a rule in a disjunctive logic program contains a disjunction, goal-driven top-down proof procedures generally become complicated. A method has therefore been proposed that computes all the minimal models of a disjunctive logic program bottom-up by a model generation procedure. This method was later developed into procedures for computing the stable models and answer sets of general disjunctive programs.

In normal disjunctive logic programs, not denotes default negation. In contrast, the class of programs that additionally use explicit negation, written with logical negation ¬ and distinguished from default negation, is called extended disjunctive logic programs. An extended disjunctive logic program is a set of rules of the following form

\[L_1;\dots;L_l\leftarrow L_{l+1},\dots,L_m,not\ L_{m+1},\dots,not\ L_n\quad(5)\]

Unlike equation (4), L1, …, Ln are positive or negative literals, and the semicolon (;) denotes disjunction. The reason ";" is used in the rule head instead of "∨" is that p∨¬p is a tautology in classical logic, whereas the rule p;¬p← is not a tautology in extended disjunctive logic programs, so the two are distinguished by different symbols. Likewise, in classical logic the formulas p←q and ¬q←¬p are equivalent, but in extended disjunctive logic programs they are not. An extended logic program is a program consisting of rules whose heads contain no disjunction. Distinguishing the two kinds of negation makes it possible to represent knowledge with subtly different meanings. For example, the rule

\[cross\leftarrow not\ car\]

expresses the behavior of crossing the street if a car is not recognized to be running, whereas

\[cross\leftarrow ¬ car\]

expresses the action of crossing the road after recognizing that no car is coming. Comparing the two, the former results in cross if the positive fact of car cannot be proved from the program, while the latter results in cross after the negative fact of ¬car is proved from the program.
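The contrast can be sketched in clingo-style syntax, where classical negation is written with a leading minus sign; the atom names cross1 and cross2 are mine, used only to keep both rules in one program:

cross1 :- not car.    % cross if "car" cannot be proved (default negation)
cross2 :- -car.       % cross only if "no car" is explicitly proved (classical negation)
-car.                 % explicit fact: no car is coming

With the fact -car present, both cross1 and cross2 hold; if it is removed, only cross1 holds, because the mere absence of car suffices for default negation but not for explicit negation.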

In an extended logic program, negative facts are proved directly from the program using explicit negation, so the closed world assumption (CWA), which regards unproven facts as false, is not used; instead the open world assumption (OWA), which neither affirms nor denies unproven facts, is adopted. CWA is effective when the positive information described in the program is complete and everything else can be regarded as negative, while OWA is effective when the positive information is incomplete and CWA should not be applied.

Since negative knowledge is described explicitly in an extended disjunctive logic program, the program may become inconsistent. Several ways to avoid inconsistency have been proposed, such as introducing priorities between rules that lead to conflicting conclusions, computing the answer sets of consistent maximal subsets of the rules of the program, and formalizations based on paraconsistent logic that localize the inconsistency within the program.

The class of programs that allow default negation to appear in the heads of the rules of extended disjunctive logic programs is called general extended disjunctive logic programs, which consist of rules of the following form

\[L_1;\dots;L_k;not\ L_{k+1};\dots;not\ L_l\leftarrow L_{l+1},\dots,L_m,not\ L_{m+1},\dots,not\ L_n\quad(7)\]

(L1, …, Ln are positive or negative literals). General extended disjunctive logic programs were introduced by Lifschitz et al., and their application to knowledge representation was shown by Inoue. Their semantics is defined by extending the answer sets of extended disjunctive logic programs.

The difference between a general extended disjunctive logic program and an extended disjunctive logic program is that, owing to the default negation appearing in the head of rule (7), an answer set is not necessarily minimal. For example, the program

\[p;not\ p\leftarrow\quad(8)\]

has two answer sets ∅ and {p}; rule (8) declaratively describes the two possibilities that the answer set may or may not contain the atom p. This form of rule has proved useful for representing candidate hypotheses in abductive logic programming.
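Rule (8), with not in the head, may not be directly accepted by every solver; a common equivalent encoding (a clingo-style sketch of mine) is the choice rule below, which has the same two answer sets ∅ and {p}:

{ p }.    % p may be chosen or not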

General extended disjunctive logic programs generally have multiple answer sets, and frameworks have been proposed for introducing priorities among answer sets, such as the logic program with ordered disjunction (LPOD), which has rules of the following form

\[L_1\times\dots\times L_l\leftarrow L_{l+1},\dots,L_m,not\ L_{m+1},\dots,not\ L_n\quad(9)\]

(L1, …, Ln are positive or negative literals), where × is called ordered disjunction. In rule (9), L1×…×Ll means: L1 if possible; if L1 is not possible, then L2; and so on. In other words, the literals in the head of an LPOD rule are ranked, and when the body holds, Li takes priority over Lj for i<j.

One example of knowledge representation using LPOD is shown below. When the following program is given

\begin{eqnarray}&(r_1)&\quad cinema\times beach\leftarrow not\ hot\\&(r_2)&\quad beach\times cinema\leftarrow hot\\&(r_3)&\quad hot\leftarrow summer, not\ ¬hot\\&(r_4)&\quad  ¬beach\leftarrow rain\end{eqnarray}

The meaning of each rule is: (r1) if it is not hot, I prefer going to the cinema rather than the beach; (r2) if it is hot, the preference is reversed; (r3) summer is usually hot; and (r4) if it rains, I do not go to the beach.

Besides LPOD, other mechanisms for introducing priorities over answer sets include specifying priorities among the rules of an extended logic program and describing priority relations between literals and default-negated literals in general extended disjunctive logic programs.


In ASP, a program is regarded as a set of constraints that the solutions of a problem must satisfy, and the answer sets of the program are the solutions of the problem. ASP therefore requires a procedure for computing the answer sets of a given program; such a procedure is called an ASP solver. A typical ASP solver works in two steps: (1) ground the program containing variables, and (2) compute the answer sets of the ground program.

One problem for ASP solvers is that, in order for step (1) to terminate in finite time, the set of ground terms must be finite, so function symbols cannot be used in general. In the logic programming language Prolog, function symbols are used to represent composite objects as terms, and the list constructor [ | ], a kind of function symbol, is effective for defining complex objects recursively. Since the use of function symbols is restricted in ASP solvers, the conventional programming style must be changed when describing a problem: the constraints that the objects must satisfy are written in the program, and objects that used to be represented with terms or recursive definitions are instead represented as answer sets.

Let us look at the difference between these styles of knowledge representation with an example. Suppose we write the following definite logic program to solve the problem of finding the cliques of a given undirected graph (for a subset C ⊆ V of the nodes of an undirected graph G=(V,E), C is called a clique if every two distinct nodes in C are connected by an edge).

\begin{eqnarray}&clique([ ])&\\&clique([x])&\leftarrow node(x)\\&clique([y|x])&\leftarrow clique(x),allconnected(y,x)\end{eqnarray}

Here allconnected(y,x) is true if node y is not contained in list x, y is connected by an edge to every node in x, and x contains no duplicate elements. Now, if the nodes and edges of the graph G are given as the facts

\[F=\{node(a),node(b),node(c),edge(a,b),edge(b,a),edge(b,c),edge(c,b)\}\]

then the above program returns the cliques of G as the lists appearing in the argument of the predicate clique

\[clique([ ]),clique([a]),clique([b]),clique([c]),\\clique([a,b]),clique([b,a]),clique([b,c]),clique([c,b])\]

and these solutions are contained in the least model of the program.

On the other hand, in ASP, the same problem can be described as follows.

\[clique(x)\leftarrow node(x),not\ \lnot clique(x)\\\lnot clique(x)\leftarrow node(x),not\ clique(x)\\\leftarrow clique(x),clique(y),x\neq y,not\ edge(x,y)\]

The first two rules of the above program classify each node into those in the clique (clique) and those not in the clique (¬clique), and the third rule describes, as a constraint with an empty head, the condition that nodes in the clique must satisfy.

The meaning of this constraint is that edge(x,y) must hold for any two distinct nodes x and y in the clique. As a result, the nodes forming a clique appear as arguments of the predicate clique in an answer set, and given the facts F, the following six answer sets of the program are computed.

\[\{\lnot clique(a),\lnot clique(b), \lnot clique(c)\}\cup F,\\\{clique(a),\lnot clique(b), \lnot clique(c)\}\cup F,\\\{\lnot clique(a),clique(b), \lnot clique(c)\}\cup F,\\\{\lnot clique(a),\lnot clique(b), clique(c)\}\cup F,\\\{clique(a), clique(b), \lnot clique(c)\}\cup F,\\\{\lnot clique(a),clique(b), clique(c)\}\cup F\]
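A runnable clingo-style version of this encoding (classical negation ¬clique is written -clique; file and option names are my own) would look like:

% clique.lp
node(a). node(b). node(c).
edge(a,b). edge(b,a). edge(b,c). edge(c,b).

clique(X)  :- node(X), not -clique(X).
-clique(X) :- node(X), not clique(X).
:- clique(X), clique(Y), X != Y, not edge(X,Y).

Running clingo --models 0 clique.lp should enumerate six answer sets whose clique atoms correspond to the cliques ∅, {a}, {b}, {c}, {a,b}, {b,c}.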

As seen in the above example, ASP computes the desired answer sets by writing constraints with empty heads in the program. In general, the constraint

\[\leftarrow L\quad(10)\]

expresses the prohibition that L must not hold, and

\[\leftarrow not\ L\quad(11)\]

expresses the requirement that L must hold. Constraint (10) excludes sets containing the literal L from the candidate answer sets, and constraint (11) excludes sets not containing L. Rules with empty heads were used in logic programming by Kowalski et al. as goals expressing queries under the procedural interpretation of programs, and were introduced as integrity constraints in deductive databases and abductive logic programming. However, it was only when knowledge representation with ASP became popular that they came to be written in programs and used in the course of reasoning.
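A tiny clingo-style sketch makes the effect of (10) and (11) visible (the atom p and the choice rule are my own illustration):

{ p }.        % without constraints: answer sets {} and {p}
:- p.         % constraint (10): eliminates the answer set containing p
% :- not p.   % constraint (11), if used instead, would eliminate the empty answer set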

From the viewpoint of computational complexity, the problem of deciding whether a normal logic program consisting of a finite number of ground rules has a stable model is NP-complete. This means that decision problems in the class NP can in principle be described by variable-free normal logic programs, and the existence of a solution to such a decision problem can be reduced to the existence of a stable model of the program.

As ASP solvers were developed and ASP became used as a programming language for problem solving, the syntax of the ASP language was extended in order to describe programs compactly and to handle problems efficiently. In the following, we will discuss some of the typical language extensions that are currently being adopted in ASP.

Smodels is a system for computing the stable models of normal logic programs, and its syntax uses expressions of the following form, called cardinality constraints.

\[U\{A_1,\dots,A_m,not\ A_{m+1},\dots,not\ A_n\}V\quad(12)\]

where each Ai is an atom. The meaning of (12) is that the number of literals among A1, …, Am, not Am+1, …, not An satisfied by the answer set lies between the lower bound U and the upper bound V; if there is no upper or lower bound, V or U, respectively, may be omitted. A cardinality constraint can be used in place of an atom in a rule. For example, the rule

\[1\{p,q\}\leftarrow 2\{r,s,not\ t\}\]

means that if at least two of r, s, and not t in the body are true, then at least one of p and q in the head is true. If atoms contain variables, conditions can be attached to the variables. For example,

\[\{clique(x) : node(x)\}\leftarrow\]

states that the variable x appearing in clique(x) ranges over the values satisfying node(x); that is, any subset of the nodes may be chosen as the clique candidates. Combining the above rule with the constraint

\[\leftarrow clique(x),clique(y),x\neq y,not\ edge(x,y)\]

gives a program of two rules that computes the cliques when given the facts for node and edge. Compared with the earlier ASP program, the description is more compact.
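Combined with the same node and edge facts as before, this compact encoding is directly executable; a clingo-style sketch of the two rules is:

{ clique(X) : node(X) }.
:- clique(X), clique(Y), X != Y, not edge(X,Y).

Given the node and edge facts, it yields the same six cliques as the longer encoding, but without the explicit ¬clique classification.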

The cardinality constraint (12) is further extended to the following expression, called a weight constraint.

\[U\{A_1=\omega_1,\dots,A_m=\omega_m,not\ A_{m+1}=\omega_{m+1},\dots,not\ A_n=\omega_n\}V\quad(13)\]

where each ωi is the weight of the atom Ai or of the default negation not Ai. The meaning of (13) is that the sum of the weights of those literals among A1, …, Am, not Am+1, …, not An that are satisfied by the answer set lies between U and V. The cardinality constraint (12) can be regarded as the special case of the weight constraint (13) in which ω1=…=ωn=1.
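In modern clingo, the smodels-style weight constraint is usually expressed with a #sum aggregate; the following sketch (menu items taken from the earlier example, while the weights and the atom expensive are my own illustration) shows the idea:

{ curry ; noodles ; salad }.                                  % choose any subset of dishes
expensive :- #sum { 3,curry : curry ; 2,noodles : noodles ; 1,salad : salad } >= 4.

Here expensive holds exactly in those answer sets where the chosen dishes' weights sum to at least 4, mirroring the lower bound U of (13).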

Using weight constraints, the rule of a normal logic program

\[A\leftarrow A_1,\dots,A_m,not\ A_{m+1},\dots,not\ A_n\]

is interpreted as

\[1\{A=1\}\leftarrow n\{A_1=1,\dots,A_m=1,not\ A_{m+1}=1,\dots,not\ A_n=1\}\]

In general, a rule of the following form

\[C\leftarrow C_1,\dots,C_m,not\ C_{m+1},\dots,not\ C_n\quad(14)\]

(C, C1, …, Cn are weight constraints) is called a weight constraint program. Among weight constraint rules, particularly important are weight rules of the following form

\[A \leftarrow\omega\{A_1=\omega_1,\dots,A_m=\omega_m,not\ A_{m+1}=\omega_{m+1},\dots,not\ A_n=\omega_n\}\quad(15)\]

and choice rules of the following form

\[\{A_1,\dots,A_l\}\leftarrow A_{l+1},\dots,A_m,not\ A_{m+1},\dots,not\ A_n\quad(16)\]

The weight rule specifies a lower bound on the body consisting of a weight constraint, and the choice rule expresses that an arbitrary subset of the set of head atoms may be chosen when the body holds. Rules (15) and (16) are called basic constraint rules, and rule (14) of a weight constraint program can be converted into these two kinds of basic constraint rules. Rules consisting of cardinality constraints and weight constraints can also be rewritten into the rules of a normal logic program by introducing appropriate new atoms.

 
