
  • SPECIAL SECTION

    LOGIC PROGRAMMING

    Logic programming is programming by description. The programmer describes the application area and lets the program choose specific operations. Logic programs are easier to create and enable machines to explain their results and actions.

    MICHAEL R. GENESERETH and MATTHEW L. GINSBERG

    The key idea underlying logic programming is programming by description. In traditional software engineering, one builds a program by specifying the operations to be performed in solving a problem, that is, by saying how the problem is to be solved. The assumptions on which the program is based are usually left implicit. In logic programming, one constructs a program by describing its application area, that is, by saying what is true. The assumptions are explicit, but the choice of operations is implicit.

    A description of this sort becomes a program when it is combined with an application-independent inference procedure. Applying such a procedure to a description of an application area makes it possible for a machine to draw conclusions about the application area and to answer questions even though these answers are not explicitly recorded in the description. This capability is the basis for the technology of logic programming. Figure 1 illustrates the configuration of a typical logic programming system. At the heart of the program is an application-independent inference procedure, which accepts queries from users, accesses the facts in its

    © 1985 ACM 0001-0782/85/0900-0933 75¢

    knowledge base (the description), and draws appropriate conclusions. It is thus able to answer users' questions and, in some cases, to record its conclusions in its knowledge base.

    Because the inference procedure used by a logic program is independent of the knowledge base it accesses, program development amounts to the development of an appropriate knowledge base, that is, finding a suitable description of the application area. There are several advantages to this. Chief among these is incremental development. As new information about an application area is discovered (or perhaps just discovered to be important to the problem the program is designed to solve), that information can be added to the program's knowledge base and so incorporated into the program itself. There is no need for algorithm development or revision.

    A second advantage is explanation. With the piecemeal nature of automated reasoning, it is easy to save a record of the steps taken in solving a problem. By presenting this record to the user, it is possible for a program to explain how it solves each problem and, therefore, why it believes the result to be correct.

    September 1985 Volume 28 Number 9 Communications of the ACM 933



    An application-independent inference procedure is at the center of any logic program. When interrogated by the user, the inference procedure replies after drawing conclusions from the facts in the knowledge base. In some instances, the conclusions drawn are stored in the knowledge base for later use.

    FIGURE 1. A Logic Programming System

    Explanations of this sort are especially valuable to programmers for debugging logic programs.

    DESCRIPTION The process of describing an application area begins with a conceptualization of the objects presumed or hypothesized to exist in the application area and the relations satisfied by those objects. Facts about these objects and relations are then expressed as sentences in a suitable formal language.

    Our notion of object is quite broad: They can be concrete (e.g., a specific circuit, a specific person, this paper) or abstract (e.g., the number 2, the set of all integers, the concept of justice); they can be primitive or composite (e.g., a circuit that consists of many components); they can even be fictional (e.g., a unicorn, Peer Gynt, Miss Right).

    A relation is a quality or attribute of a group of objects with respect to each other. Of course, a single relation may hold for more than one group of objects (e.g., the "father of" relation holds for many different pairs of individuals), and a single group of objects may satisfy more than one relation (e.g., one individual may be the "father of" another and also a "neighbor").

    The essential characteristic of a logic programming language is declarative semantics. There must be a simple way of determining the truth or falsity of each statement, given an interpretation of the symbols of the language. The meaning of each statement must be independent of the intended use of that statement in solving any particular problem.

    Most logic programming systems use clausal form, which is a variant of the predicate calculus (see Figure 2). Clausal form has the requisite declarative semantics, although it is not the only language with this property. Declarative semantics can also be defined for such disparate languages as tables, graphs, and charts. Clausal form is also extremely expressive in that the existence of logical operators and variables makes it possible to encode partial information about an application area. In more restricted languages such as frames, procedures must be written to encode partial information.

    The simplest kind of expression in clausal form is the atom. An atom states that a specific relation holds for a specific group of objects. For example, one can express the fact that Art is the father of Bob by taking a relation symbol like F and object symbols Art and Bob and combining them:

    F(Art,Bob).

    More complex facts can be written using the operator :-. For instance, q :- p indicates that q is true if p is true; in other words, p implies q. The "reverse" implication operator is often used in logic programming languages in place of the usual "forward" operator →, to

    q1, . . . , qn :- p1, . . . , pm

    r(t1, . . . , tk)

    f(t1, . . . , tl)

    A logic program is an arbitrary set of expressions known as clauses. A clause is an expression of the first form shown above. A sentence of this form says that one of the qi must be true if all of the pi are true. The expressions to the left and right of the :- operator in a clause must be atoms, that is, expressions of the second form, where r is a relation constant and where each ti is a term. A term is either an object constant, a variable, or an expression of the third form, where f is a function constant and each ti is a term. Constants such as Art begin with an uppercase character, and variables such as x and y are written in lowercase.

    FIGURE 2. The General Form of a Logic Program
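A clause in this general form can be represented directly in a conventional programming language. The Python fragment below is our own illustrative encoding, not something the article prescribes: an atom is a tuple whose first element is the relation constant, and a clause pairs a list of conclusions with a list of conditions.

```python
# One possible concrete representation of clauses (our own choice, not
# prescribed by the article): an atom is a tuple whose first element is
# the relation constant, and a clause is a pair (conclusions, conditions).

def atom(rel, *terms):
    # e.g., atom("F", "Art", "Bob") represents F(Art,Bob)
    return (rel,) + terms

def clause(conclusions, conditions):
    return (list(conclusions), list(conditions))

# A1: P(x,y) :- F(x,y)   -- lowercase strings stand for variables
a1 = clause([atom("P", "x", "y")], [atom("F", "x", "y")])

# B1: F(Art,Bob) :-      -- a fact: the right-hand side is empty
b1 = clause([atom("F", "Art", "Bob")], [])
```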


    call attention to the conclusion of the implication. The following sentence states that Art is Bob's parent if Art is the father of Bob:

    P(Art,Bob) :- F(Art,Bob)

    If more than one sentence is written to the right of this operator, then the conclusion is guaranteed to be true only if all of the conditions are true, that is, if the conditions are conjoined. The following sentence states that Art is Cap's grandparent if Art is Bob's parent and Bob is Cap's parent:

    G(Art,Cap) :- P(Art,Bob), P(Bob,Cap)

    If there are no sentences to the right of the operator, then the sentence to the left is true under all conditions. Thus, atoms can be written as clauses in which the right-hand side is empty.

    If more than one sentence is written to the left of the implication operator, then one of the conclusions is guaranteed to be true, that is, the conclusions are disjoined. The following sentence states that Art is either Bob's mother or Bob's father if Art is Bob's parent:

    F(Art,Bob), M(Art,Bob) :- P(Art,Bob)

    Although the conclusions in this case are mutually exclusive and jointly exhaustive, this need not generally be the case.

    A clause with no sentences to the left of the operator is a statement that the conditions to the right are inconsistent, that is, that at least one of them is false. Thus, one can express the negation of an atom by writing it as a clause in which the left-hand side is empty.

    More general facts can be expressed using variables (lowercase letters) instead of constants to refer to all objects in the programmer's conceptualization of the world. The following sentence states that x is the parent of y if any person x is the father of any person y, that is, that the connection between the "father of" relation and the "parent of" relation is true for everyone:

    P(x,y) :- F(x,y)

    A logic program is simply a set of sentences written in this language. The sentences below constitute a logic program for kinship relations. The objects are people in this case, although individual names are not known initially. There are two unary relations: "male" (written Male) and "female" (written Female); and four binary relations: "father of" (F), "mother of" (M), "parent of" (P), and "grandparent of" (G).

    A1: P(x,y) :- F(x,y)
    A2: P(x,y) :- M(x,y)
    A3: G(x,z) :- P(x,y), P(y,z)
    A4: Male(x) :- F(x,y)
    A5: Female(x) :- M(x,y)

    The next set of sentences captures the kinship relationships among six individuals, Art, Amy, Bob, Bud, Cap, and Coe. In some situations detailed information of this sort is available at the outset and can be incorporated into the program itself; in others, it becomes available only at run time.

    B1: F(Art,Bob) :-
    B2: F(Art,Bud) :-
    B3: M(Amy,Bob) :-
    B4: M(Amy,Bud) :-
    B5: F(Bob,Cap) :-
    B6: F(Bud,Coe) :-

    The next example shows how a numerical concept like the factorial function can be defined. The first clause states that the factorial of 0 is 1, and the second clause handles the recursive case. A number n is the factorial of a number k if k - 1 is l, the factorial of l is m, and k * m is n.

    C1: Fact(0,1) :-
    C2: Fact(k,n) :- k-1=l, Fact(l,m), k*m=n

    Functions can also be defined on lists. A list in clausal form is an expression of the form [x1, x2, . . . , xn]. This expression can also be written x1.[x2, . . . , xn]. For example, the "append" function on two lists can be defined by the following:

    D1: Append([],m,m) :-
    D2: Append(x.l,m,x.n) :- Append(l,m,n)
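Read functionally rather than relationally, clauses C1/C2 and D1/D2 compute familiar functions. The Python sketch below is our own functional rendering of those clauses (the function names are ours); each line mirrors one clause of the definitions above.

```python
# Functional reading of clauses C1/C2: Fact(k,n) holds when n is k factorial.
def fact(k):
    if k == 0:              # C1: Fact(0,1) :-
        return 1
    m = fact(k - 1)         # C2: ... k-1=l, Fact(l,m) ...
    return k * m            # C2: ... k*m=n

# Functional reading of clauses D1/D2 on lists (x.l is the list with head x
# and tail l, so D2 recurses on the tail).
def append(l, m):
    if not l:                       # D1: Append([],m,m) :-
        return m
    x, rest = l[0], l[1:]
    return [x] + append(rest, m)    # D2: Append(x.l,m,x.n) :- Append(l,m,n)
```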

    DEDUCTION Logic is the study of the relationship between beliefs and conclusions. For example, if we believe that Art is the father of Bob and that fathers are parents, then we can conclude that Art is the parent of Bob. The first two sentences logically imply the conclusion. In logic programming the programmer encodes a set of beliefs about the application area, and the machine derives conclusions that are logically implied by those beliefs.

    Research in mathematical logic and artificial intelligence has led to the development of many application-independent inference procedures. Most of these procedures can be described as the application of one or more rules of inference to known facts for the purpose of deriving new conclusions. Subsequent applications to other facts and to the conclusions allow a program to derive further conclusions. And so forth.

    Of the rules that have been invented, resolution is the most extensively studied. Given a clause with an atom p on its left-hand side and another clause with an atom p on its right-hand side, it is possible to create a new clause in which the left-hand side is the union of the left-hand sides of the two original clauses with p deleted, and the right-hand side is the union of the right-hand sides of the two clauses with p deleted. In the following example, the expression P(Art,Bob) appears on the left-hand side of one rule and on the right-hand side of the other. Consequently, we can apply resolution to these two clauses to produce the conclusion shown. Note that the net effect is to replace the


    expression P(Art,Bob) in the second clause with the expression F(Art,Bob).

    Given: P(Art,Bob) :- F(Art,Bob)
           G(Art,Cap) :- P(Art,Bob), P(Bob,Cap)

    Conclude: G(Art,Cap) :- F(Art,Bob), P(Bob,Cap)

    This example is very simple in that the atoms resolved upon are identical. The resolution rule is somewhat more general in that it can be used even when two atoms are not identical, so long as they can be made to look alike by appropriate instantiations of their variables. The process of finding such instantiations is called unification (see Figure 3).

    The combination of unification with the resolution rule permits more complicated conclusions to be drawn (see Figure 4). For example, in the following deduction the expression P(x,Bob) in the first clause unifies with the expression P(Art,y) in the second by replacing the variable x with the constant Art and the variable y with the constant Bob. In the conclusion, note that the occurrence of x in F(x,Bob) has been replaced by Art and that the occurrence of y in P(y,z) has been replaced by Bob.

    Given: P(x,Bob) :- F(x,Bob)
           G(Art,z) :- P(Art,y), P(y,z)

    Conclude: G(Art,z) :- F(Art,Bob), P(Bob,z)

    The resolution rule can be used as the basis for a number of different inference procedures. Resolution refutation is an especially important case. It is sound in that any conclusion it draws is guaranteed to be correct

    (a) P(x,Bob)    {x/Art, y/Bob}
        P(Art,y)    →    P(Art,Bob)

    (b) P(x,x)
        P(Art,Bob)    →    ?

    Unification is the process of finding a set of bindings for the variables in two expressions that makes the two expressions look alike. The expressions in (a) can be unified by the substitution {x/Art, y/Bob}. The expressions in (b) cannot be unified since there is no single binding for the variable x that will make the expressions look alike.

    FIGURE 3. Unification
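The unification procedure pictured in Figure 3 can be sketched in a few lines. The following Python is our own minimal implementation of the idea, not the article's code; following the article's convention, terms beginning with a lowercase letter are treated as variables, and atoms are flat tuples without function terms.

```python
# A minimal unification sketch (our own implementation of the idea in
# Figure 3). Atoms are tuples like ("P", "x", "Bob"); terms beginning
# with a lowercase letter are variables, capitalized terms are constants.

def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    subst = dict(subst or {})
    if len(a) != len(b):
        return None
    for t1, t2 in zip(a, b):
        t1, t2 = walk(t1, subst), walk(t2, subst)
        if t1 == t2:
            continue
        elif is_var(t1):
            subst[t1] = t2
        elif is_var(t2):
            subst[t2] = t1
        else:
            return None  # two distinct constants clash
    return subst

# Figure 3a: P(x,Bob) unifies with P(Art,y) via {x/Art, y/Bob}.
print(unify(("P", "x", "Bob"), ("P", "Art", "y")))   # {'x': 'Art', 'y': 'Bob'}
# Figure 3b: P(x,x) and P(Art,Bob) cannot be unified.
print(unify(("P", "x", "x"), ("P", "Art", "Bob")))   # None
```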

    P(x,Bob) :- F(x,Bob)
    G(Art,z) :- P(Art,y), P(y,z)

    G(Art,z) :- F(Art,Bob), P(Bob,z)

    Whenever an atom on the left-hand side of one clause unifies with one on the right-hand side of another, a new clause can be deduced, as shown here. The left- and right-hand sides of the new clause are the unions of the left- and right-hand sides of the original clauses, with the unified expressions deleted and the unifying substitution applied to the remaining expressions. In this example, the expression P(x,Bob) on the left-hand side of the first clause unifies with the expression P(Art,y) on the right-hand side of the second clause. This allows us to apply resolution to these two clauses to produce a conclusion. Note that the occurrence of x in F(x,Bob) in the conclusion has been replaced by Art and that y in P(y,z) has been replaced by Bob.

    FIGURE 4. Resolution
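A single resolution step of the kind shown in Figure 4 can likewise be sketched directly. The Python below is our own illustration under simplifying assumptions: atoms are flat tuples (no function terms), lowercase terms are variables, and the two clauses are assumed to share no variables (standardized apart).

```python
# A sketch of one resolution step (Figure 4), with our own minimal unifier.
# A clause is (left_atoms, right_atoms); lowercase terms are variables.
# Assumption: the two clauses share no variables (standardized apart).

def is_var(t):
    return t[:1].islower()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b):
    if len(a) != len(b):
        return None
    s = {}
    for t1, t2 in zip(a, b):
        t1, t2 = walk(t1, s), walk(t2, s)
        if t1 == t2:
            continue
        if is_var(t1):
            s[t1] = t2
        elif is_var(t2):
            s[t2] = t1
        else:
            return None
    return s

def substitute(atom_, s):
    return tuple(walk(t, s) for t in atom_)

def resolve(c1, c2):
    # Find an atom on the left of c1 that unifies with one on the right of
    # c2; the new clause unions the remaining sides under the substitution.
    left1, right1 = c1
    left2, right2 = c2
    for p in left1:
        for q in right2:
            s = unify(p, q)
            if s is not None:
                new_left = [substitute(a, s) for a in left2] + \
                           [substitute(a, s) for a in left1 if a != p]
                new_right = [substitute(a, s) for a in right1] + \
                            [substitute(a, s) for a in right2 if a != q]
                return (new_left, new_right)
    return None

# Given: P(x,Bob) :- F(x,Bob)  and  G(Art,z) :- P(Art,y), P(y,z)
c1 = ([("P", "x", "Bob")], [("F", "x", "Bob")])
c2 = ([("G", "Art", "z")], [("P", "Art", "y"), ("P", "y", "z")])
print(resolve(c1, c2))
# Conclude: G(Art,z) :- F(Art,Bob), P(Bob,z)
```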

    so long as its premises are correct. It is also complete in that it can derive any logical implication from a given set of premises.

    In a resolution refutation we assume the negation of the goal we are trying to prove and attempt to derive the empty clause, that is, a clause with no conditions and no conclusions. For example, given the preceding kinship facts, we can use resolution refutation to prove that Art is Cap's grandparent. The trace of rule application shown below is an example of a proof of this conclusion. The labels on the right indicate the clauses resolved to produce the conclusion on that line.

    E1: :- G(Art,Cap)
    E2: :- P(Art,y), P(y,Cap)    E1 A3
    E3: :- F(Art,y), P(y,Cap)    E2 A1
    E4: :- P(Bob,Cap)            E3 B1
    E5: :- F(Bob,Cap)            E4 A1
    E6: :-                       E5 B5

    In this case the final result of the deduction is simply the determination that the goal is logically implied by the facts in the knowledge base. When a goal contains variables, however, we can get further information. For example, suppose that our goal is to determine whether or not there is a person of whom Art is a grandparent. This goal is false if and only if for every person Art is not his grandparent. Therefore, the negation of this goal is the clause :- G(Art,z). If we can find a binding for the variable z from which the empty clause can be derived, then we have proved the goal and, furthermore, found an object that makes it true. The upshot of this is that resolution can be used for computing answers other than "true" and "false."


    To see how this is done, consider the following resolution proof. In addition to the justification information, this proof contains information about the bindings of all variables, in particular the value Cap for the grandchild variable z.

    F1: :- G(Art,z)
    F2: :- P(Art,y), P(y,z)            F1 A3
    F3: :- F(Art,y), P(y,z)            F2 A1
    F4: :- P(Bob,z)     y=Bob          F3 B1
    F5: :- F(Bob,z)     y=Bob          F4 A1
    F6: :-              y=Bob z=Cap    F5 B5

    A particularly noteworthy feature of this approach to computation is that there can be more than one answer to a question of this sort. Other answers can be determined through other lines of reasoning. In this case, Coe is also one of Art's grandchildren, as demonstrated by the following proof:

    G1: :- G(Art,z)
    G2: :- P(Art,y), P(y,z)            G1 A3
    G3: :- F(Art,y), P(y,z)            G2 A1
    G4: :- P(Bud,z)     y=Bud          G3 B2
    G5: :- F(Bud,z)     y=Bud          G4 A1
    G6: :-              y=Bud z=Coe    G5 B6

    Another noteworthy feature is that we can get answers to other questions by using variables in different positions. For example, by using a variable as the first argument in a grandparent question and a constant as the second argument, we can compute the grandparents of the individual specified as the second argument:

    H1: :- G(x,Cap)
    H2: :- P(x,y), P(y,Cap)                H1 A3
    H3: :- F(x,y), P(y,Cap)                H2 A1
    H4: :- P(Bob,Cap)     x=Art y=Bob      H3 B1
    H5: :- F(Bob,Cap)     x=Art y=Bob      H4 A1
    H6: :-                x=Art y=Bob      H5 B5

    Of course, we can also use variables in all positions and compute all grandparent-grandchildren pairs. The derivation of one such pair would be the following:

    I1: :- G(x,z)
    I2: :- P(x,y), P(y,z)                  I1 A3
    I3: :- F(x,y), P(y,z)                  I2 A1
    I4: :- P(Bob,z)     x=Art y=Bob        I3 B1
    I5: :- F(Bob,z)     x=Art y=Bob        I4 A1
    I6: :-              x=Art y=Bob z=Cap  I5 B5

    Finally, note that nonatomic clauses can be resolved with other nonatomic clauses to produce new clauses that are more efficient to use than the original clauses.

    J1: GF(x,z) :- G(x,z), Male(x)
    J2: G(x,z) :- P(x,y), P(y,z)
    J3: F(x,y) :- P(x,y), Male(x)
    J4: GF(x,z) :- P(x,y), P(y,z), Male(x)    J1 J2
    J5: GF(x,z) :- F(x,y), P(y,z)             J4 J3

    The lesson to be learned from these examples is that deduction is a very flexible "interpreter" for logic programs. Deduction can be used to compute a single answer to a question, even when there are many answers; it can be used "backwards" to compute answers to other questions; and it can be used on actual logic programs to produce new logic programs that are simpler and/or run more efficiently. Finally, as depicted in Figure 5, the trace of a proof can be used as the basis for explanations of the system's conclusions.

    => :- G(Art,Cap)?
    Yes

    => Why?
    Z1: G(Art,Cap) :- because
    Z2: P(Art,Bob) :-
    Z3: P(Bob,Cap) :-
    A3: G(x,z) :- P(x,y), P(y,z)

    => Why Z2?
    Z2: P(Art,Bob) :- because
    B1: F(Art,Bob) :-
    A1: P(x,y) :- F(x,y)

    => Where A1?
    Z2: P(Art,Bob) :- because
    B1: F(Art,Bob) :-
    A1: P(x,y) :- F(x,y)
    Z3: P(Bob,Cap) :- because
    B5: F(Bob,Cap) :-
    A1: P(x,y) :- F(x,y)

    One of the key advantages of logic programming is that it enables a machine to explain its results and its actions. In this case, the conclusion that Art is the grandparent of Cap is explained by citing the facts that Art is the parent of Bob, that Bob is the parent of Cap, and that the parent of a parent is a grandparent. The conclusion that Art is Bob's parent is explained by the fact that Art is Bob's father and that fathers are parents. The final interaction shows that it is also possible for the machine to show which conclusions depend on which premises.

    FIGURE 5. Explanation

    CONTROL Automated reasoning with resolution is a combinatoric process. There are usually several inferences that can be drawn from an initial knowledge base; once an inference has been drawn, new inferential opportunities arise, and so on. In order to achieve efficient computation, it is important for one to choose the best possible opportunity for exploitation at each point in time. This section summarizes several techniques for controlling deduction.

    The most straightforward approach is to perform deductions in a fixed but arbitrary order. The PROLOG


    interpreter is a good example of this approach. Starting with a clause of the form :- q1, . . . , qn, PROLOG finds the first clause of the form q :- p1, . . . , pm, where q unifies with q1, and then resolves the two. This process is then applied recursively with the resulting subgoal until the empty clause is produced. If this recursion fails to produce the empty clause, the interpreter backs up, finds the next applicable clause, and tries again. This process repeats until no further clauses can be found. Note that every resolution involves one clause of the form :- q1, . . . , qn and that no resolution involves a conjunct other than the first.

    The following proof shows all of the deductions resulting from this procedure, not just the key steps in the successful proof. The nesting illustrates the subgoal-supergoal relationship among the various conclusions.

    K1:  :- G(Art,Coe)
    K2:    :- P(Art,y), P(y,Coe)
    K3:      :- M(Art,y), P(y,Coe)
    K4:      :- F(Art,y), P(y,Coe)
    K5:        :- P(Bob,Coe)
    K6:          :- M(Bob,Coe)
    K7:          :- F(Bob,Coe)
    K8:        :- P(Bud,Coe)
    K9:          :- M(Bud,Coe)
    K10:         :- F(Bud,Coe)
    K11:           :-
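The fixed-order, backtracking strategy just described can be captured in a short interpreter. The following Python is our own illustrative sketch of a PROLOG-like backward chainer over the kinship program A1-A5 and B1-B6 (all names such as KB, solve, and ask are ours, as is the tuple encoding of clauses); it tries clauses in order, always works on the first conjunct, and backtracks on failure.

```python
import itertools

# Knowledge base: (conclusion, [conditions]); lowercase terms are variables.
KB = [
    (("P", "x", "y"), [("F", "x", "y")]),                   # A1
    (("P", "x", "y"), [("M", "x", "y")]),                   # A2
    (("G", "x", "z"), [("P", "x", "y"), ("P", "y", "z")]),  # A3
    (("F", "Art", "Bob"), []),                              # B1
    (("F", "Art", "Bud"), []),                              # B2
    (("M", "Amy", "Bob"), []),                              # B3
    (("M", "Amy", "Bud"), []),                              # B4
    (("F", "Bob", "Cap"), []),                              # B5
    (("F", "Bud", "Coe"), []),                              # B6
]

counter = itertools.count()

def is_var(t):
    return t[:1].islower()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    if len(a) != len(b):
        return None
    s = dict(s)
    for t1, t2 in zip(a, b):
        t1, t2 = walk(t1, s), walk(t2, s)
        if t1 == t2:
            continue
        if is_var(t1):
            s[t1] = t2
        elif is_var(t2):
            s[t2] = t1
        else:
            return None
    return s

def rename(c):
    # Standardize apart: give each clause use fresh variable names.
    n = next(counter)
    def r(atom_):
        return tuple(t + "#" + str(n) if is_var(t) else t for t in atom_)
    head, body = c
    return r(head), [r(a) for a in body]

def solve(goals, s):
    # Depth-first, left-to-right backward chaining, like PROLOG.
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for c in KB:
        head, body = rename(c)
        s2 = unify(first, head, s)
        if s2 is not None:
            yield from solve(body + rest, s2)

def ask(goal):
    # Yield each proved instance of the goal, in derivation order.
    for s in solve([goal], {}):
        yield tuple(walk(t, s) for t in goal)

print(list(ask(("G", "Art", "z"))))
# [('G', 'Art', 'Cap'), ('G', 'Art', 'Coe')]
```

Because the interpreter enumerates bindings rather than a single result, it reproduces the behavior shown in the F and G traces above: both of Art's grandchildren are found, in the order determined by the clause ordering.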

    The disadvantage of this approach is potential inefficiency. If we pay no attention to how the clauses in a logic program are going to be used, we may end up writing them in a way that makes the derivation of desired conclusions computationally complex. There is a basic trade-off between cognitive cost for the programmer and computational cost for the machine. In light of this trade-off, many programmers compromise the methodological purity of logic programming to take details about the interpreter into account. In particular, most PROLOG programmers are careful to order their clauses with respect to each other and to order the conjuncts within each clause.

    As an example, consider the ordering of the conditions on the right-hand side of rule A3 (p. 935). In processing a query of the form G(Art,z), PROLOG enumerates Art's children and tries to find a child for each. If the conditions had been written in the opposite order, PROLOG would enumerate all parent-child pairs and check to see whether Art is a parent for each. Since the number of parent-child pairs is likely to be very much larger than the number of Art's children, inverting the order would be disastrous.

    Unfortunately, this ordering is not always best. For example, in using A3 to answer the query G(x,Coe), the situation is reversed. PROLOG would enumerate all parent-child pairs and check the child in each pair to see whether he or she is Coe's parent. For this query the opposite order would be computationally superior.

    To resolve this dilemma, we need an interpreter that can determine the order of its deductions in a situation-specific way. The MRS interpreter is a good example of this approach. MRS differs from PROLOG in that it provides a vocabulary for expressing facts about the process of problem solving and not just about the content. In addition to content clauses that describe the application area of a program, we can write control clauses in MRS that prescribe how those content clauses are to be used (see Figure 6).

    The basic cycle of the MRS interpreter is similar to the instruction fetch and execute cycle of a digital computer. The interpreter applies a PROLOG-like deductive procedure to the control clauses in a program to "fetch" the ideal deduction to perform on the content clauses. This deduction is then "executed," and the cycle repeats.

    In writing control clauses, we use a conceptualization different from that of the application area. The objects include expressions such as variables, constants, atoms, and clauses. There are also actions such as the act of resolving a clause p with a clause q to produce a new clause r (written R(p,q,r)), relations on expressions such as "variable" (written Var) and "constant" (written Const), and relations on actions such as the binary relation "before" (written Before).

    As an example of the control of deduction in MRS, consider once again the use of rule A3. The following control clauses state that it is better to work on a "parent of" goal in which one of the arguments is known than to work on a "parent of" goal in which both arguments are variable. Using these control clauses, MRS would automatically choose the correct conjunct to work on, independent of the order of conjuncts

    Before(R(p1,q,r1),R(p2,q,r2)) :- Better(p1,p2)
    Better(R1,R2) :-
    Better(r1,r2) :- Length(r1,m), Length(r2,n), m<n

    Automated reasoning with resolution is a nondeterministic process: it usually requires a search of multiple inference paths before a desired result can be proved. In some applications it is desirable to eliminate portions of the search space and to order the exploration of the remaining portions. Different logic programming systems provide different ways for the user to exercise this sort of control. PROLOG allows for implicit control via clause and conjunct ordering and explicit control via syntactic features like the ! operator. MRS emphasizes explicit control by providing a general metalevel control language. The first clause in this example states that any resolution involving clause p1 should be done before any resolution involving clause p2 if p1 is "better than" p2. The second clause states that clause R1 is better than clause R2. The third clause states the more general rule that short clauses are better than long clauses.

    FIGURE 6. Control


    specified by the "grandparent of" rule and of the position of the variable in the query:

    L1: Before(R(p1,q1,r1),R(p2,q2,r2)) :- Better(p1,p2)

    L2: Better(P(u,v),P(x,y)) :- Const(u), Var(v), Var(x), Var(y)

    L3: Better(P(u,v),P(x,y)) :- Var(u), Const(v), Var(x), Var(y)

    The use of control clauses like these can substantially reduce the run time of a logic program. Unfortunately, the overhead of interpreting the control clauses offsets this improvement: For every step of a content-level deduction, the interpreter must complete an entire control-level deduction. In many applications the improved performance is clearly worth the overhead. In others, the overhead swamps the gains, and a simpler interpreter like PROLOG's is preferable.
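The preference that clauses L2 and L3 express can be approximated procedurally. The Python sketch below is our own stand-in for the metalevel deduction MRS actually performs (the scoring function is an assumption, not MRS's mechanism): it scores each conjunct by how many of its arguments are still variables and works on the most instantiated one first.

```python
# A sketch of the ordering preference that clauses L2 and L3 express:
# prefer a conjunct with a known (constant) argument over one whose
# arguments are all variables. The numeric score is our own illustrative
# stand-in for the metalevel deduction MRS actually performs.

def is_var(t):
    return t[:1].islower()

def unbound_args(goal):
    # Lower score = more instantiated = better to work on first.
    return sum(1 for t in goal[1:] if is_var(t))

def choose_conjunct(goals):
    return min(goals, key=unbound_args)

# For the query G(Art,z), rule A3 yields conjuncts P(Art,y) and P(y,z);
# the first is preferred because one of its arguments is known.
print(choose_conjunct([("P", "Art", "y"), ("P", "y", "z")]))   # ('P', 'Art', 'y')
# For G(x,Coe) the preference reverses.
print(choose_conjunct([("P", "x", "y"), ("P", "y", "Coe")]))   # ('P', 'y', 'Coe')
```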

    EXAMPLE Recent advances in design methodology and fabrication technology have made digital hardware of unprecedented complexity possible. The disadvantage of this complexity is that it substantially increases the difficulty of reasoning about designs. Logic programming is a powerful way of writing programs to assist designers in simulating, diagnosing, and generating tests for their designs.

    The "full adder" is a good example of a digital circuit. A full adder is essentially a one-bit adder with carry in and carry out, and it is usually used as one of n elements in an n-bit adder.

    A graphical representation of the structure of the full adder F1 is shown in Figure 7. The structure of this circuit is described in clausal form below. Each part is designated by an object constant (e.g., X1). The structural type of each part is specified using type relations (e.g., Xorg); and the inputs and outputs (i.e., the ports) of each device are named using the functions In and Out. Connections are made between ports of devices.

    M1: Xorg(X1) :-
    M2: Xorg(X2) :-
    M3: Andg(A1) :-
    M4: Andg(A2) :-
    M5: Org(O1) :-

    M6: Conn(In(1,F1), In(1,X1)) :-
    M7: Conn(In(2,F1), In(2,X1)) :-
    M8: Conn(In(1,F1), In(1,A1)) :-
    M9: Conn(In(2,F1), In(2,A1)) :-
    M10: Conn(In(3,F1), In(2,X2)) :-
    M11: Conn(In(3,F1), In(1,A2)) :-
    M12: Conn(Out(1,X1), In(1,X2)) :-
    M13: Conn(Out(1,X1), In(2,A2)) :-
    M14: Conn(Out(1,A2), In(1,O1)) :-
    M15: Conn(Out(1,A1), In(2,O1)) :-
    M16: Conn(Out(1,X2), Out(1,F1)) :-
    M17: Conn(Out(1,O1), Out(2,F1)) :-

The full adder consists of two “xor” gates (X1 and X2), two “and” gates (A1 and A2), and an “or” gate (O1). The solid lines between the gates depict their interconnections. The device as a whole has three inputs and two outputs.

FIGURE 7. The Full Adder Digital Circuit F1

The behavior of each of the components can also be described in clausal form. A proposition of the form I(n,y,z) states that the value on the nth input of device y is z. A proposition of the form O(n,y,z) states that the value on the nth output of device y is z. The first six clauses characterize the behavior of “and,” “or,” and “xor” gates, and the final three characterize the behavior of connections.

N1: O(1,x,1) :- Andg(x), I(1,x,1), I(2,x,1)
N2: O(1,x,0) :- Andg(x), I(n,x,0)
N3: O(1,x,1) :- Org(x), I(n,x,1)
N4: O(1,x,0) :- Org(x), I(1,x,0), I(2,x,0)
N5: O(1,x,1) :- Xorg(x), I(1,x,y), I(2,x,z), y#z
N6: O(1,x,0) :- Xorg(x), I(1,x,z), I(2,x,z)
N7: I(n,y,z) :- Conn(In(m,x),In(n,y)), I(m,x,z)
N8: I(n,y,z) :- Conn(Out(m,x),In(n,y)), O(m,x,z)
N9: O(n,y,z) :- Conn(Out(m,x),Out(n,y)), O(m,x,z)
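Clauses N1-N6 are just the truth tables of the three gate types. A Python sketch of the same behaviors (the function names are ours):

```python
def andg(in1, in2):
    # N1/N2: output is 1 iff both inputs are 1.
    return 1 if (in1 == 1 and in2 == 1) else 0

def org(in1, in2):
    # N3/N4: output is 1 iff at least one input is 1.
    return 1 if (in1 == 1 or in2 == 1) else 0

def xorg(in1, in2):
    # N5/N6: output is 1 iff the inputs differ.
    return 1 if in1 != in2 else 0
```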

The advantage of describing the circuit in this form is that we can use the description to reason about the circuit in a variety of ways. For example, we can simulate the behavior of the circuit by adding information about the values of the inputs to the knowledge base and proving facts about the outputs:

O1:  I(1,F1,1) :-
O2:  I(2,F1,0) :-
O3:  I(3,F1,1) :-
O4:  I(1,X1,1) :-
O5:  I(2,X1,0) :-
O6:  O(1,X1,1) :- I(1,X1,y), I(2,X1,z), y#z
O7:  O(1,X1,1) :- I(2,X1,z), 1#z
O8:  O(1,X1,1) :- 1#0
O9:  O(1,X1,1) :-
O10: I(1,X2,1) :-
O11: I(2,X2,1) :-
O12: O(1,X2,0) :- I(1,X2,z), I(2,X2,z)
O13: O(1,X2,0) :- I(2,X2,1)
O14: O(1,X2,0) :-
O15: O(1,F1,z) :- O(1,X2,z)
O16: O(1,F1,0) :-
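The deduction O1-O16 can be cross-checked by direct functional simulation. A Python sketch under the structure of Figure 7 (the function name `simulate_f1` is ours):

```python
def simulate_f1(in1, in2, in3):
    """Simulate the full adder F1 of Figure 7; return (out1, out2)."""
    x1 = in1 ^ in2   # xor gate X1
    x2 = x1 ^ in3    # xor gate X2 drives Out(1,F1), the sum
    a1 = in1 & in2   # and gate A1
    a2 = x1 & in3    # and gate A2
    o1 = a2 | a1     # or gate O1 drives Out(2,F1), the carry
    return x2, o1

# Inputs of clauses O1-O3: I(1,F1,1), I(2,F1,0), I(3,F1,1).
out1, out2 = simulate_f1(1, 0, 1)
```

With the inputs of clauses O1-O3, the first output is 0, as clause O16 concludes (and the carry output is 1).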

We can also diagnose faults in an instance of the circuit. In this case, let us suppose that the first output of the circuit is 1 instead of 0. Something must be wrong. Either a gate is not working correctly or a connection is bad. For simplicity, assume that all connections are guaranteed to be OK. In order to avoid contradictions, the type statements about the components must be removed from the knowledge base. By starting with a statement of the symptom (the negation of the expected behavior), we can deduce a set of suspect components. Recall that a statement of the form :- p, q is equivalent to saying that either p is false or q is false or both. Therefore, P17 asserts that either X2 is not acting as an “xor” gate or X1 is not acting as an “xor” gate, that is, that at least one of the two is broken.

P1:  :- O(1,F1,0)
P2:  :- Conn(Out(1,X2),Out(1,F1)), O(1,X2,0)
P3:  :- O(1,X2,0)
P4:  :- Xorg(X2), I(1,X2,z), I(2,X2,z)
P5:  :- Xorg(X2), Conn(Out(1,X1),In(1,X2)), O(1,X1,z), I(2,X2,z)
P6:  :- Xorg(X2), O(1,X1,z), I(2,X2,z)
P7:  :- Xorg(X2), Xorg(X1), I(1,X1,u), I(2,X1,v), u#v, I(2,X2,1)
P8:  :- Xorg(X2), Xorg(X1), Conn(In(1,F1),In(1,X1)), I(1,F1,u), I(2,X1,v), u#v, I(2,X2,1)
P9:  :- Xorg(X2), Xorg(X1), I(1,F1,u), I(2,X1,v), u#v, I(2,X2,1)
P10: :- Xorg(X2), Xorg(X1), I(2,X1,v), 1#v, I(2,X2,1)
P11: :- Xorg(X2), Xorg(X1), Conn(In(2,F1),In(2,X1)), I(2,F1,v), 1#v, I(2,X2,1)
P12: :- Xorg(X2), Xorg(X1), I(2,F1,v), 1#v, I(2,X2,1)
P13: :- Xorg(X2), Xorg(X1), 1#0, I(2,X2,1)
P14: :- Xorg(X2), Xorg(X1), I(2,X2,1)
P15: :- Xorg(X2), Xorg(X1), Conn(In(3,F1),In(2,X2)), I(3,F1,1)
P16: :- Xorg(X2), Xorg(X1), I(3,F1,1)
P17: :- Xorg(X2), Xorg(X1)
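The same suspect set can be obtained by brute force: hypothesize each gate broken in turn, simulate, and keep the gates whose failure reproduces the symptom. This Python sketch is a stand-in for the resolution proof above, not the article's procedure (all names are ours):

```python
def simulate(in1, in2, in3, broken=None):
    # Simulate F1, forcing the output of the (hypothetically) broken
    # gate to the complement of its correct value.
    def out(gate, value):
        return 1 - value if gate == broken else value
    x1 = out("X1", in1 ^ in2)
    a1 = out("A1", in1 & in2)
    x2 = out("X2", x1 ^ in3)   # first output (sum)
    a2 = out("A2", x1 & in3)
    o1 = out("O1", a2 | a1)    # second output (carry)
    return x2, o1

# Symptom: with inputs 1, 0, 1 the first output is observed to be 1.
observed_out1 = 1
suspects = [g for g in ["X1", "X2", "A1", "A2", "O1"]
            if simulate(1, 0, 1, broken=g)[0] == observed_out1]
```

Only a broken X1 or a broken X2 can account for the faulty first output, matching P17.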

In diagnosing digital hardware, it is common to make the assumption that a device has at most one malfunctioning component at any one time. The clauses shown below provide a simple but verbose way of encoding this assumption. Clauses Q1-Q4 together state that either X1 is a working “xor” gate or the other devices are OK. Clauses Q1 and Q5-Q7 state the same for X2, and so forth. The single fault assumption can be stated more succinctly as a single axiom, but the encoding is somewhat more complex.

Q1:  Xorg(X1), Xorg(X2) :-
Q2:  Xorg(X1), Andg(A1) :-
Q3:  Xorg(X1), Andg(A2) :-
Q4:  Xorg(X1), Org(O1) :-
Q5:  Xorg(X2), Andg(A1) :-
Q6:  Xorg(X2), Andg(A2) :-
Q7:  Xorg(X2), Org(O1) :-
Q8:  Andg(A1), Andg(A2) :-
Q9:  Andg(A1), Org(O1) :-
Q10: Andg(A2), Org(O1) :-

Using the single fault assumption and the fact that a fault is guaranteed to be in some subset of parts, we can exonerate the parts not in that subset. For example, if we know that either X1 or X2 is broken, as in the example above, we can prove that components A1, A2, and O1 are OK. The following proof makes the case:

R1: :- Xorg(X1), Xorg(X2)
R2: Andg(A1) :- Xorg(X2)    R1, Q2
R3: Andg(A1) :-             R2, Q5
R4: Andg(A2) :- Xorg(X2)    R1, Q3
R5: Andg(A2) :-             R4, Q6
R6: Org(O1) :- Xorg(X2)     R1, Q4
R7: Org(O1) :-              R6, Q7
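Under the single fault assumption, exoneration is set subtraction: every part outside the suspect subset must be working. A Python sketch of the reasoning carried out in R1-R7 (names are ours):

```python
parts = ["X1", "X2", "A1", "A2", "O1"]

# Single-fault assumption (clauses Q1-Q10): at most one part is broken.
# If the fault is known to lie in some subset of parts, every part
# outside that subset is exonerated, as in proof steps R1-R7.
def exonerated(suspects):
    return [p for p in parts if p not in suspects]

ok = exonerated(["X1", "X2"])  # A1, A2, and O1 are proved OK
```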

Finally, we can devise tests to discriminate possible suspects. Starting with a behavioral rule for one of the suspect components, we can derive a behavioral expectation for the overall device that implicates a subset of the suspects. For example, clause S18 below states that, with the same inputs as in the previous example, a working X1 implies that the second output is 1; so if that output is observed not to be 1, then Xorg(X1) is false.

S1:  O(1,X1,1) :- Xorg(X1), I(1,X1,y), I(2,X1,z), y#z
S2:  O(1,X1,1) :- Xorg(X1), I(1,X1,1), I(2,X1,0)
S3:  O(1,X1,1) :- Xorg(X1), Conn(In(1,F1),In(1,X1)), I(1,F1,1), I(2,X1,0)
S4:  O(1,X1,1) :- Xorg(X1), I(1,F1,1), I(2,X1,0)
S5:  O(1,X1,1) :- Xorg(X1), I(1,F1,1), Conn(In(2,F1),In(2,X1)), I(2,F1,0)
S6:  O(1,X1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0)
S7:  I(2,A2,1) :- Xorg(X1), Conn(Out(1,X1),In(2,A2)), I(1,F1,1), I(2,F1,0)
S8:  I(2,A2,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0)
S9:  O(1,A2,1) :- Xorg(X1), Andg(A2), I(1,F1,1), I(2,F1,0), I(1,A2,1)
S10: O(1,A2,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(1,A2,1)
S11: O(1,A2,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), Conn(In(3,F1),In(1,A2)), I(3,F1,1)
S12: O(1,A2,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1)
S13: I(1,O1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1), Conn(Out(1,A2),In(1,O1))
S14: I(1,O1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1)
S15: O(1,O1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1), Org(O1)
S16: O(1,O1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1)
S17: O(2,F1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1), Conn(Out(1,O1),Out(2,F1))
S18: O(2,F1,1) :- Xorg(X1), I(1,F1,1), I(2,F1,0), I(3,F1,1)
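A discriminating test can also be found exhaustively: simulate the circuit under each suspect fault and keep the input vectors on which the predicted outputs differ. This Python sketch is a brute-force stand-in for the deductive derivation of such tests (all names are ours):

```python
from itertools import product

def simulate(in1, in2, in3, broken=None):
    # Simulate F1, forcing the output of the (hypothetically) broken
    # gate to the complement of its correct value.
    def out(gate, value):
        return 1 - value if gate == broken else value
    x1 = out("X1", in1 ^ in2)
    a1 = out("A1", in1 & in2)
    x2 = out("X2", x1 ^ in3)   # first output (sum)
    a2 = out("A2", x1 & in3)
    o1 = out("O1", a2 | a1)    # second output (carry)
    return x2, o1

# An input vector discriminates suspects X1 and X2 if some observable
# output differs depending on which of the two is broken.
tests = [v for v in product([0, 1], repeat=3)
         if simulate(*v, broken="X1") != simulate(*v, broken="X2")]
```

The input vector (1, 0, 1) used in the example is among the discriminating tests: a broken X1 drives the carry output to 0, while a broken X2 leaves it 1, which is exactly the observation clause S18 lets us exploit.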

The use of logic programming in this application area has several important advantages. The most obvious is that a single design description can be used for multiple purposes. We can simulate a circuit, diagnose it, and generate tests all from one description. A second advantage is that the expressive power of the language allows higher level design descriptions to be written and used for these purposes. By working with more abstract design descriptions, these tasks can be performed far more efficiently than at the gate level. Finally, the flexibility of the language and deductive techniques allows these tasks to be performed even with incomplete information about the structure or behavior of a design.

CONCLUSION

The practicality of logic programming as a software-engineering methodology depends to a large degree on the underlying technology. Unfortunately, there are several areas where technological progress is currently needed.

One of the key limitations of most logic programming systems is that their deductive methods are inadequate. Resolution is complete in that it can prove any conclusion logically implied by its knowledge base. However, a good logic programming system should be able to draw conclusions from uncertain data, reason analogically, and generalize its knowledge appropriately if it is to be really effective.

Another problem with current logic programming systems is the inefficiency caused by the absence of user-specified control. The development of smarter interpreters and compilers should relieve the logic programmer of much of the burden of control.

Finally, there is a need for considerable improvement in the usability of logic programming systems. This situation could be remedied by the design of more perspicuous languages for expressing knowledge and by the development of better tools for manipulating and debugging logic programs.

Although the technology of logic programming is already adequate for supporting the methodology in a wide variety of situations, advances in the functionality, efficiency, and usability of logic programming systems should substantially broaden the range of applicability. Although we do not expect that logic programming will completely supplant traditional software engineering, its advantages and range of applicability suggest that it may become the dominant programming methodology in the next century.

FURTHER READINGS

1. Clocksin, W.F., and Mellish, C.S. Programming in PROLOG. Springer-Verlag, New York, 1981.
2. Genesereth, M.R. Partial programs. HPP-84-1, Heuristic Programming Project, Stanford University, Calif., 1984.
3. Genesereth, M.R. The role of design descriptions in automated diagnosis. Artif. Intell. 24, 1-3 (Dec. 1984), 411-436.
4. Genesereth, M.R., Greiner, R., and Smith, D.E. MRS: A meta-level representation system. HPP-83-27, Heuristic Programming Project, Stanford University, Calif., 1983.
5. Hayes, P. Computation and deduction. In Proceedings of the 2nd MFCS Symposium. Czechoslovak Academy of Sciences, 1973, pp. 105-118.
6. Kowalski, R. Algorithm = logic + control. Commun. ACM 22, 7 (July 1979), 424-436.
7. Kowalski, R. Logic for Problem Solving. North-Holland, Amsterdam, 1979.
8. McCarthy, J. Programs with common sense. In Semantic Information Processing, M. Minsky, Ed. MIT Press, Cambridge, Mass., 1968, pp. 403-410.
9. Moran, T. Efficient PROLOG pushes AI into wider market. Mini-Micro Syst. (Feb. 1985).
10. Smith, D.E., and Genesereth, M.R. Ordering conjunctive queries. Artif. Intell. 26, 2 (May 1985), 171-215.

CR Categories and Subject Descriptors: D.2.m [Software Engineering]: Miscellaneous - rapid prototyping; D.3.2 [Programming Languages]: Language Classifications - very high-level languages; I.2.3 [Artificial Intelligence]: Deduction and Theorem Proving - answer/reason extraction, deduction, logic programming, metatheory

General Terms: Languages
Additional Key Words and Phrases: MRS, PROLOG

Authors’ Present Address: Michael R. Genesereth and Matthew L. Ginsberg, Computer Science Dept., Stanford University, Stanford, CA 94305.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
