error scheme and the computational effort comparisons of the paper suggest that these algorithms have some inherent advantage over the extended Kalman filters derived by augmenting the state variable of innovations models with the unknown parameter vector, as in Ljung [3]. Further case studies could confirm or dispel such a conclusion.

In a later paper, the RPE schemes of this paper (including the RML2 algorithm) are modified so as to avoid the stability test at each iteration as a necessary step to ensure convergence. The simulation studies of this later paper further support the approach developed here.

REFERENCES

[1] D. G. Orlhac, M. Athans, J. Speyer, and P. K. Houpt, "Dynamic stochastic control of freeway corridor systems: Vol. IV-Estimation of traffic variables via extended Kalman filter methods," Rep. ESL-R-611, MIT, Cambridge, MA, Sept. 1975.
[2] L. W. Nelson and E. Stear, "The simultaneous on-line estimation of parameters and states in linear systems," IEEE Trans. Automatic Control, vol. AC-21, pp. 94-98, Feb. 1976.
[3] L. Ljung, "The extended Kalman filter as a parameter estimator for linear systems," Rep. LITH-ISY-I-0154, Dept. of Elec. Eng., Linköping University, Sweden, May 1977.
[4] L. Ljung, "Convergence of an adaptive filter algorithm," Int. J. Control, vol. 27, no. 5, pp. 673-693, May 1978.
[5] G. Ledwich and J. B. Moore, "Multivariable self-tuning filters," presented at the 4th Conf. on Differential Games and Control Theory, July 1976. Also "Multivariable adaptive parameter and state estimators with convergence analysis," J. Austr. Math. Soc., to be published.
[6] T. Söderström, "An on-line algorithm for approximate maximum likelihood identification of linear dynamic systems," Rep. 7308, Division of Automatic Control, Lund Institute of Technology, Sweden, Mar. 1973.
[7] T. Söderström, L. Ljung, and I. Gustavsson, "A comparative study of recursive identification methods," Rep. 7427, Dept. of Automatic Control, Lund Institute of Technology, Sweden, Dec. 1974.
[8] L. Ljung, "Prediction error identification methods," Rep. LITH-ISY-I-0139, Dept. of Elec. Eng., Linköping University, Sweden, 1977.
[9] B. D. O. Anderson, J. B. Moore, and R. M. Hawkes, "Model approximations via prediction error identification," Automatica, vol. 4, no. 6, 1978.
[10] L. Ljung, "Analysis of recursive stochastic algorithms," IEEE Trans. Automatic Control, vol. AC-22, pp. 551-575, Aug. 1977.
[11] L. Ljung, "Theorems for the asymptotic analysis of recursive stochastic algorithms," Rep. 7522, Dept. of Automatic Control, Lund Institute of Technology, Sweden, Dec. 1975.
[12] N. K. Gupta and R. K. Mehra, "Computational aspects of maximum-likelihood estimation and reduction in sensitivity function calculations," IEEE Trans. Automatic Control, vol. AC-19, pp. 774-783, Dec. 1974.
[13] G. C. Goodwin and R. L. Payne, Dynamic System Identification: Experiment Design and Data Analysis. New York: Academic, 1977.
[14] B. D. O. Anderson and J. B. Moore, Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[15] M. Morf and T. Kailath, "Square-root algorithms for least-squares estimation," IEEE Trans. Automatic Control, vol. AC-20, pp. 487-497, Aug. 1975.
[16] R. Kumar and J. B. Moore, "Inverse state and decorrelated state stochastic approximation," submitted for publication.
[17] R. L. Kashyap and A. R. Rao, Dynamic Stochastic Models from Empirical Data. New York: Academic, 1976.

Artificial Intelligence Programming Languages for Computer Aided Manufacturing

CHUCK RIEGER, JONATHAN ROSENBERG, AND HANAN SAMET, MEMBER, IEEE

Abstract-Eight Artificial Intelligence programming languages (SAIL, LISP, MICROPLANNER, CONNIVER, MLISP, POP-2, AL, and QLISP) are presented and surveyed, with examples of their use in an automated shop environment. Control structures are compared, and distinctive features of each language are highlighted. A simple programming task is used to illustrate programs in SAIL, LISP, MICROPLANNER, and CONNIVER. The report assumes reader knowledge of programming concepts, but not necessarily of the languages surveyed.

Manuscript received October 20, 1977; revised November 3, 1978. This work was supported in part by the National Bureau of Standards, in part by the Office of Naval Research, and in part by the National Aeronautics and Space Administration.

C. Rieger and H. Samet are with the Department of Computer Science, University of Maryland, College Park, MD 20742.

    J. Rosenberg is with the Department of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213.

I. INTRODUCTION

EARLY INTEREST in computers and computing

tended to revolve around the high speeds at which numerical calculations could be performed for such tasks as discrete analysis, simulation, payroll handling, and the like. These first applications were supported by such languages as FORTRAN, ALGOL, and COBOL. Today, numerical tasks are still prevalent, and there is a host of new languages. However, there has been a growing interest in the application of computers to computations which are less numeric and more symbolic in nature, in particular in applications in which the key problems are not the speed of multiply and divide hardware, but rather the forms of data storage and control that are needed to carry out complex decision making and planning tasks.

    0018-9472/79/0400-0205$00.75 © 1979 IEEE


In this paper we attempt to develop a perspective of both the nature of symbolic computing and of several of the current major symbolic computing languages: SAIL [28], LISP [19], MICROPLANNER [36], CONNIVER [20], MLISP [34], AL [15], POP-2 [6], and QLISP [31], all of which have emerged historically as derivatives of the branch of computer science known as Artificial Intelligence, the branch most closely concerned with symbolic computing. Although it is our purpose to provide insight into general symbolic computing issues, we focus primarily on symbolic computing and decision making in the context of computer-aided manufacturing environments.

The traditional approach to computing is one of a batch

environment, in which a sequence of large jobs is submitted to a mainframe computer. In this environment, the computer schedules and runs the jobs, then collects the results on mass storage for later routing to the scientist or administrator. In symbolic computing, however, where planning and decision making play primary roles, there is typically a greater demand for real-time interactive systems. This is especially true in a computer-aided manufacturing environment, for example, where "results" are not so much tables of numbers as they are commands to tools, job steps, and interactions with human operators that must occur in real time, in the face of a constantly changing environment.

Although the development of interactive computing was

first motivated by a desire for more efficient utilization of system capabilities, the shift from batch to interactive real-time computing has also stimulated research in areas in which computers serve not only as numerical engines, but also in administrative and planning capacities. In these higher-level applications, the computer is no longer dedicated to the local control of a single tool, but rather to the global management of the activities of an entire shop of tools. In this context, "tool" no longer necessarily has its traditional meaning, since each controlled "tool" might in fact be another cooperating computer, with its own internal reasoning capacities.

When "intelligent" computers interact at this level, there

is a need for some type of symbolic communication protocol by which exchanges involving error conditions, scheduling, routing, and so forth can occur. Because of this need, one of the main areas of research in symbolic computing languages concerns the form in which symbolic data are represented in shared data bases and the form of the access languages for such data bases. As will become apparent, a major issue of any symbolic computing language is the style in which symbols can be combined and organized to represent planning and decision making knowledge.

Equally important to the style of data representation in symbolic computing is the style of run-time control offered by the programming language. In complex decision making environments, for example, there is the ever-present need to explore alternative solutions and to recover from prior bad decisions and unforeseeable mishaps during the execution of a plan. Equally important is the generation and internal simulation and certification of plans prior to their application. This frequently requires control mechanisms that allow the program to back up or revert to a prior state of

reasoning, or that allow the program to switch conveniently among several alternatives as it learns more about the feasibility or adequacy of each. Such activities, in turn, call for schemes in which the data base of planning knowledge can be organized contextually, with parts being masked and retrieved as a function of the current phase of the planning activity. Such operations are clearly beyond the scope of most traditional, numerically oriented computing languages.

An attendant problem of symbolic reasoning is that of

generating new knowledge from existing knowledge via either formal or plausible inference systems. Thus symbolic languages frequently support inference mechanisms that make it convenient to express rules of the form "Whenever A, B, and C occur, infer that D has also occurred." Although this type of facility is fundamentally a software emulation of associative hardware, much research is currently devoted to the development of systems that can react to complex sets of conditions and that can generate elaborate inferences, sometimes affecting the control of the system. Such associative processes impose additional demands on the structure of the symbolic data base.

The languages in our survey fall into distinct

lineages. SAIL is primarily an outgrowth of ALGOL, which itself was motivated by some deficiencies of control in FORTRAN. While SAIL has a powerful numerical facility, it extends the traditional control available in FORTRAN and ALGOL, provides more extensive and powerful I/O capabilities, and facilitates user interface with the operating system. As with the other symbolic languages, it also supports an associative data base and inference system.

LISP, MICROPLANNER, and CONNIVER comprise the second family of symbolic languages. LISP, often regarded as the "assembly language of AI," is rooted in recursive function theory and has the somewhat unique feature that program and data are structurally indistinguishable. This permits control environments in which one program can construct, then execute and even debug another program. MICROPLANNER builds more control and data base power into LISP, providing several powerful facilities for plan generation and reasoning about alternatives in the presence of a context-sensitive data base of "assertions" and "theorems." CONNIVER refines several of the ideas introduced by MICROPLANNER and solves a number of the deficiencies of MICROPLANNER's backup and search mechanisms.

To round out the survey, we also briefly describe four

other symbolic languages: MLISP and POP-2 (ALGOL-like variants of LISP), QLISP (an analog of MICROPLANNER with a somewhat richer data structure facility), and AL (a SAIL-based manipulator task specification language).

Our survey contains a short description of each of the

major languages, illustrating each with a common example typical of tasks in a computer-aided manufacturing environment. We also briefly discuss the standardization status of the languages. The conclusion reviews the desirable features and suggests what the characteristics of an "ideal" computer-aided manufacturing language might be.


II. SAIL

A. Introduction

SAIL has its origins in a merger of LEAP [13], an

associative language, and a version of ALGOL 60 [23]. Therefore, unlike most of the other artificial intelligence languages, it is not LISP based. Instead, it is a general purpose compiled language with an extensive run-time library of functions. As befits its ALGOL origins, SAIL has block structure and explicitly typed, statically scoped variables. The data types available include INTEGER, REAL, STRINGs of arbitrary length, structures, pointers, LISTs, SETs, ITEMs, and aggregates of the previous (i.e., ARRAYs). Some of the more important features of SAIL are

discussed separately below. These include the associative data-base facility, the capability for using SAIL as a host language in a CODASYL [9] data-base management system, the control structures, and the system-building facilities. Finally, a summary is presented of current standardization efforts.

B. Associative Data Base

SAIL contains an associative data base facility known as

LEAP which is used for symbolic computations. This enables the storage and retrieval of information based on partial specification of the data. Associative data is stored in the form of associations, which are ordered three-tuples of ITEMS, denoted as TRIPLES. Examples of TRIPLES are:

FASTEN ⊗ NAIL ≡ HAMMER;
FASTEN ⊗ SCREW ≡ SCREWDRIVER;
FASTEN ⊗ BOLT ≡ PLIER;

Associations may be conceptualized as representing a relation of the form

attribute ⊗ object ≡ value

    or

attribute (object) = value.

Most programming languages (e.g., LISP) provide the following associative-like mechanism:

given: attribute, object
find: value.
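In LISP, for instance, such a one-directional lookup is typically made against an atom's property list; a minimal sketch (the FASTEN property name here is illustrative only):

    (PUT 'NAIL 'FASTEN 'HAMMER)    ; record that the FASTEN attribute of the object NAIL is HAMMER
    (GET 'NAIL 'FASTEN)            ; given attribute and object, look up the value; returns HAMMER

Going in the other direction (e.g., "which objects does a hammer fasten?") requires an explicit search over every atom, which is exactly the kind of query an associative store can answer directly.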

However, SAIL enables the programmer to specify any of the components of the association and then have the LEAP interpreter search the associative store for all triples which have the same items in the specified positions. In fact, there are eight possibilities of such queries, although only a few of them are actually used. Of course, since SAIL runs on a nonassociative processor, some of the queries are more efficient than others. For example, the following may be used to retrieve all items which can be fastened by a hammer (e.g., nails, thumbtacks):

FASTEN ⊗ X ≡ HAMMER.

An ITEM is a constant and is similar to a LISP atom. Items have names and may also be typed so that data can be associated with them. An item may be declared or created during execution from a storage pool of items by use of the function NEW. For example,

    REAL ITEM VISE;

declares VISE to be an item which may have a datum of type real associated with it. The datum associated with an item is obtained by use of the function DATUM. Thus DATUM(VISE) might be interpreted as the capacity of the vise.

In order to deal with items, the user has the capability of storing them in variables (ITEMVARS), SETS, LISTS, and associations. The distinction between SETS and LISTS is that an explicit order is associated with the latter, whereas there is no explicit order associated with the former. In addition, an item may occur more than once in a list.

Associations are ordered three-tuples of items, and may themselves be considered as items and, therefore, participate in other associations. Triples are added to the associative store by use of a MAKE statement and erased from the associative store by use of an ERASE statement. For example, the following code could be used to detach assembly1 from assembly2 and attach it to assembly3:

ERASE ATTACHED ⊗ ASSEMBLY1 ≡ ASSEMBLY2;
MAKE ATTACHED ⊗ ASSEMBLY1 ≡ ASSEMBLY3;

The motivation for using an associative store is a flexible search and retrieval mechanism. Binding Booleans and FOREACH statements are two methods of accomplishing these goals.

The Binding Boolean expression searches the associative

store for a specified triple and returns TRUE if the triple is found and FALSE otherwise. The aim of the search is to find an association which meets the constraints imposed by the specified triple. If some of the components of the triple are unknown (such components are preceded by the special item BIND), then a successful search will result in the binding of the designated component. For example,

IF FASTEN ⊗ BIND OBJECT ≡ PLIER THEN PUT OBJECT IN PLIER!SET;

In this case the store is searched for an object that can be fastened by a PLIER, and if such an object is found, it is placed in the set PLIER!SET. Note the use of the item variable OBJECT in the association. A successful search will result in this variable being bound.

The FOREACH statement is the heart of LEAP. It is similar

to the FOR statement of ALGOL in that the body of the statement is executed once for each binding of the control variable. For example,

FOREACH X | PART ⊗ B747 ≡ X AND DATUM(X) < 3
    DO PUT X IN B747!ORDER!SET;

In this case, assuming that the datum associated with each part denotes the quantity on hand, the associative store is searched for all parts of a B747 of which there are less than three on hand. These parts are placed in the set B747!ORDER!SET.


    C. Data Management Facility

Unlike other artificial intelligence languages, SAIL has the capability of being used with an existing data-base management system, DBMS-10 [10], to handle large data bases stored on external storage. An interface exists [32] which allows SAIL to be used as the data manipulation language in a CODASYL-based data base management system. SAIL is relatively unique in this respect in that COBOL [8] has been almost exclusively used as the data manipulation language (DML) of such systems. This situation is not surprising since examination of the data description facility of the CODASYL report reveals a very strong similarity to the data division of COBOL. Nevertheless, there have been some attempts to use FORTRAN ([26] and [35]).

Ideally, a data manipulation language should include the following features. First, a full procedure capability which allows parameter passing, dynamic storage allocation, and recursion. Second, processing of Boolean requests should not be difficult. In a COBOL-based system, this task is rather cumbersome as pointed out by [25]. In order to avoid currency problems raised by partial satisfaction of Boolean requests (the backtracking problem [37]), the user must build collections of pointers to related records. Third, there should be a capability for building an in-core data base so that operations such as set UNION and set INTERSECTION can be performed without the overhead of accessing extended storage more than once for any record.

SAIL has a mechanism, LEAP, for building associative data bases. Currently, this only works for internal memory due to implementation decisions. SAIL also has a record structure capability which enables the user to build an in-core data base. In a COBOL-based data base management system, whenever the user obtains an instance of a record type from the data base (i.e., he locates it via a FIND and fetches it via a GET), he has no convenient way of keeping it in temporary memory while obtaining another instance of this record type. Of course, he can allocate temporary storage for the various fields; however, this becomes rather unwieldy, especially when he wishes to keep track of more than two instances of a record type. Alternatively, instances of certain record types can be refetched from the data base. In fact, this is the strategy that is generally followed. However, the cost is high.

Briefly, the SAIL interface provides a SAIL record structure declaration for each record type that has been defined in the data base management system. Primitives exist for the creation and modification of such records. The dynamic storage allocation capability of SAIL enables the creation of several instances of each record type, each of which is identified by an entity known as a record pointer.

As an example of the use of SAIL as a host language in a data base management system, consider the following program fragment. The task is to traverse a set named SUPPLIER owned by a WAREHOUSE record and extract an integer data item known as PARTNUM from each PART record which is a member of the set. The exact instance of the set occurrence is identified by the owner record, WAREHOUSE, having the value ELECTRICAL for the data item INDUSTRY. Since SAIL has a data structuring facility (known as a RECORD!CLASS and similar to a PL/1 [Beech 70] structure), we define a data structure known as LISTX and a function to add items to the front of a LISTX structure. The data structure LISTX has two fields: ELEMENT, which is of type INTEGER, and NEXT, which is of type RECORD!POINTER (and points to another instance of the LISTX data structure). The function ADDTOLIST has two arguments: a pointer to the head of an instance of LISTX and the integer to be added to this instance.

RECORD!CLASS LISTX (INTEGER ELEMENT; RECORD!POINTER (LISTX) NEXT);

PROCEDURE ADDTOLIST (REFERENCE RECORD!POINTER (LISTX) HEAD; INTEGER VAL);
BEGIN
    RECORD!POINTER (LISTX) TEMP;
    TEMP := NEW!ELEMENT(LISTX);
    LISTX:ELEMENT[TEMP] := VAL;
    LISTX:NEXT[TEMP] := HEAD;
    HEAD := TEMP;
END;

The COBOL/DML and SAIL encodings are given below. The critical difference is the step "Add PARTNUM in PART to result list." It is not immediately obvious how the concept of a list would be implemented in COBOL.

COBOL Program:

        MOVE 'ELECTRICAL' TO INDUSTRY IN WAREHOUSE.
        FIND WAREHOUSE RECORD.
        IF SUPPLIER SET EMPTY GO TO NONE!SUPPLIED.
NEXT:   FIND NEXT PART RECORD OF SUPPLIER SET.
        IF ERROR-STATUS = 0307 GO TO ALL!FOUND.
        GET PART.
        Add PARTNUM in PART to result list.
        GO TO NEXT.
ALL!FOUND:

    SAIL Program:

INDUSTRY := "ELECTRICAL";
FIND!CALC(WAREHOUSE);
IF EMPTY!SET(SUPPLIER) THEN GO TO NONE!SUPPLIED;
WHILE TRUE DO BEGIN
    FIND!NEXT(PART, SUPPLIER);
    IF ERROR!STATUS = 0307 THEN DONE;
    GET(PART);
    ADDTOLIST(HEAD, PARTNUM);
END;


D. Control Structures

In addition to the usual control structures associated with

ALGOL-like languages (e.g., FOR loops, WHILE loops, case statements, recursive procedures, etc.), SAIL has capabilities to enable parallel processing, backtracking, and coroutines. In SAIL, a process is a procedure that may be run independently of the main procedure. Thus several processes may be run concurrently. Note that the main procedure is also a process.

A process is created with a SPROUT statement as follows:

SPROUT(<item>, <procedure call>, <options>)

where <item> names the process for future reference, <procedure call> indicates what the process is to do, and <options> is used to specify attributes of the SPROUTed and current process. Unless otherwise stipulated (in <options>), a SPROUTed process begins to run as soon as it is SPROUTed and in parallel with the SPROUTing process.

Similarly, there exist primitives which result in the suspension of a process, the resumption of a process, and in the blocking of a process until a number of other processes have terminated. These tasks are accomplished by the SUSPEND, RESUME, and JOIN primitives, respectively.

SUSPEND and RESUME have as their arguments single items, while JOIN has a set of items as its argument. These items are the names that have been set up for the process by an appropriate SPROUT command.

For example, a procedure to tighten a bolt may be defined as follows:

ITEM P1, P2;

SPROUT(P1, GRASP(HAND1, SCREWDRIVER));
SPROUT(P2, GRASP(HAND2, BOLT));
JOIN({P1, P2});
TURN(HAND1, CLOCKWISE);

Since SAIL runs on a single processor computer system, true multiprocessing is not possible. Instead, the SAIL runtime system contains a scheduler which decides which process is to run and for how long. The programmer makes use of the <options> field of the SPROUT statement to specify information which the scheduler uses to determine the next process to be run. Such information includes time quantum sizes, priority, whether or not to immediately run the SPROUTed process, etc.

A process may result in the binding of ITEMVARS by use of a MATCHING PROCEDURE, which is basically a Boolean procedure. When one of the parameters is an unbound FOREACH itemvar, then upon success the parameter will be bound. The matching procedure is actually SPROUTed as a coroutine process, and SUCCEED and FAIL are variants of RESUME which return values of TRUE or FALSE, respectively. In addition, FAIL causes the process to terminate, whereas when the matching procedure is called again by the surrounding FOREACH via backup, the procedure is resumed where it left off at the last SUCCEED.

For example, consider a box containing a number of different fasteners (nails, regular screws, bolts, nuts, tacks, etc.). The goal is to obtain Phillips screws. This can be achieved by the following MATCHING PROCEDURE, which returns a different Phillips screw each time it is invoked.

MATCHING PROCEDURE GET!FASTENER (? ITEMVAR FASTENER, F!TYPE);
BEGIN
    FOREACH FASTENER | FASTENER IN BOX AND TYPE ⊗ FASTENER ≡ F!TYPE
        DO SUCCEED;
    FAIL;
END;

Note that FASTENER is a FOREACH ITEMVAR which upon success will be bound.

Backtracking is supported by variables of type CONTEXT. However, the programmer must specify the points to which backup is to occur (for example, recall SUCCEED). State saving and restoring is achieved by use of CONTEXT variables, which act as pointers to storage areas of undefined capacity in which are stored the entities to be saved and restored. Actual state saving and restoring is accomplished by use of the primitives REMEMBER and RESTORE.

Processes may communicate with each other by use of the SAIL event mechanism. This is a message processing system which enables the programmer to classify the messages and to wait for certain events to occur. Events occur via the CAUSE construct, which has as its arguments the event type, the actual notice, and instructions with respect to the disposition of the event. Similarly, there is a construct known as INTERROGATE which specifies a set of event types and instructions with respect to the disposition of the event notice associated with the designated event types. A variant of this facility has been used extensively in the implementation of the Stanford Hand Eye Project [14].

E. System Building Capabilities

SAIL includes many features which are designed to aid in

system building. Assembly language statements may be interspersed with regular SAIL statements by use of the START!CODE and QUICK!CODE constructs. A number of different files which are to be used with the program can be specified via use of REQUIRE statements. The statements:

    REQUIRE "TOOLS" LOAD! MODULE;REQUIRE "CAMLIB[1, 3]" LIBRARY;

will cause SAIL to inform the loader that the file TOOLS.REL


must be loaded. In addition, the file CAMLIB on disk area [1,3] serves as a library and is searched for needed routines. The statement:

    REQUIRE "HEADER SAI" SOURCE! FILE;

will cause the compiler to save the state of the current input file and scan HEADER.SAI for program text. When HEADER.SAI is exhausted, scanning of the original file resumes at a point immediately following the REQUIRE statement. This feature is particularly useful when dealing with libraries, since in this case the REQUIREd file can contain EXTERNAL declarations, thereby freeing the application programmer from such work and possible errors.

A rather extensive conditional compilation capability is

associated with SAIL. This enables the development of large programs which can be parameterized to suit a particular application without compiling unnecessary code and thereby wasting memory for program segments which are never used. This capability is used to enhance a macro facility to include compile-time type determination; for loops, while statements, and case statements at compile time; generation of unique symbols; and recursive macros. For example,

DEFINE GRASP(SIZE) = [IFCR SIZE > 1 THENC VISE ELSEC PLIERS ENDC];

results in the definition of a macro named GRASP having one formal parameter, SIZE. The result is the name of a tool that is appropriate for the size of the item that is to be grasped, i.e., a vise in case size is greater than 1 (assuming size is measured in centimeters, etc.) and pliers otherwise. For example,

TOOL1 := GRASP(10.0);
TOOL2 := GRASP(0.5);

    will result in the following statements:

TOOL1 := VISE;
TOOL2 := PLIERS;

Note that the choice is made at compile time, and thus the programmer need not be concerned with the available grasping mechanisms. Thus the program compilation step can be used to aid in the writing of the program. The example illustrates the importance of such a feature when certain tasks can be achieved by similar, yet not identical, means.

SAIL also provides an excellent interface with the operating system. This enables its use for real-time applications such as control of external devices. In fact, interrupts can be handled, and the user has at his disposal all of the I/O capabilities that an assembly language programmer has. This enables the development of programs ranging from scanners to mechanical arm controllers. In addition to compatibility with assembly language debuggers, SAIL has a high-level breakpoint package known as BAIL [27].

    F. Standardization

Currently, SAIL has only been implemented on the PDP-10. It runs under both the TENEX [3] and TOPS-10

[39] operating systems. There is an effort underway at SUMEX to develop a language similar to SAIL known as MAINSAIL [41]. The goal of that project is to capture the features that make SAIL an attractive language (in particular, the ease of interaction with the operating system) and to develop a language that is capable of being run on a large number of machines. The orientation of the project is towards minicomputers. The language is considerably different from SAIL, and existing SAIL programs will have to be modified in order to be capable of compiling. An extensive run-time library is being provided, as is a record structuring facility. It is still uncertain whether the associative data base capability of SAIL (i.e., LEAP) will be incorporated in MAINSAIL.

    III. THE LISP FAMILY OF LANGUAGES

    A. LISP

LISP ([18], [19], [33], and [40]), a list processing language developed by John McCarthy at MIT in the late 1950's, is an implementation of parts of Alonzo Church's work [7] in the lambda calculus. McCarthy's intention was to recast the elegance of recursive function theory as a theory of computation. Thus, the first implementations of LISP relied exclusively upon recursion as the computational paradigm (i.e., no iteration), which, although elegant, resulted in a first version of LISP which was not competitive with FORTRAN as a practical programming tool. However, LISP's character has changed considerably, so that today LISP is an extremely powerful and general-purpose programming language which nevertheless retains its original elegance.

The most interesting features of LISP are as follows.

1) The language is practically devoid of syntax; all

constructions in LISP fall into two categories: atoms and compositions of atoms.

2) Program and data are interchangeable, since they are represented in the same format. Therefore, in LISP it is possible for one function to construct another function as data, then execute it by indicating to the LISP system to regard it as code; alternatively, an existing function's code may be examined, modified, or augmented by another function at run time. In fact, a function is capable of self-modification if appropriate care is exercised.

3) Memory allocation and management are automatic and transparent to the user, except where the user explicitly desires to influence them. With the exception of arrays, there are no space declarations to be made, freeing the programmer from the details of space allocation and generally allowing for the unlimited growth of any given data structure. (For the most part, LISP data structures have no size or complexity constraints.) Used memory which is no longer involved in the computation is recycled by a garbage collector, either on demand from the user at specified points or automatically.

4) LISP is an interpreted language. The system proper is a function of one argument (EVAL X) such that calling EVAL with any LISP data structure as its argument causes that argument to be regarded as code and executed (a short sketch of this appears after the present list of features).


Fig. 1. LISP data structure.

However, most LISP systems include a compiler which will produce stand-alone machine code for interpreted functions. Typically, compilation provides an order of magnitude speedup which makes LISP competitive with other compiled languages, or even with well-coded assembly language. Since interpreted and compiled code may be intermixed, it is possible to retain the flexibility and power of the interpreter, while obtaining the speed required for production applications.

5) LISP remains recursive, while also accommodating iterative algorithms via a so-called PROG feature. Both recursion and iterative programming are illustrated in subsequent sections.

6) Because of the technique LISP uses in storing local and global variables, some very powerful context switching can be carried out, providing a fast way to enter and exit hypothetical planning environments and to cause the behavior of a program to vary as a function of its environmental context.
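As a small illustration of features 2) and 4), a piece of program can be built up as ordinary list data and then handed to EVAL to be run as code; a minimal sketch:

    (SETQ FORM (LIST 'CONS (LIST 'QUOTE 'NAIL) NIL))    ; builds the list (CONS (QUOTE NAIL) NIL) as data
    (EVAL FORM)                                         ; executes that data as code, returning the list (NAIL)

Because the "program" here is just a list, it could equally well be inspected or modified by other functions before being evaluated.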

1) LISP Data Structure: LISP's data structure, called the S-expression, is simple yet extraordinarily flexible, providing a substrate upon which a programmer may design his own complex data structures. An S-expression is either an "atom" or a "CONS node." An atom can be regarded as either a variable, a constant (a passive symbol), or both. There are no type declarations in LISP; new atoms are simply admitted to the system as they are scanned at the input level, and atoms with the same name are guaranteed by the system to be unique (i.e., they have the same internal pointer, or address).

The other type of S-expression, the CONS node, provides a

means of structuring atoms and other CONS nodes into hierarchical data structures. A CONS node is ordinarily implemented as a single computer word (say, 36 bits long) which contains a left pointer, called its CAR, and a right pointer, called its CDR. CONS nodes are created dynamically via the function (CONS X Y), where X and Y are any other S-expressions, or passively (as data constants) via the construction (X . Y). CONS nodes can be composed to form arbitrarily complex hierarchies, the bottommost elements of which are usually atoms (i.e., pointers to atomic S-expressions).
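For example, with the constructor and the two accessors just described:

    (CONS 'PHILLIPS 'SCREWDRIVER)      ; builds the CONS node (PHILLIPS . SCREWDRIVER)
    (CAR '(PHILLIPS . SCREWDRIVER))    ; follows the left pointer, returning PHILLIPS
    (CDR '(PHILLIPS . SCREWDRIVER))    ; follows the right pointer, returning SCREWDRIVER

A list such as (A B C) is simply a chain of CONS nodes whose final CDR is the atom NIL.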

To illustrate, suppose we wish to represent a particular tool, say a screwdriver, in a LISP data structure. We first decide upon a name for it, say, SCREWDRIVER-1, and what characteristics of it we wish to encode. Let us suppose the characteristics are: type is Phillips, color is yellow, shaft length is 10 cm, and head size is 0.3 cm. There are many ways to encode this in LISP; the external representation of the one we adopt here is

((NAME SCREWDRIVER-1)
 (TOOL-TYPE SCREWDRIVER)
 (STYLE PHILLIPS)
 (SHAFT-LENGTH 10 CM)
 (COLOR-CODING YELLOW)
 (HEAD-SIZE 0.3 CM)).

Here, all symbols such as NAME, YELLOW, etc. are LISP atoms. (So too are the numbers; however, numbers are not entirely equivalent with symbolic atoms.) The particular hierarchy we have adopted is a list of lists, where each sublist consists of an initial atom describing that sublist's role in the structure, and a list of the information associated with that role in the description.

This structure would be graphically represented as shown in Fig. 1, and could be constructed passively (as a fully constant structure) via a quoted S-expression (" ' " denotes QUOTE):

'((NAME SCREWDRIVER-1) (TOOL-TYPE SCREWDRIVER) ...)

    or dynamically via CONS:

(CONS (CONS 'NAME (CONS 'SCREWDRIVER-1 NIL))
      (CONS 'TOOL-TYPE (CONS 'SCREWDRIVER NIL))
      ...
      (CONS 'HEAD-SIZE (CONS 0.3 (CONS 'CM NIL)))
      ...)

Since it would be a rather harrowing experience to construct very large S-expressions dynamically in this fashion, LISP provides a spectrum of higher level functions for constructing, modifying, and accessing S-expressions. Some highlights of these will be covered briefly in a subsequent section. For our example, a more concise expression of code which would build this structure dynamically would be

(LIST (LIST 'NAME 'SCREWDRIVER-1)
      (LIST 'TOOL-TYPE 'SCREWDRIVER)
      ...
      (LIST 'HEAD-SIZE 0.3 'CM))

Presumably, having defined this tool, we would want to record it as one available tool in a large supply of tools. Again, there would be numerous methods of doing this. One way would simply be to maintain a global list of all known tools in the system and to add this entire description as a new tool on this list:

(SETQ NEW-TOOL '((NAME SCREWDRIVER-1) (TOOL-TYPE SCREWDRIVER) ...))
(SETQ MASTER-TOOL-LIST (CONS NEW-TOOL MASTER-TOOL-LIST)).


(SETQ is one of LISP's assignment statements.) Alternatively, we might wish to put only the name of the screwdriver on the master tool list and associate all the remaining information with property DESCRIPTION on SCREWDRIVER-1's property list:

(PUT 'SCREWDRIVER-1 'DESCRIPTION
     '((TOOL-TYPE SCREWDRIVER) (HEAD-SIZE 0.3 CM)))
(SETQ MASTER-TOOL-LIST (CONS 'SCREWDRIVER-1 MASTER-TOOL-LIST))

2) Property Lists: Any LISP atom may have a property list (built up from CONS nodes). Conceptually, the property list allows the attachment of an arbitrary number of attribute-value pairs to the atom, thereby serving to describe the characteristics of the real-world entity represented by the atom. This is a powerful feature for any programming language, since it allows "microdescriptions" of atoms which ordinarily will not be seen by the processes that manipulate the hierarchical structures in which the atom participates. These microdescriptions can be maintained and accessed by the functions PUT, GET, and REMPROP in case more detail about an atom is desired.

Properties are attached to an atom via the function (PUT <atom> <property> <value>), looked up via (GET <atom> <property>), and removed via (REMPROP <atom> <property>). We have seen one way to associate the screwdriver information with the atom SCREWDRIVER-1 using property lists. Another, more convenient way would be to split apart all the various attributes of this atom, making each a different entry on the property list:

(PUT 'SCREWDRIVER-1 'TOOL-TYPE 'SCREWDRIVER)
(PUT 'SCREWDRIVER-1 'STYLE 'PHILLIPS)
(PUT 'SCREWDRIVER-1 'HEAD-SIZE '(0.3 CM)).

To determine SCREWDRIVER-1's head size, we would then write: (GET 'SCREWDRIVER-1 'HEAD-SIZE). If such an attribute of SCREWDRIVER-1 exists, it will be located and returned.

3) Representative LISP Data Structure Manipulating Functions: We include here a definition and brief example of several of the more standard, high-level LISP functions that pertain to data structure creation, modification, and searching.

a) (MEMBER X Y): If S-expression X is a member of S-expression Y (assumed to be a list), return "TRUE," otherwise, return "FALSE."

Example: (MEMBER 'SCREWDRIVER-1 MASTER-TOOL-LIST) returns a pointer to the atom T ("true") if SCREWDRIVER-1 is on the MASTER-TOOL-LIST, and a pointer to the atom NIL ("false") otherwise.

b) (ASSOC X Y): Y is a list of lists. Y is scanned, comparing the first item of each sublist to X until a match is found, or until Y is exhausted. In case a match is found, ASSOC returns the entire sublist whose first item matched X.

Example: (ASSOC 'HEAD-SIZE '((NAME SCREWDRIVER-1) ... (HEAD-SIZE 0.3 CM))) would return the sublist (HEAD-SIZE 0.3 CM).

c) (SUBST X Y Z): X, Y, and Z are arbitrary S-expressions. SUBST creates a new copy of Z, where all occurrences of Y in Z are replaced with X's.

Example: (SUBST 0.2 0.3 '((NAME SCREWDRIVER-1) ... (HEAD-SIZE 0.3 CM))) would produce a new structure for our screwdriver, identical in all respects to the original, except that its head size would be 0.2 instead of 0.3.

d) (APPEND X Y): X and Y are lists. A new list is created which is the result of appending Y onto the end of X.

Example: (APPEND '((NAME SCREWDRIVER-1) (STYLE PHILLIPS)) '((COLOR-CODE YELLOW) (HEAD-SIZE 0.3 CM))) would produce ((NAME SCREWDRIVER-1) (STYLE PHILLIPS) (COLOR-CODE YELLOW) (HEAD-SIZE 0.3 CM)).

4) LISP Data Types: In addition to atoms and CONS nodes, most LISP systems include the following other data types:

1) integer numbers,
2) real numbers,
3) strings,
4) arrays,
5) octal numbers (for bit-level manipulations).

Some versions of LISP (notably MACLISP [22]) have highly developed numerical and trigonometric facilities and accompanying optimizing compilers geared to the efficient generation of "number crunching" software.

5) LISP Functions: A LISP "program" is a collection of functions. No function is syntactically declared as the "main program." Functions are generally typeless (i.e., no distinction such as "integer," "real," "string," etc. is made). However, each function may be declared so that its calling arguments are passed to it either evaluated (as in most programming languages) or unevaluated. Except for this distinction, there is no need for function-related declarations.

A function is regarded as simply another type of data.

As such, one typically defines a function by assigning to some atom the function as the atom's value. Strictly speaking, the function itself is nameless, and is identified by the form:

(LAMBDA <argument list> <body>).

When a "lambda expression" is stored as the value of an atom, we say that a function has been defined. Although the implementation details governing how a lambda expression comes to be associated with an atom vary considerably, one common format for defining a function in LISP is

(DEFUN <name> <argument list> <body>).

DEFUN is a macro which creates the appropriate lambda expression and assigns it to the atom <name> as the function's body. A function may be annihilated or altered simply by reassigning the value of the atom which represents


it. Another virtue of this separability of a function from its name is that nameless functions can be created and passed as arguments to other functions without having to bother to name them if they are needed only once.

To illustrate LISP functions, let us define a function of two

arguments, (LOCATE-ALL <type> <master list>), which, given the name of a tool type (e.g., SCREWDRIVER) and a master tool list, will search the tool list for tools of the specified type and report back a list of all tools of that type it finds. Framing this as a recursive function, we write

(DEFUN LOCATE-ALL (TYPE MASTER-LIST)
  (COND ((NULL MASTER-LIST) NIL)
        ((EQUAL (GET (CAR MASTER-LIST) 'TOOL-TYPE) TYPE)
         (CONS (CAR MASTER-LIST) (LOCATE-ALL TYPE (CDR MASTER-LIST))))
        (T (LOCATE-ALL TYPE (CDR MASTER-LIST)))))

that is, if (COND) the master list is (or has been reduced to) NIL, then report back "nothing"; otherwise, if the next item on the master list (its CAR) is of the correct type (as determined by the GET), then add this tool to the list to be reported (i.e., CONS it onto the front of this list) and proceed with the search on the remainder of the list (its CDR); otherwise (T ...), simply proceed, without recording the current tool.

Alternatively, we could express this algorithm in iterative form via the PROG feature

(DEFUN LOCATE-ALL (TYPE MASTER-LIST)
  (PROG (RESULT)
   LOOP (COND ((NULL MASTER-LIST) (RETURN RESULT))
              ((EQUAL (GET (CAR MASTER-LIST) 'TOOL-TYPE) TYPE)
               (SETQ RESULT (CONS (CAR MASTER-LIST) RESULT))))
        (SETQ MASTER-LIST (CDR MASTER-LIST))
        (GO LOOP)))

i.e., enter a PROG (akin to an ALGOL begin-end block), defining one temporary local variable, RESULT; then, while the master list remains non-NIL, repeatedly examine its next item, collecting those with the correct type on the RESULT list (via SETQ, LISP's "assignment statement"), scanning to the next tool on the master list (SETQ MASTER-LIST (CDR MASTER-LIST)).

6) The PROG Feature: As just illustrated, LISP accommodates iteratively phrased algorithms via a construction called a "PROG." A PROG has the form

(PROG <local variables> <statement 1> <statement 2> ...).

As a PROG is entered, the local variables (if any) are allocated for the scope of the PROG, and each is initialized to NIL. Next, the statements which comprise the PROG's body are sequentially executed (evaluated) until execution either "falls off the bottom" of the PROG (an implicit exit from the PROG), or until a GO or RETURN is encountered. Statements which are atoms are interpreted as labels within a PROG and are ignored during sequential execution. When a GO is encountered, a branch to the specified label occurs, and sequential execution proceeds from that point.

Since a PROG introduces some temporary variables which must be reclaimed as the PROG is exited, there must be some way of informing LISP that a PROG is about to be exited. The function RETURN is used for this purpose, informing the system that a PROG is being exited, and specifying what value the PROG is to return to the calling environment.

PROGs may be nested and may appear at any point in a LISP program. The PROG construction will typically result in a more efficient implementation of an algorithm than the corresponding recursive implementation. Although some feel that PROG makes LISP "impure," it is in reality the feature which is probably most responsible for LISP's present widespread acceptance in both the AI community and elsewhere.

7) LISP Macros: Most LISP implementations support two types of macros: compile-time macros and scanner macros. A compile-time macro is nothing more than a function which, when evaluated, computes not a final result but another S-expression which, when evaluated, will compute a final result. Thus when a macro is encountered by the LISP interpreter, a double evaluation is performed (the first to compute the intermediate form, the second to run the intermediate form). When LISP functions are compiled into actual machine code, the compiler recognizes macros and evaluates them once to obtain the intermediate form which it then compiles. This technique is a very general and powerful implementation of the macro concept.

Most LISP scanners are quite modular, in the sense that they can be conditioned to initiate an arbitrary computation upon encountering a given character in the input stream. For example, in Wisconsin and Maryland LISP ([1], [24]), there exists a facility called (READMAC <char> <fn>), which conditions the scanner to call <fn> (no arguments) whenever <char> is detected in the input stream. <fn> is free to perform any computation, and whatever <fn> returns is spliced into the scanner's input stream. This style of table-driven scanner makes it possible to superimpose additional syntax on LISP input, even to the point where LISP can model another language's


syntax (by redefining delimiters, etc.). MLISP [34] is an example of this.

8) Variable Scoping: LISP variable values are derived as a function of the run-time environment rather than as a function of lexical environment. As a program executes, there are two times at which new variables are introduced or "bound": 1) at function entry time (these are the names of the function's arguments that are mentioned in the LAMBDA expression), and 2) at PROG entry time (i.e., the PROG's temporary variables). Variables are "unbound" at the corresponding exit times: when a function returns or when a PROG is exited.

At the "top level" of LISP (when no function is currently

executing), any variables which receive values are thought of as "global" to the system. Therefore, at any given moment during execution, there will be a pool of global atoms plus all the atoms introduced via LAMBDA or PROG on the current sequence of function calls. All these variables and their associated values ("bindings") are recorded on a structure called the "association list" (A-LIST), a user-accessible list of CONS nodes. All variable lookups consult this list, from most recent to least recent. Since this list is dynamically maintained at run time, the question of what variables are and are not bound (i.e., are on the A-LIST) is exclusively determined by the dynamic calling environment, rather than the lexical scope of variables at the time functions were defined. This means that "free" variables (ones which have no binding at the current level) will assume a value at run time which is dependent upon their definitions in functions farther up the calling hierarchy. In this manner, one function "peeks into," or borrows, the variables of another.

By changing the system's A-LIST pointer while inside a

function, that function's entire environment can be altered. For this reason, LISP is a very powerful tool wherever hypothetical reasoning (involving switches to altered contexts) is necessary. Most other languages either lack such an ability, or make it difficult to carry out. In LISP, context switching and "taking snapshots" of contexts to which execution is to be returned are very natural operations.
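A minimal sketch of the dynamic binding just described (the function and variable names are illustrative only):

    (DEFUN REPORT-TOOL ()       ; TOOL is "free" here; REPORT-TOOL binds nothing itself
           (PRINT TOOL))
    (DEFUN CHECK-OUT (TOOL)     ; binds TOOL for the duration of the call
           (REPORT-TOOL))
    (CHECK-OUT 'HAMMER)         ; REPORT-TOOL finds CHECK-OUT's binding on the A-LIST and prints HAMMER

Under the lexical scoping of most ALGOL-family languages the free variable TOOL would not be visible inside REPORT-TOOL at all; under LISP's dynamic scoping it simply picks up the most recent binding on the A-LIST.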

9) LISP I/O: Traditionally, input/output has been LISP's weakest link. Most systems define at least the following I/O-related functions:

(READ)      read an S-expression,
(READCH)    read an individual character,
(PRINT X)   print S-expression X, skipping to a new line,
(PRIN1 X)   print S-expression X on the current output line,
(TERPRI)    skip to beginning of new line on output.

While these functions provide adequate formatting control, most LISPs are deficient in file-handling operations. (INTERLISP [38] is the exception, with more highly developed interfaces to the TENEX virtual operating system.) We regard this deficiency as more of a historical accident than as an inherent problem of LISP (since adding these features is simply a matter of writing the code). In fact, there are efforts underway for improved multiple-file interaction and random access facilities both at MIT (MACLISP) and at Maryland (Wisconsin/Maryland LISP).
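As a small sketch of these functions in use, the following fragment reads one S-expression typed by an operator and echoes it back:

    (PROG (X)
          (SETQ X (READ))    ; read one S-expression from the input stream
          (PRIN1 X)          ; print it back on the current output line
          (TERPRI)           ; finish the output line
          (RETURN X))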

10) Garbage Collection: Since LISP data structures can grow in unrestricted ways, a crucial part of any LISP system is a conceptually asynchronous process called the "garbage collector." The role of this process is periodically to take control, mark parts of storage that are still referenced by the ongoing computation, then reclaim all storage that is not so referenced (garbage). Garbage collection is an unavoidable overhead of any system with no declarations and in which data structures can grow in unrestricted ways.

One potential disadvantage of garbage collection is that,

    once the system runs out offree storage, a garbage collectionmust occur. Since a garbage collect causes current comput-ing activity to be suspended, if LISP is controlling areal-time process, disastrous consequences can accrue. Suchproblems can normally be avoided by forcing the systeminto a premature garbage collect prior to entering real-timecritical sections ofcomputation. Alternatively, there is grow-ing interest in truly asynchronous (parallel) garbage collec-tion techniques which could obviate the problem altogether(see [11] for instance).
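As a sketch of the first remedy (assuming the system exposes an explicit collection call, as MACLISP does with (GC); SERVO-ONE-STEP is a hypothetical real-time routine, not a system function):

    (GC)                    ; force a collection now, while timing is not critical
    (SERVO-ONE-STEP)        ; the time-critical step then starts with a full free
                            ; list and little risk of an unplanned collection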

11) LISP as a Self-Contained System: LISP interpreters are typically implemented in assembly language. After this basic facility has been brought up, most other supporting software can be written in LISP itself. Typical software includes:

1) a compiler which will generate (potentially quite good) machine code for LAMBDA expressions (i.e., functions) and PROGs. Typically, the LISP compiler will be written in interpreted LISP, then used to compile itself. The compiled version is subsequently used as the LISP system compiler.

2) a debug package which will permit the tracing and interactive development of functions. Typically, functions (together with their calling arguments) can be traced at entry time, and (together with their returned values) at return time. Most LISPs will also accommodate the tracing of variables (i.e., inform the user whenever a traced variable's value is about to be changed); a brief tracing sketch follows this list. The debugging potentials of LISP are essentially unlimited (the INTERLISP system is the most advanced to date), and are responsible (in part) for LISP's reputation as one of the best languages for the efficient and rapid development of complex software. In particular, there is no time-consuming interaction with system compilers, loaders, and linkers to be contended with; a program can be developed and put into production within the confines of the LISP system itself.

3) an S-expression editor (or system editor interface) which makes possible the convenient editing of S-expressions and the maintenance of files.
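As the brief sketch promised in item 2), and assuming the TRACE/UNTRACE facility of MACLISP-style systems (FASTEN is a hypothetical user function):

    (TRACE FASTEN)              ; announce FASTEN's arguments at each entry and
                                ; its value at each return
    (FASTEN 'SCREW-3 'HOLE-7)   ; exercise the suspect function and watch the trace
    (UNTRACE FASTEN)            ; turn the tracing back off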

B. MICROPLANNER

While LISP is generally accepted as the standard for computing in AI, it does not supply the user with any a priori conceptions about intelligence. LISP is simply the blank tablet onto which the user must write his theory of intelligence or control. Not surprisingly, this resulted in numerous reinventions of the wheel in areas like data base organization, problem solving, hypothetical reasoning, and language understanding. Most reinventions were at a fairly low level but occurred often enough to warrant some investigation into the undercurrents of AI programming techniques.

MICROPLANNER [36] is the outcropping of some of these undercurrents, particularly where automatic problem solving is concerned. MICROPLANNER was written in 1970-71 as a small-scale implementation of ideas originally proposed by Hewitt in 1969 [16]. The intent of the language was and is to provide some automatic mechanisms of data base organization, context, and heuristic search. MICROPLANNER is implemented entirely in LISP.

Because of this, its syntax is essentially LISP's syntax, and while in the MICROPLANNER environment the user has full access to all of LISP. To distinguish MICROPLANNER (hereafter abbreviated MP) functions from pure LISP functions, the convention is to prefix all MP functions (there are about 50 of them) with TH (standing, we presume, for "theorem," a key notion in MP). The most salient features of MP are these.

1) Computation in MP is induced by pattern, rather than by calling functions by their names. In this style of computation (often called "pattern-directed invocation"), whenever a goal requires solution, a pattern describing the goal is posted to the entire system. "Entire system" normally means a large population of problem-solving experts with patterns which advertise each one's expertise. Whenever a need is posted, the system searches through the data base of experts looking for those whose advertised patterns match the need. Each expert so located is then tried in turn until one succeeds, or until all have failed. This is a radically different computing paradigm from the standard paradigm of "name calling," since it makes for a very modular system where the requestor needn't know any experts by name; problems are solved by anonymous experts in the population at large.

2) MP automatically maintains a context-sensitive data base of both factual assertions and the experts just mentioned. The factual data base is a collection of highly indexed n-tuples, expressed as LISP S-expressions. Any one n-tuple ("assertion"), or collection of n-tuples, can be "associatively" accessed by presenting the lookup routines with a pattern containing zero or more variables. Only those facts that are deemed active in the current "context" will be located, regardless of whether they physically exist in the memory.

3) MP does all the bookkeeping required for depth-first nondeterministic programming. That is, any time there is a decision of any sort in MP, the system makes a choice (either arbitrarily, or under the control of user-specified heuristics), records the alternatives for possible future reference, and then proceeds. If a failure ever causes a "backup" to that decision point, the system automatically discards the current (failing) choice, selects the next alternative, and then attempts to proceed again. In the backup process, all computations performed between the initial (bad) choice and the failure point are undone (a record of all changes to the data base is maintained), and the system picks up from the decision point as though nothing had ever gone wrong. Thus MP can be said to maintain, at least implicitly, an entire goal tree (search tree) for each problem it attempts to solve. As we will suggest later, there are both advantages and disadvantages to such automatic control.

These are the three main contributions of MP. In the following sections we highlight and illustrate some of the specific features of this problem-solving language.

1) The MICROPLANNER Data Base: Conceptually, the MP data base is divided into two segments: facts and theorems. Theorems are further classified into three categories: "antecedent" theorems, "erasing" theorems, and "consequent" theorems. Theorems are discussed in Section III-B2.

Both facts and theorems are entered into the data base via the function THASSERT; an item is deleted from the data base via the function THERASE. Facts are fully constant LISP n-tuples. Thus, to represent our screwdriver in MP, we might augment the data base as follows:

    (THASSERT (TOOL-TYPE SCREWDRIVER-1 SCREWDRIVER))
    (THASSERT (STYLE SCREWDRIVER-1 PHILLIPS))
    (THASSERT (HEAD-SIZE SCREWDRIVER-1 0.3 CM)).

Data base lookups and fetches are accomplished via the function THGOAL. Therefore, if at some point in an MP program we required a knowledge of SCREWDRIVER-1's head width, we could write a fetch pattern of the form

    (THGOAL (HEAD-SIZE SCREWDRIVER-1 (THV X) (THV Y))).

For our example, this would respond with "success" (i.e., a fact which matched this template was located in the data base), and it would produce the side effects of binding the MP variables X and Y to 0.3 and CM, respectively. The THV form is used in MP to signal references to variables (all else is implicitly constant).

Every fact and theorem in the MP data base has a context marking. Whenever a fact or theorem is THASSERTed, if such a fact is not already physically present in the data base, it is created and then marked as also being logically present. If the THASSERTed fact is present physically, but marked as logically not present, its logical status is changed to "present." If the fact is already logically and physically present, THASSERT does nothing but report a "failure" to store a new copy of the fact. THERASE exerts the opposite effects on facts in the data base; it causes a fact to be logically masked, either by changing the fact's logical context marking or by actually physically deleting the fact (i.e., if the fact is being THERASEd at the level at which it was originally THASSERTed).

Context markings allow MP to keep track of the history of the logical status of each fact and theorem. This enables the system to back up to prior context levels, thereby restoring the data base to the corresponding prior state. Thus, although there are mechanisms for making permanent data base changes (e.g., after some segment of MP code is confident that what it has done is absolutely correct), THASSERTs and THERASEs are normally not permanent (except at the top level); instead, they exist only for the duration of some stretch of planning or hypothetical reasoning.

2) MICROPLANNER Theorems: All reasoning (in fact, all computation) in MP is carried out by THANTE, THERASING, and THCONSE "theorems," which are called by pattern rather than by name. The three types of theorem are indistinguishable in internal form, except with regard to the type of event to which each responds. A THANTE theorem is triggered by the THASSERTion into the factual data base of any pattern which matches its invocation pattern. A THERASING theorem is triggered by the THERASEing from the data base of any factual pattern which matches its invocation pattern. In the sense that these two classes of theorems respond spontaneously (not in response to any particular request), they represent a general interrupt capability. A THCONSE theorem responds to THGOAL requests whose goal patterns match its invocation pattern.

As a brief illustration of the uses of each of these, suppose we wish to implement the following three capabilities in MP: 1) whenever a new screwdriver is defined to the system, automatically cause its name to be added to the master tool list; 2) whenever a screwdriver is deleted from the system, automatically remove its name from the master tool list, and also remove all its accompanying information; 3) whenever, during some assembly task, a THGOAL of the form (SCREW-IN <screw> <hole>) is announced, automatically search for, and return the name of, an appropriate screwdriver for the task (based on the screw's style and head size). Task 1) will be modeled as an MP THANTE theorem, task 2) by a THERASING theorem, and task 3) by a THCONSE theorem, as follows:

(THANTE (X) (TOOL-TYPE (THV X) SCREWDRIVER)
    (SETQ MASTER-TOOL-LIST (CONS (THV X) MASTER-TOOL-LIST)))

(THERASING (X) (TOOL-TYPE (THV X) SCREWDRIVER)
    (THPROG (ST CC ... HS HSU)
        (SETQ MASTER-TOOL-LIST (DELETE (THV X) MASTER-TOOL-LIST))
        (THAND (THGOAL (STYLE (THV X) (THV ST)))
               (THERASE (STYLE (THV X) (THV ST))))
        (THAND (THGOAL (COLOR-CODE (THV X) (THV CC)))
               (THERASE (COLOR-CODE (THV X) (THV CC))))
        (THAND (THGOAL (HEAD-SIZE (THV X) (THV HS) (THV HSU)))
               (THERASE (HEAD-SIZE (THV X) (THV HS) (THV HSU))))))

(THCONSE (SCREW HOLE) (SCREW-IN (THV SCREW) (THV HOLE))
    (THPROG (ST HS HSU DRIVER DST DHS DHSU)
        (THGOAL (STYLE (THV SCREW) (THV ST)))
        (THGOAL (HEAD-SIZE (THV SCREW) (THV HS) (THV HSU)))
        (THGOAL (TOOL-TYPE (THV DRIVER) SCREWDRIVER))
        (THAND (THGOAL (STYLE (THV DRIVER) (THV DST)))
               (EQUAL (THV DST) (THV ST)))
        (THAND (THGOAL (HEAD-SIZE (THV DRIVER) (THV DHS) (THV DHSU)))
               (EQUAL (THV DHS) (THV HS)))
        (THRETURN (THV DRIVER))))

Because of this last interaction between THGOAL's and THCONSE, a THGOAL can amount to considerably more than a simple data base fetch. In MP, when a THGOAL is issued, the system first attempts to locate the desired goal directly as a fact in the data base. If this fails, and the THGOAL request has indicated that it is permissible to do so, MP will begin searching for THCONSE theorems whose invocation patterns match the desired goal. If any are found, each is executed in turn until one reports success (in which case the THGOAL is satisfied), or until all THCONSE theorems have failed (in which case the THGOAL fails). It is in this manner that more complex knowledge (i.e., theorems, problem-solving techniques, etc.) can be automatically brought to bear on some goal if that goal is not already explicitly present in the factual data base. The forms of these three MP theorem types are

    (THANTE <variables> <pattern> <body>)
    (THERASING <variables> <pattern> <body>)
    (THCONSE <variables> <pattern> <body>).

3) Heuristic Guidance of Theorem Application: It is possible, by including special indicators in THGOAL, THASSERT, and THERASE calls, to influence the order in which theorems are applied, or in fact to indicate whether or not they should be applied at all. Specifically, a THGOAL (similar remarks apply to THASSERT and THERASE) with no indicators will fail unless the requested goal can be satisfied exclusively by data base fetches (no theorems will be applied). (This is the form we have been using for illustration purposes.) If there is an indicator present, it has either the form of a "filter" or a specific "recommendation list" of theorems (referenced by name). When a filter is included in a THGOAL request, only those theorems whose properties pass the filtering test (theorems can possess property lists) will be candidates for application. If the indicator has the form of a specific recommendation list, all theorems on that list will be applied first (in order) before any other theorems from the general theorem base are attempted. Both forms allow the programmer to insert limited heuristic influences. Also, since one MP theorem can create or modify another MP theorem, the filter facility provides a setting in which a collection of theorems themselves can evolve into a more structured configuration on the basis of past experience (e.g., who in the past has proven to be the most reliable expert). Although filtering and recommendations are a step in the right direction, as we will discuss later, CONNIVER provides a more flexible environment in which to encode heuristic knowledge.
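To make the indicator forms concrete, here is a sketch (assuming the THUSE recommendation-list form and the THTBF filter form found in common MP implementations; FIND-DRIVER is a hypothetical theorem name):

    (THGOAL (SCREW-IN (THV SCREW) (THV HOLE)) (THUSE FIND-DRIVER))
                            ; try the recommended theorem FIND-DRIVER first

    (THGOAL (SCREW-IN (THV SCREW) (THV HOLE)) (THTBF THTRUE))
                            ; the trivial filter: any THCONSE theorem whose
                            ; pattern matches may be applied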

4) Searching and Backup in MP: Search and backup in MP can occur for two reasons: 1) some THCONSE theorem which was run to accomplish a THGOAL fails, and another theorem must be invoked (restoring the environment to the state at which the first theorem took over), or 2) some object to which the system has committed itself is discovered to be inappropriate, giving rise to the need to locate another candidate object and retry. The THGOAL-THCONSE mechanism underlies the selection and backup where theorems are concerned, but object selection is handled differently, via the THPROG MP construction.

In the previous THCONSE example, the goal was to locate some screwdriver which satisfied some set of features (in that case, the correct STYLE and HEAD-SIZE). This was accomplished by a THPROG which "conjectures" that such an object, say X, exists, then proceeds to determine whether or not this conjecture is true. In the example above, the THPROG searched for a screwdriver of type and size which matched the type and size of the particular screw which was to be inserted. For the sake of illustration, suppose the screw was of type Phillips with head size 0.3. Then the THPROG in the example above would have performed essentially the same search as the following, more specific, THPROG:

(THPROG (X)
    (THGOAL (TOOL-TYPE (THV X) SCREWDRIVER))
    (THGOAL (STYLE (THV X) PHILLIPS))
    (THGOAL (HEAD-SIZE (THV X) 0.3))
    (THRETURN (THV X)))

i.e., introduce an initially uncommitted variable X to represent the object being searched for. First, obtain a candidate for X by finding an object which is of TOOL-TYPE SCREWDRIVER (the first THGOAL does this). At that point, X will be tentatively bound to the first such object found. Continue with this candidate until either all THGOALs have been satisfied (in which case the candidate is a success), or until some THGOAL fails (in which case the system must back up and choose another candidate). Since some objects may pass the first THGOAL, or even two, but not all three, the system must automatically keep track of what object it is currently considering, and what other objects remain to be tested. This is the source of backups which are propagated because of bad object selections.

To keep track of theorem and object selection backups, MP maintains a decision tree, THTREE, which is essentially a record of every decision made, and what to do in case the decision leads to a failure. The strength of THTREE is, of course, that it frees the programmer from having to worry about failures: if there is a solution, it will eventually be found by an exhaustive search. The fatal weakness of THTREE is that it imposes an often undesirable depth-first ordering on the search (i.e., one subgoal must be solved in its entirety before any other subgoals can be attacked). This makes it difficult, if not impossible, to fabricate complexly intertwined solutions, since subgoals cannot communicate laterally in the tree. The MP organization is also quite awkward in its backup technique because of the depth-first organization of THTREE. Often, one small failure will cause an entire branch of THTREE to be undone, when in fact most of it was correct. It would be more desirable to be able to discard only the bad part of the tree, retaining the parts which are correct, so that wholesale resynthesis of large parts of THTREE does not have to occur. Unfortunately, this is, again, very difficult, if not impossible, to do in MP. CONNIVER has a better control structure in these respects.

5) Other Representative MP Capabilities: To complete our description of MICROPLANNER, we include two representatives of the other functions available in this language, together with a brief example of each.

a) (THFIND <mode> <variables> <pattern> <body>): THFIND provides a way of finding all objects in the system which satisfy a certain set of criteria. A THFIND is essentially a THPROG which is made to fail artificially after each successful location of an object which satisfies the criteria. <mode> indicates how many objects are to be located (e.g., ALL, (AT-LEAST <n>), ...); <variables> serve the same role as THPROG variables; <pattern> specifies what form to return as each object is found; <body> contains the THGOALs, etc., which define the criteria. THFIND returns either a failure (in case <mode> number of objects could not be found), or a list of <pattern>'s, each corresponding to one successful object thus found.

Example:

    (THFIND ALL (X) (THV X)
        (THGOAL (TOOL-TYPE (THV X) SCREWDRIVER))
        (THGOAL (STYLE (THV X) PHILLIPS)))

would return a list of all tools which were Phillips screwdrivers.

b) (THMESSAGE <variables> <pattern> <body>): As subgoals are descended into (i.e., "on the way down" the goal tree), THMESSAGE statements have no effect. They are essentially "hooks" which will intercept failures beneath them in the goal tree as such failures propagate back up to the THMESSAGE via a (THFAIL THMESSAGE <pattern>). Upon being backed up to by a THFAIL, any THMESSAGE whose pattern matches the THFAIL pattern will take control (its <body> will be executed). Thus, the THMESSAGE-THFAIL combination provides a way of anticipating possible problems without actually checking for them beforehand. If all goes well beneath the THMESSAGE, it will never run; however, if someone gets into trouble beneath the THMESSAGE (in some way the THMESSAGE is prepared for), the THMESSAGE can correct the problem and then cause the part of the tree beneath it to be reattempted.


Example:

    ... (anticipate difficulty in inserting a screw)
    (THMESSAGE (X Y) ((THV X) WILL NOT TURN IN (THV Y))
        (THGOAL (LUBRICATE (THV X)))            (attempt a remedy)
        (THGOAL (SCREW-IN (THV X) (THV Y))))    (retry)

    ... (attempt to insert some screw in some hole)

    ... (report a failure back up to the THMESSAGE)
    (THFAIL THMESSAGE ((THV SCREW) WILL NOT TURN IN (THV HOLE)))

would anticipate, detect, report, and correct a problem, then retry.

C. CONNIVER

A more recent stage in the evolution of the LISP family of languages was the result of McDermott's and Sussman's development of a language called CONNIVER [20]. CONNIVER's development was principally motivated by the control structure deficiencies of MP, as suggested in the earlier discussion of THTREE. Although there were some improvements in the data base and in pattern-directed invocation control (e.g., the pattern matcher is more sophisticated), the most significant feature of CONNIVER is its ability to maintain numerous computations in states of suspended animation, then to switch among them, working on many subgoals or alternate strategies in unison rather than one at a time. In such an environment, partial computations need not be undone simply because some small aspect of the problem solving has gone awry.

CONNIVER is less a programming language than it is a collection of ideas about control structure. (The language apparently has never been used for more than one or two significant programming tasks [12].) Because of this, our discussion will omit most references to syntax, and highlight only the aspects of CONNIVER's control structure which are unusual or unique to it.

1) Frames, AU-REVOIR, and ADIEU: In a conventional programming language (MP included), one function calls another function either by name or by pattern and waits until the called function returns control. In a conventional language, once a function returns, that invocation instance of it dies; the function may be called anew, but the new call will create a new instance of the function. No memory of a function's current status can be preserved across call-return sequences. This type of control is usually carried out by means of push-down stacks which record calling arguments and return addresses; calling a function causes stacks to be pushed, while returning from a function causes stacks to be popped, annihilating all control information.

In CONNIVER, things are quite a bit different. To call a function in CONNIVER is to create a so-called "frame" for the called function, rather than to push information onto a central stack. A function's frame will contain all the information needed to characterize the function at any moment (e.g., from what A-LIST it derives values for its free variables, to whom it is to return when it has finished, etc.). There are two important features of a frame. First, it is a user-accessible LISP data structure. This means that a function may alter its own or another function's frame in arbitrary ways, causing free variables to be looked up on some other function's A-LIST, or causing the identity of the function to which control is to be returned to be altered. Second, because there is no central stack which is chronologically pushed and popped at function entry/exit, execution control is free to meander from one function to the next without permanently closing any function. Thus, at any moment, there can be numerous suspended functions which may be resumed at the point at which they last relinquished control, or, in fact, at an arbitrary labeled point within them.

As one might expect, this ability makes the context marking technique for items in the data base more complex than in MP. In particular, since control may eventually be returned to any suspended function (the system in general has no way of knowing whether or not it actually will be), every fact in the data base must have markings which specify, for every suspended function F, whether or not that fact is supposed to be logically present while F is running. To accomplish this type of marking, the MP context scheme was generalized from a stack-like arrangement to a tree of contexts. Basically, every fact lives on some branch of the tree, and functions have access to limbs of the tree. Although there is considerable overhead, the system manages to mask and unmask facts in the data base in synchrony with the meandering of execution control from one function to the next.

To distinguish the permanent return of a function from the case where a function merely relinquishes control, reserving the option to continue, CONNIVER defines two methods of returning: ADIEU (final, permanent return) and AU-REVOIR (suspension). One very important application of the AU-REVOIR feature is in the (often costly) generation of alternatives. Rather than calling a function (such as THFIND in MP) to generate all possible candidates before any detailed filtering tests are applied (a procedure which may waste an inordinate amount of time in the initial collecting phase), in CONNIVER it is possible to call a "generator" function which will locate and return candidates one at a time, suspending itself across calls. This makes for a more intimate form of interaction between the generating and testing functions than is possible in MP, and can lead to more efficient searches because of this intimacy. To facilitate the use of generators, CONNIVER has some rather elaborate machinery for maintaining "possibilities lists," including a function, TRY-NEXT, which controls the extraction of possibilities from such lists.

Computation in CONNIVER is similar in most other regards to computation in MP. The counterparts of THANTE, THERASING, and THCONSE theorems are, respectively, IF-ADDED, IF-REMOVED, and IF-NEEDED "methods." Except for differences in syntax and a more general pattern-directed invocation scheme, these three functions are the same as the MP versions. The CONNIVER counterparts of MP's data base and goal-statement functions, THASSERT, THERASE, and THGOAL, are, respectively, ADD, REMOVE, and FETCH.

D. Efficiency of the LISP Language Family

Being an interpreted language, LISP is slower than, say, FORTRAN, by between one and two orders of magnitude. However, compiled LISP can be competitive with a good FORTRAN compiler. We feel that LISP provides the best of both worlds, in the sense that the interpreter provides for easy program development and debugging, while the LISP compiler can transform debugged code into production-level efficiency.

MICROPLANNER and CONNIVER, on the other hand, are inherently less efficient, primarily because of the control structures they superimpose on LISP. The fatal flaw with MP is its backup system, which can be extremely slow; compilation will not typically remedy the problem. CONNIVER is slow for similar reasons; moreover, in CONNIVER not only data structures but also processes must be garbage collected, and an elaborate context tree must be maintained. Although these two languages contain many noteworthy features, we feel that neither (as currently implemented) is appropriate for production applications.

E. Standardization of the LISP Language Family

There are LISP systems for the following machines: PDP-10, PDP-11, UNIVAC 1106, 1108, 1110, CDC 6500, 6600, IBM 360, 370, SDS SIGMA 5, and others. Being a relatively easy language to implement, we would anticipate no significant problems when installing LISP on new machines, including microcomputers. Furthermore, since LISP's syntax is uniform, transportability between various LISP systems is more a question of the semantics of function definitions. However, such incompatibilities can normally be ameliorated with about "one day's worth" of macro writing. Of course, there are some LISP systems, such as INTERLISP, which have such a large number of built-in functions that conversion to a more primitive LISP system may entail a considerable amount of work in the form of writing new function definitions. Nevertheless, LISP can fairly be characterized as standard and transportable. Finally, most LISP systems have an accompanying compiler, usually written in LISP itself.

    IV. RELATED LANGUAGES

A. AL

AL is a high-level programming system for the specification of manipulatory tasks, developed at the Stanford Artificial Intelligence Laboratory [15]. It is a SAIL-like language and includes extensive runtime support for controlling devices.

Trajectory calculation is a crucial feature of manipulatory control. AL contains a wide range of primitives to support efficient trajectory calculations. As much computation as possible is done at compile time, and calculations are modified at run time only as necessary.

Besides a dimensionless scalar data type (i.e., REAL), AL recognizes and manipulates TIME, MASS, and ANGLE scalars, dimensionless and typed VECTORs, and ROT (rotation), FRAME (coordinate system), PLANE (region separator), and TRANS (transformation) data types. Proper composition of variables of these types gives a simple means of performing calculations for any type of movement.

Also included are PL/1-like ON-conditions, which allow monitoring of the outside world, and concurrent processes.

Example:

    PLANE P1;
    {statements initializing P1}
    SEARCH yellow                    {SEARCH is a primitive which causes a hand
                                      to move over a specified area. yellow is a hand}
        ACROSS P1                    {hand moves across plane}
        WITH INCREMENT = 3 * CM      {every 3 cm}
        REPEATING
        BEGIN                        {do at every iteration}
            FRAME set;
            set ← yellow;            {yellow is also coord system of hand}
            MOVE yellow TO ⊗ - Z * CM
                                     {move hand 1 cm down from current
                                      position along Z-axis}
            ON FORCE(Z) > 3000 * DYNES
                DO TERMINATE;        {keep in touch with real world}
            MOVE yellow TO set DIRECTLY;
                                     {move the hand back to where
                                      it was in a straight line}
        END;


B. MLISP

MLISP (meta-LISP) is a high-level list-processing language developed at Stanford University [34]. MLISP programs are translated into LISP programs which are then executed or compiled. The MLISP translator itself is written in LISP.

MLISP is an attempt to improve the readability of LISP programs as well as to alleviate some inconveniences in the control structure of LISP (e.g., no explicit iterative construct). Since run-time errors are only detected by the LISP system (when actually executing the program), users frequently find themselves debugging the translated LISP code. This somewhat defeats the purpose of any high-level language.

All LISP functions are recognized and translated in MLISP, but the Cambridge prefix notation of LISP has been replaced by standard infix and prefix function notation. Instead of (PLUS X Y) one may write X + Y, and (FOO A B C) becomes FOO(A, B, C).

MLISP also provides a powerful set of iterative statements and a large number of "vector operators." Vector operators are used to apply standard operators in a straightforward manner to lists. Thus, in MLISP, applying the vector addition operator +@ to two lists such as <1, 2, 3> yields their element-wise sum; a list written <A, B, C> is equivalent to (LIST A B C) in LISP.

Example: Given a list of the form <obj1, obj2, ..., objn>, this function will return a list of the form <<obj1, holder1>, <obj2, holder2>, ..., <objn, holdern>>, where holderi is either PLIERS, VISE, or NOTHING, according to what is needed to hold the object. A "%" introduces an MLISP comment.

EXPR HOLD-LIST(OBJ-LIST);               % EXPR starts a regular function
BEGIN
    NEW S;                              % local declaration
    RETURN                              % RETURN is a unary operator
    FOR NEW OBJ IN OBJ-LIST             % OBJ is local to the FOR loop.
                                        % OBJ will be bound in turn
                                        % to each element of OBJ-LIST.
    COLLECT                             % COLLECT indicates that the
                                        % result of each iteration is
                                        % to be APPENDed to the previous
                                        % result and this whole list
                                        % returned as the result of
                                        % the FOR.
        < ...
          IF (S := GET(OBJ, 'SIZE)) LEQUAL 5
          THEN ... >
END;.

C. POP-2

POP-2 is a conversational language designed by R. M. Burstall and R. J. Popplestone at the University of Edinburgh [6].

POP-2 features an ALGOL-like syntax and draws heavily from LISP. Integers, reals, LISP-like lists and atoms (called "names"), function constants (lambda expressions), records, arrays, extensible data types, and run-time macros are supported. A unique feature of the POP-2 system is the heavy use of a system stack, which the user may easily control to enhance the efficiency of programs. A full complement of list-manipulation, numeric, and storage-management functions is available.

Example: Suppose we wish to obtain a list of all machinery not currently functioning. A useful function would be

COMMENT sublist returns a list of all elements of argument
        list xl which satisfy argument predicate p;
FUNCTION sublist xl p;               {arguments are xl and p}
    VARS x;                          {declaration of local, no type}
    IF null(xl) THEN nil             {just like LISP}
    ELSE hd(xl) -> x;                {hd(a) = (car a)}
         IF p(x)
         THEN x :: sublist(tl(xl), p)
                                     {tl(a) = (cdr a), x :: l = (cons x l)}
         ELSE sublist(tl(xl), p)
         CLOSE
    CLOSE
END;.


    A call might then look like,

sublist(machine-list, LAMBDA m; not(functioning(m)) END);

    which might return,

[punch-press1 drill-press2 unit10]

    which is a POP-2 list.

    D. QLISP

QLISP is an extended version of QA4 (a PLANNER-like LISP derivative) [30] embedded in the sophisticated INTERLISP system. QLISP supports a wide variety of data types designed to aid in the flexible handling of large data bases. Among the data types supported are "TUPLE," "BAG," and "CLASS." A TUPLE is essentially a LISP list that can be retrieved associatively (see below). A BAG is a multiset, an unordered collection of (possibly duplicated) elements. Bags have been found to be useful for describing certain commutative, associative relations. A CLASS is an unordered collection of nonduplicated elements (i.e., basically a set).
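As a small sketch of how such objects are written (the particular constants are ours; the (TUPLE ...), (BAG ...), and (CLASS ...) forms follow the pattern visible in the QLAMBDA example below):

    (TUPLE SCREWDRIVER-1 PHILLIPS 0.3)   ; ordered; may be retrieved associatively
    (BAG 5 5 7)                          ; unordered; duplicate elements allowed
    (CLASS PLIERS-1 VISE-2)              ; unordered; no duplicates (i.e., a set)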

Arbitrary expressions may be stored in the system data base and manipulated associatively. The QLISP pattern matcher is used to retrieve expressions in a flexible manner. The system function MATCHQQ may be used to invoke the pattern matcher explicitly, as in

    (MATCHQQ (←X ←Y) (A B))

which causes X to be bound to A and Y to B (the "←" indicates the "need for a binding"). The patterns to MATCHQQ may be arbitrarily complex, as in

    (MATCHQQ (A (←X ←Y)) (←X (A (B C))))

in which X is bound to A and Y to (B C).

QLISP expressions are represented uniquely in the data base, unlike LISP, where only atoms are unique. To distinguish between "identical" expressions, "properties" may be associated with any expression by QPUT:

    (QPUT (UNION (A B)) EQUIV (UNION (B C))).

The above puts the expression (UNION (B C)) under the property EQUIV for the expression (UNION (A B)).

QLISP provides facilities for backtracking and pattern-directed invocation of functions, as illustrated by

    (QLAMBDA (FRIENDS JOE (CLASS ←F ←S ←←REST))
             (IS (FATHER $S $F))
             BACKTRACK).

This function will find an occurrence of a CLASS denoting FRIENDS of JOE. F and S will be bound to the first two elements of the CLASS, and REST will be bound to the remainder of the CLASS (indicated by "←←"). If S is a father of F, then the function succeeds. ("$" causes the current binding of its argument to be used.) BACKTRACK causes reinvocation of the function with new bindings for S, F, and REST until the function succeeds or there are no untried bindings.

    under desired circumstances. Many QLISP data base mani-pulation functions may have optional arguments whichdenote a team of routines to be used to perform antecedent-type functions (as in PLANNER).QLISP provides a general context and generator mechan-

    ism similar to that of CONNIVER. Also provided is asmooth, readily accessible interface to the underlyingINTERLISP system which aids in the development andmaintenance of large systems.

    Future plans for QLISP include multiprocessing primi-tives, semantic criteria for pattern matching (as opposed tothe current syntactic information), and the ability for thepattern matcher to return more information than a simplematch or fail.

V. EXAMPLES

A. Introduction

A common example will be used to illustrate the distinguishing features of SAIL, LISP, MICROPLANNER, and CONNIVER. With only minor variations the program segments use the same algorithm. The program segments appear out of context and are not meant to indicate the most efficient (or preferred) implementation of the problem in each language, but merely to illustrate the languages' major attributes.

Problem Statement: Given two distinct assemblies (say A1 and A2), attempt to unscrew A1 from A2, and indicate success or failure accordingly. The "world" of the example is assumed to include:

1) two hands, LEFT and RIGHT, capable of moving, grasping, twisting, and sensing force and motion;
2) a fixed number (possibly zero) of PLIERS;
3) a fixed number (possibly zero) of VISES;
4) a fixed number of "assemblies."

For each PLIERS and VISE, the data base contains an assertion of the form "PLIERS (VISE) n is at location (X, Y, Z) and is of capacity C cm." In addition, for each assembly the data base contains an assertion of the form "assembly A is at location (X, Y, Z) and is of size S cm." As we shall see, the languages are distinguished in part by the methods each uses to represent such knowledge.

Each example assumes the existence of the routines described below in ALGOL-like notation.

ATTACHED(A1, A2): TRUE if and only if the assembly represented by A1 (hereafter referred to as A1) is attached to the assembly represented by A2 (referred to as A2). The routine has no side effects.

MOVE(HAND, LOCATION): Moves HAND (LEFT or RIGHT) to LOCATION (but see MICROPLANNER's description of MOVE).

TWIST(HAND, DIRECTION): Twists HAND (LEFT or RIGHT) in the given DIRECTION (CLOCKWISE or COUNTERCLOCKWISE). The DIRECTION is oriented looking down the length of the arm. Except for SAIL, all programs assume a routine called TWIST-BOTH, which causes both hands to twist at once.

GRASP(HAND, OBJECT): Causes HAND (LEFT or RIGHT) to grasp OBJ