
Thinking Backward for Knowledge Acquisition

Ross D. Shachter and David E. Heckerman

This article examines the direction in which knowledge bases are constructed for diagnosis and decision making. When building an expert system, it is traditional to elicit knowledge from an expert in the direction in which the knowledge is to be applied, namely, from observable evidence toward unobservable hypotheses. However, experts usually find it simpler to reason in the opposite direction, from hypotheses to evidence, because this direction reflects causal relationships. Therefore, we argue that a knowledge base be constructed following the expert's natural reasoning direction, and then reverse the direction for use. This choice of representation direction facilitates knowledge acquisition in deterministic domains and is essential when a problem involves uncertainty. We illustrate this concept with influence diagrams, a methodology for graphically representing a joint probability distribution. Influence diagrams provide a practical means by which an expert can characterize the qualitative and quantitative relationships among evidence and hypotheses in the appropriate direction. Once constructed, the relationships can easily be reversed into the less intuitive direction in order to perform inference and diagnosis. In this way, knowledge acquisition is made cognitively simple; the machine carries the burden of translating the representation.

A few years ago, we were discussing probabilistic reasoning with a colleague who works in computer vision. He wanted to calculate the likelihood of a tiger being present in a field of view given the digitized image. "OK," we replied, "if the tiger were present, what is the probability that you would see that image? On the other hand, if the tiger were not present, what is the probability you would see it?" Before we could say, "What is the probability there is a tiger in the first place?" our colleague threw up his arms in despair. "Why must you probabilists insist on thinking about everything backwards?"

Since then, we have pondered this question. Why is it that we want to look at problems of evidential reasoning backward? After all, the task of evidential reasoning is, by definition, the determination of the validity of unobservable propositions from observable evidence; it seems best to represent knowledge in the direction it will be used. Why then should we represent knowledge in the opposite direction, from hypothesis to evidence?

In this article, we attempt to answer this question by showing how some backward thinking can simplify reasoning with expert knowledge.[1] We believe that the best representation for knowledge acquisition is the simplest representation which captures the essence of an expert's beliefs. We argue that in many cases, this representation will correspond to a direction of reasoning that is opposite the direction in which expert knowledge is used in uncertain reasoning.

This question has relevance to artificial intelligence applications because several popular expert system architectures represent knowledge in the direction from observables to unobservables. For example, in the MYCIN certainty factor (CF) model (Shortliffe and Buchanan 1975), knowledge is represented as rules of the form

IF <evidence> THEN <hypothesis>


simpler to construct assessments in this direction (Kuipers and Kassirer 1984). Furthermore, representations of expert knowledge are often less complex in the causal direction. Thus, these three observations suggest that there might be advantages to representing knowledge in the direction opposite the direction of usage. This argument is summarized in figure 1.

[Figure 1 diagram: CAUSE (usually unobservable) --assessment--> EFFECT (usually observable); the usage arrow points the opposite way, from EFFECT back to CAUSE.]

Figure 1. Two Directions of Representation.

Most real-world problems involve causal interactions in which one event (the cause) affects the outcome of another (the effect). We often use an effect that we can observe to help us learn about a cause which we cannot directly observe. However, we are more comfortable quantifying the relationship between the two events in the causal direction, assessing the likelihood of the effect if only we knew whether the cause were true.

Influence Diagrams

In this article, we examine these issues in the context of probability theory. Nonetheless, we believe that the distinction between direction of usage and direction of natural assessment is a fundamental issue, independent of the language in which belief is represented.

Within the theory of probability, several graphical representations exist for uncertainty that feature both directed graphs (Wright 1921; Howard and Matheson 1981; Pearl 1986; Cooper 1984) and undirected graphs (Speed 1978; Spiegelhalter 1985). The different approaches share the basic concept of the factorization of an underlying joint distribution and the explicit representation of conditional independence. We use a directed graph representation method because direction is central to our discussion. In particular, we use the influence diagram representation scheme (Howard and Matheson 1981). Although the influence diagram has

proven intuitive for knowledge acquisition, the representation is precise and well grounded in theory. Each of the oval nodes in an influence diagram represents a random variable or uncertain quantity, which can take on two or more possible values. The arcs indicate conditional dependence: a variable's probability distribution depends on the outcomes of its direct predecessors in the graph. For example, three possible influence diagrams exist for the two random variables X and Y shown in figure 2. In the first case, X has no predecessors, so we assess a marginal (unconditional) distribution for X and a conditional distribution for Y given X. In the next case, with the arc reversed, we assess a marginal distribution for Y and a conditional distribution for X given Y.

Figure 2. Influence Diagrams with Two Nodes.

The joint probability distribution for two variables can be represented by three different influence diagrams. In diagram a, there is a probability distribution for variable X and a conditional distribution for variable Y given that we observe the value of X (indicated by the arc from X to Y). In diagram b, there is an unconditional distribution for Y and a distribution for X given Y. Diagram c asserts that the two variables are independent because we can obtain their joint distribution from an unconditional distribution for each. Any two variables can be modeled by a or b, but the missing arc in diagram c imposes a strong assumption on the relationship between X and Y, namely, that knowing X will not change our distribution for Y and vice versa.

Both correspond to the same fundamental model of the underlying joint distribution, but the two diagrams represent two different ways of factoring the model. The transformation between them, which involves reversing the direction of conditioning and, hence, the reversal of the arc, is simply Bayes' theorem. Finally, in the third case, neither node has any predecessors, so X and Y are independent. Therefore, we can obtain the joint by assessing marginal distributions for both X and Y. When the two variables are independent, we are free to use any of the three forms, but we prefer the last one, which explicitly reveals this independence in the graph.

As a detailed example, consider the influence diagram in figure 3. This figure shows the relationship between diabetes and a possible symptom of the disease, blue toe. Associated with the influence diagram are the probability distributions, a marginal (unconditional) distribution for diabetes, and a conditional distribution for blue toe given diabetes.

In figure 4, we see four possible influence diagrams for the uncertain variables X, Y, and Z. In the first case, the general situation, the three variables appear to be completely dependent. We assess a marginal distribution for X and conditional distributions for Y given X, and Z given X and Y. In general, there are n! factorizations of a joint distribution among n variables. Each possible permutation leads to a different influence diagram. In the second case in the figure, the three variables are completely independent; in the third case, X and Y are dependent, but Z is independent of both of them. In the fourth case, we see conditional independence. The absence of an arc from X to Z indicates that although X and Z are dependent, they are independent given Y. This type of conditional independence is an important simplifying assumption for the construction and assessment of uncertainty models.

Returning to the earlier diabetes example, blue toe is caused by atherosclerosis, which might be caused by diabetes. This relationship is shown in figure 5. The absence of a direct arc from diabetes to blue toe indicates that given the state of atherosclerosis, knowing whether the patient has diabetes tells us nothing about whether the patient has blue toe. The probability distributions needed to completely describe this model are shown below the influence diagram. They are a marginal distribution for diabetes (the same as previously), and conditional distributions for atherosclerosis given diabetes, and blue toe given atherosclerosis.



In the influence diagram, we always require that there be no directed cycles. By doing so, an ordering always exists among the variables so that we can recover the underlying joint distribution.

One other type of influence diagram node is relevant to our discussion, a deterministic node, drawn as a double oval (as opposed to the probabilistic node, which we have shown as a single oval). The deterministic variable is a function of its predecessors, so its outcome is known with certainty if we can observe the outcomes of these predecessors. In general, we cannot observe all these predecessors, so there can be uncertainty in the outcome of a deterministic variable.

In an influence diagram, knowledge can be encoded in a comfortable and intuitive direction for the expert and later transformed for use. Such transformation is accomplished by reversing the arcs in the influence diagram, which corresponds to the general version of Bayes' theorem (Howard and Matheson 1981; Olmsted 1983; Shachter 1986), as shown in figure 6. As long as no other path exists between chance nodes (either probabilistic or deterministic nodes) X and Y, we can reverse the arc from X to Y. If there were another path, the newly reversed arc would create a cycle. In the process of the reversal, both chance nodes inherit each other's predecessors. This inheritance can add a considerable number of arcs to the influence diagram. We are concerned in later sections about the relative complexity of the model before and after reversals. Sometimes, not all the arcs are needed after reversals, but in general they can be. If multiple reversals are needed, then the order in which they are performed can affect this arc structure.[3]

Figure 3. A Simple Influence Diagram for Diabetes.

The influence diagram tells us about the joint behavior of the events diabetes and blue toe. We have an unconditional probability distribution for diabetes: diabetes is present with probability 0.1 and absent with probability 0.9. We also have a conditional distribution for blue toe given that we observe diabetes: if diabetes is present, then blue toe is present with probability 0.014, but if diabetes is absent, then blue toe is present with probability 0.006.
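To make the arc reversal of figure 2 concrete, here is a minimal Python sketch that reverses the diabetes-to-blue-toe arc using the numbers quoted in the figure 3 caption. The reversed quantities are computed here for illustration; the article itself does not quote them.

    # Reversing the arc between two binary nodes is exactly Bayes' theorem.
    # Inputs are the assessments quoted in the figure 3 caption.
    p_diabetes = 0.1                  # Pr{diabetes present}
    p_bluetoe_given = {True: 0.014,   # Pr{blue toe | diabetes present}
                       False: 0.006}  # Pr{blue toe | diabetes absent}

    # Reversed marginal: Pr{blue toe}
    p_bluetoe = (p_diabetes * p_bluetoe_given[True]
                 + (1 - p_diabetes) * p_bluetoe_given[False])

    # Reversed conditionals: Pr{diabetes | blue toe} and Pr{diabetes | no blue toe}
    p_d_given_bt = p_diabetes * p_bluetoe_given[True] / p_bluetoe
    p_d_given_no_bt = p_diabetes * (1 - p_bluetoe_given[True]) / (1 - p_bluetoe)

    print(f"Pr(blue toe)               = {p_bluetoe:.4f}")      # 0.0068
    print(f"Pr(diabetes | blue toe)    = {p_d_given_bt:.3f}")   # about 0.206
    print(f"Pr(diabetes | no blue toe) = {p_d_given_no_bt:.4f}")  # about 0.0993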

Figure 4. Influence Diagrams with Three Nodes.

The joint probability distribution for three variables can be represented by many possible influence diagrams, some of which are shown in figure 4. Diagram a is the general case, and it could apply to any of the three variables: we have an unconditional distribution for X; a conditional distribution for Y, given X; and a conditional distribution for Z given both X and Y. Diagram b is the most restrictive case: it indicates that the three variables are independent because we have an unconditional distribution for all three. Diagram c allows X and Y to be dependent but asserts that Z is independent of X and Y. Finally, diagram d represents conditional independence: X and Z might be dependent, but once we observe Y, we would not learn anything more about Z by seeing X. Z and X are said to be conditionally independent given Y.


[Figure 5 probability tables: Pr{Diabetes}, with present 0.1 and absent 0.9, and Pr{Atherosclerosis | Diabetes}.]

Figure 5. An Influence Diagram Representing Conditional Independence.

This influence diagram shows that blue toe is conditionally independent of diabetes given atherosclerosis. Our distribution for blue toe would change if we knew whether diabetes were present. However, once we know whether atherosclerosis is present, finding out about diabetes would tell us nothing new about blue toe. The distribution for blue toe is now conditioned by atherosclerosis. If atherosclerosis is present, then blue toe is present with probability 0.9; otherwise, it is only present with probability 0.005.
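The extracted tables do not include the values of Pr{Atherosclerosis | Diabetes}, so the Python sketch below uses hypothetical values (0.01 and 0.0011), chosen only because they roughly reproduce the blue-toe likelihoods quoted in figure 3; the structure of the computation, marginalizing out the intermediate node of the chain, is the point.

    # Chain model of figure 5: Diabetes -> Atherosclerosis -> Blue toe.
    # Pr{atherosclerosis | diabetes} is NOT given in the text; the values
    # below are hypothetical, picked to roughly match figure 3's numbers.
    p_athero_given_d = {True: 0.01, False: 0.0011}
    p_bluetoe_given_a = {True: 0.9, False: 0.005}  # from the figure 5 caption

    def p_bluetoe_given_diabetes(diabetes: bool) -> float:
        """Marginalize atherosclerosis out of the chain."""
        pa = p_athero_given_d[diabetes]
        return pa * p_bluetoe_given_a[True] + (1 - pa) * p_bluetoe_given_a[False]

    print(f"{p_bluetoe_given_diabetes(True):.5f}")   # 0.01395, cf. 0.014 in figure 3
    print(f"{p_bluetoe_given_diabetes(False):.5f}")  # 0.00598, cf. 0.006 in figure 3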

Figure 6. Arc Reversal in an Influence Diagram.

Bayes' theorem is represented in the influence diagram by an arc reversal. In the diagram on the left, Y is conditioned by X. If we want to express the distribution of X conditioned by Y instead, we can reverse the arc between them. In general, both X and Y will have other conditioning variables, indicated by the three sets. (The middle set contains their common conditioning variables.) As a consequence of the arc reversal, the new distributions for X and Y will (in general) be conditioned by all the variables in the three sets. As a rule, the distributions become more complicated after we perform arc reversals.
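Written out, the general reversal of figure 6 is the following pair of equations, sketched here in an assumed notation: let w_I and w_K be the outcomes of the conditioning variables private to X and to Y respectively, and w_J the outcomes of their common set.

    % Before reversal we hold p(x | w_I, w_J) and p(y | x, w_J, w_K).
    % After reversing the arc from X to Y:
    \begin{align}
      p(y \mid w_I, w_J, w_K) &= \sum_x p(y \mid x, w_J, w_K)\, p(x \mid w_I, w_J), \\
      p(x \mid y, w_I, w_J, w_K) &=
        \frac{p(y \mid x, w_J, w_K)\, p(x \mid w_I, w_J)}{p(y \mid w_I, w_J, w_K)}.
    \end{align}

Both nodes end up conditioned on all three sets, which is the growth in arcs the caption warns about.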



Figure 7. Influence Diagram Transformation for Diagnosis.

Arc reversals permit us to move from an influence diagram that is easy to assess to one which helps with diagnosis. The influence diagram a shows that blue toe and glucose in urine are conditionally independent given diabetes. This fact is also evident in diagram b, which is obtained by reversing the arc from diabetes to blue toe. When the arc from diabetes to glucose in urine is reversed to obtain diagram c, we must also add an arc from blue toe to glucose in urine, and there is no longer any conditional independence indicated in the diagram. The distribution for diabetes is now conditioned by both blue toe and glucose in urine. If they are both present, then diabetes is present with probability 0.959; if they are both absent, then diabetes is present with probability 0.001.

To continue with the diabetes example, consider the influence diagram shown in figure 7a. We have added a second symptom of diabetes, the presence of glucose in the urine. The conditional probabilities for glucose in the urine, given diabetes, are given below the influence diagram. The two symptoms are shown to be conditionally independent in the diagram given diabetes. If we know whether the patient has diabetes, then the presence of one symptom tells us nothing about the other. Although the diagram is natural for assessment, it cannot be used for diagnosis. We can transform the diagram into one that is useful for diagnosis by first reversing the arc from diabetes to blue toe and then reversing the arc from diabetes to glucose in urine, as shown in figures 7b and 7c. When performing the second reversal, an arc is added from blue toe to glucose in urine. The arc is needed because the symptoms are not independent. When we do not know whether the patient has diabetes, the presence of one symptom tells us much about the other.
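The scan omits the actual table for glucose in urine, so the sketch below assumes Pr{glucose | diabetes} = 0.99 and Pr{glucose | no diabetes} = 0.011; these are hypothetical values, chosen only because they are consistent with the posteriors of 0.959 and 0.001 quoted in the figure 7 caption.

    # Figure 7 reversal: from Pr{diabetes} and per-symptom likelihoods to
    # Pr{diabetes | blue toe, glucose in urine}. Glucose numbers are assumed.
    from itertools import product

    p_d = 0.1
    p_bt = {True: 0.014, False: 0.006}  # Pr{blue toe | diabetes = T/F}
    p_gu = {True: 0.99, False: 0.011}   # HYPOTHETICAL Pr{glucose | diabetes = T/F}

    def p_diabetes_given(bluetoe: bool, glucose: bool) -> float:
        """Posterior via the conditional independence of the two symptoms."""
        def joint(d: bool) -> float:
            prior = p_d if d else 1 - p_d
            lik_bt = p_bt[d] if bluetoe else 1 - p_bt[d]
            lik_gu = p_gu[d] if glucose else 1 - p_gu[d]
            return prior * lik_bt * lik_gu
        return joint(True) / (joint(True) + joint(False))

    for bt, gu in product([True, False], repeat=2):
        print(bt, gu, f"{p_diabetes_given(bt, gu):.3f}")
    # both present -> about 0.959; both absent -> about 0.001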

Deterministic Models

The importance of the distinction between direction of assessment and direction of usage appears even in the simplest categorical models. Suppose, for example, that we have an error in the output from a small computer program. If we knew the source of the error, then we would know with certainty the type of output error to expect. Thus, we could use the model shown in figure 8, in which the programming error is represented by a probabilistic node conditioning the computer output, represented by a deterministic node. When we observe the output and wish to learn about the source of the error, we reverse the arc using Bayes' theorem and find that after the reversal, both nodes have become probabilistic. Given the output, we do not necessarily know what type of error caused it, but we are able to update our previous beliefs about the possible errors in light of this new evidence. Our direction of usage is clearly in the more complex, reverse direction, but the model is easier to construct in the original direction, which exploits the categorical behavior of our man-made computer system.

Figure 8. An Influence Diagram for Diagnosing Computer Programming Errors.

The influence diagram on the left is a simple model of a computer program. Output is a deterministic variable; if we know whether a particular error is present, then we know exactly what output to expect. However, we are uncertain about what the output will be because we are uncertain about the error. Therefore, after we reverse the arc to perform diagnosis, output becomes a probabilistic variable. The error remains probabilistic because the same output could arise from different errors.
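As a sketch of this reversal, the Python fragment below inverts a deterministic output node. The error categories, priors, and output mapping are all hypothetical, invented for illustration.

    # A deterministic node inverted by Bayes' theorem (figure 8).
    # Error names, priors, and the output mapping are hypothetical.
    priors = {"no error": 0.90, "off-by-one": 0.06, "uninitialized var": 0.04}
    output_of = {                # deterministic: output is a function of the error
        "no error": "correct output",
        "off-by-one": "garbled output",
        "uninitialized var": "garbled output",  # two errors, same symptom
    }

    def posterior(observed_output):
        """Pr{error | output}: condition the prior on the deterministic mapping."""
        weights = {e: p for e, p in priors.items() if output_of[e] == observed_output}
        total = sum(weights.values())
        return {e: w / total for e, w in weights.items()}

    print(posterior("garbled output"))
    # approximately {'off-by-one': 0.6, 'uninitialized var': 0.4}:
    # after the reversal, the output node is probabilistic, and so is the error.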

Suppose now that we have a much larger computer system. If it were written with a modular design, we might have the influence diagram shown in figure 9. Again, this model is relatively easy to construct because of the categorical nature and the independence among subsystems. If, however, we observe the output and wish to update our knowledge about these subsystems, we find that they are no longer categorical, nor are they independent, in light of the new information.[4] This newer, more complex model is the correct one to use to update our beliefs as we observe evidence, but it is less convenient for knowledge acquisition.

Probabilistic Models

In most real-world domains, we do not have a categorical model to assess, but there is still considerable advantage to thinking about a problem in the causal direction. Often, basic and straightforward probabilistic models become complex when viewed in the direction of usage.



Consider the case of two effects with a single common cause. Even when the effects are conditionally independent given the cause, as in figure 10, in general, they are dependent when the problem is reversed for usage. We saw a specific example of this case earlier in figure 7. Similarly, when there are two independent causes and a single common effect, as in figure 11, we see complete dependency when the problem is reversed. Clearly, as the number of causes and effects increases, the problem stays straightforward in the causal direction but becomes complex in the direction of usage.

As an example, consider two disorders, congestive heart failure and nephrotic syndrome (a kidney disease), that essentially arise independently. Congestive heart failure often results in an enlarged heart (cardiomegaly) and an accumulation of fluid in the ankles (pitting edema), and nephrotic syndrome often leads to protein in the urine and pitting edema as well. A simple test for protein in the urine is whether the urine is frothy; an X ray can detect cardiomegaly. The model corresponding to this problem is shown on the left in figure 12. If we turn the model around to show how the unobservable events of interest, heart failure and nephrotic syndrome, depend on the observables, X ray, pitting edema, and frothy urine, then the model becomes the one shown on the right in figure 12. The original model was not only simpler but more natural to assess, going in the causal direction. The reversed model would be intolerably confusing to assess, but it has all the dependencies one needs for proper usage.

Another major advantage to viewing a problem in different directions for construction and solution is that parts of the assessment might not vary much from case to case. Consider the simple medical example in figure 13, in which the presence or absence of a disorder affects the likelihood that a set of symptoms will manifest. Although the probability distribution for the disorder can vary widely between cases on the basis of patient-specific factors such as age, occupation, and climate, the probability distribution for the symptoms, given the disorder, is often independent of these factors. However, when the model is reversed to the direction of usage, both distributions become patient specific. Thus, a knowledge base organized in the intuitive direction is more portable than one constructed in the direction for use. Moreover, by building models in the direction of natural assessment, constant likelihoods can be exploited to decrease knowledge acquisition time.

Finally, it is often useful to add new variables that simplify the construction process. Consider the medical example in figure 12. If we are interested in the probability distribution for pitting edema given congestive heart failure, it is much easier to assess with nephrotic syndrome present in the model. We can then take expectation over nephrotic syndrome to arrive at our desired conditional distribution. If we did not explicitly consider nephrotic syndrome, then we would be forced to perform the integration mentally instead. The addition of variables can be of considerable cognitive aid when trying to assess the probability distributions. This process is what Tribus (1969) calls extending the conversation. We get little benefit from this technique unless we first build our model in the causal direction.
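As a sketch of extending the conversation, with all numbers hypothetical (the article does not give distributions for this example): to assess Pr{pitting edema | congestive heart failure}, condition on nephrotic syndrome and take the expectation over it.

    # Extending the conversation (Tribus 1969): assess conditionally on an
    # added variable, then take expectation over it. All values hypothetical.
    p_nephrotic = 0.02                  # Pr{nephrotic syndrome}
    p_edema_given_chf = {True: 0.95,    # Pr{edema | CHF, nephrotic syndrome}
                         False: 0.70}   # Pr{edema | CHF, no nephrotic syndrome}

    # Pr{edema | CHF} = sum over NS of Pr{edema | CHF, NS} * Pr{NS}
    # (NS is independent of CHF in the causal model, so Pr{NS | CHF} = Pr{NS}).
    p_edema = (p_nephrotic * p_edema_given_chf[True]
               + (1 - p_nephrotic) * p_edema_given_chf[False])
    print(f"{p_edema:.3f}")  # 0.705

Without the added variable, this summation would have to be done in the expert's head.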

Figure 9. An Influence Diagram for a Modular Computer Program.

This influence diagram represents a modular computer system. The diagram on the left shows considerable conditional independence; for example, neither system2 nor subsystem12 will be affected by errors in subsystem11. The new diagnostic diagram obtained after arc reversals shows much less conditional independence. It helps one to understand the relationships among the variables after we observe output. In this case, knowing whether system1 is working could change the distribution for system2; if output reveals an error that is not explained by system1's state, then system2 must be responsible.

Figure 10. A Single Cause with Multiple Effects.

Figure 10 illustrates a cause for which there are multiple effects. The diagram on the left is easier to assess because the effects are conditionally independent given the true cause. To reason about the cause from the effects, the arcs are reversed to obtain the cause conditioned by the effects. The effects are now dependent because observing one will change the distribution for the cause, which in turn changes the distribution for the other effect.

Figure 11. A Single Effect with Multiple Causes.

In this case, multiple causes have a common effect. The diagram on the left is easier to assess because the causes are independent. To learn about the causes from the effect requires reversing the arcs. The causes are now dependent because if the effect were observed, then cause2 would be much more likely if cause1 were not true.
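The dependence induced by the reversal in figure 11 is easy to exhibit numerically. In the Python sketch below, every number is hypothetical; the point is that after the common effect is observed, learning that cause1 is absent raises the probability of cause2.

    # Two independent causes, one common effect (figure 11). Hypothetical CPT.
    p_c1, p_c2 = 0.1, 0.1
    p_effect = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.9, (False, False): 0.01}

    def posterior_c2(effect_observed=True, c1=None):
        """Pr{cause2 | effect, and optionally cause1} by brute-force summation."""
        num = den = 0.0
        for a in (True, False):
            if c1 is not None and a != c1:
                continue
            for b in (True, False):
                w = ((p_c1 if a else 1 - p_c1) * (p_c2 if b else 1 - p_c2)
                     * (p_effect[a, b] if effect_observed else 1 - p_effect[a, b]))
                den += w
                if b:
                    num += w
        return num / den

    print(f"{posterior_c2():.3f}")          # Pr{c2 | effect}         about 0.505
    print(f"{posterior_c2(c1=False):.3f}")  # Pr{c2 | effect, no c1}  about 0.909
    print(f"{posterior_c2(c1=True):.3f}")   # Pr{c2 | effect, c1}     about 0.109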



Figure 12. An Influence Diagram for a Medical Diagnosis Problem.

Figure 12 illustrates a medical example in which two disorders (causes), congestive heart failure and nephrotic syndrome, are indicated by test results (effects), including a common one, pitting edema. The diagram on the left is fairly simple and one that can be comfortably assessed by the physician. To perform diagnosis, a number of arcs must be reversed to obtain the diagram on the right; this diagram has a distribution for congestive heart failure conditioned by all possible test results. Although the new diagram is useful for diagnosis, it is practically impossible for a physician to assess its conditional distributions.

Figure 13. Taking Advantage of Constant Likelihood.

In this generic medical example, a number of factors condition the distribution for a disease (cause), but the symptoms (effect) are conditionally independent of those factors given the disease. This diagram is natural to assess and, in fact, corresponds to the organization of most medical texts. To perform a diagnosis, however, the arcs must be reversed to produce the diagram on the right, which is much more difficult to assess. This new diagram corresponds to the physician's clinical experience. If an experienced physician changes jobs, there is often a period of adjustment as the physician learns about the new factors. Similarly, a knowledge base organized like the diagram on the right would be less portable than one based on the diagram on the left.

Conclusions

We believe it is important to distinguish between the direction in which a model is constructed and the direction in which it is applied. Models that capture rich interactions can become impossible to assess in the usage direction even though they might be simple and natural to think about in the opposite direction.

AI researchers have argued that various methods for reasoning with uncertainty are impractical because of the complexity of knowledge acquisition (Shortliffe 1976; Rich 1983). Indeed, many AI researchers have sacrificed self-consistency of the reasoning mechanism in order to facilitate simplicity in the knowledge representation process (Heckerman 1986). We contend that the desired simplicity can often be found by constructing models in the direction opposite that of usage without having to sacrifice fundamentals.

Notes

1. Many of the concepts discussed in this article were examined previously in Shachter and Heckerman (1986). Also relevant to our discussion is Pearl (1987).

2. This is not to be confused with forward versus backward chaining, which is an issue of control rather than representation.

3. It is interesting to note that the reversal operation is simplified considerably when the predecessor X is a deterministic node. It is not simplified, however, when the successor Y is the deterministic node.

4. Notice that the newer model does show some conditional independence which can be exploited at the time of usage.



Acknowledgments

This work was supported in part by Decision Focus, Inc.; the National Library of Medicine under grant R01-LM04529; NASA-Ames Research Center; the Henry J. Kaiser Family Foundation; and the Ford Aerospace Corporation. Computing facilities were provided by the SUMEX-AIM resource under NIH grant RR-00785.

References

Cooper, G. F. 1984. NESTOR: A Computer-Based Medical Diagnostic Aid That Integrates Causal and Probabilistic Knowledge. Ph.D. diss., Technical Report STAN-CS-84-48, HPP-84-48, Dept. of Computer Science, Stanford Univ.

Heckerman, D. E. 1986. Probabilistic Interpretations for MYCIN's Certainty Factors. In Uncertainty in Artificial Intelligence, eds. L. Kanal and J. Lemmer, 167-196. New York: North-Holland.

Howard, R. A., and Matheson, J. E. 1981. Influence Diagrams. In Readings on the Principles and Applications of Decision Analysis, eds. R. A. Howard and J. E. Matheson, 721-762. Menlo Park, Calif.: Strategic Decisions Group.

Kuipers, B., and Kassirer, J. 1984. Causal Reasoning in Medicine: Analysis of a Protocol. Cognitive Science 8:363-385.

Olmsted, S. M. 1983. On Representing and Solving Decision Problems. Ph.D. diss., Dept. of Engineering-Economic Systems, Stanford Univ.

Pearl, J. 1986. Fusion, Propagation, and Structuring in Belief Networks. Artificial Intelligence 29:241-288.

Pearl, J. 1987. Embracing Causality in Formal Reasoning. Technical Report CSD-860020, Dept. of Computer Science, Univ. of California at Los Angeles. Also in Proceedings of the Sixth National Conference on Artificial Intelligence, 369-373. Menlo Park, Calif.: American Association for Artificial Intelligence.

Rich, E. 1983. Artificial Intelligence. New York: McGraw-Hill.

Shachter, R. D. 1986. Intelligent Probabilistic Inference. In Uncertainty in Artificial Intelligence, eds. L. Kanal and J. Lemmer, 371-382. New York: North-Holland.

Shachter, R. D., and Heckerman, D. E. 1986. A Backwards View for Assessment. In Proceedings of the Second Workshop on Uncertainty in Artificial Intelligence, 237-241. Menlo Park, Calif.: American Association for Artificial Intelligence.

Shortliffe, E. H. 1976. Computer-Based Medical Consultations: MYCIN. New York: Elsevier/North-Holland.

Shortliffe, E. H., and Buchanan, B. G. 1975. A Model of Inexact Reasoning in Medicine. Mathematical Biosciences 23:351-379.

Speed, T. P. 1978. Graphical Methods in the Analysis of Data. Paper presented at the University of Copenhagen Institute of Mathematical Statistics, Copenhagen.

Spiegelhalter, D. J. 1985. A Statistical View of Uncertainty in Expert Systems. Paper presented at the Workshop on AI and Statistics, Bell Laboratories, Princeton, N.J., 11-12 April.

Tribus, M. 1969. Bayes Equation and Rational Inference. In Rational Descriptions, Decisions, and Designs. New York: Pergamon.

Wright, S. 1921. Correlation and Causation. Journal of Agricultural Research 20:557-585.
