B. Ross Cosc 4f79 1
Explanation

• An important feature of expert systems is their ability to explain:
- why questions are being asked
- how certain conclusions were reached
- why other conclusions were not reached
- trace the inference engine for debugging purposes
- give "human level" explanation of rules
• Implementing explanation involves keeping a record of the inference steps that resulted in a computation:
- the rules that were executed: (i) rule #’s, (ii) symbolic images of rules
- the order in which rules were executed
--> i.e. keep a record of the computation tree
• explanation utilities merely access this computation-tree record and print out text accordingly
Explanation (cont)
• There are various methods for recording the computation tree:
a) assert and retract facts recording level, step, and rule numbers
- this adds complexity to the knowledge base: rule numbers, an "enter_rule" goal, ...
- the KB becomes less declarative
b) new arguments record explanation in KB rules; the rules must keep tabs on this argument
- still makes the KB less declarative
c) Meta-interpreters:
- because KB should be declarative, we write a simple meta-interpreter to execute it
- this meta-interpreter will keep track of the computation tree via an added argument
- advantage: KB remains declarative and simple to maintain
- also, one can encode a fairly sophisticated explanation facility
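The meta-interpreter idea in (c) can be sketched in a few clauses. This is a minimal illustration, not the course shell: the predicate name prove/2 and the list representation of the history are assumptions.

```prolog
% prove(Goal, History): solve Goal against the (unmodified, declarative) KB,
% returning in History the list of KB goals solved along the way.
prove(true, []).
prove((A, B), History) :-
    prove(A, HA),
    prove(B, HB),
    append(HA, HB, History).
prove(Goal, [Goal | History]) :-
    clause(Goal, Body),          % look up a KB clause for Goal
    prove(Body, History).
```

Because the record is threaded through as an argument of the interpreter, the KB clauses themselves need no extra bookkeeping; an explanation utility only has to walk the returned History list.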
Explanation
• explain line of reasoning:
- why: why a query is being asked
- how: how a conclusion was reached
- why_not: why another conclusion wasn't reached
- trace: computation trace
- dump: dump one or all rules in a readable format
• Requires keeping track of computation tree
- identify rules: by rule numbers, or symbolically
- keep track of the computation tree:
i) assert step/rule info in the active database
ii) thread an extra argument through a meta-interpreter
• Type of explanation generated:
1. print the rule
- dump the Prolog clause
- print a rule number
- print "attribute : value"
- print an "English"-style version of the rule
Explanation (cont)
2. print special text incorporated into the rule
eg. defect(12, 'the heater blower', is, defective) :-
        cause(13, 'the blower', is, stuck),
        cause(14, 'the motor', is, 'out of whack').
eg. meta-interpreter:
bird(barn_swallow, 'the name of the bird') :-
        family(swallow, _),
        tail(square, _).
tail(square, X) :-
        X = 'the shape of the tail',
        ask(tail, square, X).
then: (1) ask will use this 3rd argument when querying the user
(2) meta-interpreter's "prove" predicate will include this text in its history argument, which is then available for any explanation required
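Step (1) might be realized along these lines; the exact question format, and the use of known/2 to cache answers, are illustrative assumptions:

```prolog
% ask(Attr, Val, Text): query the user, phrasing the question with the
% human-readable Text carried in the rule's extra argument.
ask(Attr, Val, Text) :-
    write('Is '), write(Text), write(' '), write(Val), write('? '),
    read(Answer),
    Answer = yes,
    assert(known(Attr, Val)).   % remember the answer for later queries
```

So tail(square, X) above would produce a question like "Is the shape of the tail square?", with no English text embedded in the shell itself.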
Explanation (cont)
3. Associate some canned text with each rule
eg. rule numbers:
bird(26, barn_swallow) :-
        family(_, swallow),
        tail(_, square).
elsewhere...
big_explanation(26) :-
        write('Barn swallows have the following unique characteristics...').
• The shell utility will match this explanation with the rule for which a big explanation is sought.
• Could also have a text file for the rule:
bird('barnswallow.txt', barn_swallow) :- .....
1. MTA
• working database:
step(0) [1, 2, 3, ...] <-- step in inference
tree(0,0) [(1,_), (2,_), ...] <-- inference-tree record
level(0) <-- keeps track of which level in the tree is currently being explained
• advantages:
- high-level explanation of rules
• disadvantages:
- the rules themselves are not printed (useful for debugging)
- the KB has more control info
* - the step, tree, and level predicates work via side effects: a very nasty way to do logic programming!
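The side-effect style being criticized looks roughly like this; the predicate names follow the slide, but the details of enter_rule/1 are an assumption:

```prolog
% enter_rule(N): called at the start of each KB rule; bumps the step
% counter and records that rule N fired at that step -- all by retracting
% and re-asserting facts in the working database, i.e. pure side effects.
enter_rule(N) :-
    retract(step(S)),
    S1 is S + 1,
    assert(step(S1)),
    assert(tree(S1, N)).
```

Every KB rule must now begin with an enter_rule/1 call, which is exactly the extra control information and loss of declarativeness noted above.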
2. Bowen toy system(ch.8)
should_take(Person, Drug, Reason) :-
        complains_of(Person, Symptom, ComplainsReason),
        suppresses(Drug, Symptom, SuppressesReason),
        not unsuitable_for(Person, Drug, UnsuitReason),
        append(SuppressesReason, UnsuitReason, InterReason),
        append(ComplainsReason, InterReason, Reason).
suppresses(Drug, Symptom, [relieves(Drug, Symptom)]) :- relieves(Drug, Symptom).
etc
- 3rd arguments are lists of reasons why goals succeed
- need to append them together: ruins the declarativeness of the KB
run :-
        write('Name = '),
        read(Person),
        should_take(Person, Drug, Reason),
        write_list(['Recommend taking ', Drug, nl]),
        write('Explanation:'), nl,
        write_nl_list(Reason).
3. Bird ID
• meta-interpreter keeps a list of the successfully solved goals
• this list is printed as part of the explanation
• Note that prov(G, [G | H]) is equivalent to append([G], H, H2), prov(G, H2)
• to add "why" to our toy system:
- add a history argument to "prov" to keep a growing list of successful goals (represents the branch of computation tree)
- modify "ask" to recognize "why" from user (already reads "yes" and "no"); will also take history argument, and print it out when "why" is seen • advantage: - KB is kept simple & declarative
disadvantage: - the explanation written is terse
--> solutions: (i) add phrase arguments, pretty printing (ii) add canned text predicates for why
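Adding "why" along these lines might look as follows; the question format and the split into process_ans/3 are assumptions, and write_nl_list/1 is taken to print one history element per line:

```prolog
% ask(Query, History): put Query to the user. Answering "why" prints the
% goals solved so far (the current branch of the computation tree), then
% re-asks the same question.
ask(Query, History) :-
    write(Query), write('? '),
    read(Answer),
    process_ans(Answer, Query, History).

process_ans(yes, _, _).
process_ans(no, _, _) :- fail.
process_ans(why, Query, History) :-
    write('Trying to establish:'), nl,
    write_nl_list(History),
    ask(Query, History).
```

Each further "why" can walk one level higher in the history list, with no change to the KB itself.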
Comparing these styles
- the MTA KB is more difficult to maintain; a meta-interpreted KB is more declarative
- MTA shell code is side-effect driven, while the meta-interpreter's is more straightforward
eg. compare MTA's write_explanation with bird's process_ans
- each repeated "why" in MTA will retract/assert new level clause, which is a side effect
- process_ans can be made to print elements in history list for each why given by user
• Bowen's ch. 8 method is better, but it still complicates the KB
- when KB rules have "append", something is amiss
• meta-interpreter: the ideal method, because we can in essence design our own KB language, whose explanation, I/O, inference scheme, etc., are tailored to our needs
- can keep KB as pure as possible
User interface
• user interface should provide a variety of user commands
- standard explanation ones: why, how, why_not, trace, ...
- query input:
yes, no (values: white, long, etc. ...)
menus: choices, numeric input, windows, ...
unsure: not certain how to determine the query answer
unknown: a definite answer is not possible
• when recording input: assert(fact(Attribute, Value, X)), where X is one of yes, no, unknown
• unknown can mean that some rules are possibly eligible
• if the user types "unsure", the shell can give guidance on how to proceed. This is called "test" advice in the text.
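Handling these answers might be sketched as below; the predicate names process_input/3, test_advice/2, and query_user/2 are hypothetical helpers, not part of the text's shell:

```prolog
% Record the user's answer for an Attribute/Value pair. "unsure" triggers
% test advice and a re-ask instead of asserting anything; the definite
% answers are stored in the working database as fact/3.
process_input(unsure, Attribute, Value) :-
    test_advice(Attribute, Value),     % guidance on how to find the answer
    query_user(Attribute, Value).      % then ask the question again
process_input(Ans, Attribute, Value) :-
    member(Ans, [yes, no, unknown]),
    assert(fact(Attribute, Value, Ans)).
```

Keeping "unknown" as a stored value (rather than treating it as "no") is what lets later rules reason about which conclusions remain possibly eligible.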