



  • Explaining Task Processing in Cognitive Assistants that Learn
    Deborah McGuinness 1, Alyssa Glass 1,2, Michael Wolverton 2, Paulo Pinheiro da Silva 3*
    1 Knowledge Systems, AI Laboratory, Stanford University ({dlm | glass} @ksl.stanford.edu)
    2 SRI International ([email protected])
    3 University of Texas at El Paso ([email protected])
    * Work done while on staff at Stanford KSL
    March 26, 2007

    *thanks to Li Ding, Cynthia Chang, Honglei Zeng, Vasco Furtado, Jim Blythe, Karen Myers, Ken Conley, David Morley

  • General Motivation
    Interoperability: as systems use varied sources and multiple information manipulation engines, they benefit more from encodings that are shareable and interoperable
    Provenance: if users (humans and agents) are to use and integrate data from unknown, unreliable, or evolving sources, they need provenance metadata for evaluation
    Explanation/Justification: if information has been manipulated (e.g., by sound deduction or by heuristic processes), information manipulation trace information should be available
    Trust: if some sources are more trustworthy than others, representations should be available to encode, propagate, combine, and (appropriately) display trust values
    Goal: provide an interoperable knowledge provenance infrastructure that supports explanations of sources, assumptions, learned information, and answers as an enabler for trust.

  • Inference Web Infrastructure (primary collaborators: Ding, Chang, Zeng, Fikes)
    A framework for explaining question answering tasks by abstracting, storing, exchanging, combining, annotating, filtering, segmenting, comparing, and rendering proofs and proof fragments provided by question answerers.

  • ICEE: Integrated Cognitive Explanation Environment
    Improve trust in cognitive assistants that learn by providing transparency concerning:
    * provenance
    * information manipulation
    * task processing
    * learning

  • Task Management Framework (architecture diagram)
    Components: Task Manager (SPARK), Execution Monitor & Predictor (ProPL), Time Manager (PTIME), Procedure Learners (Tailor, LAPDOG, PrimTL, PLOW), Activity Recognizer, Location Estimator, Process Models, Advice, Preferences, and the Task Explainer (ICEE)

  • ICEE Architecture (diagram)
    Components: Collaboration Agent, Justification Generator, Task Manager (TM), TM Wrapper, Explanation Dispatcher, TM Explainer

  • Task Explanation
    Ability to ask "why" at any point
    Contextually relevant responses (using current processing state and underlying provenance)
    Context-appropriate follow-up questions are presented

    Explanations are generated completely automatically; no additional work is required by the user to supply information

  • Explainer Strategy
    Present query and answer
    Abstraction of justification (using PML encodings)
    Provide access to meta-information
    Suggest context-appropriate drill-down options (also provide feedback options)
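    One way to picture this strategy is as an explainer loop that returns an answer, an abstracted justification, meta-information, and drill-down options. A minimal Python sketch; the class and method names (Explainer, abstract, drill_down_options) are illustrative assumptions, not the actual ICEE API.

      # Illustrative sketch of the explainer strategy above; not the ICEE implementation.
      class Explainer:
          def __init__(self, justification_store):
              self.store = justification_store  # PML-style justifications keyed by query id

          def explain(self, query_id):
              just = self.store[query_id]                # full justification
              return {
                  "query": just["query"],
                  "answer": just["conclusion"],
                  "summary": self.abstract(just),        # abstraction of the justification
                  "meta": just.get("provenance", {}),    # access to meta-information
                  "follow_ups": self.drill_down_options(just),
              }

          def abstract(self, just):
              # Keep only the top-level inference step; hide low-level detail.
              return just["steps"][0]["rule"]

          def drill_down_options(self, just):
              # Context-appropriate follow-up questions, plus a feedback hook.
              return ["Why was this step executed?",
                      "What sources were used?",
                      "Give feedback on this explanation"]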

  • Sample Introspective Predicates: Provenance
    Author; Modifications; Algorithm; Addition date/time; Data used; Collection time span for data; Author comment; Delta from previous version; Link to original
    Glass, A., and McGuinness, D.L. 2006. Introspective Predicates for Explaining Task Execution in CALO. Technical Report KSL-06-04, Knowledge Systems Lab., Stanford University.
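    These predicates can be read as the fields of a provenance record attached to each piece of task knowledge. A minimal sketch, assuming a simple record type; the field names mirror the list above but are not the actual CALO predicate names.

      # Hypothetical provenance record; field names are illustrative only.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class ProvenanceRecord:
          author: str
          algorithm: Optional[str] = None
          added_at: Optional[str] = None                  # addition date/time
          data_used: List[str] = field(default_factory=list)
          collection_span: Optional[str] = None           # collection time span for data
          author_comment: Optional[str] = None
          delta_from_previous: Optional[str] = None       # delta from previous version
          modifications: List[str] = field(default_factory=list)
          link_to_original: Optional[str] = None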

  • Task Action Schema
    The wrapper extracts portions of the task intention structure through introspective predicates
    Extracted information is stored in an action schema
    Designed to achieve three criteria:
    Salience: information relevant to user information needs
    Reusability: information usable by cognitive agent activities like procedure learning or state estimation
    Generality: a conceptual model appropriate for action reasoning in BDI architectures, blackboard systems, production systems, etc.
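    As a rough illustration of the action schema idea, the sketch below shows the kind of structure a TM wrapper might fill via introspective predicates. The field names and the task_reasoner.ask interface are assumptions for illustration, not the published schema.

      # Hypothetical action schema and wrapper; not the actual ICEE/CALO schema.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class ActionSchema:
          name: str                                        # e.g., "GetApproval"
          parent: Optional[str] = None                     # parent intention, if any
          preconditions: List[str] = field(default_factory=list)
          termination_conditions: List[str] = field(default_factory=list)
          status: str = "pending"                          # pending | executing | done | failed
          provenance: dict = field(default_factory=dict)   # provenance meta-information

      def extract_action(task_reasoner, intention_id):
          """Sketch of the wrapper: query introspective predicates and fill the schema.
          task_reasoner.ask(...) stands in for the real introspection interface."""
          return ActionSchema(
              name=task_reasoner.ask("name", intention_id),
              parent=task_reasoner.ask("parent", intention_id),
              preconditions=task_reasoner.ask("preconditions", intention_id),
              termination_conditions=task_reasoner.ask("terminationConditions", intention_id),
              status=task_reasoner.ask("status", intention_id),
          )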

  • User Trust Study
    Interviewed 10 Critical Learning Period (CLP) participants: programmers, researchers, administrators
    Focus of study:
    Trust
    Failures, surprises, and other sources of confusion
    Desired questions to ask CALO
    Initial results:
    Explanations are required in order to trust agents that learn
    To build trust, users want transparency and provenance
    Identified question types most important to CALO users --> motivation for future work

  • Selected Future Directions
    Broaden explanation of learning (and CALO integration):
    Explain learning by demonstration (integrating initially with CALO component LAPDOG)
    Explain preference learning (integrating initially with CALO component PTIME)
    Investigate explanation of conflicts/failures; explore this as feedback and a driver to initiate learning procedure modifications or learning new procedures
    Expand dialogue-based interaction and presentation of explanations (expanding our integration with Towel)
    Use trust study results to prioritize provenance, strategy, and dialogue work
    Exploit our work on IW Trust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL, etc.

  • Advantages of the ICEE Approach
    Unified framework for explaining task execution and deductive reasoning, built on the Inference Web infrastructure
    Architecture for reuse among many task execution systems
    Introspective predicates and a software wrapper that extract explanation-relevant information from the task reasoner
    Reusable action schema for representing task reasoning

  • Resources
    Overview of ICEE: Deborah McGuinness, Alyssa Glass, Michael Wolverton, and Paulo Pinheiro da Silva. Explaining Task Processing in Cognitive Assistants That Learn. In Proceedings of the 20th International FLAIRS Conference, Key West, Florida, May 7-9, 2007.
    Introspective predicates: Glass, A., and McGuinness, D.L. Introspective Predicates for Explaining Task Execution in CALO. Technical Report KSL-06-04, Knowledge Systems, AI Lab., Stanford University, 2006.
    Video demonstration of ICEE: http://iw.stanford.edu/2006/10/ICEE.640.mov
    Explanation interfaces: McGuinness, D.L., Ding, L., Glass, A., Chang, C., Zeng, H., and Furtado, V. Explanation Interfaces for the Semantic Web: Issues and Models. 3rd International Semantic Web User Interaction Workshop (SWUI06), co-located with the International Semantic Web Conference, Athens, Georgia, 2006.
    Inference Web (including the above publications): http://iw.stanford.edu/

  • Extra

  • Abbreviations: GS = GetSignature, BL = BuyLaptop, GA = GetApproval
    Rules:
    SupportsTopLevelGoal(x) & IntentionPreconditionMet(x) & TerminationConditionNotMet(x) => Executing(x)
    TopLevelGoal(y) & Supports(x,y) => SupportsTopLevelGoal(x)
    ParentOf(x,y) & Supports(y,z) => Supports(x,z)
    Supports(x,x)
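    These rules can be prototyped directly. The sketch below applies them to the GS/BL/GA example, assuming BuyLaptop is the top-level goal with GetApproval and GetSignature beneath it, and reading ParentOf(x,y) as "y is the parent intention of x" so that a subtask supports the goals above it; the specific facts are illustrative assumptions, not from an actual CALO run.

      # Sketch of the execution rules above over the GS/BL/GA example (illustrative facts).
      parent_of = {"GetSignature": "GetApproval", "GetApproval": "BuyLaptop"}
      top_level_goals = {"BuyLaptop"}
      intention_precondition_met = {"GetSignature", "GetApproval", "BuyLaptop"}
      termination_condition_met = set()

      def supports(x, z):
          # Supports(x,x); ParentOf(x,y) & Supports(y,z) => Supports(x,z)
          if x == z:
              return True
          parent = parent_of.get(x)
          return parent is not None and supports(parent, z)

      def supports_top_level_goal(x):
          # TopLevelGoal(y) & Supports(x,y) => SupportsTopLevelGoal(x)
          return any(supports(x, y) for y in top_level_goals)

      def executing(x):
          # SupportsTopLevelGoal(x) & IntentionPreconditionMet(x)
          #   & TerminationConditionNotMet(x) => Executing(x)
          return (supports_top_level_goal(x)
                  and x in intention_precondition_met
                  and x not in termination_condition_met)

      print(executing("GetSignature"))  # True under these illustrative facts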

  • Explaining Learning by Demonstration
    General Motivation:
    LAPDOG (Learning Assistant Procedures from Demonstration, Observation, and Generalization) generalizes the user's demonstration to learn a procedure
    While LAPDOG's generalization process is designed to produce reasonable procedures, it will occasionally get it wrong; specifically, it will occasionally over-generalize:
    Generalize the wrong variables, or too many variables
    Produce too general a procedure because of a coarse-grained type hierarchy
    ICEE needs to explain the relevant aspects of the generalization process in a user-friendly format:
    To help the user identify and correct over-generalizations
    To help the user understand and trust the learned procedures
    Specific elements of LAPDOG reasoning to explain:
    Ontology-based parameter generalization:
    The variables (elements of the user's demonstration) that LAPDOG chooses to generalize
    The type hierarchy on which the generalization is based
    Procedure completion:
    The knowledge-producing actions that were added to the demonstration
    The generalization done on those actions
    Background knowledge that biases the learning:
    E.g., rich information about the email, calendar events, files, web pages, and other objects upon which it executes its actions
    (Primarily for future versions of LAPDOG)
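    To make the over-generalization issue concrete, here is a small sketch of ontology-based parameter generalization: demonstrated values are lifted to their least common ancestor in a type hierarchy, and a coarse hierarchy yields an overly general type. The hierarchy and helper names are assumptions for illustration, not LAPDOG's actual representation.

      # Illustrative ontology-based parameter generalization.
      # The type hierarchy below is an assumption for illustration only.
      type_parent = {
          "PDFFile": "Document",
          "WordFile": "Document",
          "Document": "FileObject",
          "EmailMessage": "FileObject",
          "FileObject": "Thing",
      }

      def ancestors(t):
          """Return t and its ancestors, most specific first."""
          chain = [t]
          while t in type_parent:
              t = type_parent[t]
              chain.append(t)
          return chain

      def least_common_ancestor(types):
          """Generalize the types of demonstrated values to a single type."""
          common = set(ancestors(types[0]))
          for t in types[1:]:
              common &= set(ancestors(t))
          for t in ancestors(types[0]):   # pick the most specific common ancestor
              if t in common:
                  return t

      # Fine-grained demonstration: both values are documents.
      print(least_common_ancestor(["PDFFile", "WordFile"]))      # Document
      # Coarser case: only a very general type is shared, the kind of
      # over-generalization ICEE would need to explain to the user.
      print(least_common_ancestor(["PDFFile", "EmailMessage"]))  # FileObject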

  • Explaining Preferences
    General Motivation:
    PLIANT (Preference Learning through Interactive Advisable Non-intrusive Training) uses user-elicited preferences and past choices to learn user scheduling preferences for PTIME, using a Support Vector Machine (SVM)
    Inconsistent user preferences, over-constrained schedules, and the necessity of exploring the preference space result in user confusion about why a schedule is being presented
    Lack of user understanding of PLIANT's updates creates confusion, mistrust, and the appearance that preferences are being ignored
    ICEE needs to provide justifications of PLIANT's schedule suggestions, in a user-friendly format, without requiring the user to understand SVM learning
    Providing Transparency into Preference Learning:
    Augment PLIANT to gather additional meta-information about the SVM itself:
    Support vectors identified by the SVM
    Support vectors nearest to the query point
    Margin to the query point
    Average margin over all data points
    Non-support vectors nearest to the query point
    Kernel transformation used, if any
    Represent SVM learning and meta-information as a justification in PML, using added SVM rules
    Design abstraction strategies for presenting the justification to the user as a similarity-based explanation
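    The SVM meta-information listed above (support vectors, margins, nearest neighbors, kernel) can be pulled from a standard SVM implementation. A minimal sketch using scikit-learn on synthetic data; it is not PLIANT's code, just an illustration of the quantities a justification could cite.

      # Sketch: gathering SVM meta-information of the kind listed above.
      # Uses scikit-learn with synthetic data; not PLIANT's actual implementation.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 2))                   # synthetic past scheduling choices
      y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic accept/reject labels

      clf = SVC(kernel="rbf").fit(X, y)

      query = np.array([[0.2, -0.1]])                # a candidate schedule (query point)

      meta = {
          "kernel": clf.kernel,                                            # kernel used
          "num_support_vectors": len(clf.support_vectors_),                # support vectors
          "margin_to_query": float(clf.decision_function(query)[0]),       # margin to query
          "avg_margin": float(np.mean(np.abs(clf.decision_function(X)))),  # average margin
      }

      # Support vectors nearest to the query point (for a similarity-based explanation).
      dists = np.linalg.norm(clf.support_vectors_ - query, axis=1)
      meta["nearest_support_vectors"] = clf.support_vectors_[np.argsort(dists)[:3]].tolist()

      print(meta)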

  • During the demo, notice:
    User can ask questions at any time
    Responses are context-sensitive, dependent on the current task processing state and on the provenance of the underlying process
    Explanations are generated completely automatically; no additional work is required by the user to supply information
    Follow-up questions provide additional detail at the user's discretion, avoiding needless distraction

  • Example Usage: Live Demo and/or Video Clip

  • Future Directions
    Broaden explanation of learning and CALO integration:
    Explain learning by demonstration, integrating initially with CALO component LAPDOG
    Explain preference learning, integrating initially with CALO component PTIME
    Investigate explanation of conflicts; explore this as a driver to initiate learning procedure modifications or learning new procedures
    Expand dialogue-based interaction and presentation of explanations, expanding our integration with Towel
    Write up and distribute the trust study (using our interviews with 10 year-3 CLP subjects); use trust study results to prioritize provenance, strategy, and dialogue work
    Potentially exploit our work on IW Trust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL, etc.
    Continue discussions with:
    Tom Garvey about transition opportunities to CPOF
    Tom Dietterich about explanation-directed learning and provenance
    Adam Cheyer about explaining parts of the OPIE environment

  • How PML Works (diagram: justification trace and IWBase)
    Example instances: NodeSet foo:ns1 and MappingNodeSet foo:ns2 (each with hasConclusion), Query foo:query1, Question foo:question1
    PML concepts shown: NodeSet, Query, Question, MappingNodeSet, InferenceStep, InferenceEngine, InferenceRule, Language, Source, SourceUsage
    PML relations shown: hasAnswer, hasAntecedent, fromQuery, fromAnswer, isQueryFor, hasVariableMapping, hasInferenceEngine, hasRule, hasLanguage, isConsequentOf, hasSourceUsage, hasSource, usageTime
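    To give a feel for how such a justification trace hangs together, here is a small Python sketch that builds a PML-like node set with a conclusion, an inference step with antecedents, and source usage, then renders it. The structure and field names follow the concepts named above but are simplified; this is not the actual PML schema, and the instance values are illustrative.

      # Simplified, PML-like justification trace built from plain dictionaries.
      # Field names follow the PML concepts listed above but are not the real schema.
      source_usage = {
          "hasSource": "foo:task_log.txt",        # illustrative source
          "usageTime": "2007-03-26T10:00:00Z",
      }

      ns1 = {
          "id": "foo:ns1",
          "hasConclusion": "Executing(GetSignature)",
          "isConsequentOf": {                     # inference step
              "hasRule": "ExecutionRule",
              "hasInferenceEngine": "TaskManagerExplainer",   # illustrative engine name
              "hasAntecedent": ["foo:ns2"],
              "hasSourceUsage": [source_usage],
          },
      }

      ns2 = {
          "id": "foo:ns2",
          "hasConclusion": "SupportsTopLevelGoal(GetSignature)",
          "isConsequentOf": {
              "hasRule": "SupportsRule",
              "hasInferenceEngine": "TaskManagerExplainer",
              "hasAntecedent": [],
          },
      }

      def render(node, registry, depth=0):
          """Walk the trace and print an indented justification."""
          print("  " * depth + node["hasConclusion"])
          step = node.get("isConsequentOf", {})
          for ant_id in step.get("hasAntecedent", []):
              render(registry[ant_id], registry, depth + 1)

      render(ns1, {"foo:ns2": ns2})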

  • Future Directions
    We will leverage results from our trust study to focus and prioritize our strategies for explaining cognitive assistants (e.g., learning-specific provenance)
    We will expand our explanations of learning to augment learning by instruction, and design and implement explanation of learning by demonstration (initially focusing on LAPDOG)
    We will expand our initial design for explaining preferences in PTIME
    Write up and distribute the user trust study to CALO participants
    Consider using conflicts to drive learning and explanations (e.g., "I have not finished because x has not completed")
    Advanced dialogues exploiting TOWEL and other CALO components
    Potentially exploit our work on IW Trust, a method for representing, propagating, and presenting trust, within the CALO setting; we already have results in intelligence analyst tools and integration with text analytics and Wikipedia, and it is likely to be used in IL, etc.

  • Sample Task Hierarchy: Purchase equipment
    Subtasks (as shown in the hierarchy): Collect requirements, Get quotes, Do research, Choose set of quotes, Pick single item, Get approval, Place order

  • Sample Task Hierarchy: Get travel authorization
    Subtasks: Collect requirements; Get approval, if necessary (note: this conditional step was added to the original procedure through learning by instruction); Submit travel paperwork

  • PML in Swoop

  • Explaining Extracted Entities
    Source: fbi_01.txt; Source usage: span from 01 to 78
    This extractor decided that Person_fbi-01.txt_46 is a Person and not an Occupation
    Same conclusion from multiple extractors; conflicting conclusion from one extractor

    As web applications proliferate, more users (both people and agents) find themselves faced with decisions about when and why to trust application advice. In order to trust information obtained from arbitrary applications, users need to understand how the information was obtained and what it depended upon. Particularly in web applications that may use question answering systems that may be heuristic or incomplete, or data that is either of unknown origin or may be out of date, it becomes more important to have information about how answers were obtained. Emerging web systems will return answers augmented with meta-information about how answers were obtained. In this talk, Deborah McGuinness will describe an approach that can improve trust in answers generated from web applications by making the answer process more transparent. The added information is aimed at providing users (humans or agents) with answers to questions of trust, reliability, recency, and applicability. While this is an area of active research, there are technologies and implementations that can be used today to increase application trustability. The talk will include descriptions of a few representative applications using this approach.
    Dr. Deborah McGuinness is a leading expert in ontology-based tools and applications, and in knowledge representation and reasoning languages. She is co-editor of the OWL Web Ontology Language. Deborah runs the Stanford Inference Web (IW) effort, which provides a framework for explaining answers from heterogeneous web applications.
    Inference Web is joint work with Pinheiro da Silva, Fikes, Chang, Glass, Ding, Deshwal, Narayanan, Miller, Zeng, Jenkins, Millar, Bhaowal, ...

    Salience. The wrapper should obtain information about an agent's processing that is likely to address some possible user information needs.
    Reusability. The wrapper should obtain information that is also useful in other cognitive agent activities that require reasoning about action, for example, state estimation and procedure learning.
    Generality. The schema should represent action information in as general a way as possible, covering the action reasoning of blackboard systems, production systems, and other agent architectures.
    CLP == Critical Learning Period: the 2-week data-gathering exercise that was the basis for the "with learning" portion (as opposed to the no-learning baseline) of the year-end test.
    LAPDOG == Learning Assistant Procedures from Demonstration, Observation, and Generalization.
    PLIANT == Preference Learning through Interactive Advisable Non-intrusive Training.