Analytical Learning


Lehrstuhl für Informatik 2

Gabriella Kókai: Machine Learning

Content

Introduction
Difference between Inductive and Analytical Learning
Learning with Perfect Domain Theories: PROLOG-EBG
Remarks on Explanation-Based Learning
Summary


Introduction

Explanation is used to distinguish the relevant features of the training examples from the irrelevant ones, so that the examples can be generalised

Prior knowledge and deductive reasoning are used to augment the information provided by the training examples

Prior knowledge is used to reduce the complexity of hypothesis space

Assumption: learner's prior knowledge is correct and complete


Introduction (2)

Example: Learn to recognise important classes of game positions

Goal: Recognise chessboard positions in which black will lose its queen within two moves

Induction could be employed, but thousands of training examples similar to this one would be needed

Suggested target hypothesis: board position in which the black king and queen are simultaneously attacked

Not suggested: board position in which four white pawns are still in their original locations


Introduction (3)

Explanations given by human beings provide the information needed to rationally generalise from details

Prior knowledge: e.g. knowledge about the rules of chess: legal moves, how the game is won, ...

Given just this prior knowledge it is in principle possible to calculate the optimal chess move for any board position; in practice, however, this calculation is frustratingly complex

Goal: a learning algorithm that automatically constructs and learns from such explanations


Difference between Inductive and Analytical Learning

Analytical learning methods seek a hypothesis that fits the learner's prior knowledge and covers the training examples

Explanation-based learning is a form of analytical learning in which the learner processes each new training example by
1. Explaining the observed target value for this example in terms of the domain theory
2. Analysing this explanation to determine the general conditions under which the explanation holds
3. Refining its hypothesis to incorporate these general conditions


Difference between Inductive and Analytical Learning (2)

Difference: the two approaches assume different formulations of the learning problem:

Inductive learning:
input: hypothesis space H + set of training examples D = {⟨x1, f(x1)⟩, ..., ⟨xn, f(xn)⟩}
output: hypothesis h that is consistent with these training examples

Analytical learning:
input: hypothesis space H + set of training examples D = {⟨x1, f(x1)⟩, ..., ⟨xn, f(xn)⟩} + domain theory B consisting of background knowledge (used to explain the training examples)
output: hypothesis h that is consistent with both the training examples D and the domain theory B


Difference between Inductive and Analytical Learning (3)

Illustration:
f(xi) is True if xi is a situation in which black will lose its queen within two moves, and False otherwise
H: set of Horn clauses whose predicates refer to the position or relative position of specific pieces
B: formalisation of the rules of chess


New Example

Given:

Instance space X: Each instance describes a pair of objects represented by the predicates Type, Color, Volume, Owner, Material, Density and On.

Hypothesis space H: Each hypothesis is a set of Horn clauses. The head of each clause is a literal of the target predicate SafeToStack. The body of each Horn clause is a conjunction of literals based on the same predicates used to describe the instances, plus LessThan, Equal, GreaterThan and the functions plus, minus, times.

Target concept: SafeToStack(x, y)

An example hypothesis in H: SafeToStack(x, y) ← Volume(x, vx) ∧ Volume(y, vy) ∧ LessThan(vx, vy)

Training examples:
On(Obj1, Obj2)         Owner(Obj1, Fred)
Type(Obj1, Box)        Owner(Obj2, Louise)
Type(Obj2, Endtable)   Density(Obj1, 0.3)
Color(Obj1, Red)       Material(Obj1, Cardboard)
Color(Obj2, Blue)      Material(Obj2, Wood)
Volume(Obj1, 2)


New Example (2)

Domain theory B:

SafeToStack(x, y) ← ¬Fragile(y)
SafeToStack(x, y) ← Lighter(x, y)
Lighter(x, y) ← Weight(x, wx) ∧ Weight(y, wy) ∧ LessThan(wx, wy)
Weight(x, w) ← Volume(x, v) ∧ Density(x, d) ∧ Equal(w, times(v, d))
Weight(x, 5) ← Type(x, Endtable)
Fragile(x) ← Material(x, Glass)

Determine: a hypothesis from H consistent with the training examples and the domain theory
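How this domain theory proves SafeToStack(Obj1, Obj2) by backward chaining can be sketched in Python. This is a toy illustration, not part of the lecture: the `facts` dictionary and the `weight`/`lighter` helpers are my own propositionalised stand-ins for the Horn clauses above.

```python
# Facts from the training example (names follow the slides).
facts = {
    "Volume": {"Obj1": 2.0},
    "Density": {"Obj1": 0.3},
    "Type": {"Obj1": "Box", "Obj2": "Endtable"},
    "Material": {"Obj1": "Cardboard", "Obj2": "Wood"},
}

def weight(obj):
    # Weight(x, w) <- Volume(x, v) ∧ Density(x, d) ∧ Equal(w, times(v, d))
    if obj in facts["Volume"] and obj in facts["Density"]:
        return facts["Volume"][obj] * facts["Density"][obj]
    # Weight(x, 5) <- Type(x, Endtable)
    if facts["Type"].get(obj) == "Endtable":
        return 5.0
    return None

def lighter(x, y):
    # Lighter(x, y) <- Weight(x, wx) ∧ Weight(y, wy) ∧ LessThan(wx, wy)
    wx, wy = weight(x), weight(y)
    return wx is not None and wy is not None and wx < wy

def safe_to_stack(x, y):
    # SafeToStack(x, y) <- Lighter(x, y)   (the clause used in the explanation)
    return lighter(x, y)

print(safe_to_stack("Obj1", "Obj2"))  # Obj1 weighs 0.6, Obj2 weighs 5 -> True
```

Note how the proof chains through Weight and Lighter rather than testing instance features directly; this chain is exactly the "explanation" that PROLOG-EBG later generalises.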


Content

Introduction
➔ Learning with Perfect Domain Theories: PROLOG-EBG
  An Illustrative Trace
Remarks on Explanation-Based Learning
  Explanation-Based Learning of Search Control Knowledge
Summary


Learning with Perfect Domain Theories: PROLOG-EBG

A domain theory is said to be correct if each of its assertions is a truthful statement about the world

A domain theory is complete with respect to a given target concept and instance space, if the domain theory covers every positive example in the instance space.

It is not required that the domain theory is able to prove that negative examples do not satisfy the target concept.


Learning with Perfect Domain Theories: PROLOG-EBG (2)

Question: If the learner has a perfect domain theory, why would it need to learn at all?

Answer: There are cases (such as chess) in which it is feasible to provide a perfect domain theory, and learning then consists of reformulating that knowledge. In many other cases it is unreasonable to assume that a perfect domain theory is available.

A more realistic assumption is that plausible explanations based on imperfect domain theories must be used, rather than exact proofs based on perfect knowledge.


Learning with Perfect Domain Theories: PROLOG-EBG (3)

PROLOG-EBG (Kedar-Cabelli and McCarty 1987)
A sequential covering algorithm
When given a complete and correct domain theory, the method is guaranteed to output a hypothesis (set of rules) that is correct and that covers the observed positive training examples

Output: a set of logically sufficient conditions for the target concept, according to the domain theory


Learning with Perfect Domain Theories: PROLOG-EBG (4)

Because the domain theory is correct and complete, each explanation constitutes a proof that the training example satisfies the target concept.

PROLOG-EBG(TargetConcept, TrainingExamples, DomainTheory)
  LearnedRules ← {}
  Pos ← the positive examples from TrainingExamples
  for each PositiveExample in Pos that is not covered by LearnedRules do
    1. Explain: Explanation ← an explanation (proof) in terms of DomainTheory that PositiveExample satisfies TargetConcept
    2. Analyse: SufficientConditions ← the most general set of features of PositiveExample sufficient to satisfy TargetConcept according to Explanation
    3. Refine: LearnedRules ← LearnedRules + NewHornClause, where NewHornClause is of the form TargetConcept ← SufficientConditions
  return LearnedRules

In general there may be multiple possible explanations; any or all of them may be used. The explanation is generated using backward chaining search, as performed by PROLOG.
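The outer sequential covering loop can be sketched as follows. This is a toy illustration: `toy_explain` and the set-of-conditions rule representation are my own stand-ins for the real Explain and Analyse steps, which would build a proof from the domain theory and regress the target concept through it.

```python
def prolog_ebg(target_positives, explain_and_analyse):
    """Sequential covering: add one rule per not-yet-covered positive example."""
    learned_rules = []                # each rule: a set of sufficient conditions
    for example in target_positives:
        if any(rule <= example for rule in learned_rules):
            continue                  # already covered by an earlier rule
        # Explain + Analyse: most general conditions justified by the proof
        sufficient = explain_and_analyse(example)
        # Refine: TargetConcept <- SufficientConditions
        learned_rules.append(sufficient)
    return learned_rules

# Toy domain: the "explanation" says only these two features matter.
relevant = {("type_x", "Box"), ("type_y", "Endtable")}
toy_explain = lambda ex: {c for c in ex if c in relevant}

examples = [
    {("type_x", "Box"), ("type_y", "Endtable"), ("color_x", "Red")},
    {("type_x", "Box"), ("type_y", "Endtable"), ("color_x", "Blue")},
]
print(prolog_ebg(examples, toy_explain))  # one rule covers both examples
```

Because the rule learned from the first example already covers the second, the loop terminates with a single general rule, illustrating why explanation yields far fewer rules than example-by-example memorisation.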


An Illustrative Trace (2)

The important question in the generalisation process: of the many features that happen to be true of the current training example, which ones are generally relevant to the target concept?

The explanation constructs the answer: precisely the features mentioned in the explanation


An Illustrative Trace (3)

General rule justified by the domain theory:

SafeToStack(x, y) ← Volume(x, 2) ∧ Density(x, 0.3) ∧ Type(y, Endtable)

The leaf nodes in the proof tree assert Equal(0.6, times(2, 0.3)) and LessThan(0.6, 5)


An Illustrative Trace (4)

The explanation of the training example forms the proof of the correctness of this rule

PROLOG-EBG computes the most general rule that can be justified by the explanation, by computing the weakest preimage of the explanation

Definition: The weakest preimage of a conclusion C with respect to a proof P is the most general set of initial assertions A such that A entails C according to P

Example:

SafeToStack(x, y) ← Volume(x, vx) ∧ Density(x, dx) ∧ Equal(wx, times(vx, dx)) ∧ LessThan(wx, 5) ∧ Type(y, Endtable)

PROLOG-EBG computes the weakest preimage of the target concept with respect to the explanation, using a general procedure called regression

Regression goes iteratively backward through the explanation: first compute the weakest preimage of the target concept with respect to the final proof step in the explanation, then compute the weakest preimage of the resulting expressions with respect to the preceding step, and so on
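A single regression step can be sketched as follows. This is my own simplification, not the lecture's code: literals are flat tuples, variables are lowercase strings, and the nested term times(v, d) is kept as an opaque string. The goal Weight(x, wx) is regressed through the clause Weight(q, w) ← Volume(q, v) ∧ Density(q, d) ∧ Equal(w, times(v, d)) by unifying the goal with the clause head and applying the substitution to the body.

```python
def unify(a, b, subst):
    """Unify two flat literals like ('Weight', 'x', 'wx').
    Returns an extended substitution dict, or None on mismatch."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    subst = dict(subst)
    for s, t in zip(a[1:], b[1:]):
        s, t = subst.get(s, s), subst.get(t, t)
        if s != t:
            if s.islower():       # s is a variable: bind it to t
                subst[s] = t
            elif t.islower():     # t is a variable: bind it to s
                subst[t] = s
            else:
                return None       # two distinct constants cannot unify
    return subst

def regress(goal, head, body):
    """Regress goal through one clause: unify head with goal, then
    return the clause body under that substitution (the preimage)."""
    subst = unify(head, goal, {})
    return [tuple(subst.get(t, t) for t in lit) for lit in body]

goal = ("Weight", "x", "wx")
head = ("Weight", "q", "w")
body = [("Volume", "q", "v"), ("Density", "q", "d"),
        ("Equal", "w", "times(v,d)")]
print(regress(goal, head, body))
```

The result replaces Weight(x, wx) by Volume(x, v) ∧ Density(x, d) ∧ Equal(wx, times(v, d)): exactly the kind of preimage that, iterated over the whole proof, yields the general rule above.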


An Illustrative Trace (5)



Content

Introduction
Learning with Perfect Domain Theories: PROLOG-EBG
➔ Remarks on Explanation-Based Learning
  Discovering new features
Summary


Remarks on Explanation-Based Learning

Key properties:

PROLOG-EBG produces justified general hypotheses by using prior knowledge to analyse individual examples

The explanation of how an example satisfies the target concept determines which example attributes are relevant: those mentioned in the explanation

Regressing the target concept to determine its weakest preimage with respect to the explanation allows deriving more general constraints on the values of relevant features

Each learned Horn clause corresponds to a sufficient condition for satisfying the target concept

The generality of the learned Horn clauses will depend on the formulation of the domain theory and on the sequence in which the training examples are considered

Implicitly assumes that the domain theory is correct and complete


Remarks on Explanation-Based Learning (2)

Related perspectives to help to understand its capabilities and limitations:

EBL as theory-guided generalisation of examples: rational generalisation from examples allows the learner to avoid the bounds on sample complexity that occur in pure inductive learning

EBL as example-guided reformulation of theories: a method for reformulating the domain theory into a more operational form, creating rules that

deductively follow from the domain theory
classify the observed training examples in a single inference step
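What "a single inference step" means can be sketched in Python. This is my own illustration (the dictionary keys `volume_x`, `density_x`, `type_y` are hypothetical feature names): the learned rule tests only features that appear directly in the instance description, so no chaining through Weight or Lighter is needed at classification time.

```python
def learned_rule(ex):
    # SafeToStack(x, y) <- Volume(x, vx) ∧ Density(x, dx)
    #                      ∧ Equal(wx, times(vx, dx)) ∧ LessThan(wx, 5)
    #                      ∧ Type(y, Endtable)
    wx = ex["volume_x"] * ex["density_x"]        # the derived weight feature
    return wx < 5 and ex["type_y"] == "Endtable"

example = {"volume_x": 2.0, "density_x": 0.3, "type_y": "Endtable"}
print(learned_rule(example))  # classified in one step -> True
```

Compare with the original domain theory, where answering the same query required resolving several intermediate Horn clauses; the reformulated rule trades generality of the theory for speed of inference.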


Remarks on Explanation-Based Learning (3)

Related perspectives to help to understand its capabilities and limitations:

EBL is „just“ restating what the learner already „knows“: in what sense does this help the learner?

Knowledge reformulation: In many tasks the difference between what one knows in principle and what one can efficiently compute in practice may be great

Situation: a complete, perfect domain theory is already known to the (human) learner, and further learning is „simply“ a matter of reformulating this knowledge into a form in which it can be used more effectively to select appropriate moves


Remarks on Explanation-Based Learning (4)

Knowledge Compilation: EBL involves reformulating the domain theory to produce general rules that classify examples in a single inference step


Discovering new features

An interesting capability: the ability to formulate new features that are not explicit in the description of the training examples but that are needed to describe the general rule underlying the training examples

A similar capability is provided by the hidden units of neural networks

Like the BACKPROPAGATION algorithm, PROLOG-EBG automatically formulates such features in its attempt to fit the training data

BUT: in neural networks these features are derived by a statistical process, whereas in PROLOG-EBG they are derived by an analytical process

Example: PROLOG-EBG derives the feature Volume · Density < 5 (the weight of x, computed as volume times density, must be less than the endtable's weight of 5)
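A quick arithmetic check of this derived feature for the training example (my own calculation, following the constraints Equal(wx, times(vx, dx)) and LessThan(wx, 5) in the learned rule):

```python
vx, dx = 2.0, 0.3     # Volume and Density of Obj1 from the training example
wx = vx * dx          # the new intermediate feature: the weight of Obj1
print(wx, wx < 5)     # prints: 0.6 True
```

So the single training example satisfies the analytically derived constraint, as required for the rule to cover it.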


Content

Introduction
Learning with Perfect Domain Theories: PROLOG-EBG
Remarks on Explanation-Based Learning
➔ Summary


Summary

PROLOG-EBG
uses first-order Horn clauses in its domain theory and in its learned hypotheses
the explanation is a PROLOG proof
the hypothesis extracted from the explanation is the weakest preimage of this proof

Analytical learning methods construct useful intermediate features as a side effect of analysing individual training examples.

Other deductive learning procedures can extend the deductive closure of their domain theory.

PRODIGY and SOAR have demonstrated the utility of explanation based learning methods for automatically acquiring effective search control knowledge that speeds up problem solving

Disadvantage: purely deductive implementations such as PROLOG-EBG produce correct output only if the domain theory is itself correct