
Page 1: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

1

Decision Tree Learning

Soongsil University, Seoul
Gun Ho Lee

Page 2: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

2

Decision Tree Learning

• Introduction
• Decision Tree Representation
• Appropriate Problems for Decision Tree Learning
• Basic Algorithm
• Hypothesis Space Search in Decision Tree Learning
• Inductive Bias in Decision Tree Learning
• Issues in Decision Tree Learning
• Summary

Page 3: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

3

Tree Learning Task

(Figure: a learning algorithm performs induction on the Training Set to learn a model (tree); the model is then applied, by deduction, to the Test Set.)

Training Set
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Page 4: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

4

Example of a Decision Tree

Training Data (categorical: Refund, Marital Status; continuous: Taxable Income; class: Cheat)
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes)
Refund?
  Yes -> NO
  No  -> MarSt?
           Married          -> NO
           Single, Divorced -> TaxInc?
                                 < 80K -> NO
                                 > 80K -> YES

Page 5: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

5

Another Example of Decision Tree

Training Data: the same ten records as on the previous slide.

Model: an alternative Decision Tree for the same data
MarSt?
  Married          -> NO
  Single, Divorced -> Refund?
                        Yes -> NO
                        No  -> TaxInc?
                                 < 80K -> NO
                                 > 80K -> YES

There could be more than one tree that fits the same data!

Page 6: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

6

Decision Tree Classification Task

(Figure: the same induction/deduction workflow as on slide 3, now with a Tree Induction algorithm learning a Decision Tree from the Training Set and applying it to the Test Set.)

Page 7: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

7

Apply Model to Test Data

Decision tree (as on slide 4):
Refund? Yes -> NO; No -> MarSt? Married -> NO; Single, Divorced -> TaxInc? < 80K -> NO; > 80K -> YES

Test Data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree.

Page 8: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

8

Apply Model to Test Data

(Same tree and test record; the root test Refund = No sends the record down the No branch.)

Page 9: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

9

Apply Model to Test Data

(Same tree and test record; the traversal reaches the MarSt node.)

Page 10: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

10

Apply Model to Test Data

(Same tree and test record; the MarSt = Married test is evaluated.)

Page 11: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

11

Apply Model to Test Data

(Same tree and test record; the Married branch is followed.)

Page 12: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

12

Apply Model to Test Data

(Same tree and test record; the traversal ends at the NO leaf.)

Assign Cheat to “No”

Page 13: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

13

Decision Tree Classification Task

(Figure: the same classification-task workflow as on slide 6: a Tree Induction algorithm learns a Decision Tree from the Training Set, which is then applied to the Test Set.)

Page 14: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

14

Overview
• One of the most widely used and practical methods for inductive inference over supervised data
• It approximates discrete-valued functions (as opposed to continuous ones)
• It is robust to noisy data
• Decision trees can represent any discrete function on discrete features
• It is also efficient for processing large amounts of data, so it is often used in data mining applications
• Decision tree learners' bias typically prefers small trees over larger ones

Page 15: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

15

Decision Trees

• Tree-based classifiers for instances represented as feature vectors. Nodes test features, there is one branch for each value of the feature, and leaves specify the category.
• Can represent arbitrary conjunction and disjunction. Can represent any classification function over discrete feature vectors.
• Can be rewritten as a set of rules, i.e. disjunctive normal form (DNF).
  – Left tree: red AND circle -> pos
  – Right tree: red AND circle -> A; blue -> B; red AND square -> B; green -> C; red AND triangle -> C

Left tree:  color? red -> shape? (circle -> pos; square -> neg; triangle -> neg); blue -> neg; green -> neg
Right tree: color? red -> shape? (circle -> A; square -> B; triangle -> C); blue -> B; green -> C

Page 16: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

16

Properties of Decision Tree Learning

• Continuous (real-valued) features can be handled by allowing nodes to split a real-valued feature into two ranges based on a threshold (e.g. length < 3 and length >= 3)

• Classification trees have discrete class labels at the leaves, regression trees allow real-valued outputs at the leaves.

• Algorithms for finding consistent trees are efficient for processing large amounts of training data for data mining tasks.

• Methods developed for handling noisy training data (both class and feature noise).

• Methods developed for handling missing feature values.

Page 17: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

17

What makes a good tree?

• Not too small – need to include enough attributes to handle possibly subtle distinctions in data

• Not too big

- computational efficiency (avoid redundant, spurious attributes)

- avoid over-fitting training examples (noisy, scarce data)

Occam’s Razor: find simplest hypothesis (tree) that is consistent with all observations

inductive bias – small trees, with informative nodes near root

Page 18: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

18

Basic Algorithm

• Top-down, greedy search through the space of all possible decision trees
• Place the attribute that best classifies the training examples at the root.
• Entropy, Information gain

Page 19: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

19

Top-Down Decision Tree Induction

• Recursively build a tree top-down by divide and conquer.

Example: <big, red, circle>: +   <small, red, circle>: +   <small, red, square>: -   <big, blue, circle>: -

Split on color:
  red   -> <big, red, circle>: +, <small, red, circle>: +, <small, red, square>: -
  blue  -> <big, blue, circle>: -
  green -> (empty)

Page 20: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

20

Top-Down Decision Tree Induction

• Recursively build a tree top-down by divide and conquer.

Example: <big, red, circle>: +   <small, red, circle>: +   <small, red, square>: -   <big, blue, circle>: -

color?
  red   -> shape? (circle -> pos [<big, red, circle>: +, <small, red, circle>: +];
                   square -> neg [<small, red, square>: -];
                   triangle -> pos)
  blue  -> neg [<big, blue, circle>: -]
  green -> neg

Page 21: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

21

Decision Tree Induction Pseudocode

DTree(examples, features) returns a tree
  If all examples are in one category,
    return a leaf node with that category label.
  Else if the set of features is empty,
    return a leaf node with the category label that is most common in examples.
  Else
    pick a feature F and create a node R for it
    For each possible value vi of F:
      Let examplesi be the subset of examples that have value vi for F
      Add an outgoing edge E to node R labeled with the value vi.
      If examplesi is empty,
        then attach a leaf node to edge E labeled with the category that is most common in examples,
        else call DTree(examplesi, features - {F}) and attach the resulting tree as the subtree under edge E.
    Return the subtree rooted at R.
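A minimal Python sketch of the DTree pseudocode above (not code from the lecture). The data layout, a list of (feature_dict, label) pairs plus a domains map, and the "first remaining feature" choice are illustrative assumptions; ID3's information-gain selection is covered later in the slides.

from collections import Counter

def most_common(labels):
    return Counter(labels).most_common(1)[0][0]

def dtree(examples, features, domains):
    """examples: list of (feature_dict, label); features: remaining feature names;
    domains: dict mapping each feature to the set of its possible values."""
    labels = [y for _, y in examples]
    if len(set(labels)) == 1:
        return labels[0]                      # leaf: all examples in one category
    if not features:
        return most_common(labels)            # leaf: majority category
    f = features[0]                           # placeholder choice; ID3 uses information gain
    node = {"feature": f, "children": {}}
    for v in domains[f]:
        subset = [(x, y) for x, y in examples if x[f] == v]
        if not subset:
            node["children"][v] = most_common(labels)   # empty subset: majority of parent
        else:
            rest = [g for g in features if g != f]
            node["children"][v] = dtree(subset, rest, domains)
    return node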

Page 22: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

22

The Basic Decision Tree Learning Algorithm

Top-Down Induction of Decision Trees

This approach is exemplified by the ID3 algorithm and its successor, C4.5.

Page 23: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

23

Tree Induction

• Greedy strategy.
  – Split the records based on an attribute test that optimizes a certain criterion.

• Issues
  – Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  – Determine when to stop splitting

Page 24: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

24

How to determine the Best Split

(Figure: Customers split either by Income (< 10K vs. >= 10K) or by Age (young vs. old) into "good customers" and "fair customers".)

Page 25: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

25

How to determine the Best Split

• Greedy approach:
  – Nodes with homogeneous class distribution are preferred

• Need a measure of node impurity:
  – 50% red / 50% green: high degree of impurity
  – 75% red / 25% green: lower degree of impurity
  – 100% red / 0% green: pure (lowest impurity)

Page 26: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

26

Split Selection Method

• Numerical or ordered attributes: find a split point that separates the (two) classes

(Figure: Yes and No examples plotted along the Age axis, around 30 and 35; the split Age < 33 separates them.)

Page 27: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

27

Split Selection Method (Contd.)

• Categorical attributes: how to group the values?
  – (Sport, Truck) vs. (Minivan)
  – (Sport) vs. (Truck, Minivan)
  – (Sport, Minivan) vs. (Truck)

(Figure: a Car node with branches for {Sport, Truck} and {Minivan}.)

Page 28: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

28

Decision Tree Induction

• Many algorithms:
  – Hunt's Algorithm (1960s, one of the earliest)
  – ID3 (Quinlan 1979), C4.5 (Quinlan 1993)
  – CART
  – SLIQ (EDBT'96, Mehta et al.)
    • builds an index for each attribute; only the class list and the current attribute list reside in memory
  – SPRINT (VLDB'96, J. Shafer et al.)
    • constructs an attribute-list data structure
  – RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti)
    • separates the scalability aspects from the criteria that determine the quality of the tree
    • builds an AVC-list (attribute, value, class label)
  – BOAT
    • uses bootstrapping to create several small samples

Page 29: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

29

History of Decision-Tree Research

• Hunt and colleagues used exhaustive-search decision-tree methods (CLS) to model human concept learning in the 1960s.
• In the late 1970s, Quinlan developed ID3 with the information gain heuristic to learn expert systems from examples.
• Simultaneously, Breiman, Friedman and colleagues developed CART (Classification and Regression Trees), similar to ID3.
• In the 1980s a variety of improvements were introduced to handle noise, continuous features, missing features, and improved splitting criteria. Various expert-system development tools resulted.
• Quinlan's updated decision-tree package (C4.5) was released in 1993.

• Weka includes Java version of C4.5 called J48.

Page 30: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

30

General Structure of Hunt’s Algorithm

• Let Dt be the set of training records that reach a node t

• General procedure:
  – If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  – If Dt is an empty set, then t is a leaf node labeled by the default class yd
  – If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset.

(The accompanying table is the ten-record Refund / Marital Status / Taxable Income / Cheat training set from slide 4, with Dt marked at the node being split.)

Page 31: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

31

Hunt’s Algorithm

(Figure: Hunt's algorithm applied stepwise to the training data. Step 1: a single leaf predicting Cheat=No. Step 2: split on Refund (Yes -> Cheat=No, No -> Cheat=No). Step 3: under Refund=No, split on Marital Status (Married -> Cheat=No; Single, Divorced -> Cheat). Step 4: under Single/Divorced, split on Taxable Income (< 80K -> Cheat=No, >= 80K -> Cheat=Yes).)

Page 32: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

32

What is ID3 (Iterative Dichotomiser 3)?

• A mathematical algorithm for building the decision tree.
• Invented by J. Ross Quinlan in 1979.
• Uses Information Theory, invented by Shannon in 1948.
• Information gain is used to select the most useful attribute for classification.
• Builds the tree from the top down, with no backtracking.

Page 33: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

33

ID3

• ID3 is designed for the case where:
  – there are many attributes
  – the training set contains many objects
  – a reasonably good decision tree is required without much computation
  – It generally constructs simple decision trees, but it cannot guarantee that the tree is always the best.

Page 34: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

34

ID3 Algorithm Overview

• Step 1: Choose a random subset of the training set (the window)
• Step 2: Form a decision tree that correctly classifies all the objects in the window
• Step 3:
  IF the tree gives the correct answer for all the objects in the training set,
  THEN the process terminates;
  ELSE a selection of the incorrectly classified objects is added to the window; go to Step 2

Page 35: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

35

ID3 Algorithm

Page 36: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

36

Picking a Good Split Feature

• Goal is to have the resulting tree be as small as possible, per Occam’s razor.

• Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem.

• Top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee to find the smallest.
  – General lesson in ML: "Greed is good."

• Want to pick a feature that creates subsets of examples that are relatively “pure” in a single class so they are “closer” to being leaf nodes.

• There are a variety of heuristics for picking a good test, a popular one is based on information gain that originated with the ID3 system of Quinlan (1979).

Page 37: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

37

Entropy

• Minimum number of bits of information needed to encode the classification of an arbitrary member of S
  – entropy = 0, if all members are in the same class
  – entropy = 1, if |positive examples| = |negative examples|

  Entropy(S) = -p1 log2(p1) - p0 log2(p0)

Page 38: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

38

Example – Information Needed

• n = 5 (negative examples), p = 9 (positive examples)
• The information needed to generate a decision tree from this window is:

  E(p, n) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940 bits

Page 39: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

39

Entropy

• Entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is:

  Entropy(S) = -p1 log2(p1) - p0 log2(p0)

  where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.

• If all examples are in one category, entropy is zero (we define 0 log(0) = 0)
• If examples are equally mixed (p1 = p0 = 0.5), entropy is a maximum of 1.
• For multi-class problems with c categories, entropy generalizes to:

  Entropy(S) = - sum_{i=1..c} p_i log2(p_i)
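A small Python helper matching the entropy definition above; the function name and the list-of-class-counts interface are illustrative choices, not part of the lecture.

import math

def entropy(class_counts):
    """Entropy in bits of a node with the given class counts, e.g. [9, 5]."""
    total = sum(class_counts)
    ent = 0.0
    for c in class_counts:
        if c:                          # define 0 * log(0) = 0
            p = c / total
            ent -= p * math.log2(p)
    return ent

print(entropy([9, 5]))    # ~0.940 bits, the PlayTennis example
print(entropy([7, 7]))    # 1.0 for an evenly mixed binary node
print(entropy([14, 0]))   # 0.0 for a pure node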

Page 40: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

40

Entropy Plot for Binary Classification

Page 41: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

41

Information Gain

Parent node p is split into k partitions; n_i is the number of records in partition i.

  GAIN_split = Entropy(p) - sum_{i=1..k} (n_i / n) Entropy(i)

– Measures the reduction in entropy achieved because of the split.
– Choose the split that achieves the maximum GAIN value!
– Used in ID3 and C4.5
– Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
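A sketch of GAIN_split built on the entropy() helper above; representing each child partition as a list of class counts is an assumption made for illustration.

def information_gain(parent_counts, child_counts_list):
    """Entropy(parent) minus the size-weighted entropy of the child partitions."""
    n = sum(parent_counts)
    weighted = sum(sum(c) / n * entropy(c) for c in child_counts_list)
    return entropy(parent_counts) - weighted

# Example from the next slide: splitting 2+/2- on color gives ~0.311 bits.
print(information_gain([2, 2], [[2, 1], [0, 1]]))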

Page 42: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

42

Information Gain

• Example:
  – <big, red, circle>: +   <small, red, circle>: +
  – <small, red, square>: -   <big, blue, circle>: -

Split on size (2+, 2-: E = 1):
  big: 1+, 1- (E = 1); small: 1+, 1- (E = 1)
  Gain = 1 - (0.5*1 + 0.5*1) = 0

Split on color (2+, 2-: E = 1):
  red: 2+, 1- (E = 0.918); blue: 0+, 1- (E = 0)
  Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311

Split on shape (2+, 2-: E = 1):
  circle: 2+, 1- (E = 0.918); square: 0+, 1- (E = 0)
  Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311

Page 43: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

43

Training Examples for PlayTennis

Day Outlook Temperature Humidity Wind PlayTennis

D1 Sunny Hot High Weak No

D2 Sunny Hot High Strong No

D3 Overcast Hot High Weak Yes

D4 Rain Mild High Weak Yes

D5 Rain Cool Normal Weak Yes

D6 Rain Cool Normal Strong No

D7 Overcast Cool Normal Strong Yes

D8 Sunny Mild High Weak No

D9 Sunny Cool Normal Weak Yes

D10 Rain Mild Normal Weak Yes

D11 Sunny Mild Normal Strong Yes

D12 Overcast Mild High Strong Yes

D13 Overcast Hot Normal Weak Yes

D14 Rain Mild High Strong No

Page 44: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

44

ID3

p_no = 5/14 (don't play), p_yes = 9/14 (play)

Impurity = -p_yes log2(p_yes) - p_no log2(p_no)
         = -(9/14) log2(9/14) - (5/14) log2(5/14)
         = 0.94 bits

(The accompanying table is the 14-example weather / PlayTennis data from the previous slide.)

Page 45: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

45

Selecting the Next Attribute : which attribute is the best classifier?

ID3

Page 46: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

46

Example – Branch on outlook

Outlook partitions the 14 examples into:
  sunny:    p1 = 2, n1 = 3, E(p1, n1) = 0.971
  overcast: p2 = 4, n2 = 0, E(p2, n2) = 0
  rain:     p3 = 3, n3 = 2, E(p3, n3) = 0.971

E(outlook) = (5/14) E(p1, n1) + (4/14) E(p2, n2) + (5/14) E(p3, n3) = 0.694

Gain(outlook) = 0.940 - E(outlook) = 0.246 bits

Page 47: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

47

ID3: selecting the root attribute (play vs. don't play)

Amount of information required to specify the class of an example at the root: 0.94 bits.

outlook (sunny 2/3, overcast 4/0, rainy 3/2):
  (5/14)*0.97 + (4/14)*0.0 + (5/14)*0.97 = 0.69 bits   -> gain: 0.25 bits   (maximal information gain)
humidity (high 3/4, normal 6/1):
  (7/14)*0.98 + (7/14)*0.59 = 0.79 bits                -> gain: 0.15 bits
temperature (hot 2/2, mild 4/2, cool 3/1):
  (4/14)*1.0 + (6/14)*0.92 + (4/14)*0.81 = 0.91 bits   -> gain: 0.03 bits
windy (false 6/2, true 3/3):
  (8/14)*0.81 + (6/14)*1.0 = 0.89 bits                 -> gain: 0.05 bits
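As a check on the figures above, a hedged sketch that recomputes the root-node gains for the PlayTennis data, reusing the entropy() and information_gain() helpers sketched earlier; the DATA/ATTRS names are illustrative.

from collections import defaultdict

DATA = [  # (Outlook, Temperature, Humidity, Wind, PlayTennis)
    ("Sunny","Hot","High","Weak","No"), ("Sunny","Hot","High","Strong","No"),
    ("Overcast","Hot","High","Weak","Yes"), ("Rain","Mild","High","Weak","Yes"),
    ("Rain","Cool","Normal","Weak","Yes"), ("Rain","Cool","Normal","Strong","No"),
    ("Overcast","Cool","Normal","Strong","Yes"), ("Sunny","Mild","High","Weak","No"),
    ("Sunny","Cool","Normal","Weak","Yes"), ("Rain","Mild","Normal","Weak","Yes"),
    ("Sunny","Mild","Normal","Strong","Yes"), ("Overcast","Mild","High","Strong","Yes"),
    ("Overcast","Hot","Normal","Weak","Yes"), ("Rain","Mild","High","Strong","No"),
]
ATTRS = ["Outlook", "Temperature", "Humidity", "Wind"]

def class_counts(rows):
    counts = defaultdict(int)
    for r in rows:
        counts[r[-1]] += 1          # last column is the class label
    return list(counts.values())

for i, attr in enumerate(ATTRS):
    partitions = defaultdict(list)
    for r in DATA:
        partitions[r[i]].append(r)  # group rows by the attribute's value
    gain = information_gain(class_counts(DATA),
                            [class_counts(p) for p in partitions.values()])
    print(f"Gain({attr}) = {gain:.3f} bits")   # Outlook ~0.25, Humidity ~0.15, Wind ~0.05, Temperature ~0.03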

Page 48: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

48

ID3: splitting the outlook = sunny branch (2 play, 3 don't play; 0.97 bits)

humidity (high 0/3, normal 2/0):
  (3/5)*0.0 + (2/5)*0.0 = 0.0 bits                 -> gain: 0.97 bits   (maximal information gain)
temperature (hot 0/2, mild 1/1, cool 1/0):
  (2/5)*0.0 + (2/5)*1.0 + (1/5)*0.0 = 0.40 bits    -> gain: 0.57 bits
windy (false 1/2, true 1/1):
  (3/5)*0.92 + (2/5)*1.0 = 0.95 bits               -> gain: 0.02 bits

(The five sunny examples: hot/high/false -> no, hot/high/true -> no, mild/high/false -> no, cool/normal/false -> yes, mild/normal/true -> yes.)

Page 49: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

49

ID3: splitting the outlook = rainy branch (3 play, 2 don't play; 0.97 bits)

windy (false 3/0, true 0/2):
  (3/5)*0.0 + (2/5)*0.0 = 0.0 bits                 -> gain: 0.97 bits   (maximal information gain)
humidity (high 1/1, normal 2/1):
  (2/5)*1.0 + (3/5)*0.92 = 0.95 bits               -> gain: 0.02 bits
temperature (mild 2/1, cool 1/1):
  (3/5)*0.92 + (2/5)*1.0 = 0.95 bits               -> gain: 0.02 bits

(The five rainy examples: mild/high/false -> yes, cool/normal/false -> yes, cool/normal/true -> no, mild/normal/false -> yes, mild/high/true -> no.)

Page 50: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

50

ID3

Final tree:
outlook?
  sunny    -> humidity? (high -> No; normal -> Yes)
  overcast -> Yes
  rainy    -> windy? (false -> Yes; true -> No)

(Built from the full 14-example weather data.)

Page 51: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

51

Hypothesis Space Search in Decision Tree Learning

• ID3 can be characterized as searching a space of hypotheses for one that fits the training examples
• The hypothesis space searched by ID3 is the set of possible decision trees
• ID3 performs a simple-to-complex, hill-climbing search through this hypothesis space, yielding a locally optimal solution.

Page 52: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

52

Hypothesis Space Search in Decision Tree Learning

• The hypothesis space of all decision trees is a complete space of finite discrete-valued functions, relative to the available attributes
• Outputs a single hypothesis
• No backtracking
  – can get stuck in local minima
• Inductive bias: approximately "prefer the shortest tree"
  – Information gain gives a bias toward trees with minimal depth

Page 53: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

53

Hypothesis Space Search

• Performs batch learning that processes all training instances at once, rather than incremental learning that updates a hypothesis after each example.
• Guaranteed to find a tree consistent with any conflict-free training set, i.e. one in which identical feature vectors are always assigned the same class.

Page 54: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

54

Occam’s Razor

• Occam’s Razor: Prefer the simplest hypothesis that fits the data

William of Ockham (AD 1285? – 1347?)

Page 55: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

55

Complex DT : Simple DT

(Figure: two trees consistent with the same data. The complex tree splits first on temperature and then repeatedly on outlook, windy and humidity in every branch; the simple tree splits on outlook at the root: sunny -> humidity (high -> N, normal -> P), overcast -> P, rain -> windy (true -> N, false -> P).)

Page 56: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

56

Inductive Bias in Decision Tree Learning

Occam’s Razor

Page 57: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

57

Decision Tree Representation

Decision Trees

Page 58: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

58

Disadvantages

• Only allows 2 classes.
  – This limitation is removed in most later systems.
• Not guaranteed to find the simplest tree.
• Not incremental.
  – Additional training data cannot be considered without rebuilding the whole tree with all the former data.

Page 59: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

59

Advantages

• Produces a reasonably good decision tree without much computation.
• The iterative (windowing) method usually builds a tree more quickly than building on the whole training set at once.
• ID3's cost grows roughly linearly with the difficulty of the problem.

Page 60: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

60

C4.5 History

• ID3 (1979), CHAID (1980)
• C4.5 innovations (Quinlan):
  – permit numeric attributes
  – deal sensibly with missing values
  – pruning to deal with noisy data
• C4.5 is one of the best-known and most widely used learning algorithms
  – Last research version: C4.8, implemented in Weka as J4.8 (Java)
  – Commercial successor: C5.0 (available from Rulequest)

Page 61: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

61

C4.5

• ID3 favors attributes with a large number of divisions
  – which can lead to overfitting
• Improved version of ID3:
  – Missing data
  – Continuous data
  – Pruning
    • Subtree replacement by a leaf node
    • Subtree raising
  – Automated rule generation
  – GainRatio: takes into account the cardinality of each attribute's values

Page 62: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

62

Weakness of ID3: Highly-branching attributes

• Problematic: attributes with a large number of values (extreme case: an ID code)
• Subsets are more likely to be pure if there is a large number of values
  ⇒ Information gain is biased towards choosing attributes with a large number of values
  ⇒ This may result in overfitting (selection of an attribute that is non-optimal for prediction)

Page 63: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

63

Weakness of ID3: Split for ID Code Attribute

• Entropy of the split = 0 (since each leaf node is "pure", having only one case)
• Information gain is therefore maximal for the ID code

(Table: the 14-day PlayTennis data from slide 43, with the Day column D1..D14 treated as an ID code attribute. Splitting on ID code yields one single-example leaf per day: D1 -> No, D2 -> No, D3 -> Yes, ..., D13 -> Yes, D14 -> No.)

Page 64: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

64

C4.5

• Gain Ratio:

  GainRatio_split = GAIN_split / SplitINFO
  SplitINFO = - sum_{i=1..k} (n_i / n) log2(n_i / n)

  Parent node p is split into k partitions; n_i is the number of records in partition i.

  – Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher-entropy partitioning (a large number of small partitions) is penalized!
  – Used in C4.5
  – Designed to overcome the disadvantage of Information Gain
  – Split information is sensitive to how broadly and uniformly the attribute splits the data
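A sketch of gain ratio on top of the information_gain() helper sketched earlier; function names are illustrative.

import math

def split_info(child_sizes):
    """SplitINFO: entropy of the partition sizes, -sum (n_i/n) log2(n_i/n)."""
    n = sum(child_sizes)
    return -sum((s / n) * math.log2(s / n) for s in child_sizes if s)

def gain_ratio(parent_counts, child_counts_list):
    info = split_info([sum(c) for c in child_counts_list])
    if info == 0:
        return 0.0                       # degenerate split: all records in one partition
    return information_gain(parent_counts, child_counts_list) / info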

Page 65: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

65

Numeric attributes

• Standard method: binary splits
  – E.g. temp < 45
• Unlike nominal (categorical) attributes, a numeric attribute has many possible split points
• The solution is a straightforward extension:
  – Evaluate info gain (or another measure) for every possible split point of the attribute
  – Choose the "best" split point
  – The info gain for the best split point is the info gain for the attribute
• Computationally more demanding

witten & eibe

Page 66: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

66

Example

• Split on the temperature attribute:

  64  65  68  69  70  71  72  72  75  75  80  81  83  85
  Yes No  Yes Yes Yes No  No  Yes Yes Yes No  Yes Yes No

  – E.g. temperature < 71.5: yes/4, no/2;  temperature >= 71.5: yes/5, no/3
  – Info([4,2],[5,3]) = (6/14) info([4,2]) + (8/14) info([5,3]) = 0.939 bits

• Place split points halfway between values
• Can evaluate all split points in one pass!

witten & eibe
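A hedged sketch of the split-point search described above for the temperature data, reusing entropy() and information_gain() from the earlier sketches; variable names are illustrative.

temps  = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]
labels = ["Yes","No","Yes","Yes","Yes","No","No","Yes","Yes","Yes","No","Yes","Yes","No"]

def yes_no_counts(lbls):
    return [lbls.count("Yes"), lbls.count("No")]

pairs = sorted(zip(temps, labels))
best = None
for i in range(1, len(pairs)):
    if pairs[i - 1][0] == pairs[i][0]:
        continue                                   # no threshold between equal values
    thr = (pairs[i - 1][0] + pairs[i][0]) / 2      # split point halfway between values
    left  = [y for x, y in pairs if x < thr]
    right = [y for x, y in pairs if x >= thr]
    g = information_gain(yes_no_counts(labels),
                         [yes_no_counts(left), yes_no_counts(right)])
    if best is None or g > best[1]:
        best = (thr, g)
print(best)   # best threshold for temperature and its information gain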

Page 67: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

67

Avoid repeated sorting!

• Sort instances by the values of the numeric attribute
  – Time complexity for sorting: O(n log n)
• Q: Does this have to be repeated at each node of the tree?
• A: No! The sort order for the children can be derived from the sort order for the parent
  – Time complexity of the derivation: O(n)
  – Drawback: need to create and store an array of sorted indices for each numeric attribute

witten & eibe

Page 68: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

68

Weather data – nominal values

Outlook Temperature Humidity Windy Play

Sunny Hot High False No

Sunny Hot High True No

Overcast Hot High False Yes

Rainy Mild Normal False Yes

… … … … …

If outlook = sunny and humidity = high then play = no

If outlook = rainy and windy = true then play = no

If outlook = overcast then play = yes

If humidity = normal then play = yes

If none of the above then play = yes

witten & eibe

Page 69: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

69

More speeding up

• Entropy only needs to be evaluated between points of different classes (Fayyad & Irani, 1992)

  64  65  68  69  70  71  72  72  75  75  80  81  83  85
  Yes No  Yes Yes Yes No  No  Yes Yes Yes No  Yes Yes No

  Potential optimal breakpoints lie between values of different classes; breakpoints between values of the same class cannot be optimal.

Page 70: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

70

Continuous Attributes: Computing Gini Index...

• For efficient computation: for each attribute,
  – Sort the attribute on its values
  – Linearly scan these values, each time updating the count matrix and computing the Gini index
  – Choose the split position that has the least Gini index

Sorted Taxable Income values (with class Cheat):
  60/No, 70/No, 75/No, 85/Yes, 90/Yes, 95/Yes, 100/No, 120/No, 125/No, 220/No

Candidate split positions: 55, 65, 72, 80, 87, 92, 97, 110, 122, 172, 230
Gini at each position:     0.420, 0.400, 0.375, 0.343, 0.417, 0.400, 0.300, 0.343, 0.375, 0.400, 0.420

The best split is Taxable Income <= 97 (counts: Yes 3/0, No 3/4), with Gini = 0.300.
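A sketch of the sort-and-scan Gini computation described above for the Taxable Income example; the incremental count update and the gini() helper are illustrative choices.

incomes = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
cheat   = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

rows = sorted(zip(incomes, cheat))
total = {"Yes": cheat.count("Yes"), "No": cheat.count("No")}
left = {"Yes": 0, "No": 0}
best = None
for i in range(1, len(rows)):
    left[rows[i - 1][1]] += 1                       # move one record to the <= side
    split = (rows[i - 1][0] + rows[i][0]) / 2       # candidate split between adjacent values
    right = {k: total[k] - left[k] for k in total}
    n_l, n_r = sum(left.values()), sum(right.values())
    g = (n_l * gini(list(left.values())) + n_r * gini(list(right.values()))) / (n_l + n_r)
    if best is None or g < best[1]:
        best = (split, g)
print(best)    # lowest weighted Gini, ~0.300 near the split value 97 quoted above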

Page 71: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

71

Splitting Based on Nominal Attributes

• Multi-way split: use as many partitions as there are distinct values.
  CarType? -> Family | Sports | Luxury

• Binary split: divides the values into two subsets; need to find the optimal partitioning.
  CarType: {Family, Luxury} vs. {Sports}   OR   {Sports, Luxury} vs. {Family}
  Size: {Small, Large} vs. {Medium}

Page 72: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

72

Splitting Based on Continuous Attributes

(i) Binary split:     Taxable Income > 80K?  -> Yes / No
(ii) Multi-way split: Taxable Income?  -> < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K

Page 73: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

73

Missing as a separate value

• Missing values are denoted "?" in C4.x
• Simple idea: treat missing as a separate value
• Q: When is this not appropriate?
• A: When values are missing for different reasons
  – Example: the field IsPregnant = missing for a male patient should be treated differently (no) than for a female patient of age 25 (unknown)

Page 74: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

74

Handling Missing Attribute Values

• Missing values affect decision tree construction in three different ways:
  – how impurity measures are computed
  – how to distribute an instance with a missing value to child nodes
  – how a test instance with a missing value is classified

Page 75: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

75

Computing Impurity Measure

(Training data: the ten-record Refund / Marital Status / Taxable Income / Class table from slide 4, except that record 10 has a missing Refund value, denoted "?".)

Counts for the split on Refund:
              Class=Yes  Class=No
  Refund=Yes     0          3
  Refund=No      2          4
  Refund=?       1          0

Before splitting: Entropy(Parent) = -0.3 log(0.3) - 0.7 log(0.7) = 0.8813

Split on Refund:
  Entropy(Refund=Yes) = -(0/3) log(0/3) - (3/3) log(3/3) = 0
  Entropy(Refund=No)  = -(2/6) log(2/6) - (4/6) log(4/6) = 0.9183
  Entropy(Children)   = (3/10)(0) + (6/10)(0.9183) = 0.551

  Gain = 0.8813 - 0.551 = 0.3303

Page 76: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

76

Handling Missing Attribute Values

• Missing values affect decision tree construction in three different ways:
  – how impurity measures are computed
  – how to distribute an instance with a missing value to child nodes
  – how a test instance with a missing value is classified

Page 77: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

77

Distribute Instances

(Training data: the nine records with known Refund values, plus record 10: Refund = ?, Single, 90K, Class = Yes.)

Based on the records with known Refund values:
  Probability that Refund = Yes is 3/9
  Probability that Refund = No is 6/9

Assign record 10 to the left (Refund = Yes) child with weight 3/9 and to the right (Refund = No) child with weight 6/9:
  Refund = Yes child: Class=Yes 0 + 3/9, Class=No 3
  Refund = No  child: Class=Yes 2 + 6/9, Class=No 4
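A tiny sketch of the fractional-weight idea above; distribute() and its arguments are hypothetical names used only for illustration.

def distribute(record, weight, child_fractions):
    """Send a record with a missing split value to every child, scaled by the
    fraction of known-valued training records that went to that child."""
    return {child: (record, weight * frac) for child, frac in child_fractions.items()}

parts = distribute({"Tid": 10, "MaritalStatus": "Single", "TaxableIncome": "90K"},
                   weight=1.0, child_fractions={"Yes": 3 / 9, "No": 6 / 9})
for child, (rec, w) in parts.items():
    print(f"Refund={child}: weight {w:.3f}")   # 0.333 and 0.667, as on the slide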

Page 78: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

78

Handling Missing Attribute Values

• Missing values affect decision tree construction in three different ways:
  – how impurity measures are computed
  – how to distribute an instance with a missing value to child nodes
  – how a test instance with a missing value is classified

Page 79: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

79

Classify Instances

Decision tree (as before): Refund? Yes -> NO; No -> MarSt? Married -> NO; Single, Divorced -> TaxInc? < 80K -> NO; > 80K -> YES

New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Weighted class counts at the MarSt node:
              Married  Single  Divorced  Total
  Class=No      3        1        0       4
  Class=Yes    6/9       1        1      2.67
  Total        3.67      2        1      6.67

Probability that Marital Status = Married is 3.67/6.67
Probability that Marital Status = {Single, Divorced} is 3/6.67

Page 80: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

80

From trees to rules – how?

• How can we produce a set of rules from a decision tree?

Page 81: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

81

From trees to rules – simple

• Simple way: one rule for each leaf
• C4.5rules: greedily prune conditions from each rule if this reduces its estimated error
  – Can produce duplicate rules
  – Check for this at the end
• Then
  – look at each class in turn
  – consider the rules for that class
  – find a "good" subset (guided by MDL)
• Then rank the subsets to avoid conflicts
• Finally, remove rules (greedily) if this decreases error on the training data

witten & eibe

Page 82: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

82

C4.5rules: choices and options

• C4.5rules is slow for large and noisy datasets
• The commercial version, C5.0rules, uses a different technique
  – Much faster and a bit more accurate

witten & eibe

Page 83: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

83

CART Split Selection Method

Motivation: we need a way to choose quantitatively between different splitting predicates
  – Idea: quantify the impurity of a node
  – Method: select the splitting predicate that generates children nodes with minimum impurity from a space of possible splitting predicates

Page 84: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

84

CART

• If a data set D contains examples from n classes, the gini index gini(D) is defined as

  gini(D) = 1 - sum_{j=1..n} p_j^2

  where p_j is the relative frequency of class j in D.

• If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as

  gini_A(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2)

• Reduction in impurity:

  delta_gini(A) = gini(D) - gini_A(D)

• The attribute that provides the smallest gini_split(D) (or the largest reduction in impurity) is chosen to split the node.

Page 85: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

85

Measure of Impurity: GINI

  gini(D) = 1 - sum_j p_j^2

  – Maximum (0.5 for a 2-class problem) when records are equally distributed among all classes, implying the least interesting information
  – Minimum (0.0) when all records belong to one class, implying the most interesting information

  C1 = 0, C2 = 6: Gini = 0.000
  C1 = 1, C2 = 5: Gini = 0.278
  C1 = 2, C2 = 4: Gini = 0.444
  C1 = 3, C2 = 3: Gini = 0.500

Page 86: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

86

Examples for computing GINI

  gini(D) = 1 - sum_j p_j^2

  C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
    Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0

  C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6
    Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278

  C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6
    Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444
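A quick numeric check of the worked Gini values above (the gini() helper simply restates the one used in the earlier continuous-split sketch).

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

for counts in ([0, 6], [1, 5], [2, 4], [3, 3]):
    print(counts, round(gini(counts), 3))   # 0.0, 0.278, 0.444, 0.5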

Page 87: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

87

Comparison among Splitting Criteria

For a 2-class problem:

Page 88: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

88

CART

• Example: D has 9 tuples with buys_computer = "yes" and 5 with "no"

  gini(D) = 1 - (9/14)^2 - (5/14)^2 = 0.459

• Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 tuples in D2:

  gini_{income in {low,medium}}(D) = (10/14) Gini(D1) + (4/14) Gini(D2)

  but gini_{medium,high} is 0.30 and thus the best, since it is the lowest

• All attributes are assumed continuous-valued
• Can be modified for categorical attributes

Page 89: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

89

Comparing Attribute Selection Measures

• The three measures, in general, return good results, but:
  – Information gain:
    • biased towards multivalued attributes
  – Gain ratio:
    • tends to prefer unbalanced splits in which one partition is much smaller than the others
  – Gini index:
    • biased towards multivalued attributes
    • has difficulty when the number of classes is large
    • tends to favor tests that result in equal-sized partitions and purity in both partitions

Page 90: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

90

Issues in Decision Tree Learning

Overfitting in Decision Trees

Page 91: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Underfitting and Overfitting

Overfitting

Underfitting: when model is too simple, both training and test errors are large

91

Page 92: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

92

Issues in Decision Tree Learning

Overfitting in Decision Trees

(Figure: hypotheses h and h'.)

Page 93: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Overfitting due to Noise

Decision boundary is distorted by noise point

93

Page 94: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

94

Overfitting

• Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization to unseen data.
  – There may be noise in the training data that the tree is erroneously fitting.
  – The algorithm may be making poor decisions towards the leaves of the tree that are based on very little data and may not reflect reliable trends.

(Figure: accuracy on training data keeps rising with hypothesis complexity, while accuracy on test data peaks and then falls.)

Page 95: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

95

Overfitting Example

Testing Ohm's Law: V = IR (I = (1/R)V)

Experimentally measure 10 points (current I vs. voltage V) and fit a curve to the resulting data.

A 9th-degree polynomial fits the training data perfectly (n points can be fit exactly with a degree n-1 polynomial).

"Ohm was wrong, we have found a more accurate function!"

Page 96: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

96

Overfitting Example

Testing Ohm's Law: V = IR (I = (1/R)V)

Better generalization is obtained with a linear function that fits the training data less accurately.

Page 97: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

97

Overfitting Noise in Decision Trees

• Category or feature noise can easily cause overfitting.
  – Add the noisy instance <medium, blue, circle>: pos (but really neg)

Current tree: color? red -> shape? (circle -> pos; square -> neg; triangle -> pos); blue -> neg; green -> neg

Page 98: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

98

Overfitting Noise in Decision Trees

• Category or feature noise can easily cause overfitting.
  – Add the noisy instance <medium, blue, circle>: pos (but really neg)

(Figure: with the noisy instance, the blue branch can no longer be a single neg leaf; the conflicting examples <medium, blue, circle>: + and <big, blue, circle>: - force a further split on size to separate them.)

Page 99: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

99

Overfitting Prevention (Pruning) Methods

• Two basic approaches for decision trees
  – Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold.
    It is difficult to choose an appropriate threshold.
  – Postpruning: remove branches from a "fully grown" tree, obtaining a sequence of progressively pruned trees.
    Use a set of data different from the training data to decide which is the "best pruned tree".

Page 100: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

100

Overfitting Prevention (Pruning) Methods

• Label the leaf resulting from pruning with the majority class of the remaining data, or with a class probability distribution.
• Methods for determining which subtrees to prune:
  – Cross-validation: reserve some training data as a hold-out set (validation set) to evaluate the utility of subtrees.
  – Statistical test: use a statistical test on the training data to determine if any observed regularity can be dismissed as likely due to random chance.
  – Minimum description length (MDL): determine whether the additional complexity of the hypothesis is less complex than just explicitly remembering any exceptions resulting from pruning.

Page 101: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Minimum Description Length (MDL)

• Cost(Model, Data) = Cost(Data | Model) + Cost(Model)
  – Cost is the number of bits needed for encoding.
  – Search for the least costly model.
• Cost(Data | Model) encodes the misclassification errors.
• Cost(Model) = node encoding (number of children) + splitting-condition encoding.

(Figure: A transmits the labels y of records X1..Xn to B either directly or by sending a decision tree with internal nodes A?, B?, C? and 0/1 leaves plus the exceptions.)

101

Page 102: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

102

Pruning

• Goal: prevent overfitting to noise in the data
• Two strategies for "pruning" the decision tree:
  – Pre-pruning (stop earlier / forward pruning): stop growing the tree earlier, using extra stopping conditions, e.g.
    • Stop if all instances belong to the same class
    • Stop if all the attribute values are the same
    • Stop if the number of instances < some user-specified threshold
    • Stop if expanding the current node does not improve impurity measures (e.g., Gini or Gain)
  – Post-pruning: allow overfitting and then post-prune the tree.
    • Estimate errors and tree size to decide which subtree should be pruned.
• Post-pruning is preferred in practice; pre-pruning can "stop too early".

Page 103: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

103

Early stopping

• Pre-pruning may stop the growth process prematurely: early stopping

• But: XOR-type problems (where no single attribute split shows any gain) are rare in practice
• And: pre-pruning is faster than post-pruning

witten & eibe

Page 104: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

104

Post-pruning

• First, build the full tree
• Then, prune it
  – The fully-grown tree shows all attribute interactions
• Two pruning operations:
  – Subtree replacement
  – Subtree raising
• Possible strategies:
  – error estimation
  – significance testing
  – MDL principle

witten & eibe

Page 105: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

105

Post-pruning: Subtree replacement, 1

• Bottom-up
• Consider replacing a tree only after considering all its subtrees
• Ex: labor negotiations

witten & eibe

Page 106: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

106

Post-pruning: Subtree replacement, 2

What subtree can we replace?

Page 107: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

107

Subtree replacement, 3

• Bottom-up
• Consider replacing a tree only after considering all its subtrees

witten & eibe

Page 108: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

108

*Subtree raising

• Delete a node
• Redistribute its instances
• Slower than subtree replacement
• (Worthwhile?)

witten & eibe


Page 109: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

109

Estimating error rates

• Prune only if pruning reduces the estimated error
• Error on the training data is NOT a useful estimator
  Q: Why would it result in very little pruning?
• Use a hold-out set for pruning ("reduced-error pruning")
• C4.5's method
  – Derive a confidence interval from the training data
  – Use a heuristic limit, derived from this, for pruning
  – Standard Bernoulli-process-based method
  – Shaky statistical assumptions (based on training data)

witten & eibe

Page 110: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Estimating error rates

• Binomial random variable: the number of successes X in n independent and identically distributed (iid) Bernoulli trials, each with success probability p.
  The distribution of X is called the binomial distribution with n trials and success probability p.

  Probability mass function:  P(X = x) = C(n, x) p^x (1-p)^(n-x),  x = 0, 1, 2, ..., n
  Mean: E(X) = np
  Variance: Var(X) = np(1-p)

  Example: the probability of getting 20 heads in 50 tosses of a fair coin is
  P(X = 20) = C(50, 20) (0.5)^20 (0.5)^30 = 0.0419

110

Page 111: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

111

*Mean and variance

• Mean and variance for a Bernoulli trial: p, p(1-p)
• Observed error (or success) rate: f = S/N
• Mean and variance for f: p, p(1-p)/N
• For large enough N, f follows a Normal distribution
• A c% confidence interval [-z <= T <= z] for a random variable T with 0 mean is given by:

  Pr[-z <= T <= z] = c

• With a symmetric distribution:

  Pr[-z <= T <= z] = 1 - 2 Pr[T >= z]

witten & eibe

Page 112: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

112

*Confidence limits

• Confidence limits for the normal distribution with 0 mean and a variance of 1:

  Pr[X >= z]   z
  0.1%         3.09
  0.5%         2.58
  1%           2.33
  5%           1.65
  10%          1.28
  20%          0.84
  25%          0.69
  40%          0.25

• Thus:

  Pr[-1.65 <= X <= 1.65] = 1 - 2 * Pr[X >= 1.65] = 90%

• To use this we have to reduce our random variable f to have 0 mean and unit variance.

Page 113: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Confidence Interval on the Mean of a Normal Distribution, Variance Known

113

The observed error rate f of a sample has mean p and variance p(1-p)/N.

We may standardize f by subtracting p and dividing by sqrt(p(1-p)/N):

  (f - p) / sqrt(p(1-p)/N)

Page 114: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

114

*Transforming f

• Transformed random variable for f:

  (f - p) / sqrt(p(1-p)/N)

  (i.e. subtract the mean and divide by the standard deviation)

• Resulting equation for the confidence limits:

  Pr[ -z <= (f - p) / sqrt(p(1-p)/N) <= z ] = c

• Solving for the mean error rate p from a normal distribution for f:

  p = ( f + z^2/(2N) +- z * sqrt( f/N - f^2/N + z^2/(4N^2) ) ) / ( 1 + z^2/N )

Page 115: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

115

C4.5’s method

• The error estimate for a subtree is the weighted sum of the error estimates for all its leaves
• Error estimate for a node (upper bound):

  e = ( f + z^2/(2N) + z * sqrt( f/N - f^2/N + z^2/(4N^2) ) ) / ( 1 + z^2/N )

• If c = 25% then z = 0.69 (from the normal distribution)
• f is the error on the training data
• N is the number of instances covered by the leaf

witten & eibe
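A sketch of C4.5's upper-bound error estimate using the formula above (z = 0.69 for c = 25%); pessimistic_error() is an illustrative name, not C4.5's actual code.

import math

def pessimistic_error(f, N, z=0.69):
    """Upper confidence bound on the true error rate of a leaf with training error f over N instances."""
    num = f + z * z / (2 * N) + z * math.sqrt(f / N - f * f / N + z * z / (4 * N * N))
    return num / (1 + z * z / N)

print(round(pessimistic_error(2 / 6, 6), 2))    # prints ~0.47
print(round(pessimistic_error(1 / 2, 2), 2))    # prints ~0.72
print(round(pessimistic_error(5 / 14, 14), 2))  # prints ~0.45 (the example slide quotes 0.46)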

Page 116: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

116

Example

(Example subtree with three leaves, combined using the coverage ratios 6:2:6:)

  Leaf 1: f = 0.33, e = 0.47
  Leaf 2: f = 0.5,  e = 0.72
  Leaf 3: f = 0.33, e = 0.47

  Combined estimate: (6/14)(0.47) + (2/14)(0.72) + (6/14)(0.47) = 0.51

  Parent node: f = 5/14, e = 0.46; since 0.46 < 0.51, prune!

witten & eibe

Page 117: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

117

Issues in Decision Tree Learning

Effect of Reduced-Error Pruning

Page 118: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

118

Issues in Decision Tree Learning

Rule Post-Pruning

Page 119: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

119

Issues in Decision Tree Learning

Converting A Tree to Rules

Page 120: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Model Evaluation

• Metrics for Performance Evaluation
  – How to evaluate the performance of a model?
• Methods for Performance Evaluation
  – How to obtain reliable estimates?
• Methods for Model Comparison
  – How to compare the relative performance among competing models?

120

Page 121: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Metrics for Performance Evaluation

• Focus on the predictive capability of a model
  – rather than how fast it classifies or builds models, scalability, etc.
• Confusion Matrix:

                      PREDICTED CLASS
                      Class=Yes   Class=No
  ACTUAL  Class=Yes   a (TP)      b (FN)
  CLASS   Class=No    c (FP)      d (TN)

  a: TP (true positive),  b: FN (false negative),  c: FP (false positive),  d: TN (true negative)

121

Page 122: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Metrics for Performance Evaluation…

• Most widely used metric:

                      PREDICTED CLASS
                      Class=Yes   Class=No
  ACTUAL  Class=Yes   a (TP)      b (FN)
  CLASS   Class=No    c (FP)      d (TN)

  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)

122

Page 123: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Limitation of Accuracy

• Consider a 2-class problem
  – Number of Class 0 examples = 9990
  – Number of Class 1 examples = 10
• If the model predicts everything to be class 0, accuracy is 9990/10000 = 99.9%
  – Accuracy is misleading because the model does not detect any class 1 example

123

Page 124: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Cost Matrix

                      PREDICTED CLASS
  C(i|j)              Class=Yes     Class=No
  ACTUAL  Class=Yes   C(Yes|Yes)    C(No|Yes)
  CLASS   Class=No    C(Yes|No)     C(No|No)

  C(i|j): cost of misclassifying a class j example as class i

124

Page 125: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Computing Cost of Classification

Cost Matrix C(i|j):
                 PREDICTED +    PREDICTED -
  ACTUAL +          -1             100
  ACTUAL -           1               0

Model M1:                              Model M2:
                 PRED +   PRED -                       PRED +   PRED -
  ACTUAL +        150       40          ACTUAL +        250       45
  ACTUAL -         60      250          ACTUAL -          5      200

  Accuracy = 80%, Cost = 3910           Accuracy = 90%, Cost = 4255

125

Page 126: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Cost vs Accuracy

Count matrix:                          Cost matrix:
            PRED Yes   PRED No                    PRED Yes   PRED No
  ACT Yes      a          b            ACT Yes       p          q
  ACT No       c          d            ACT No        q          p

  N = a + b + c + d
  Accuracy = (a + d) / N
  Cost = p(a + d) + q(b + c)
       = p(a + d) + q(N - a - d)
       = qN - (q - p)(a + d)
       = N [q - (q - p) Accuracy]

Accuracy is proportional to cost if
  1. C(Yes|No) = C(No|Yes) = q
  2. C(Yes|Yes) = C(No|No) = p

126

Page 127: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Cost-Sensitive Measures

  Precision (p) = a / (a + c)
  Recall (r)    = a / (a + b)
  F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c)

  Precision is biased towards C(Yes|Yes) & C(Yes|No)
  Recall is biased towards C(Yes|Yes) & C(No|Yes)
  F-measure is biased towards all except C(No|No)

  Weighted Accuracy = (w1 a + w4 d) / (w1 a + w2 b + w3 c + w4 d)

127
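A small sketch computing accuracy, precision, recall and F-measure from confusion-matrix counts as defined above; metrics() is an illustrative name.

def metrics(tp, fn, fp, tn):
    """Accuracy, precision, recall and F-measure from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fn + fp + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * recall * precision / (recall + precision)
                 if recall + precision else 0.0)
    return accuracy, precision, recall, f_measure

# The imbalanced example from the "Limitation of Accuracy" slide: predicting
# everything as class 0 gives 99.9% accuracy but zero recall on class 1.
print(metrics(tp=0, fn=10, fp=0, tn=9990))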

Page 128: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Model Evaluation

• Metrics for Performance Evaluation
  – How to evaluate the performance of a model?
• Methods for Performance Evaluation
  – How to obtain reliable estimates?
• Methods for Model Comparison
  – How to compare the relative performance among competing models?

128

Page 129: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Methods for Performance Evaluation

• How to obtain a reliable estimate of performance?
• The performance of a model may depend on factors other than the learning algorithm:
  – Class distribution
  – Cost of misclassification
  – Size of the training and test sets

129

Page 130: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Learning Curve

Learning curve shows how accuracy changes with varying sample size

Effect of small sample size:

- Bias in the estimate

- Variance of estimate

130

(Figure: learning-curve estimates relative to the target accuracy, showing bias and variance.)

Page 131: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

131

Issues with Reduced Error Pruning

• The problem with this approach is that it potentially "wastes" training data on the validation set.
• The severity of this problem depends on where we are on the learning curve:

(Figure: learning curve of test accuracy vs. number of training examples.)

Page 132: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Model Evaluation

• Metrics for Performance Evaluation
  – How to evaluate the performance of a model?
• Methods for Performance Evaluation
  – How to obtain reliable estimates?
• Methods for Model Comparison
  – How to compare the relative performance among competing models?

132

Page 133: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

ROC (Receiver Operating Characteristic)

• Developed in the 1950s for signal detection theory, to analyze noisy signals
  – Characterizes the trade-off between positive hits and false alarms
• The ROC curve plots the TP rate (on the y-axis) against the FP rate (on the x-axis)
• The performance of each classifier is represented as a point on the ROC curve
  – changing the threshold of the algorithm, the sample distribution or the cost matrix changes the location of the point

133

Page 134: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

How to Construct an ROC curve

  Instance   P(+|A)   True Class
  1          0.95     +
  2          0.93     +
  3          0.87     -
  4          0.85     -
  5          0.85     -
  6          0.85     +
  7          0.76     -
  8          0.53     +
  9          0.43     -
  10         0.25     +

• Use a classifier that produces a posterior probability P(+|A) for each test instance A
• Sort the instances according to P(+|A) in decreasing order
• Apply a threshold at each unique value of P(+|A)
• Count the number of TP, FP, TN, FN at each threshold
• TP rate, TPR = TP / (TP + FN)
• FP rate, FPR = FP / (FP + TN)

134
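A sketch of the threshold sweep described above, reproducing the (FPR, TPR) points of the ten-instance example; roc_points() is an illustrative name.

scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ["+", "+", "-", "-", "-", "+", "-", "+", "-", "+"]

def roc_points(scores, labels):
    pos = labels.count("+")
    neg = labels.count("-")
    points = [(0.0, 0.0)]                       # threshold above every score
    for t in sorted(set(scores), reverse=True): # predict + when score >= t
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == "+")
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == "-")
        points.append((fp / neg, tp / pos))     # (FPR, TPR)
    return points

for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.1f}  TPR={tpr:.1f}")      # matches the table on the next slide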

Page 135: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

How to construct an ROC curve

Threshold sweep (predict + when P(+|A) >= threshold t):

  Class        +     -     +     -     -     -     +     -     +     +
  Threshold   0.25  0.43  0.53  0.76  0.85  0.85  0.85  0.87  0.93  0.95  1.00
  TP           5     4     4     3     3     3     3     2     2     1     0
  FP           5     5     4     4     3     2     1     1     0     0     0
  TN           0     0     1     1     2     3     4     4     5     5     5
  FN           0     1     1     2     2     2     2     3     3     4     5
  TPR          1     0.8   0.8   0.6   0.6   0.6   0.6   0.4   0.4   0.2   0
  FPR          1     1     0.8   0.8   0.6   0.4   0.2   0.2   0     0     0

ROC curve: TP rate, TPR = TP / (TP + FN); FP rate, FPR = FP / (FP + TN).
The plotted points (FPR, TPR) include (1, 1), (1, 0.8), (0.8, 0.8), ...

P(+|A): the conditional probability of the + class given instance A.

Example at threshold t = 0.43 (predict + when P(+|A) >= 0.43):
  actual +, score 0.25 -> predicted -  -> FN
  actual -, score 0.43 -> predicted +  -> FP
  actual +, score 0.53 -> predicted +  -> TP
  actual -, score 0.76 -> predicted +  -> FP
  actual -, score 0.85 -> predicted +  -> FP
  actual -, score 0.85 -> predicted +  -> FP
  actual +, score 0.85 -> predicted +  -> TP
  actual -, score 0.87 -> predicted +  -> FP
  actual +, score 0.93 -> predicted +  -> TP
  actual +, score 0.95 -> predicted +  -> TP

135

Page 136: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

ROC Curve

At threshold t:
  TP = 0.5, FN = 0.5, FP = 0.12, TN = 0.88

- 1-dimensional data set containing 2 classes (positive and negative)
- any point located at x >= t is classified as positive, P(+ | x >= t):

  TP rate, TPR = TP / (TP + FN) = 0.5 / (0.5 + 0.5) = 0.5
  FP rate, FPR = FP / (FP + TN) = 0.12 / (0.12 + 0.88) = 0.12

Page 137: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

ROC Curve

(TPR, FPR) reference points:
• (0, 0): declare everything to be the negative class
• (1, 1): declare everything to be the positive class
• (1, 0): ideal

• Diagonal line:
  – Random guessing
  – Below the diagonal line: the prediction is the opposite of the true class

137

Page 138: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Using ROC for Model Comparison

No model consistently outperforms the other:
  M1 is better for small FPR
  M2 is better for large FPR

Area Under the ROC Curve (AUC):
  Ideal: Area = 1
  Random guess: Area = 0.5

138

Page 139: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Test of Significance

• Given two models:
  – Model M1: accuracy = 85%, tested on 30 instances
  – Model M2: accuracy = 75%, tested on 5000 instances
• Can we say M1 is better than M2?
  – How much confidence can we place on the accuracy of M1 and M2?
  – Can the difference in the performance measure be explained as a result of random fluctuations in the test set?

139

Page 140: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Confidence Interval for Accuracy

• Prediction can be regarded as a Bernoulli trial
  – A Bernoulli trial has 2 possible outcomes
  – Possible outcomes for a prediction: correct or wrong
  – A collection of Bernoulli trials has a Binomial distribution:
    • x ~ Bin(N, p), where x is the number of correct predictions
    • e.g.: toss a fair coin 50 times; how many heads would turn up?
      Expected number of heads = Np = 50 * 0.5 = 25
• Given x (the number of correct predictions), or equivalently acc = x/N, and N (the number of test instances),
  can we predict p (the true accuracy of the model)?

140

Page 141: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Confidence Interval for Accuracy

• For large test sets (N > 30), f (the success rate) has a normal distribution with mean p and variance p(1-p)/N

• Confidence interval for p:

  P( -Z_{alpha/2} <= (f - p) / sqrt(p(1-p)/N) <= Z_{1-alpha/2} ) = 1 - alpha

  (The area under the normal curve between Z_{alpha/2} and Z_{1-alpha/2} is 1 - alpha.)

  Solving for p, with Z = Z_{alpha/2}:

  p = ( 2Nf + Z^2 +- Z * sqrt( Z^2 + 4Nf - 4Nf^2 ) ) / ( 2(N + Z^2) )

Page 142: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Confidence Interval for Accuracy

• Consider a model that produces an accuracy of 80% when evaluated on 100 test instances:
  – N = 100, acc = 0.8
  – Let 1 - alpha = 0.95 (95% confidence)
  – From the probability table, Z_{alpha/2} = 1.96

  1-alpha:  0.99   0.98   0.95   0.90
  Z:        2.58   2.33   1.96   1.65

  p = ( 2N*acc + Z^2 +- Z * sqrt( Z^2 + 4N*acc - 4N*acc^2 ) ) / ( 2(N + Z^2) )

  N:         50     100    500    1000   5000
  p(lower):  0.670  0.711  0.763  0.774  0.789
  p(upper):  0.888  0.866  0.833  0.824  0.811

142
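A sketch of the confidence-interval formula above; with the assumed function name accuracy_confidence_interval(), it reproduces the table for acc = 0.8 at 95% confidence.

import math

def accuracy_confidence_interval(acc, n, z=1.96):
    """Lower/upper bounds on the true accuracy p, given observed acc on n test instances."""
    centre = 2 * n * acc + z * z
    spread = z * math.sqrt(z * z + 4 * n * acc - 4 * n * acc * acc)
    denom = 2 * (n + z * z)
    return (centre - spread) / denom, (centre + spread) / denom

for n in (50, 100, 500, 1000, 5000):
    lo, hi = accuracy_confidence_interval(0.8, n)
    print(n, round(lo, 3), round(hi, 3))   # e.g. 100 -> (0.711, 0.866), as in the table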

Page 143: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Comparing Performance of 2 Algorithms

• Each learning algorithm may produce k models:
  – L1 may produce M11, M12, ..., M1k
  – L2 may produce M21, M22, ..., M2k
• If the models are evaluated on the same test sets D1, D2, ..., Dk (e.g., via cross-validation):
  – For each set, compute d_j = e1j - e2j
  – d_j has mean d_t and variance sigma_t^2
  – Estimate:

    sigma_t^2 ~= sum_{j=1..k} (d_j - d_bar)^2 / ( k(k-1) )

    d_t = d_bar +- t_{1-alpha, k-1} * sigma_t

143

Page 144: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Comparing Performance of 2 Models

• Given two models, say M1 and M2, which is better?
  – M1 is tested on D1 (size = n1), found error rate = e1
  – M2 is tested on D2 (size = n2), found error rate = e2
  – Assume D1 and D2 are independent
  – If n1 and n2 are sufficiently large, then

    e1 ~ N(mu1, sigma1),  e2 ~ N(mu2, sigma2)

  – Approximate:

    sigma_i^2 ~= e_i (1 - e_i) / n_i

144

Page 145: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Comparing Performance of 2 Models

• To test whether the performance difference is statistically significant: d = e1 - e2
  – d ~ N(d_t, sigma_t), where d_t is the true difference
  – Since D1 and D2 are independent, their variances add up:

    sigma_t^2 ~= sigma1^2 + sigma2^2 ~= e1(1 - e1)/n1 + e2(1 - e2)/n2

  – At the (1 - alpha) confidence level,

    d_t = d +- Z_{alpha/2} * sigma_t

145

Page 146: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

An Illustrative Example

• Given: M1: n1 = 30, e1 = 0.15;  M2: n2 = 5000, e2 = 0.25
• d = |e2 - e1| = 0.1 (2-sided test)

  sigma_d^2 = 0.15(1 - 0.15)/30 + 0.25(1 - 0.25)/5000 = 0.0043

• At the 95% confidence level, Z_{alpha/2} = 1.96

  d_t = 0.100 +- 1.96 * sqrt(0.0043) = 0.100 +- 0.128

• The interval contains 0, so the difference may not be statistically significant.

146
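A sketch of the two-model significance check above; difference_interval() is an illustrative name, and it reproduces the 0.100 +- 0.128 interval.

import math

def difference_interval(e1, n1, e2, n2, z=1.96):
    """Confidence interval for the true difference between two error rates
    measured on independent test sets of sizes n1 and n2."""
    d = abs(e2 - e1)
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2
    margin = z * math.sqrt(var)
    return d - margin, d + margin

lo, hi = difference_interval(e1=0.15, n1=30, e2=0.25, n2=5000)
print(round(lo, 3), round(hi, 3))   # ~(-0.028, 0.228); the interval contains 0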

Page 147: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

147

Issues in Decision Tree Learning

• Weaknesses of decision trees
  – Not always sufficient to learn complex concepts (e.g., a weighted evaluation function)
  – Can be hard to understand: real problems can produce deep trees with a large branching factor
  – Some problems with continuously valued attributes or classes may not be easily discretized
  – Methods for handling missing attribute values are somewhat clumsy

Page 148: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

148

Issues in Decision Tree Learning

• Better splitting criteria
  – Information gain prefers features with many values.
• Continuous features
• Predicting a real-valued function (regression trees)
• Missing feature values
• Features with costs
• Misclassification costs
• Mining large databases that do not fit in main memory

Page 149: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Issues in Decision Tree Learning

• Data Fragmentation

• Search Strategy

• Expressiveness

• Tree Replication

149

Page 150: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Issues in Decision Tree Learning

• Number of instances gets smaller as you traverse down the tree

• Number of instances at the leaf nodes could be too small to make any statistically significant decision

150

Data Fragmentation

Page 151: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Search Strategy

• Finding an optimal decision tree is NP-hard

• The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution

• Other strategies?
  – Bottom-up
  – Bi-directional

151

Issues in Decision Tree Learning

Page 152: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Expressiveness

• Decision trees provide an expressive representation for learning discrete-valued functions
  – But they do not generalize well to certain types of Boolean functions
    • Example: the parity function:
      – Class = 1 if there is an even number of Boolean attributes with truth value = True
      – Class = 0 if there is an odd number of Boolean attributes with truth value = True
    • For accurate modeling, the tree must be complete
• Not expressive enough for modeling continuous variables
  – Particularly when the test condition involves only a single attribute at a time

152

Issues in Decision Tree Learning

Page 153: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Decision Boundary

(Figure: a 2-D data set on [0,1] x [0,1] classified by the tree
  x < 0.43?  Yes -> y < 0.47? (Yes -> 4:0, No -> 0:4)
             No  -> y < 0.33? (Yes -> 0:3, No -> 4:0).)

• The border line between two neighboring regions of different classes is known as the decision boundary
• The decision boundary is parallel to the axes because each test condition involves a single attribute at a time

153

Issues in Decision Tree Learning

Page 154: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Oblique Decision Trees

(Figure: a class boundary defined by x + y < 1, separating Class = + from Class = -.)

• The test condition may involve multiple attributes
• More expressive representation
• Finding the optimal test condition is computationally expensive

154

Issues in Decision Tree Learning

Page 155: 1 Decision Tree Learning Soongsil University, Seoul Gun Ho Lee

Tree Replication

(Figure: a tree rooted at P in which the same subtree (Q with child S and 0/1 leaves) appears in multiple branches.)

• The same subtree appears in multiple branches

155

Issues in Decision Tree Learning