Decision Trees

Source: yfwang/courses/cs281b/pdf/tree.pdf


Page 1:

Decision Trees

Page 2:

Nominal Data

- So far we have considered patterns represented by feature vectors of real or integer values
- Easy to come up with a distance (similarity) measure by using a variety of mathematical norms
- What happens if features are not numbers?
  - May not have a numerical representation
  - Distance measures might not make sense

Page 3:

Examples

- Colors of fruit
  - Green fruits are no more similar to red fruits than black fruits are to red ones
- Smell
- Usefulness
- Interesting
- Etc.

Page 4:

Examples (cont.)

- Instead of a feature vector of numbers, we have a feature list of anything
  - Fruit: {color, texture, taste, size}
  - DNA as a string of four letters: AGCTTCGA, etc.

Page 5:

Classification

- Visualizing in an n-dimensional space might be difficult
  - How to map, say, smell onto an axis?
  - There might only be a few discrete values (an article is highly interesting, somewhat interesting, not interesting, etc.)
- Even though that helps, remember that you cannot take distance measures in that space (e.g., Euclidean distance in r-g-b color space does not correspond to human perception of color)

Page 6:

Decision Trees

- A classification based on a sequence of questions on
  - A particular feature (e.g., is the fruit sweet or not?), or
  - A particular set of features (e.g., is this article relevant and interesting?)
- The answer can be either
  - Yes/no, or
  - A choice (relevant & interesting, interesting but not relevant, relevant but not interesting, etc.)
- Usually a finite number of discrete values

Page 7:

Use of a Decision Tree

- Relatively simple
  - Ask the question represented at the node
  - Traverse down the appropriate branch until a leaf node is reached
  - Use the label at that leaf for the sample
- Works if
  - Links are mutually distinct and exhaustive (may include branches for default, N/A, D/N, etc.)
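To make the traversal concrete, here is a minimal Python sketch (the nested-dict node representation and the function name are my own choices, not something prescribed by the slides):

# A minimal sketch of classifying a sample with a decision tree.
# Internal nodes are dicts {"attribute": ..., "branches": {...}}; leaves are plain labels.

def classify(node, sample):
    """Ask the question stored at each node and follow the matching link until a leaf is reached."""
    while isinstance(node, dict):
        value = sample[node["attribute"]]   # answer the question at this node
        node = node["branches"][value]      # links are assumed mutually distinct and exhaustive
    return node                             # leaf: the class label for the sample

# Example tree for the play-tennis data used later in these slides
tree = {"attribute": "Outlook",
        "branches": {"Sunny":    {"attribute": "Humidity",
                                  "branches": {"High": "No", "Normal": "Yes"}},
                     "Overcast": "Yes",
                     "Rain":     {"attribute": "Wind",
                                  "branches": {"Strong": "No", "Weak": "Yes"}}}}

print(classify(tree, {"Outlook": "Sunny", "Humidity": "Normal", "Wind": "Weak"}))  # -> Yes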

Page 8:

Use of a Decision Tree (cont.)


Page 9:

Creation of a Decision Tree

- Use supervised learning
  - Samples with tagged labels (just like before)
- Process
  - Number of splits
  - Query selection
  - Rule for stopping splitting and pruning
  - Rule for labeling the leaves
  - Variable combination and missing data
- Decision trees CAN be used with metric data (even though we motivate them using nonmetric data)

Page 10:

Number of Splits

- Binary vs. multi-way
- Can always turn a multi-way split into a sequence of binary splits

[Figure: a three-way split "Color?" with branches green / yellow / red, redrawn as binary questions: "Color = green?" (yes -> green), and on the "no" branch "Color = yellow?" (yes -> yellow, no -> red)]

Page 11:

Number of Splits (cont.)


Page 12:

Test Rules

- If a feature is an ordered variable, we might ask: is x > c, for some c?
- If a feature is a category, we might ask: is x in a particular category?
- Yes sends samples to the left and no sends samples to the right
- Simple rectangular partitions of the feature space
- More complicated ones: is x > 0.5 & y < 0.3?
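As a small illustration of the two kinds of test rules (the helper names below are hypothetical, not from the slides):

def ordered_test(x, c):
    """Threshold question on an ordered variable: is x > c?"""
    return x > c

def category_test(x, categories):
    """Membership question on a nominal variable: is x in this set of categories?"""
    return x in categories

# "Yes" sends a sample left, "no" sends it right
samples = [0.2, 0.7, 0.4, 0.9]
left = [s for s in samples if ordered_test(s, 0.5)]
right = [s for s in samples if not ordered_test(s, 0.5)]
print(left, right)  # [0.7, 0.9] [0.2, 0.4]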

Page 13:

Test Rules (cont.)


Page 14:

Criteria for Splitting

- Intuitively, to make the populations of the samples in the two children nodes purer than the parent node
- What do you mean by "pure"?
- General formulation
  - At node n, with k classes
  - Impurity depends on the probabilities of samples at that node being in a certain class:

    $i(n) = f\big(P(\omega_1|n), P(\omega_2|n), \ldots, P(\omega_k|n)\big)$, where $\sum_{i=1}^{k} P(\omega_i|n) = 1$

Page 15:

Possibilities

- Entropy impurity
  $i(n) = -\sum_j P(\omega_j) \log_2 P(\omega_j)$
- Variance (Gini) impurity
  $i(n) = \sum_{i \neq j} P(\omega_i) P(\omega_j) = 1 - \sum_j P(\omega_j)^2$
- Misclassification impurity
  $i(n) = 1 - \max_j P(\omega_j)$
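These three measures are easy to compute from the class probabilities at a node; a minimal sketch (function names are my own):

import math

def entropy_impurity(p):
    """i(n) = -sum_j P(w_j) log2 P(w_j); 0*log(0) is taken as 0."""
    return -sum(pj * math.log2(pj) for pj in p if pj > 0)

def gini_impurity(p):
    """i(n) = 1 - sum_j P(w_j)^2 (the variance/Gini impurity)."""
    return 1.0 - sum(pj ** 2 for pj in p)

def misclassification_impurity(p):
    """i(n) = 1 - max_j P(w_j)."""
    return 1.0 - max(p)

# Class probabilities at a node holding 90% of one class and 10% of the other
p = [0.9, 0.1]
print(entropy_impurity(p))            # ~0.469
print(gini_impurity(p))               # ~0.18
print(misclassification_impurity(p))  # ~0.1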

Page 16:

Entropy

- A measure of "randomness" or "unpredictability"
- In information theory, the number of bits that are needed to code the transmission

Page 17:

Split Decision

- Before the split: fixed impurity
- After the split: impurity depends on the decision
- The goal is to maximize the drop in impurity
- Difference between
  - Impurity at the root
  - Impurity at the children (weighted by population):

    $\Delta i(n) = i(n) - P_L\, i(N_L) - P_R\, i(N_R)$, with $P_L + P_R = 1$
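A quick sketch of this impurity drop for a binary split, assuming entropy as the impurity measure (the names and counts below are illustrative only, not from the slides):

import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def impurity_drop(parent_counts, left_counts, right_counts):
    """Delta i(n) = i(n) - P_L * i(N_L) - P_R * i(N_R)."""
    def i(counts):
        total = sum(counts)
        return entropy([c / total for c in counts])
    n = sum(parent_counts)
    p_left, p_right = sum(left_counts) / n, sum(right_counts) / n
    return i(parent_counts) - p_left * i(left_counts) - p_right * i(right_counts)

# 9 Yes / 5 No at the parent, split by Wind into Weak (6, 2) and Strong (3, 3)
print(round(impurity_drop([9, 5], [6, 2], [3, 3]), 3))  # 0.048, the "Wind" gain computed on a later slide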

Page 18:

Split Decision

- We might also just minimize $P_L\, i(N_L) + P_R\, i(N_R)$
- The reason to use the delta is that if the entropy impurity is used, the delta is the information gain
- Usually, a single feature is selected from among all remaining ones (a combination of features is possible, but very expensive)

Page 19:

Example

Day  Outlook   Temperature  Humidity  Wind    Play tennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No

Page 20:

Example

- Root: (9 Yes, 5 No)
  $E = -(9/14)\log_2(9/14) - (5/14)\log_2(5/14) = 0.94$

- Humidity
  - High: (3 Yes, 4 No), $E = -(3/7)\log_2(3/7) - (4/7)\log_2(4/7) = 0.985$
  - Normal: (6 Yes, 1 No), $E = -(6/7)\log_2(6/7) - (1/7)\log_2(1/7) = 0.592$
  - $\mathrm{Gain} = 0.940 - (7/14)(0.985) - (7/14)(0.592) = 0.151$

- Wind
  - Weak: (6 Yes, 2 No), $E = 0.811$
  - Strong: (3 Yes, 3 No), $E = 1$
  - $\mathrm{Gain} = 0.940 - (8/14)(0.811) - (6/14)(1.0) = 0.048$
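These numbers can be checked in a few lines; the sketch below recomputes the gain for every attribute of the 14-day table (the helper names are assumptions, not part of the slides):

import math
from collections import Counter

# (Outlook, Temperature, Humidity, Wind, PlayTennis) for days D1..D14
data = [("Sunny", "Hot", "High", "Weak", "No"),        ("Sunny", "Hot", "High", "Strong", "No"),
        ("Overcast", "Hot", "High", "Weak", "Yes"),    ("Rain", "Mild", "High", "Weak", "Yes"),
        ("Rain", "Cool", "Normal", "Weak", "Yes"),     ("Rain", "Cool", "Normal", "Strong", "No"),
        ("Overcast", "Cool", "Normal", "Strong", "Yes"), ("Sunny", "Mild", "High", "Weak", "No"),
        ("Sunny", "Cool", "Normal", "Weak", "Yes"),    ("Rain", "Mild", "Normal", "Weak", "Yes"),
        ("Sunny", "Mild", "Normal", "Strong", "Yes"),  ("Overcast", "Mild", "High", "Strong", "Yes"),
        ("Overcast", "Hot", "Normal", "Weak", "Yes"),  ("Rain", "Mild", "High", "Strong", "No")]
attrs = {"Outlook": 0, "Temperature": 1, "Humidity": 2, "Wind": 3}

def entropy(rows):
    counts = Counter(r[-1] for r in rows)
    n = len(rows)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain(rows, attr):
    """Information gain = entropy of the parent minus the weighted entropy of the children."""
    idx = attrs[attr]
    children = Counter(r[idx] for r in rows)
    return entropy(rows) - sum(
        count / len(rows) * entropy([r for r in rows if r[idx] == value])
        for value, count in children.items())

for a in attrs:
    print(a, round(gain(data, a), 3))
# Outlook has the largest gain (~0.25); Humidity (~0.15) and Wind (~0.05) match the slide,
# so the root of the tree splits on Outlook.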

Page 21:

Example

The resulting tree:

  Outlook?
    Sunny    -> Humidity?  (High -> No, Normal -> Yes)
    Overcast -> Yes
    Rain     -> Wind?      (Strong -> No, Weak -> Yes)

Page 22:

Multiway Splits

- In general, more splits allow the impurity to drop
- Splits reduce the samples in each branch
  - With few samples, it is likely that one sample might dominate (1 sample: impurity = 0; 2 samples: 50% chance impurity = 0)
- Proper scaling of the change of impurity, so that a large split is penalized:

  $\Delta i(n) = i(n) - \sum_{k=1}^{B} P_k\, i(N_k)$

  $\Delta i_B(n) = \dfrac{\Delta i(n)}{-\sum_{k=1}^{B} P_k \log_2 P_k}$

  Large entropy of the split -> bad split
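A sketch of the scaled criterion (the gain ratio): the impurity drop is divided by the entropy of the split itself, so a split with many small branches has its score reduced (the helper names are my own):

import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def gain_ratio(parent_counts, children_counts):
    """Delta i_B(n) = Delta i(n) / ( -sum_k P_k log2 P_k )."""
    def i(counts):
        total = sum(counts)
        return entropy([c / total for c in counts])
    n = sum(parent_counts)
    weights = [sum(c) / n for c in children_counts]           # P_k for each branch
    drop = i(parent_counts) - sum(w * i(c) for w, c in zip(weights, children_counts))
    return drop / entropy(weights)                            # divide by the split entropy

# 9/5 parent: a 2-way split vs. shattering into 14 single-sample branches
print(gain_ratio([9, 5], [[6, 2], [3, 3]]))                # ~0.049 (raw drop ~0.048, little penalty)
print(gain_ratio([9, 5], [[1, 0]] * 9 + [[0, 1]] * 5))     # ~0.25  (raw drop 0.94 scaled down by log2(14))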

Page 23:

Caveats

- Decisions are made locally, one step at a time (greedy)
- Search in a "tree space" of all possible trees
  - There is no guarantee that the tree is going to be optimal (shortest height)
  - Other techniques (e.g., DP) can guarantee the optimal solution, but are more expensive

Page 24:

Caveats (cont.)

- Some impurity measures may not be as good as others
- E.g., 90 of $\omega_1$ and 10 of $\omega_2$ at a node; x of $\omega_1$ and y of $\omega_2$ go left, with x > y, and $\omega_1$ still dominates at both children
- Using misclassification impurity:

  $i(n) = 1 - \max\big(\tfrac{90}{100}, \tfrac{10}{100}\big) = 0.1$

  $i(n_L) = 1 - \dfrac{x}{x+y}$  (left child: x of $\omega_1$, y of $\omega_2$)

  $i(n_R) = 1 - \dfrac{90-x}{100-x-y}$  (right child: 90-x of $\omega_1$, 10-y of $\omega_2$)

  $P_L\, i(n_L) + P_R\, i(n_R) = \dfrac{x+y}{100}\Big(1 - \dfrac{x}{x+y}\Big) + \dfrac{100-x-y}{100}\Big(1 - \dfrac{90-x}{100-x-y}\Big) = \dfrac{y}{100} + \dfrac{10-y}{100} = 0.1$

  The weighted impurity of the children equals the impurity of the parent, so the split produces no drop at all

Page 25:

Caveats (cont.)

- E.g., 90 of $\omega_1$ and 10 of $\omega_2$ at a node; 70 of $\omega_1$ and 0 of $\omega_2$ go left (20 of $\omega_1$ and 10 of $\omega_2$ go right)
- Using misclassification impurity:

  $i(n) = 1 - \max\big(\tfrac{90}{100}, \tfrac{10}{100}\big) = 0.1$

  $i(n_L) = 0$  (70 of $\omega_1$, 0 of $\omega_2$)

  $i(n_R) = 1 - \dfrac{20}{20+10} \approx 0.33$  (20 of $\omega_1$, 10 of $\omega_2$)

  $P_L\, i(n_L) + P_R\, i(n_R) = 0.7 \cdot 0 + 0.3 \cdot 0.33 \approx 0.099$

  (with exact fractions, $0.3 \cdot \tfrac{1}{3} = 0.1$: even this split, with a completely pure left child, gives essentially no drop in misclassification impurity)
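A quick numeric check of this caveat (a sketch, not from the slides), comparing the misclassification and Gini impurity drops for a split that keeps w1 dominant on both sides (x = 60, y = 5) and for the 70/0 split above:

def misclass(counts):
    n = sum(counts)
    return 1 - max(counts) / n

def gini(counts):
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def drop(impurity, parent, left, right):
    n = sum(parent)
    return (impurity(parent)
            - sum(left) / n * impurity(left)
            - sum(right) / n * impurity(right))

parent = [90, 10]                 # 90 w1, 10 w2
split_a = ([60, 5], [30, 5])      # w1 dominates both children (x = 60, y = 5)
split_b = ([70, 0], [20, 10])     # left child is pure

for name, (l, r) in [("split A", split_a), ("split B", split_b)]:
    print(name,
          "misclassification drop:", round(drop(misclass, parent, l, r), 3),
          "Gini drop:", round(drop(gini, parent, l, r), 3))
# split A: misclassification drop 0.0, Gini drop ~0.002
# split B: misclassification drop 0.0, Gini drop ~0.047
# The Gini criterion clearly prefers split B; misclassification impurity sees no improvement in either case.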

Page 26:

Caveats (cont.)

- It may also be good to split based on classes (in a multi-class binary tree) instead of based on individual samples (twoing criterion)
  - C = {c1, c2, ..., cn} breaks into
  - C1 = {c_i1, c_i2, ..., c_ik}
  - C2 = C - C1

Page 27:

When to Stop Split?

- Keep on splitting and you might end up with a single sample per leaf (overfitting)
- Not splitting enough and you might not be able to get good labels (it might be an apple, or an orange, or an ...)

Page 28:

When to Stop Split? (cont.)

- Thresholding: if a split results in only a small impurity reduction, don't do it
  - What should the threshold be?
- Size limit: if the node contains too few samples, don't split anymore
  - If the region is sparse, don't split
- Combination of the above (too small, stop), e.g., stop when a global criterion such as

  $\alpha \cdot \mathrm{size} + \sum_{\mathrm{leaves}} i(n)$

  becomes small
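A minimal sketch of how such stopping rules are commonly combined (the threshold values below are arbitrary placeholders, not values from the slides):

def should_stop(node_samples, best_impurity_drop, min_samples=5, min_drop=0.01):
    """Stop splitting if the node holds too few samples (size limit)
    or the best available split barely reduces impurity (thresholding)."""
    if len(node_samples) < min_samples:   # size limit
        return True
    if best_impurity_drop < min_drop:     # threshold on the impurity reduction
        return True
    return False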

Page 29:

Pruning (bottom-up)

- Splitting is a top-down process
- We can also go bottom-up by splitting all the way down, followed by a merge (a bottom-up process)
  - Adjacent nodes whose merge induces only a small increase in impurity can be joined
- Avoids the horizon effect
  - It is hard to decide when to stop splitting; the arbitrary threshold sets an artificial "floor"
- Can be expensive (explores more branches that might eventually be thrown away and merged)

Page 30:

Assigning Labels

- If a leaf is of zero impurity: no brainer
- Otherwise, take the label of the dominant samples (similar to a k-nearest-neighbor classifier): again, no brainer

Page 31:

Example

Page 32:

Missing Attributes

- A sample can miss some attributes
- In training
  - Don't use that sample at all: no brainer
  - Use that sample for all available attributes, but don't use it for the missing attributes: again, no brainer

Page 33:

Missing Attributes (cont.)

- In classification
  - What happens if the sample to be classified is missing some attributes?
- Use surrogate splits
  - Define more than one split (primary + surrogates) at nonterminal nodes
  - The surrogates should maximize predictive association (e.g., surrogates send the same number of patterns to the left and right as the primary)
  - Expensive
- Use virtual values
  - The most likely value for the missing attribute (e.g., the average value of the missing attribute over all training data that end up at that node)
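A minimal sketch of the virtual-value idea (all names are hypothetical; for a nominal attribute the mode plays the role of the slide's "average value"):

from collections import Counter

def virtual_value(node_training_samples, attribute):
    """Most common value of `attribute` among the training data that reached this node."""
    values = [s[attribute] for s in node_training_samples if attribute in s]
    return Counter(values).most_common(1)[0][0]

# A query missing "Wind" gets the node's majority value before the node's test is applied
node_samples = [{"Wind": "Weak"}, {"Wind": "Weak"}, {"Wind": "Strong"}]
query = {"Outlook": "Rain"}
query["Wind"] = virtual_value(node_samples, "Wind")   # -> "Weak"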

Page 34:

Missing Attributes (cont.)


Page 35:

Feature Choice

- No magic here: if the feature distribution does not line up with the axes, the decision boundary is going to zigzag, which implies trees of greater depth
- Principal component analysis might be useful for finding dominant axis directions for cleaner partitions

Page 36:

Feature Choice (cont.)

Page 37:

Feature Choice (cont.)

Page 38:

Other Alternatives

- What we described is CART (classification and regression trees)
- ID3 (interactive dichotomizer)
  - Nominal data
  - Multi-way split at a node
  - # levels = # variables
- C4.5
  - No surrogate splits (to save space)

Page 39:

A Pseudo Code

Algorithm ID3(Examples, Attributes)
  Create a Root node
  If all Examples are positive, return Root with label = +
  If all Examples are negative, return Root with label = -
  If no attributes are available for splitting, return Root with label = the most common value (+ or -) in Examples

Page 40:

A Pseudo Code (cont.)

  Otherwise
    Choose a best attribute A to split on (based on info gain)
    The decision attribute for Root = A
    For each possible value vi of A
      Add a new branch below Root for A = vi
      Examples_vi = all Examples with A = vi
      If Examples_vi is empty
        Add a leaf node under this branch with label = the most common value in Examples
      Else
        Add under this branch the subtree ID3(Examples_vi, Attributes - {A})
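A compact, runnable Python version of this pseudocode (a sketch under my own naming choices; branches for attribute values that never occur in Examples, i.e. the "Examples_vi is empty" case, are omitted because only observed values are enumerated):

import math
from collections import Counter

def entropy(examples):
    counts = Counter(label for _, label in examples)
    n = len(examples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def info_gain(examples, attr):
    n = len(examples)
    split = Counter(feats[attr] for feats, _ in examples)
    return entropy(examples) - sum(
        c / n * entropy([e for e in examples if e[0][attr] == v])
        for v, c in split.items())

def id3(examples, attributes):
    """examples: list of (feature_dict, label); returns a label or a nested-dict tree."""
    labels = [label for _, label in examples]
    if len(set(labels)) == 1:                      # all positive or all negative
        return labels[0]
    if not attributes:                             # no attributes left to split on
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: info_gain(examples, a))
    tree = {"attribute": best, "branches": {}}
    for v in {feats[best] for feats, _ in examples}:
        subset = [e for e in examples if e[0][best] == v]
        tree["branches"][v] = id3(subset, [a for a in attributes if a != best])
    return tree

# Tiny usage example with two attributes
examples = [({"Outlook": "Sunny", "Wind": "Weak"}, "No"),
            ({"Outlook": "Overcast", "Wind": "Weak"}, "Yes"),
            ({"Outlook": "Rain", "Wind": "Strong"}, "No"),
            ({"Outlook": "Rain", "Wind": "Weak"}, "Yes")]
print(id3(examples, ["Outlook", "Wind"]))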