
Page 1: Naïve Bayes Classification

1

Naïve Bayes Classification

CS 6243 Machine Learning

Modified from the slides by Dr. Raymond J. Mooney http://www.cs.utexas.edu/~mooney/cs391L/

Page 2: Naïve Bayes Classification

2

Probability Basics

• Definition (informal)– Probabilities are numbers assigned to events

that indicate “how likely” it is that the event will occur when a random experiment is performed

– A probability law for a random experiment is a rule that assigns probabilities to the events in the experiment

– The sample space S of a random experiment is the set of all possible outcomes

Page 3: Naïve Bayes Classification

3

Probabilistic Calculus

• All probabilities are between 0 and 1: 0 ≤ P(A) ≤ 1

• If A and B are mutually exclusive: P(A ∪ B) = P(A) + P(B)

• Thus: P(not(A)) = P(A^c) = 1 − P(A)

[Venn diagram: events A and B within sample space S]

Page 4: Naïve Bayes Classification

4

Conditional probability

• The joint probability of two events A and B, P(A ∩ B) or simply P(A, B), is the probability that events A and B occur at the same time.

• The conditional probability P(A|B) is the probability that A occurs given that B occurred.

P(A | B) = P(A ∩ B) / P(B)

<=> P(A ∩ B) = P(A | B) P(B)

<=> P(A ∩ B) = P(B | A) P(A)

Page 5: Naïve Bayes Classification

5

Example

• Roll a die
  – If I tell you the number is less than 4
  – What is the probability of an even number?

• P(d = even | d < 4) = P(d = even ∩ d < 4) / P(d < 4)

• = P(d = 2) / P(d = 1, 2, or 3) = (1/6) / (3/6) = 1/3

Page 6: Naïve Bayes Classification

6

Independence

• A and B are independent iff:

P(A | B) = P(A)

P(B | A) = P(B)

• Therefore, if A and B are independent:

P(A | B) = P(A ∩ B) / P(B) = P(A)

P(A ∩ B) = P(A) P(B)

These two constraints are logically equivalent

Page 7: Naïve Bayes Classification

7

Examples

• Are P(d = even) and P(d < 4) independent?
  – P(d = even ∩ d < 4) = 1/6
  – P(d = even) = 1/2
  – P(d < 4) = 1/2
  – 1/2 * 1/2 = 1/4 > 1/6, so they are not independent

• If your die actually has 8 faces, will P(d = even) and P(d < 5) be independent?

• Are P(even in first roll) and P(even in second roll) independent?

• Playing card, are the suit and rank independent?
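The 8-sided-die question can be settled by enumeration. A minimal Python sketch (the helper name `prob` is mine, not from the slides):

```python
from fractions import Fraction

def prob(outcomes, event):
    """Probability of an event (a predicate) under a uniform die."""
    return Fraction(sum(event(d) for d in outcomes), len(outcomes))

die8 = range(1, 9)                                        # 8-sided fair die
p_even     = prob(die8, lambda d: d % 2 == 0)             # 1/2
p_lt5      = prob(die8, lambda d: d < 5)                  # 1/2
p_even_lt5 = prob(die8, lambda d: d % 2 == 0 and d < 5)   # {2, 4} -> 1/4

# Independent iff P(even ∩ d < 5) = P(even) * P(d < 5)
print(p_even_lt5 == p_even * p_lt5)   # True: on an 8-sided die they are independent
```

Running the same check on a 6-sided die with the event d < 4 gives 1/6 ≠ 1/4, matching the first bullet.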

Page 8: Naïve Bayes Classification

8

Theorem of total probability

• Let B1, B2, …, BN be mutually exclusive events whose union equals the sample space S. We refer to these sets as a partition of S.

• An event A can be represented as:

A = (A ∩ B1) ∪ (A ∩ B2) ∪ … ∪ (A ∩ BN)

• Since B1, B2, …, BN are mutually exclusive:

P(A) = P(A ∩ B1) + P(A ∩ B2) + … + P(A ∩ BN)

• And therefore

P(A) = P(A|B1)*P(B1) + P(A|B2)*P(B2) + … + P(A|BN)*P(BN)

     = Σi P(A | Bi) * P(Bi)

This is known as exhaustive conditionalization, or marginalization.

Page 9: Naïve Bayes Classification

9

Example

• A loaded die:
  – P(6) = 0.5
  – P(1) = … = P(5) = 0.1

• Probability of an even number?

P(even) = P(even | d < 6) * P(d < 6) + P(even | d = 6) * P(d = 6)
        = 2/5 * 0.5 + 1 * 0.5
        = 0.7

Page 10: Naïve Bayes Classification

10

Another example

• A box of dice:
  – 99% are fair
  – 1% are loaded; for a loaded die: P(6) = 0.5 and P(1) = … = P(5) = 0.1
  – Randomly pick a die and roll it; what is P(6)?

• P(6) = P(6 | F) * P(F) + P(6 | L) * P(L)
       = 1/6 * 0.99 + 0.5 * 0.01 = 0.17
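The box-of-dice calculation is total probability, marginalizing over the die type. A minimal Python sketch (variable names are mine):

```python
# P(6) by the theorem of total probability, marginalizing over the die type
p_fair, p_loaded = 0.99, 0.01            # prior over die types
p6_given_fair, p6_given_loaded = 1/6, 0.5

p6 = p6_given_fair * p_fair + p6_given_loaded * p_loaded
print(round(p6, 3))                      # 0.17
```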

Page 11: Naïve Bayes Classification

11

Bayes theorem

• P(A ∩ B) = P(B) * P(A | B) = P(A) * P(B | A)

==>  P(B | A) = P(A | B) * P(B) / P(A)

where P(B | A) is the posterior probability, P(A | B) is the conditional probability (likelihood), P(B) is the prior of B, and P(A) is the prior of A (normalizing constant).

This is known as Bayes Theorem or Bayes Rule, and is (one of) the most useful relations in probability and statistics

Bayes Theorem is definitely the fundamental relation in Statistical Pattern Recognition

Page 12: Naïve Bayes Classification

12

Bayes theorem (cont’d)

• Given B1, B2, …, BN, a partition of the sample space S. Suppose that event A occurs; what is the probability of event Bj?

• P(Bj | A) = P(A | Bj) * P(Bj) / P(A)

            = P(A | Bj) * P(Bj) / Σj P(A | Bj) * P(Bj)

where P(Bj | A) is the posterior probability, P(A | Bj) is the likelihood, P(Bj) is the prior of Bj, and P(A) is the normalizing constant (by the theorem of total probability).

• Bj: different models / hypotheses

• Given the observation A, should you choose the model that maximizes P(Bj | A) or P(A | Bj)? It depends on how much you know about Bj!

Page 13: Naïve Bayes Classification

13

Example

• A test for a rare disease reports positive for 99.5% of people with the disease, and negative 99.9% of the time for those without it.

• The disease is present in the population at a rate of 1 in 100,000.

• What is P(disease | positive test)?
  – P(D|+) = P(+|D)P(D)/P(+) ≈ 0.01

• What is P(disease | negative test)?
  – P(D|−) = P(−|D)P(D)/P(−) ≈ 5e-8
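The positive-test posterior is a direct Bayes-rule computation. A minimal Python sketch using the numbers on the slide:

```python
# Bayes rule for the rare-disease test
p_disease = 1e-5                 # prior: 1 in 100,000
sens = 0.995                     # P(+ | disease)
spec = 0.999                     # P(- | no disease)

p_pos = sens * p_disease + (1 - spec) * (1 - p_disease)   # total probability
p_disease_given_pos = sens * p_disease / p_pos
print(round(p_disease_given_pos, 4))   # ~0.0099: still only about 1%
```

Even with a very accurate test, the posterior stays small because the prior is so low.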

Page 14: Naïve Bayes Classification

14

Joint Distribution

• The joint probability distribution for a set of random variables, X1,…,Xn gives the probability of every combination of values (an n-dimensional array with v^n values if all variables are discrete with v values; all v^n values must sum to 1): P(X1,…,Xn)

• The probability of all possible conjunctions (assignments of values to some subset of variables) can be calculated by summing the appropriate subset of values from the joint distribution.

• Therefore, all conditional probabilities can also be calculated.

positive:
           circle   square
    red     0.20     0.02
    blue    0.02     0.01

negative:
           circle   square
    red     0.05     0.30
    blue    0.20     0.20

P(red ∩ circle) = 0.20 + 0.05 = 0.25

P(positive | red ∩ circle) = P(positive ∩ red ∩ circle) / P(red ∩ circle) = 0.20 / 0.25 = 0.80

P(red) = 0.20 + 0.02 + 0.05 + 0.30 = 0.57
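These marginal and conditional probabilities all come from summing entries of the joint table. A minimal Python sketch using the numbers above (the dictionary layout and the `marginal` helper are mine):

```python
# Joint distribution P(category, color, shape) from the slide's tables
joint = {
    ("positive", "red",  "circle"): 0.20, ("positive", "red",  "square"): 0.02,
    ("positive", "blue", "circle"): 0.02, ("positive", "blue", "square"): 0.01,
    ("negative", "red",  "circle"): 0.05, ("negative", "red",  "square"): 0.30,
    ("negative", "blue", "circle"): 0.20, ("negative", "blue", "square"): 0.20,
}

def marginal(category=None, color=None, shape=None):
    """Sum joint entries consistent with whichever values are specified."""
    return sum(p for (c, col, sh), p in joint.items()
               if category in (None, c) and color in (None, col) and shape in (None, sh))

p_red_circle = marginal(color="red", shape="circle")                                  # 0.25
p_pos_given  = marginal(category="positive", color="red", shape="circle") / p_red_circle
print(round(p_red_circle, 2), round(p_pos_given, 2), round(marginal(color="red"), 2)) # 0.25 0.8 0.57
```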

Page 15: Naïve Bayes Classification

15

Probabilistic Classification

• Let Y be the random variable for the class which takes values {y1,y2,…ym}.

• Let X be the random variable describing an instance consisting of a vector of values for n features <X1,X2…Xn>, let xk be a possible value for X and xij a possible value for Xi.

• For classification, we need to compute P(Y=yi | X=xk) for i = 1…m.

• However, given no other assumptions, this requires a table giving the probability of each category for each possible instance in the instance space, which is impossible to estimate accurately from a reasonably-sized training set.
  – Assuming Y and all Xi are binary, we need 2^n entries to specify P(Y=pos | X=xk) for each of the 2^n possible xk's, since P(Y=neg | X=xk) = 1 − P(Y=pos | X=xk)
  – Compared to 2^(n+1) − 1 entries for the joint distribution P(Y, X1, X2, …, Xn)

Page 16: Naïve Bayes Classification

16

Bayesian Categorization

• Determine the category of xk by computing, for each yi:

P(Y=yi | X=xk) = P(X=xk | Y=yi) P(Y=yi) / P(X=xk)

• P(X=xk) can be determined since the categories are complete and disjoint:

Σi P(Y=yi | X=xk) = Σi P(X=xk | Y=yi) P(Y=yi) / P(X=xk) = 1    (sum over i = 1…m)

P(X=xk) = Σi P(X=xk | Y=yi) P(Y=yi)

Page 17: Naïve Bayes Classification

17

Bayesian Categorization (cont.)

• Need to know:– Priors: P(Y=yi)

– Conditionals: P(X=xk | Y=yi)

• P(Y=yi) are easily estimated from data.

– If ni of the examples in D are in yi then P(Y=yi) = ni / |D|

• Too many possible instances (e.g. 2^n for binary features) to estimate all P(X=xk | Y=yi).

• Need to make some sort of independence assumptions about the features to make learning tractable.

Page 18: Naïve Bayes Classification

18

Conditional independence

• Chain rule:

P(x1, x2, x3) = P(x1, x2, x3) / P(x2, x3)

* P(x2, x3) / P(x3)

* P(x3)

= P(x1 | x2, x3) P(x2 | x3) P(x3)

• P(x1, x2, x3 | Y)

= P(x1 | x2, x3, Y) P(x2 | x3 , Y) P(x3 | Y)

= P(x1 | Y) P(x2 | Y) P(x3 | Y), assuming conditional independence
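The chain-rule decomposition holds for any joint distribution and can be checked numerically. A minimal NumPy sketch (the array layout is my own illustration, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2))          # arbitrary unnormalized P(x1, x2, x3)
joint /= joint.sum()

p3 = joint.sum(axis=(0, 1))            # P(x3)
p23 = joint.sum(axis=0)                # P(x2, x3)
p1_given_23 = joint / p23              # P(x1 | x2, x3)
p2_given_3 = p23 / p3                  # P(x2 | x3)

# Chain rule: P(x1, x2, x3) = P(x1 | x2, x3) P(x2 | x3) P(x3)
assert np.allclose(p1_given_23 * p2_given_3 * p3, joint)
```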

Page 19: Naïve Bayes Classification

20

Naïve Bayes Generative Model

[Figure: Naïve Bayes generative model. A Category node (Positive or Negative) is chosen first; each category then independently generates a Size (sm/med/lg), Color (red/blue/grn), and Shape (circ/sqr/tri) value from its own class-specific distribution.]

Page 20: Naïve Bayes Classification

21

Naïve Bayes Inference Problem

[Figure: the same generative model, posed as an inference problem. Given an observed test instance <lg, red, circ>, which category (Positive or Negative) is more likely to have generated it?]

Page 21: Naïve Bayes Classification

22

Naïve Bayesian Categorization

• If we assume the features of an instance are independent given the category (conditionally independent), then:

P(X | Y) = P(X1, X2, …, Xn | Y) = Πi P(Xi | Y)    (product over i = 1…n)

• Therefore, we then only need to know P(Xi | Y) for each possible pair of a feature value and a category.

• If Y and all Xi are binary, this requires specifying only 2n parameters:
  – P(Xi=true | Y=true) and P(Xi=true | Y=false) for each Xi
  – P(Xi=false | Y) = 1 − P(Xi=true | Y)

• Compared to specifying 2^n parameters without any independence assumptions.

Page 22: Naïve Bayes Classification

23

Naïve Bayes Example

Probability positive negative

P(Y) 0.5 0.5

P(small | Y) 0.4 0.4

P(medium | Y) 0.1 0.2

P(large | Y) 0.5 0.4

P(red | Y) 0.9 0.3

P(blue | Y) 0.05 0.3

P(green | Y) 0.05 0.4

P(square | Y) 0.05 0.4

P(triangle | Y) 0.05 0.3

P(circle | Y) 0.9 0.3

Test Instance: <medium, red, circle>

Page 23: Naïve Bayes Classification

24

Naïve Bayes Example

Probability positive negative

P(Y) 0.5 0.5

P(medium | Y) 0.1 0.2

P(red | Y) 0.9 0.3

P(circle | Y) 0.9 0.3

Test Instance: <medium, red, circle>

P(positive | X) = P(positive) * P(medium | positive) * P(red | positive) * P(circle | positive) / P(X)
                = 0.5 * 0.1 * 0.9 * 0.9 / P(X) = 0.0405 / P(X)

P(negative | X) = P(negative) * P(medium | negative) * P(red | negative) * P(circle | negative) / P(X)
                = 0.5 * 0.2 * 0.3 * 0.3 / P(X) = 0.009 / P(X)

P(positive | X) + P(negative | X) = 0.0405 / P(X) + 0.009 / P(X) = 1

P(X) = 0.0405 + 0.009 = 0.0495

P(positive | X) = 0.0405 / 0.0495 = 0.8181

P(negative | X) = 0.009 / 0.0495 = 0.1818
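The two posteriors can be reproduced in a few lines. A minimal Python sketch that hard-codes the conditional probability table from the slide (the dictionary names are mine):

```python
from math import prod

priors = {"positive": 0.5, "negative": 0.5}
cond = {  # P(feature value | class), from the slide's table
    "positive": {"medium": 0.1, "red": 0.9, "circle": 0.9},
    "negative": {"medium": 0.2, "red": 0.3, "circle": 0.3},
}
x = ["medium", "red", "circle"]

scores = {y: priors[y] * prod(cond[y][v] for v in x) for y in priors}   # unnormalized
z = sum(scores.values())                                                # = P(X) = 0.0495
posterior = {y: s / z for y, s in scores.items()}
print({y: round(p, 4) for y, p in posterior.items()})  # {'positive': 0.8182, 'negative': 0.1818}
```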

Page 24: Naïve Bayes Classification

25

Estimating Probabilities

• Normally, probabilities are estimated based on observed frequencies in the training data.

• If D contains nk examples in category yk, and nijk of these nk examples have the jth value for feature Xi, xij, then:

P(Xi=xij | Y=yk) = nijk / nk

• However, estimating such probabilities from small training sets is error-prone.

• If, due only to chance, a rare feature Xi is always false in the training data, then for every class yk: P(Xi=true | Y=yk) = 0.

• If Xi=true then occurs in a test example X, the result is that P(X | Y=yk) = 0 and therefore P(Y=yk | X) = 0 for every yk.

Page 25: Naïve Bayes Classification

26

Probability Estimation Example

Probability positive negative

P(Y) 0.5 0.5

P(small | Y) 0.5 0.5

P(medium | Y) 0.0 0.0

P(large | Y) 0.5 0.5

P(red | Y) 1.0 0.5

P(blue | Y) 0.0 0.5

P(green | Y) 0.0 0.0

P(square | Y) 0.0 0.0

P(triangle | Y) 0.0 0.5

P(circle | Y) 1.0 0.5

Ex Size Color Shape Category

1 small red circle positive

2 large red circle positive

3 small red triangle negative

4 large blue circle negative

Test Instance X:<medium, red, circle>

P(positive | X) = 0.5 * 0.0 * 1.0 * 1.0 / P(X) = 0

P(negative | X) = 0.5 * 0.0 * 0.5 * 0.5 / P(X) = 0

Page 26: Naïve Bayes Classification

27

Smoothing

• To account for estimation from small samples, probability estimates are adjusted or smoothed.

• Laplace smoothing using an m-estimate assumes that each feature is given a prior probability, p, that is assumed to have been previously observed in a “virtual” sample of size m.

• For binary features, p is simply assumed to be 0.5.

P(Xi=xij | Y=yk) = (nijk + m*p) / (nk + m)

Page 27: Naïve Bayes Classification

28

Laplace Smoothing Example

• Assume the training set contains 10 positive examples:
  – 4: small
  – 0: medium
  – 6: large

• Estimate parameters as follows (with m = 1, p = 1/3); see the sketch below:
  – P(small | positive) = (4 + 1/3) / (10 + 1) = 0.394
  – P(medium | positive) = (0 + 1/3) / (10 + 1) = 0.03
  – P(large | positive) = (6 + 1/3) / (10 + 1) = 0.576
  – P(small or medium or large | positive) = 1.0
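A short Python sketch of the m-estimate reproduces these smoothed values (the function and variable names are mine):

```python
def m_estimate(count, total, m=1, p=1/3):
    """Smoothed estimate P(value | class) = (count + m*p) / (total + m)."""
    return (count + m * p) / (total + m)

counts = {"small": 4, "medium": 0, "large": 6}   # 10 positive examples
n_pos = sum(counts.values())

smoothed = {v: m_estimate(c, n_pos) for v, c in counts.items()}
print({v: round(x, 3) for v, x in smoothed.items()})   # {'small': 0.394, 'medium': 0.03, 'large': 0.576}
print(round(sum(smoothed.values()), 3))                # 1.0 -- still a valid distribution
```

Note that the zero count for "medium" no longer produces a zero probability.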

Page 28: Naïve Bayes Classification

29

Missing values

• Training: instance is not included in frequency count for attribute value-class combination

• Classification: attribute will be omitted from calculation

• Example: Outlook Temp. Humidity Windy Play

? Cool High True ?

Likelihood of “yes” = 3/9 × 3/9 × 3/9 × 9/14 = 0.0238

Likelihood of “no” = 1/5 × 4/5 × 3/5 × 5/14 = 0.0343

P(“yes”) = 0.0238 / (0.0238 + 0.0343) = 41%

P(“no”) = 0.0343 / (0.0238 + 0.0343) = 59%

witten&eibe
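In code, a missing attribute simply contributes no factor to the product. A minimal Python sketch using the slide's weather numbers (the dictionaries are hard-coded from the slide):

```python
from math import prod

priors = {"yes": 9/14, "no": 5/14}
cond = {  # P(attribute value | class) for the observed attributes only
    "yes": {"Temp=Cool": 3/9, "Humidity=High": 3/9, "Windy=True": 3/9},
    "no":  {"Temp=Cool": 1/5, "Humidity=High": 4/5, "Windy=True": 3/5},
}
observed = ["Temp=Cool", "Humidity=High", "Windy=True"]   # Outlook is missing, so it is omitted

likelihood = {y: priors[y] * prod(cond[y][a] for a in observed) for y in priors}
z = sum(likelihood.values())
print({y: round(l / z, 2) for y, l in likelihood.items()})   # {'yes': 0.41, 'no': 0.59}
```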

Page 29: Naïve Bayes Classification

30

Continuous Attributes

• If Xi is a continuous feature rather than a discrete one, need another way to calculate P(Xi | Y).

• Assume that Xi has a Gaussian distribution whose mean and variance depend on Y.

• During training, for each combination of a continuous feature Xi and a class value for Y, yk, estimate a mean, μik , and standard deviation σik based on the values of feature Xi in class yk in the training data.

• During testing, estimate P(Xi | Y=yk) for a given example, using the Gaussian distribution defined by μik and σik .

P(Xi | Y=yk) = 1 / (sqrt(2π) · σik) · exp( −(Xi − μik)² / (2 σik²) )

Page 30: Naïve Bayes Classification

31

Statistics for weather data

• Example density value:

f(temperature = 66 | yes) = 1 / (sqrt(2π) · 6.2) · exp( −(66 − 73)² / (2 · 6.2²) ) = 0.0340

• Outlook (Yes / No): Sunny 2 / 3 → 2/9, 3/5; Overcast 4 / 0 → 4/9, 0/5; Rainy 3 / 2 → 3/9, 2/5

• Temperature (Yes): 64, 68, 69, 70, 72, …  μ = 73, σ = 6.2; (No): 65, 71, 72, 80, 85, …  μ = 75, σ = 7.9

• Humidity (Yes): 65, 70, 70, 75, 80, …  μ = 79, σ = 10.2; (No): 70, 85, 90, 91, 95, …  μ = 86, σ = 9.7

• Windy (Yes / No): False 6 / 2 → 6/9, 2/5; True 3 / 3 → 3/9, 3/5

• Play: Yes 9 → 9/14; No 5 → 5/14

witten&eibe
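The example density f(temperature = 66 | yes) = 0.0340 follows directly from the Gaussian formula with μ = 73 and σ = 6.2. A minimal Python sketch (the function name is mine):

```python
from math import exp, pi, sqrt

def gaussian_density(x, mu, sigma):
    """Normal density N(x; mu, sigma), used as P(Xi | Y=yk) for continuous features."""
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sqrt(2 * pi) * sigma)

# f(temperature = 66 | yes): mu = 73, sigma = 6.2 from the weather statistics above
print(round(gaussian_density(66, 73, 6.2), 4))   # 0.034
```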

Page 31: Naïve Bayes Classification

32

Classifying a new day

• A new day:

• Missing values during training are not included in calculation of mean and standard deviation

Outlook Temp. Humidity Windy Play

Sunny 66 90 true ?

Likelihood of “yes” = 2/9 × 0.0340 × 0.0221 × 3/9 × 9/14 = 0.000036

Likelihood of “no” = 3/5 × 0.0291 × 0.0380 × 3/5 × 5/14 = 0.000136

P(“yes”) = 0.000036 / (0.000036 + 0.000136) = 20.9%

P(“no”) = 0.000136 / (0.000036 + 0.000136) = 79.1%

witten&eibe

Page 32: Naïve Bayes Classification

33

*Probability densities

• Relationship between probability and density:

Pr[c − ε/2 ≤ x ≤ c + ε/2] ≈ ε · f(c)

• But this doesn't change the calculation of a posteriori probabilities, because ε cancels out.

• Exact relationship:

Pr[a ≤ x ≤ b] = ∫ab f(t) dt

witten&eibe

Page 33: Naïve Bayes Classification

34

Naïve Bayes: discussion

• Naïve Bayes works surprisingly well (even if the independence assumption is clearly violated)
  – Experiments show it to be quite competitive with other classification methods on standard UCI datasets.

• Why? Because classification doesn't require accurate probability estimates as long as the maximum probability is assigned to the correct class.

• However: adding too many redundant attributes will cause problems (e.g. identical attributes).

• Note also: many numeric attributes are not normally distributed (→ kernel density estimators).

• Does not perform any search of the hypothesis space. Directly constructs a hypothesis from parameter estimates that are easily calculated from the training data.

• Typically handles noise well since it does not even focus on completely fitting the training data.

witten&eibe

Page 34: Naïve Bayes Classification

35

Naïve Bayes Extensions

• Improvements:
  – select the best attributes (e.g. with greedy search)
  – often works as well or better with just a fraction of all attributes

• Bayesian Networks

witten&eibe