
Bayesian Learning

• Provides practical learning algorithms
– Naïve Bayes learning

– Bayesian belief network learning

– Combine prior knowledge (prior probabilities)

• Provides foundations for machine learning
– Evaluating learning algorithms

– Guiding the design of new algorithms

– Learning from models: meta-learning


Basic Approach

Bayes Rule:

P(h|D) = P(D|h) P(h) / P(D)

• P(h) = prior probability of hypothesis h

• P(D) = prior probability of training data D

• P(h|D) = probability of h given D (posterior probability)

• P(D|h) = probability of D given h (likelihood of D given h)

The goal of Bayesian learning: find the most probable hypothesis given the training data (the Maximum A Posteriori hypothesis, h_MAP)

h_MAP = argmax_{h ∈ H} P(h|D)
      = argmax_{h ∈ H} P(D|h) P(h) / P(D)
      = argmax_{h ∈ H} P(D|h) P(h)


Basic Formulas for Probabilities

• Product Rule: probability P(A ∧ B) of a conjunction of two events A and B:

P(A ∧ B) = P(A|B) P(B) = P(B|A) P(A)

• Sum Rule: probability of a disjunction of two events A and B:

P(A ∨ B) = P(A) + P(B) − P(A ∧ B)

• Theorem of Total Probability: if events A1, …, An are mutually exclusive with Σ_{i=1}^{n} P(Ai) = 1, then

P(B) = Σ_{i=1}^{n} P(B|Ai) P(Ai)
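A quick numeric sanity check of these three rules in Python, using a small made-up joint distribution over two binary events (the numbers are purely illustrative):

# A made-up joint distribution over two binary events A and B
p = {("a", "b"): 0.20, ("a", "~b"): 0.30, ("~a", "b"): 0.10, ("~a", "~b"): 0.40}

p_a = p[("a", "b")] + p[("a", "~b")]             # P(A) = 0.5
p_b = p[("a", "b")] + p[("~a", "b")]             # P(B) = 0.3
p_b_given_a = p[("a", "b")] / p_a                # P(B|A) = 0.4
p_b_given_not_a = p[("~a", "b")] / (1 - p_a)     # P(B|~A) = 0.2

print(p_b_given_a * p_a)                                 # product rule: P(A,B) = 0.2
print(p_a + p_b - p[("a", "b")])                         # sum rule: P(A or B) = 0.6
print(p_b_given_a * p_a + p_b_given_not_a * (1 - p_a))   # total probability: P(B) = 0.3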


An Example

Does the patient have cancer or not? A patient takes a lab test and the result comes back positive. The test returns a correct positive result in only 98% of the cases in which the disease is actually present, and a correct negative result in only 97% of the cases in which the disease is not present. Furthermore, 0.008 of the entire population has this cancer.

P(cancer) = 0.008      P(¬cancer) = 0.992
P(+|cancer) = 0.98     P(−|cancer) = 0.02
P(+|¬cancer) = 0.03    P(−|¬cancer) = 0.97

P(cancer|+) = P(+|cancer) P(cancer) / P(+)
P(¬cancer|+) = P(+|¬cancer) P(¬cancer) / P(+)
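A quick numeric check of this example in Python (a minimal sketch; variable names are illustrative):

# Prior and test characteristics taken from the example above
p_cancer = 0.008
p_not_cancer = 0.992
p_pos_given_cancer = 0.98        # correct positive rate
p_pos_given_not_cancer = 0.03    # false positive rate

# Unnormalised posteriors (the numerators of Bayes rule)
num_cancer = p_pos_given_cancer * p_cancer               # ≈ 0.0078
num_not_cancer = p_pos_given_not_cancer * p_not_cancer   # ≈ 0.0298

# Normalise by P(+) = sum of the numerators
p_pos = num_cancer + num_not_cancer
print(num_cancer / p_pos)        # P(cancer | +) ≈ 0.21
print(num_not_cancer / p_pos)    # P(¬cancer | +) ≈ 0.79, so h_MAP = ¬cancer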


MAP Learner

For each hypothesis h in H, calculate the posterior probability

P(h|D) = P(D|h) P(h) / P(D)

Output the hypothesis h_MAP with the highest posterior probability:

h_MAP = argmax_{h ∈ H} P(h|D)

Comments:
– Computationally intensive
– Provides a standard for judging the performance of learning algorithms
– Choosing P(h) and P(D|h) reflects our prior knowledge about the learning task
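A brute-force sketch of this procedure over a finite hypothesis space (the function name, arguments and toy usage below are illustrative, not from the slides):

def map_learner(hypotheses, prior, likelihood, data):
    """Return h_MAP = argmax_h P(D|h) P(h) over a finite hypothesis space.

    prior(h)            -> P(h)
    likelihood(data, h) -> P(D|h)
    P(D) is the same for every h, so it can be dropped from the argmax.
    """
    return max(hypotheses, key=lambda h: likelihood(data, h) * prior(h))

# Toy usage: hypotheses are candidate coin biases, data is a list of flips
flips = [1, 1, 0, 1]
print(map_learner(
    hypotheses=[0.3, 0.5, 0.7],
    prior=lambda h: 1 / 3,     # uniform prior
    likelihood=lambda d, h: h ** sum(d) * (1 - h) ** (len(d) - sum(d)),
    data=flips,
))  # prints 0.7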


Minimum Description Length Principle

• Occam's razor: "choose the shortest explanation for the observed data: short trees, minimal rules, …"

• MML: an abstraction of syntactic simplicity:
– Given a data set D

– Given a set of hypotheses {h1, h2, …, hn} that describe D

– MML Principle: always choose the hi for which the quantity
  Mlength(hi) + Mlength(D | hi) is minimised

– MML: the best hypothesis is the one which allows the most compact encoding of the given data

Example:

• Represent the set {2, 4, 6, …, 100}
• Solution 1: list all 50 integers
• Solution 2: "all even numbers <= 100 and >= 2"
• MML: Solution 2 is worth something
• As the size of the set increases, the cost of Solution 1 increases much more rapidly than the cost of Solution 2
• If the set is random, then the cost of Solution 1 is the minimum


Bayesian Interpretation of MML

From information theory: the optimal (shortest expected coding length) code for an object (event) with probability p uses −log2 p bits. So:

h_MAP = argmax_{h ∈ H} P(D|h) P(h)
      = argmax_{h ∈ H} (log2 P(D|h) + log2 P(h))
      = argmin_{h ∈ H} (−log2 P(D|h) − log2 P(h))
      = argmin_{h ∈ H} (Mlength(D|h) + Mlength(h))

Think about the decision tree example: prefer the hypothesis that minimizes mlength(h) + mlength(misclassifications)
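A tiny numeric illustration of this equivalence in Python (the two hypotheses and their probabilities are made up):

import math

candidates = {
    "h1": {"P(h)": 0.1, "P(D|h)": 0.200},
    "h2": {"P(h)": 0.4, "P(D|h)": 0.045},
}

for name, p in candidates.items():
    joint = p["P(D|h)"] * p["P(h)"]                        # quantity maximised by MAP
    bits = -math.log2(p["P(D|h)"]) - math.log2(p["P(h)"])  # Mlength(D|h) + Mlength(h)
    print(name, round(joint, 4), round(bits, 2))
# h1: joint 0.02, 5.64 bits; h2: joint 0.018, 5.8 bits.
# The hypothesis with the larger P(D|h)P(h) is exactly the one with the
# smaller total description length in bits.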


Bayes Optimal Classifier

• Question: Given a new instance x, what is its most probable classification?

• h_MAP(x) is not necessarily the most probable classification!

Example: let P(h1|D) = 0.4, P(h2|D) = 0.3, P(h3|D) = 0.3. Given new data x, we have h1(x) = +, h2(x) = −, h3(x) = −

What is the most probable classification of x ?

Bayes optimal classification:

v_OB = argmax_{vj ∈ V} Σ_{hi ∈ H} P(vj | hi) P(hi | D)

Example:

P(h1|D) = 0.4, P(−|h1) = 0, P(+|h1) = 1
P(h2|D) = 0.3, P(−|h2) = 1, P(+|h2) = 0
P(h3|D) = 0.3, P(−|h3) = 1, P(+|h3) = 0

Σ_{hi ∈ H} P(+|hi) P(hi|D) = 0.4
Σ_{hi ∈ H} P(−|hi) P(hi|D) = 0.6

so the Bayes optimal classification of x is −.
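A short sketch of this weighted vote in Python (the dictionaries are just one way to organise the numbers above):

# Posterior over hypotheses and each hypothesis's prediction distribution
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
predictions = {                       # P(v | h) for v in {+, -}
    "h1": {"+": 1.0, "-": 0.0},
    "h2": {"+": 0.0, "-": 1.0},
    "h3": {"+": 0.0, "-": 1.0},
}

# Bayes optimal classification: weight each hypothesis's vote by its posterior
scores = {
    v: sum(predictions[h][v] * posteriors[h] for h in posteriors)
    for v in ("+", "-")
}
print(scores)                         # {'+': 0.4, '-': 0.6}
print(max(scores, key=scores.get))    # '-', even though h_MAP = h1 predicts '+'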


Naïve Bayes Learner

Assume a target function f: X → V, where each instance x is described by attributes <a1, a2, …, an>. The most probable value of f(x) is:

v_MAP = argmax_{vj ∈ V} P(vj | a1, a2, …, an)
      = argmax_{vj ∈ V} P(a1, a2, …, an | vj) P(vj) / P(a1, a2, …, an)
      = argmax_{vj ∈ V} P(a1, a2, …, an | vj) P(vj)

Naïve Bayes assumption:

P(a1, a2, …, an | vj) = Π_i P(ai | vj)   (attributes are conditionally independent)


Example: Naïve Bayes

Predict playing tennis on a day with the conditions <sunny, cool, high, strong> (i.e. P(v | o=sunny, t=cool, h=high, w=strong)) using the following training data:

Day  Outlook   Temperature  Humidity  Wind    Play Tennis
1    Sunny     Hot          High      Weak    No
2    Sunny     Hot          High      Strong  No
3    Overcast  Hot          High      Weak    Yes
4    Rain      Mild         High      Weak    Yes
5    Rain      Cool         Normal    Weak    Yes
6    Rain      Cool         Normal    Strong  No
7    Overcast  Cool         Normal    Strong  Yes
8    Sunny     Mild         High      Weak    No
9    Sunny     Cool         Normal    Weak    Yes
10   Rain      Mild         Normal    Weak    Yes
11   Sunny     Mild         Normal    Strong  Yes
12   Overcast  Mild         High      Strong  Yes
13   Overcast  Hot          Normal    Weak    Yes
14   Rain      Mild         High      Strong  No

we have:

P(yes) P(sunny|yes) P(cool|yes) P(high|yes) P(strong|yes) = 0.005
P(no) P(sunny|no) P(cool|no) P(high|no) P(strong|no) = 0.021

where, e.g., P(strong|yes) = (# days of playing tennis with strong wind) / (# days of playing tennis)

so Naïve Bayes predicts Play Tennis = No.
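The same computation in Python, directly from the table above (a minimal sketch using plain relative-frequency estimates):

from collections import Counter

# (outlook, temperature, humidity, wind) -> play, copied from the training data
data = [
    ("sunny","hot","high","weak","no"),      ("sunny","hot","high","strong","no"),
    ("overcast","hot","high","weak","yes"),  ("rain","mild","high","weak","yes"),
    ("rain","cool","normal","weak","yes"),   ("rain","cool","normal","strong","no"),
    ("overcast","cool","normal","strong","yes"), ("sunny","mild","high","weak","no"),
    ("sunny","cool","normal","weak","yes"),  ("rain","mild","normal","weak","yes"),
    ("sunny","mild","normal","strong","yes"),("overcast","mild","high","strong","yes"),
    ("overcast","hot","normal","weak","yes"),("rain","mild","high","strong","no"),
]

x = ("sunny", "cool", "high", "strong")      # the new day to classify

labels = Counter(row[-1] for row in data)    # {'yes': 9, 'no': 5}
for v, count in labels.items():
    score = count / len(data)                # P(v)
    for i, a in enumerate(x):                # multiply by each P(ai | v)
        n_match = sum(1 for row in data if row[-1] == v and row[i] == a)
        score *= n_match / count
    print(v, round(score, 3))                # yes 0.005, no 0.021 -> predict "no"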


Naïve Bayes Algorithm

Naïve_Bayes_Learn(examples)
  for each target value vj
    estimate P(vj)
    for each attribute value ai of each attribute a
      estimate P(ai | vj)

Classify_New_Instance(x)
  v_NB = argmax_{vj ∈ V} P(vj) Π_{ai ∈ x} P(ai | vj)

Typical estimation of P(ai | vj)

P(ai | vj) = (nc + m·p) / (n + m)

where n: number of examples with v = vj; nc: number of examples with v = vj and a = ai; p: prior estimate for P(ai | vj); m: weight given to the prior
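A one-function sketch of the m-estimate above (the helper name and the usage numbers are illustrative):

def m_estimate(nc, n, p, m):
    """m-estimate of P(ai | vj): (nc + m*p) / (n + m).

    nc: number of examples with v = vj and a = ai
    n:  number of examples with v = vj
    p:  prior estimate for P(ai | vj), e.g. 1/k for k attribute values
    m:  weight given to the prior (behaves like m virtual examples)
    """
    return (nc + m * p) / (n + m)

# Example: P(strong | yes) from the tennis data with a uniform prior over
# the 2 wind values and m = 3
print(m_estimate(nc=3, n=9, p=0.5, m=3))   # (3 + 1.5) / 12 = 0.375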


Bayesian Belief Networks

• The Naïve Bayes assumption of conditional independence is too restrictive

• But learning is intractable without some such independence assumptions

• A Bayesian belief network (Bayesian net) describes conditional independence among subsets of variables (attributes), combining prior knowledge about dependencies among variables with observed training data.

• Bayesian Net
– Node = variable

– Arc = dependency

– DAG, with the direction of an arc representing causality


Bayesian Networks: Multi-variables with Dependency

• A Bayesian belief network (Bayesian net) describes conditional independence among subsets of variables (attributes), combining prior knowledge about dependencies among variables with observed training data.

• Bayesian Net
– Node = variable; each variable has a finite set of mutually exclusive states
– Arc = dependency
– DAG, with the direction of an arc representing causality
– To each variable A with parents B1, …, Bn there is attached a conditional probability table P(A | B1, …, Bn)


Bayesian Belief Networks

• Age, Occupation and Income determine whether a customer will buy this product.
• Given that the customer buys the product, whether there is interest in insurance is independent of Age, Occupation and Income.
• P(Age, Occ, Inc, Buy, Ins) = P(Age) P(Occ) P(Inc) P(Buy | Age, Occ, Inc) P(Ins | Buy)

Current state of the art: given structure and probabilities, existing algorithms can handle inference with categorical values and limited representation of numerical values

[Figure: Age, Occ and Income are parents of Buy X; Buy X is the parent of Interested in Insurance]


General Product Rule

P(x1, …, xn | M) = Π_{i=1}^{n} P(xi | Pa_i, M)

Pa_i = parents(xi)
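A sketch of this product rule applied to the customer network from the earlier slide; all CPT numbers below are made up purely for illustration:

# Hypothetical probabilities for one joint configuration:
# Age = old, Occ = office, Inc = high, Buy = yes, Ins = yes
p_age = 0.3                  # P(Age = old)
p_occ = 0.5                  # P(Occ = office)
p_inc = 0.4                  # P(Inc = high)
p_buy_given_parents = 0.7    # P(Buy = yes | Age = old, Occ = office, Inc = high)
p_ins_given_buy = 0.6        # P(Ins = yes | Buy = yes)

# Joint probability = product of each node given its parents
joint = p_age * p_occ * p_inc * p_buy_given_parents * p_ins_given_buy
print(joint)   # 0.3 * 0.5 * 0.4 * 0.7 * 0.6 = 0.0252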


Nodes as Functions

• Input: the parents' state values
• Output: a distribution over its own values

[Figure: node X with parents A and B; X takes values l, m, h]

P(X | A, B):
     ab    ~ab   a~b   ~a~b
l    0.1   0.7   0.4   0.2
m    0.3   0.2   0.4   0.5
h    0.6   0.1   0.2   0.3

e.g. P(X | A=a, B=b) = (l: 0.1, m: 0.3, h: 0.6)

A node in a BN is a conditional distribution function
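A minimal sketch of a node as a function from parent states to a distribution, using the CPT values as reconstructed in the table above (treat the exact numbers as approximate; they come from the garbled figure):

# CPT for X given its parents A and B
cpt_x = {
    (True,  True):  {"l": 0.1, "m": 0.3, "h": 0.6},    # A = a,  B = b
    (False, True):  {"l": 0.7, "m": 0.2, "h": 0.1},    # A = ~a, B = b
    (True,  False): {"l": 0.4, "m": 0.4, "h": 0.2},    # A = a,  B = ~b
    (False, False): {"l": 0.2, "m": 0.5, "h": 0.3},    # A = ~a, B = ~b
}

def node_x(a, b):
    """Input: the parents' state values; output: a distribution over X's values."""
    return cpt_x[(a, b)]

print(node_x(True, True))   # {'l': 0.1, 'm': 0.3, 'h': 0.6} = P(X | A=a, B=b)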


Special Case: Naïve Bayes

[Figure: class node h with children e1, e2, …, en]

P(e1, e2, …, en, h) = P(h) P(e1 | h) ⋯ P(en | h)


Inference in Bayesian Networks

[Figure: network over Age, Income, House Owner, EU Voting Pattern, Newspaper Preference, Living Location]

How likely are elderly rich people to buy the Sun?

P(paper = Sun | Age > 60, Income > 60k)


Inference in Bayesian Networks

[Figure: network over Age, Income, House Owner, EU Voting Pattern, Newspaper Preference, Living Location]

How likely are elderly rich people who voted Labour to buy the Daily Mail?

P(paper = DM | Age > 60, Income > 60k, v = labour)
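A minimal sketch of answering such a query by enumerating the joint distribution; the toy two-parent network and every number below are hypothetical, only to show the mechanics:

# Toy network: Age -> Paper <- Income (binary variables, made-up numbers)
p_old = 0.2                      # P(Age = old)
p_rich = 0.3                     # P(Income = high)
p_sun = {                        # P(Paper = Sun | Age, Income)
    (True, True): 0.1, (True, False): 0.4,
    (False, True): 0.2, (False, False): 0.5,
}

def joint(age, income, sun):
    """Product rule: P(Age) P(Income) P(Paper | Age, Income)."""
    pa = p_old if age else 1 - p_old
    pi = p_rich if income else 1 - p_rich
    ps = p_sun[(age, income)] if sun else 1 - p_sun[(age, income)]
    return pa * pi * ps

# P(Paper = Sun | Age = old): sum the joint over the unobserved variable (Income)
num = sum(joint(True, inc, True) for inc in (True, False))
den = sum(joint(True, inc, s) for inc in (True, False) for s in (True, False))
print(num / den)   # 0.3*0.1 + 0.7*0.4 = 0.31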


Bayesian Learning

[Figure: a Bayesian network over Burglary, Earthquake, Alarm, Call, Newscast]

Data cases (B, E, A, C, N):
~b   e   a   c   n
 b  ~e  ~a  ~c   n
 …

Input: fully or partially observable data cases
Output: parameters AND also structure

Learning Methods:
– EM (Expectation Maximisation), see the sketch below
  • using the current approximation of the parameters to estimate the filled-in data
  • using the filled-in data to update the parameters (ML)
– Gradient Ascent Training
– Gibbs Sampling (MCMC)
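A compact, generic illustration of the EM loop described above (not specific to the network in the figure; the coin-flip data and initial parameters are made up):

# Two coins with unknown biases; each sequence of 10 flips comes from one
# (hidden) coin.  EM alternates between filling in the hidden coin and
# re-estimating the biases by maximum likelihood.
heads = [9, 8, 2, 1, 7]          # heads observed in each sequence of 10 flips
n = 10
theta = [0.6, 0.4]               # initial guesses for the two coin biases

for _ in range(20):
    # E-step: use the current parameters to estimate the filled-in data
    counts = [[0.0, 0.0], [0.0, 0.0]]          # expected [heads, tails] per coin
    for h in heads:
        like = [t ** h * (1 - t) ** (n - h) for t in theta]
        resp = [l / sum(like) for l in like]   # P(coin | sequence)
        for c in (0, 1):
            counts[c][0] += resp[c] * h
            counts[c][1] += resp[c] * (n - h)
    # M-step: maximum-likelihood update of the parameters from the filled-in data
    theta = [counts[c][0] / (counts[c][0] + counts[c][1]) for c in (0, 1)]

print([round(t, 2) for t in theta])   # separates into roughly a 0.8 coin and a 0.15 coin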