
Bayesian Networks

Russell and Norvig: Chapter 14

CMCS424 Fall 2003

based on material from Jean-Claude Latombe, Daphne Koller and Nir Friedman

Probabilistic Agent

[Diagram: an agent connected to its environment through sensors and actuators, with a "?" for its decision process. Thought bubble: "I believe that the sun will still exist tomorrow with probability 0.999999 and that it will be sunny with probability 0.6."]

Problem

At a certain time t, the KB of an agent is some collection of beliefs. At time t the agent's sensors make an observation that changes the strength of one of its beliefs. How should the agent update the strength of its other beliefs?

Purpose of Bayesian Networks

• Facilitate the description of a collection of beliefs by making explicit the causality relations and conditional independence among beliefs
• Provide a more efficient way (than joint distribution tables) to update belief strengths when new evidence is observed

Other Names

• Belief networks
• Probabilistic networks
• Causal networks

Bayesian Networks

A simple, graphical notation for conditional independence assertions resulting in a compact representation for the full joint distribution

Syntax:

• a set of nodes, one per variable

• a directed, acyclic graph (link = "direct influence")

• a conditional distribution for each node given its parents: P(Xi | Parents(Xi))
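As a concrete illustration of this syntax, here is a minimal Python sketch (not from the slides) encoding a network as nodes, parent lists, and CPTs; it uses the burglary example and CPT numbers that appear later in the deck, and the dict layout and the prob helper are illustrative choices.

```python
# Minimal BN representation: node -> (parents, CPT), where the CPT maps a
# tuple of parent values to P(node = True). All variables are Boolean.
network = {
    "Burglary":   ([], {(): 0.001}),
    "Earthquake": ([], {(): 0.002}),
    "Alarm":      (["Burglary", "Earthquake"],
                   {(True, True): 0.95, (True, False): 0.94,
                    (False, True): 0.29, (False, False): 0.001}),
    "JohnCalls":  (["Alarm"], {(True,): 0.90, (False,): 0.05}),
    "MaryCalls":  (["Alarm"], {(True,): 0.70, (False,): 0.01}),
}

def prob(node, value, assignment):
    """P(node = value | parent values taken from assignment)."""
    parents, cpt = network[node]
    p_true = cpt[tuple(assignment[p] for p in parents)]
    return p_true if value else 1.0 - p_true
```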

Example

[Diagram: Cavity with children Toothache and Catch; Weather as a separate, disconnected node.]

Topology of network encodes conditional independence assertions:

Weather is independent of the other variables. Toothache and Catch are independent given Cavity.

Example

I'm at work, neighbor John calls to say my alarm is ringing, but neighbor Mary doesn't call. Sometimes it's set off by a minor earthquake. Is there a burglar?

Network topology reflects "causal" knowledge:
- A burglar can set the alarm off
- An earthquake can set the alarm off
- The alarm can cause Mary to call
- The alarm can cause John to call

Variables: Burglar, Earthquake, Alarm, JohnCalls, MaryCalls

A Simple Belief Network

[Diagram: directed acyclic graph (DAG) with Burglary and Earthquake as parents of Alarm, and Alarm as parent of JohnCalls and MaryCalls; arrows run from causes to effects.]

Intuitive meaning of an arrow from x to y: "x has direct influence on y"

Nodes are random variables

Assigning Probabilities to Roots

[Diagram: same network; the root nodes carry priors P(B) = 0.001 and P(E) = 0.002.]

Conditional Probability Tables

[Diagram: same network, with P(B) = 0.001 and P(E) = 0.002 at the roots.]

B E | P(A|B,E)
T T | 0.95
T F | 0.94
F T | 0.29
F F | 0.001

Size of the CPT for a node with k parents: ?

Conditional Probability Tables

[Diagram: same network and tables as above, now with CPTs for the leaves as well.]

A | P(J|A)
T | 0.90
F | 0.05

A | P(M|A)
T | 0.70
F | 0.01

What the BN Means

[Diagram: same network with all CPTs as above.]

P(x1, x2, …, xn) = ∏i=1,…,n P(xi | Parents(Xi))

Calculation of Joint Probability

[Diagram: same network and CPTs as above.]

P(J ∧ M ∧ A ∧ ¬B ∧ ¬E)
= P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
= 0.9 × 0.7 × 0.001 × 0.999 × 0.998
≈ 0.00062
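A quick numeric check of this product, as a sketch; the five CPT entries are read off the tables above.

```python
# P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
p_j_given_a = 0.90
p_m_given_a = 0.70
p_a_given_not_b_not_e = 0.001      # CPT row B = F, E = F
p_not_b = 1 - 0.001                # 0.999
p_not_e = 1 - 0.002                # 0.998

joint = p_j_given_a * p_m_given_a * p_a_given_not_b_not_e * p_not_b * p_not_e
print(joint)    # 0.000628... ≈ 0.00062, as on the slide
```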

What The BN Encodes

Each of the beliefs JohnCalls and MaryCalls is independent of Burglary and Earthquake given Alarm or ¬Alarm

The beliefs JohnCalls and MaryCalls are independent given Alarm or ¬Alarm

[Diagram: same network.]

For example, John does not observe any burglaries directly

What The BN Encodes

For instance, the reasons why John and Mary may not call if there is an alarm are unrelated.

Note that these reasons could be other beliefs in the network. The probabilities summarize these non-explicit beliefs.

Structure of BN

The relation:

P(x1, x2, …, xn) = ∏i=1,…,n P(xi | Parents(Xi))

means that each belief is independent of its predecessors in the BN given its parents. Put another way, the parents of a belief Xi are all the beliefs that “directly influence” Xi.

Usually (but not always) the parents of Xi are its causes and Xi is the effect of these causes

E.g., JohnCalls is influenced by Burglary, but not directly. JohnCalls is directly influenced by Alarm

Construction of BN

Choose the relevant sentences (random variables) that describe the domain.

Select an ordering X1, …, Xn so that all the beliefs that directly influence Xi come before Xi.

For j = 1, …, n do:
• Add a node labeled Xj to the network
• Connect Xj to its parents' nodes
• Define the CPT of Xj

• The ordering guarantees that the BN will have no cycles
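A hedged Python sketch of this construction loop; direct_influencers and define_cpt stand in for the modeler's domain knowledge and are hypothetical helpers, not part of the slides.

```python
def build_bn(ordering, direct_influencers, define_cpt):
    """Add nodes in the given order; parents always precede children."""
    nodes, parents, cpts = [], {}, {}
    for x in ordering:                          # for j = 1, ..., n
        # parents of Xj: already-added beliefs that directly influence Xj
        parents[x] = [p for p in nodes if p in direct_influencers(x)]
        nodes.append(x)                         # add a node labeled Xj
        cpts[x] = define_cpt(x, parents[x])     # define the CPT of Xj
    # acyclic by construction: every edge goes from an earlier node to Xj
    return parents, cpts
```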

Markov Assumption

We now make this independence assumption more precise for directed acyclic graphs (DAGs)

Each random variable X is independent of its non-descendants given its parents Pa(X). Formally: I(X; NonDesc(X) | Pa(X))

[Diagram: a node X with parents Y1 and Y2; ancestors lie above the parents, descendants below X, and non-descendants elsewhere in the graph.]

Inference In BN

Set E of evidence variables that are observed, e.g., {JohnCalls, MaryCalls}

Query variable X, e.g., Burglary, for which we would like to know the posterior probability distribution P(X|E), i.e., the distribution conditioned on the observations made

[Diagram: evidence J = T, M = T; query P(B|…) = ?]

Inference Patterns

[Diagram: four copies of the burglary network, one per pattern:]

• Diagnostic (from effects to causes)
• Causal (from causes to effects)
• Intercausal (between causes of a common effect)
• Mixed (combinations of the above)

• Basic use of a BN: Given new observations, compute the new strengths of some (or all) beliefs

• Other use: Given the strength of a belief, which observation should we gather to make the greatest change in this belief's strength?

Singly Connected BN

A BN is singly connected if there is at most one undirected path between any two nodes

[Diagram: three example graphs. The burglary network (Burglary, Earthquake → Alarm → JohnCalls, MaryCalls) is singly connected; a second example graph is not singly connected; a third is singly connected.]

Types Of Nodes On A Path

[Diagram: car network. Battery → Radio and Battery → SparkPlugs (Battery is a diverging node); SparkPlugs → Starts (SparkPlugs is a linear node); Gas → Starts (Starts is a converging node); Starts → Moves.]

Independence Relations In BN

[Diagram: the same car network, with linear, converging, and diverging nodes marked.]

Given a set E of evidence nodes, two beliefs connected by an undirected path are independent if one of the following three conditions holds:
1. A node on the path is linear and in E
2. A node on the path is diverging and in E
3. A node on the path is converging and neither this node, nor any descendant, is in E

Examples (car network):

• Gas and Radio are independent given evidence on SparkPlugs (a linear node in E).
• Gas and Radio are independent given evidence on Battery (a diverging node in E).
• Gas and Radio are independent given no evidence, but they are dependent given evidence on Starts or Moves (Starts is a converging node and Moves is one of its descendants).
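These three conditions can be checked mechanically. The sketch below uses the standard moralized-ancestral-graph test for d-separation, which is equivalent to the path rules above but easier to code than path enumeration; the car-network edges are as reconstructed in the diagram.

```python
from itertools import combinations

parents = {                       # car network from the diagram above
    "Battery": [], "Gas": [],
    "Radio": ["Battery"], "SparkPlugs": ["Battery"],
    "Starts": ["SparkPlugs", "Gas"], "Moves": ["Starts"],
}

def ancestral(nodes):
    """All nodes in `nodes` plus their ancestors."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def independent(x, y, evidence):
    """d-separation via the moralized ancestral graph."""
    keep = ancestral({x, y} | set(evidence))
    adj = {n: set() for n in keep}
    for n in keep:                # moralize: link parents to child and
        ps = parents[n]           # "marry" co-parents, dropping directions
        for p in ps:
            adj[n].add(p); adj[p].add(n)
        for p, q in combinations(ps, 2):
            adj[p].add(q); adj[q].add(p)
    # x and y are independent iff removing E disconnects them
    stack, seen = [x], {x}
    while stack:
        n = stack.pop()
        if n == y:
            return False
        for m in adj[n]:
            if m not in seen and m not in evidence:
                seen.add(m)
                stack.append(m)
    return True

print(independent("Gas", "Radio", set()))            # True
print(independent("Gas", "Radio", {"SparkPlugs"}))   # True  (linear, in E)
print(independent("Gas", "Radio", {"Battery"}))      # True  (diverging, in E)
print(independent("Gas", "Radio", {"Starts"}))       # False (converging, in E)
```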

BN Inference

Simplest case: A → B

P(B) = P(a) P(B|a) + P(¬a) P(B|¬a) = Σa P(a) P(B|a)

Chain: A → B → C

P(C) = ???

BN Inference

Chain:

X1 → X2 → … → Xn

What is the time complexity to compute P(Xn)?

What is the time complexity if we computed the full joint?
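The intended answers: pushing the sums inward computes P(Xn) with O(n) constant-size operations, while the full joint has 2^n entries. A sketch with illustrative (assumed) CPT numbers:

```python
n = 50
dist = [0.3, 0.7]        # P(X1 = 0), P(X1 = 1)
cpt = [[0.9, 0.1],       # P(X_{i+1} | X_i = 0)
       [0.2, 0.8]]       # P(X_{i+1} | X_i = 1)

# Forward pass: n - 1 matrix-vector products of constant size -> O(n).
for _ in range(n - 1):
    dist = [sum(dist[i] * cpt[i][j] for i in range(2)) for j in range(2)]
print(dist)              # P(Xn); the full joint would need 2**n entries
```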

Inference Ex. 2

[Diagram: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass.]

P(w) = Σc,s,r P(w|r,s) P(r|c) P(s|c) P(c)
     = Σs,r P(w|r,s) Σc P(r|c) P(s|c) P(c)
     = Σs,r P(w|r,s) f1(r,s)

The algorithm computes not individual probabilities, but entire tables.
Two ideas are crucial to avoiding exponential blowup:
• because of the structure of the BN, some subexpressions in the joint depend only on a small number of variables
• by computing them once and caching the result, we can avoid generating them exponentially many times
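A sketch of exactly this caching on the sprinkler network. The slide gives no CPT numbers, so the values below are the conventional textbook ones, assumed purely for illustration; f1(r, s) is computed once and reused in every term of the outer sum.

```python
P_c = 0.5                                  # P(Cloudy = True)
P_s = {True: 0.10, False: 0.50}            # P(Sprinkler = True | Cloudy)
P_r = {True: 0.80, False: 0.20}            # P(Rain = True | Cloudy)
P_w = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.00}   # P(w = True | s, r)

def b(p, v):                               # P(var = v) from P(var = True)
    return p if v else 1.0 - p

# Cache f1(r, s) = sum_c P(r|c) P(s|c) P(c): computed once, reused below.
f1 = {(r, s): sum(b(P_r[c], r) * b(P_s[c], s) * b(P_c, c)
                  for c in (True, False))
      for r in (True, False) for s in (True, False)}

p_w = sum(P_w[(s, r)] * f1[(r, s)]
          for r in (True, False) for s in (True, False))
print(p_w)                                 # P(WetGrass = True) = 0.6471
```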

Variable Elimination

General idea:

• Write the query in the form
  P(Xn, e) = Σxk … Σx3 Σx2 ∏i P(xi | pai)
• Iteratively:
  - Move all irrelevant terms outside of the innermost sum
  - Perform the innermost sum, getting a new term
  - Insert the new term into the product

A More Complex Example

The "Asia" network:

[Diagram: Visit to Asia (V) → Tuberculosis (T); Smoking (S) → Lung Cancer (L) and Bronchitis (B); T and L → Abnormality in Chest (A); A → X-Ray (X); A and B → Dyspnea (D).]

P(v,s,t,l,b,a,x,d) = P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b)

We want to compute P(d). Need to eliminate: v, s, x, t, l, a, b

Initial factors:
P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b)

Eliminate: v. Compute:
fv(t) = Σv P(v) P(t|v)

⇒ fv(t) P(s) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b)

Note: fv(t) = P(t). In general, the result of elimination is not necessarily a probability term.

Need to eliminate: s, x, t, l, a, b

Current factors:
fv(t) P(s) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b)

Eliminate: s. Compute:
fs(b,l) = Σs P(s) P(b|s) P(l|s)

⇒ fv(t) fs(b,l) P(a|t,l) P(x|a) P(d|a,b)

Summing over s results in a factor with two arguments, fs(b,l). In general, the result of elimination may be a function of several variables.

Need to eliminate: x, t, l, a, b

Current factors:
fv(t) fs(b,l) P(a|t,l) P(x|a) P(d|a,b)

Eliminate: x. Compute:
fx(a) = Σx P(x|a)

⇒ fv(t) fs(b,l) fx(a) P(a|t,l) P(d|a,b)

Note: fx(a) = 1 for all values of a!

Need to eliminate: t, l, a, b

Current factors:
fv(t) fs(b,l) fx(a) P(a|t,l) P(d|a,b)

Eliminate: t. Compute:
ft(a,l) = Σt fv(t) P(a|t,l)

⇒ fs(b,l) fx(a) ft(a,l) P(d|a,b)

Need to eliminate: l, a, b

Current factors:
fs(b,l) fx(a) ft(a,l) P(d|a,b)

Eliminate: l. Compute:
fl(a,b) = Σl fs(b,l) ft(a,l)

⇒ fx(a) fl(a,b) P(d|a,b)

Need to eliminate: a, b

Current factors:
fx(a) fl(a,b) P(d|a,b)

Eliminate a, then b. Compute:
fa(b,d) = Σa fl(a,b) fx(a) P(d|a,b)
fb(d) = Σb fa(b,d)

⇒ fa(b,d) ⇒ fb(d), which is P(d)

Variable Elimination

We now understand variable elimination as a sequence of rewriting operations

Actual computation is done in elimination step

Computation depends on order of elimination

Dealing with evidence

How do we deal with evidence?

Suppose we get evidence V = t, S = f, D = t. We want to compute P(L, V = t, S = f, D = t).

[Diagram: Asia network.]

Dealing with Evidence

We start by writing the factors:
P(v) P(s) P(t|v) P(l|s) P(b|s) P(a|t,l) P(x|a) P(d|a,b)

Since we know that V = t, we don't need to eliminate V. Instead, we can replace the factors P(V) and P(T|V) with
fP(V) = P(V = t)   and   fP(T|V)(T) = P(T | V = t)

These "select" the appropriate parts of the original factors given the evidence. Note that fP(V) is a constant, and thus does not appear in elimination of other variables.
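A sketch of this "selection" as a factor operation: fixing an evidence variable drops it from the factor's scope and keeps only the consistent rows. The table numbers for P(T|V) below are assumed for illustration only.

```python
def restrict(scope, table, var, value):
    """Fix `var` to `value`, removing it from the factor's scope."""
    i = scope.index(var)
    new_scope = scope[:i] + scope[i + 1:]
    new_table = {key[:i] + key[i + 1:]: p
                 for key, p in table.items() if key[i] == value}
    return new_scope, new_table

# e.g. the factor P(T|V), keyed by (v, t), with evidence V = t (True):
scope = ["V", "T"]
table = {(True, True): 0.05, (True, False): 0.95,
         (False, True): 0.01, (False, False): 0.99}
print(restrict(scope, table, "V", True))   # a factor over T alone
```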

Dealing with Evidence

Given evidence V = t, S = f, D = t, compute P(L, V = t, S = f, D = t).

Initial factors, after setting evidence:
fP(v) fP(s) fP(t|v)(t) fP(l|s)(l) fP(b|s)(b) P(a|t,l) P(x|a) fP(d|a,b)(a,b)

Eliminating x, we get:
fP(v) fP(s) fP(t|v)(t) fP(l|s)(l) fP(b|s)(b) P(a|t,l) fx(a) fP(d|a,b)(a,b)

Eliminating t, we get:
fP(v) fP(s) fP(l|s)(l) fP(b|s)(b) ft(a,l) fx(a) fP(d|a,b)(a,b)

Eliminating a, we get:
fP(v) fP(s) fP(l|s)(l) fP(b|s)(b) fa(b,l)

Eliminating b, we get:
fP(v) fP(s) fP(l|s)(l) fb(l)

Variable Elimination Algorithm

Let X1, …, Xm be an ordering on the non-query variables:

ΣX1 … ΣXm ∏j P(Xj | Parents(Xj))

For i = m, …, 1:
• Leave in the summation for Xi only factors mentioning Xi
• Multiply those factors, getting a factor f'(Xi, y1, …, yk) that contains a number for each value of the variables mentioned, including Xi
• Sum out Xi, getting a factor f(y1, …, yk) = Σxi f'(xi, y1, …, yk) that contains a number for each value of the variables mentioned, not including Xi
• Replace the multiplied factors in the summation by the new factor f
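A compact, illustrative implementation of this loop (a sketch, not the slides' own code): factors are (scope, table) pairs, with tables keyed by tuples of Boolean values in scope order.

```python
from itertools import product

def multiply(f, g):
    """Pointwise product of two factors over the union of their scopes."""
    (fs, ft), (gs, gt) = f, g
    scope = fs + [v for v in gs if v not in fs]
    table = {}
    for vals in product((True, False), repeat=len(scope)):
        a = dict(zip(scope, vals))
        table[vals] = (ft[tuple(a[v] for v in fs)]
                       * gt[tuple(a[v] for v in gs)])
    return scope, table

def sum_out(var, factor):
    """Sum a variable out of a factor."""
    scope, table = factor
    i = scope.index(var)
    new_scope = scope[:i] + scope[i + 1:]
    new_table = {}
    for key, p in table.items():
        k = key[:i] + key[i + 1:]
        new_table[k] = new_table.get(k, 0.0) + p
    return new_scope, new_table

def eliminate(factors, ordering):
    for x in ordering:                            # for i = m, ..., 1
        touching = [f for f in factors if x in f[0]]
        if not touching:
            continue
        factors = [f for f in factors if x not in f[0]]
        prod = touching[0]
        for f in touching[1:]:                    # multiply factors with x
            prod = multiply(prod, f)
        factors.append(sum_out(x, prod))          # sum out x, reinsert
    return factors
```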

Complexity of variable elimination

Suppose in one elimination step we compute
fx(y1, …, yk) = Σx f'x(x, y1, …, yk)
where
f'x(x, y1, …, yk) = ∏i=1,…,m fi(x, yi,1, …, yi,li)

This requires m × |Val(X)| × ∏i |Val(Yi)| multiplications: for each value of x, y1, …, yk, we do m multiplications.

It requires |Val(X)| × ∏i |Val(Yi)| additions: for each value of y1, …, yk, we do |Val(X)| additions.

Complexity is exponential in the number of variables in the intermediate factor!

Understanding Variable Elimination

We want to select “good” elimination orderings that reduce complexity.

This can be done by examining a graph-theoretic property of the “induced” graph; we will not cover this in class.

This reduces the problem of finding a good ordering to a well-understood graph-theoretic operation; unfortunately, computing it is NP-hard!

Approaches to inference

Exact inference:
• Inference in simple chains
• Variable elimination
• Clustering / join tree algorithms

Approximate inference:
• Stochastic simulation / sampling methods
• Markov chain Monte Carlo methods

Stochastic simulation - direct

Suppose you are given values for some subset of the variables, G, and want to infer values for the unknown variables, U.

Randomly generate a very large number of instantiations from the BN:
• Generate instantiations for all variables: start at the root variables and work your way "forward"

Rejection sampling: keep those instantiations that are consistent with the values for G.

Use the frequency of values for U to get estimated probabilities.

Accuracy of the results depends on the size of the sample (asymptotically approaches exact results).

Direct Stochastic Simulation

[Diagram: the sprinkler network: Cloudy → Sprinkler, Cloudy → Rain, Sprinkler → WetGrass, Rain → WetGrass.]

P(WetGrass | Cloudy)?
P(WetGrass | Cloudy) = P(WetGrass ∧ Cloudy) / P(Cloudy)

1. Repeat N times:
   1.1. Guess Cloudy at random
   1.2. For each guess of Cloudy, guess Sprinkler and Rain, then WetGrass
2. Compute the ratio of the # of runs where WetGrass and Cloudy are True over the # of runs where Cloudy is True
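A sketch of steps 1-2 as rejection sampling. The sprinkler CPT numbers are not on the slide; the conventional textbook values are assumed here for illustration.

```python
import random

def forward_sample():
    """Sample all variables from the roots forward."""
    c = random.random() < 0.5
    s = random.random() < (0.10 if c else 0.50)
    r = random.random() < (0.80 if c else 0.20)
    w = random.random() < {(True, True): 0.99, (True, False): 0.90,
                           (False, True): 0.90, (False, False): 0.0}[(s, r)]
    return c, w

N = 100_000
runs = [forward_sample() for _ in range(N)]
kept = [w for c, w in runs if c]            # reject runs with Cloudy = False
print(sum(kept) / len(kept))                # ≈ P(WetGrass | Cloudy)
```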

Exercise: Direct sampling

[Diagram: smart → prepared, study → prepared, smart → pass, prepared → pass, fair → pass.]

p(smart) = .8, p(study) = .6, p(fair) = .9

p(prep | …):   smart   ¬smart
  study         .9      .7
  ¬study        .5      .1

p(pass | …):       smart         ¬smart
                 prep  ¬prep   prep  ¬prep
  fair            .9    .7      .7    .2
  ¬fair           .1    .1      .1    .1

Topological order = …?
Random number generator: .35, .76, .51, .44, .08, .28, .03, .92, .02, .42

Likelihood weighting

Idea: Don't generate samples that need to be rejected in the first place!

Sample only from the unknown variables Z.

Weight each sample according to the likelihood that it would occur, given the evidence E.
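A sketch of the idea on the same sprinkler network (illustrative CPTs as before), estimating P(Rain | WetGrass = true): the evidence variable is never sampled, and each sample is weighted by the probability the evidence would have been generated.

```python
import random

P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}   # P(w = True | s, r)

def weighted_sample():
    c = random.random() < 0.5                       # sample non-evidence Z
    s = random.random() < (0.10 if c else 0.50)
    r = random.random() < (0.80 if c else 0.20)
    weight = P_W[(s, r)]                            # likelihood of w = True
    return r, weight

N = 100_000
num = den = 0.0
for _ in range(N):
    r, wt = weighted_sample()
    num += wt * r                                   # r counts as 0/1
    den += wt
print(num / den)                                    # ≈ P(Rain | WetGrass)
```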

Markov chain Monte Carlo algorithm

So called because:
• Markov chain: each instance generated in the sample is dependent on the previous instance
• Monte Carlo: a statistical sampling method

Perform a random walk through variable assignment space, collecting statistics as you go

Start with a random instantiation, consistent with evidence variables

At each step, for some nonevidence variable, randomly sample its value, consistent with the other current assignments

Given enough samples, MCMC gives an accurate estimate of the true distribution of values
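A sketch of this random walk as Gibbs sampling on the sprinkler network (illustrative CPTs as before), with evidence WetGrass = true; each step resamples one non-evidence variable conditioned on all the others (burn-in is omitted for brevity).

```python
import random

P_S = {True: 0.10, False: 0.50}
P_R = {True: 0.80, False: 0.20}
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}

def b(p, v):
    return p if v else 1.0 - p

def joint(c, s, r, w):
    return 0.5 * b(P_S[c], s) * b(P_R[c], r) * b(P_W[(s, r)], w)

def resample(var, state):
    """Draw state[var] from P(var | all other current assignments)."""
    hi = joint(**{**state, var: True})
    lo = joint(**{**state, var: False})
    state[var] = random.random() < hi / (hi + lo)

state = {"c": True, "s": False, "r": True, "w": True}   # w is evidence
N = 100_000
hits = 0
for _ in range(N):
    resample(random.choice(["c", "s", "r"]), state)     # never touch w
    hits += state["r"]
print(hits / N)                                         # ≈ P(Rain | w)
```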

Applications

http://excalibur.brc.uconn.edu/~baynet/researchApps.html

• Medical diagnosis, e.g., lymph-node diseases
• Fraud/uncollectible debt detection
• Troubleshooting of hardware/software systems