Probabilistic Planning (goal-oriented)


[Figure: a tree of probabilistic outcomes rooted at the initial state I. At Time 1 and Time 2, actions A1 and A2 each branch into probabilistic outcomes; left outcomes are more likely. Some branches reach the Goal State, others a Dead End. Objective: maximize goal achievement.]

FF-Replan

• Simple replanner
• Determinizes the probabilistic problem
• Solves for a plan in the determinized problem

[Figure: a plan a1, a2, a3, a4 found from S to G in the determinized problem; when execution produces an unexpected state, a new plan (e.g. a5) is found from that state and execution continues toward G.]
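To make the replanning loop concrete, here is a minimal sketch; `determinize`, `ff_plan`, `det.predict`, and `env.execute` are hypothetical interfaces standing in for the determinizer, the FF planner, and the stochastic environment, not the FF-Replan code itself.

def ff_replan(env, initial_state, goal_test, determinize, ff_plan, max_steps=1000):
    """Plan in the determinized problem, execute, and replan on surprises."""
    det = determinize(env)                      # static determinization of the problem
    state = initial_state
    plan = ff_plan(det, state)                  # classical plan found by FF
    for _ in range(max_steps):
        if goal_test(state):
            return True
        if not plan:                            # no plan left (e.g. a dead end was reached)
            return False
        action, *plan = plan
        expected = det.predict(state, action)   # successor promised by the determinization
        state = env.execute(state, action)      # stochastic outcome actually observed
        if state != expected:                   # surprise: replan from the state we actually reached
            plan = ff_plan(det, state)
    return goal_test(state)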

All Outcome Replanning (FFRA)

[Figure: a probabilistic action "Action" with Effect 1 (Probability1) and Effect 2 (Probability2) is split into two deterministic actions: Action1, which always gives Effect 1, and Action2, which always gives Effect 2. (ICAPS-07)]
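A small sketch of this action splitting is shown below; the dataclasses and the naming scheme are illustrative assumptions (mirroring the A1-1, A1-2, … labels on the next slide), not the FFRA implementation.

from dataclasses import dataclass

@dataclass
class ProbabilisticAction:
    name: str
    preconditions: frozenset
    outcomes: list                    # list of (effects, probability) pairs

@dataclass
class DeterministicAction:
    name: str
    preconditions: frozenset
    effects: frozenset

def all_outcome_determinize(actions):
    """Split each probabilistic action into one deterministic action per outcome."""
    det_actions = []
    for a in actions:
        for i, (effects, _prob) in enumerate(a.outcomes, start=1):
            det_actions.append(DeterministicAction(f"{a.name}-{i}", a.preconditions, effects))
    return det_actions

# Example mirroring the figure: one action with Effect 1 and Effect 2 (placeholder
# probabilities) becomes two always-successful deterministic actions.
action = ProbabilisticAction("Action", frozenset(),
                             [(frozenset({"Effect 1"}), 0.7),
                              (frozenset({"Effect 2"}), 0.3)])
print([d.name for d in all_outcome_determinize([action])])   # ['Action-1', 'Action-2']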

Probabilistic Planning: All-Outcome Determinization

[Figure: the same outcome tree rooted at I, with each probabilistic action replaced by one deterministic action per outcome (A1-1, A1-2, A2-1, A2-2). Goal State and Dead End remain, but outcome probabilities are dropped; the objective for the deterministic planner becomes simply "find the goal".]


Problems of FF-Replan and a better alternative: sampling

• FF-Replan's static determinizations don't respect probabilities
• We need "probabilistic and dynamic determinization"
• Sample future outcomes and determinize in hindsight
• Each sampled future becomes a known-future deterministic problem
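To make "each future sample becomes a known-future deterministic problem" concrete, here is one possible sketch of sampling an H-horizon future; `outcome_distribution(state, action)` is an assumed interface returning (successor, probability) pairs, not part of the original system.

import random

def sample_future(outcome_distribution, horizon, rng=None):
    """Lazily sample an H-horizon future: a deterministic (state, action, time) -> state map."""
    rng = rng or random.Random()
    fixed = {}                                   # (state, action, t) -> sampled successor

    def future(state, action, t):
        if not 0 <= t < horizon:
            raise ValueError("time index outside the sampled horizon")
        key = (state, action, t)
        if key not in fixed:                     # fix this outcome the first time it is queried
            successors, probs = zip(*outcome_distribution(state, action))
            fixed[key] = rng.choices(successors, weights=probs, k=1)[0]
        return fixed[key]

    return future                                # planning against `future` is a deterministic problem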

Hindsight Optimization

• Probabilistic Planning via Determinization in Hindsight

• Adds some probabilistic intelligence
• A kind of dynamic determinization of FF-Replan

Implementation: FF-Hindsight

• Constructs a set of futures
• Solves the planning problem for each of the H-horizon futures using FF
• Sums the rewards of each of the plans
• Chooses the action with the largest Qhs value (a sketch of this loop follows below)
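A minimal sketch of this action-selection loop, assuming a `sample_future_fn` like the one sketched earlier and a hypothetical `ff_solve(future, state, action, horizon)` that returns the reward of the plan FF finds when forced to start with `action` (1 if the goal is reached, 0 otherwise); this illustrates the idea and is not the FF-Hindsight code.

def ff_hop_choose_action(state, applicable_actions, sample_future_fn, ff_solve,
                         horizon, num_futures=30):
    """FF-Hop style action selection: average hindsight plan rewards over sampled futures."""
    q_hs = {a: 0.0 for a in applicable_actions(state)}
    futures = [sample_future_fn(horizon) for _ in range(num_futures)]
    for future in futures:                        # one plan is solved per (future, action) pair
        for action in q_hs:
            q_hs[action] += ff_solve(future, state, action, horizon)
    q_hs = {a: total / num_futures for a, total in q_hs.items()}   # Qhs(s, a, H) estimates
    best = max(q_hs, key=q_hs.get)
    return best, q_hs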

Probabilistic Planning (goal-oriented)

[Figure: the outcome tree from the opening slide again: initial state I, actions A1 and A2 at Time 1 and Time 2, probabilistic outcomes leading to a Goal State or a Dead End; left outcomes are more likely. Objective: maximize goal achievement.]

Start Sampling

• Sampling will reveal which action is better at state I: A1 or A2
• Sample time!

Hindsight Sample 1

[Figure: one sampled future of the outcome tree. Running tally of goal-achievement rewards for plans starting with each action: A1: 1, A2: 0.]

Hindsight Sample 2

[Figure: a second sampled future. Running tally: A1: 2, A2: 1.]

Hindsight Sample 3

[Figure: a third sampled future. Running tally: A1: 2, A2: 1.]

Hindsight Sample

[Figure: a further sampled future. Running tally: A1: 3, A2: 1.]

Action Selection

• We can now choose the action with the greatest Qhs value: A1 (tally A1: 3, A2: 1)
• This is better action selection than FF-Replan
  – It reflects the probabilistic outcomes of the actions
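With the tallies from the samples above (reward 1 when the plan for a sampled future reaches the goal, 0 otherwise), the hindsight estimates and the resulting choice amount to:

# Tallies from the hindsight samples above, accumulated per starting action.
goal_counts = {"A1": 3, "A2": 1}
num_samples = 4

q_hs = {a: c / num_samples for a, c in goal_counts.items()}   # {'A1': 0.75, 'A2': 0.25}
best = max(q_hs, key=q_hs.get)                                # 'A1'
print(best, q_hs)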

Constraints on FF-Hop

• The number of futures limits exploration
• Many plans need to be solved per action during action selection
• The maximum search depth (horizon) is static and limited

Improving Hindsight Optimization

• "Scaling Hindsight Optimization for Probabilistic Planning" uses three methods to improve FF-Hop:
  – Zero-step lookahead (useful-action detection, sample and plan reuse)
  – Exploiting determinism
  – All-outcome determinization
• Together these significantly improve the scalability of FF-Hop by reducing the number of plans solved by FF

Deterministic Techniques for Stochastic Planning

No longer the Rodney Dangerfield of Stochastic Planning?

[Speaker note (Rao): This stuff has been around for a long time of course, starting with envelope extension methods. What we are finding more recently is that they also scale well.]

Solving stochastic planning problems via determinizations

• Quite an old idea (e.g. envelope extension methods)
• What is new is the increasing realization that determinizing approaches provide state-of-the-art performance
  – Even for probabilistically interesting domains
• Should be a happy occasion..

Ways of using deterministic planning

• To compute the conditional branches
  – Robinson et al.
• To seed/approximate the value function
  – ReTrASE, Peng Dai, McLUG/POND, FF-Hop
• Use a single determinization
  – FF-Replan
  – ReTrASE (uses diverse plans for a single determinization)
• Use sampled determinizations
  – FF-Hop [AAAI 2008; with Yoon et al.]
  – Use relaxed solutions (for the sampled determinizations)
    • Peng Dai's paper
    • McLUG [AIJ 2008; with Bryce et al.]

Would be good to understand the tradeoffs…

Determinization = Sampling evolution of the world

Comparing approaches..

• ReTrASE and FF-Hop seem closely related
  – ReTrASE uses diverse deterministic plans for a single determinization; FF-Hop computes deterministic plans for sampled determinizations
  – Is there any guarantee that syntactic (action) diversity is actually related to the likely sampled worlds?
• The cost of generating deterministic plans isn't exactly cheap..
  – Relaxed-reachability-style approaches can compute multiple plans (for samples of the worlds)
  – Would relaxation of the samples' plans be better or worse in convergence terms..?

Mathematical Summary of the Algorithm

• An H-horizon future F_H for M = [S, A, T, R] is a mapping of state, action and time (h < H) to a state:
  – S × A × h → S
• Value of a policy π for a fixed future F_H: R(s, F_H, π)
• Hindsight value: V_HS(s,H) = E_{F_H}[ max_π R(s, F_H, π) ]
• Compare this with the real value:
  – V*(s,H) = max_π E_{F_H}[ R(s, F_H, π) ]
  – V_FFRa(s) = max_F V(s,F) ≥ V_HS(s,H) ≥ V*(s,H)
• Q(s,a,H) = R(a) + E_{F_{H-1}}[ max_π R(a(s), F_{H-1}, π) ]
  – In our proposal, the computation of max_π R(s, F_{H-1}, π) is done approximately by FF [Hoffmann and Nebel ’01]; each future is a deterministic problem.
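The inequality V_HS(s,H) ≥ V*(s,H) comes from exchanging the maximization and the expectation (the hindsight planner may pick a different best policy for every sampled future); a short derivation in the notation above:

\begin{align*}
V^*(s,H) &= \max_{\pi} \, \mathbb{E}_{F_H}\big[ R(s, F_H, \pi) \big] \\
         &\le \max_{\pi} \, \mathbb{E}_{F_H}\big[ \max_{\pi'} R(s, F_H, \pi') \big]
          = \mathbb{E}_{F_H}\big[ \max_{\pi'} R(s, F_H, \pi') \big] = V_{HS}(s,H).
\end{align*}

So the Qhs values are optimistic estimates, which FF-Hop uses to rank the candidate actions.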
