
Deciding when to intervene: A Markov Decision Process approach

Xiangjin Zou (Rho), Department of Computer Science, Rice University

[Paolo Magni, Silvana Quaglini, Monia Marchetti, Giovanni Barosi]


Goals

• Propose a dynamic approach based on MDPs, applied to prophylactic surgery in mild HS (hereditary spherocytosis)

• Point out the difference between static and dynamic approaches to choosing the optimal time for medical intervention


Motivation

• When dealing with chronic diseases, physicians often postpone the decision up to a critical point at which sufficient information has been gained from the evolving clinical scenario

• In mathematical terms: optimize the intervention time so as to maximize the net benefit of intervention against its risks (death, infection, reduced quality of life, etc.)


Static Approach Example 1

Observation: The static decision model treats each decision time (age) as if it were the only decision time, ignoring the possibility that the decision might be reconsidered at later time points


Static Approach Example 2

Observation: This is only a rough approximation of other, possibly better strategies, such as reconsidering the decision every year for the following X years (when comparing immediate intervention with intervention X years later)


MDP-IVs

• MDP-IVs is the proposed tool for solving dynamic medical decision problems that involve assessing the intervention time.

• The dynamic MDP approach can capture the best time for intervention, whereas the static approach decides at fixed time points, which often leads to a suboptimal strategy.

• IVs are a powerful instrument for overcoming the difficulty of defining the transition probabilities of MDPs in medicine: an IV provides a ‘view’ of a generic MDP transition.


Markov Decision Process
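This slide presents the MDP formalism as a figure. For concreteness, here is a minimal sketch of the ingredients of an MDP and one Bellman backup; all names and numbers are illustrative, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class MDP:
    """Minimal MDP container: states S, actions A, transitions P(s'|s,a),
    one-step utilities R(s,a), and a discount factor gamma."""
    states: List[str]
    actions: List[str]
    P: Dict[str, Dict[str, Dict[str, float]]]   # P[a][s][s']
    R: Dict[str, Dict[str, float]]              # R[a][s]
    gamma: float = 0.95

    def q_value(self, s: str, a: str, V: Dict[str, float]) -> float:
        # One Bellman backup: R(s,a) + gamma * E[V(s') | s, a]
        return self.R[a][s] + self.gamma * sum(
            p * V[t] for t, p in self.P[a][s].items()
        )
```

Solving the MDP means finding a policy (a map from states to actions) whose value function satisfies the Bellman optimality equation V(s) = max_a q_value(s, a, V).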


IV – Influence View

[IV]: An Influence View is a directed acyclic graph that provides a representation of a single MDP transition and includes the following node types:

1. State Nodes: Each state node represents a variable obtained by factoring the MDP state space. Each state node appears twice in the IV, once per time epoch, to express the transition; hence there are initial and final state nodes.

2. Numerical Nodes: Introduced to describe the numeric parameters of the model; a numerical node has only numerical nodes in its ancestral set. (Examples are the age or sex of the patient, on which the probabilities of some events may depend.)


IV – Influence View (continued)

3. Event Nodes: Each event node represents an event variable placed between the initial and final state nodes. There are two kinds:

• Context Node: a context variable has no state nodes in its ancestral set; it represents variables that are immutable during the decision-making process (e.g. the patient's sex)

• Transition Node: a transition variable lies on a path between the initial and final state nodes; it specifies the probability distribution governing the transition and represents the causal relationships among events

4. Utility Node: Expresses the utility (cost) function of the MDP over a single transition.


IV Example

Example of an influence view: Death and Interv are state nodes; Age is a numerical node; NatDeath, Disease and DisDeath are event nodes. In particular, NatDeath is a context node, while Disease and DisDeath are transition nodes.
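This example can be sketched numerically. The conditional probabilities below are invented for illustration (the paper's actual tables are not reproduced here); the point is that the death-transition probability is obtained by marginalizing over the event nodes:

```python
# Hypothetical propagation through the example IV (all numbers invented).
# NatDeath is a context node driven by the numerical node Age; Disease and
# DisDeath are transition nodes on the path to the final Death state node.

def p_nat_death(age):
    # Context node: baseline mortality rising with age (toy model).
    return min(1.0, 0.001 * 1.09 ** (age - 20))

def p_disease(intervened):
    # Transition node: disease onset less likely after intervention (toy numbers).
    return 0.02 if intervened else 0.10

P_DISDEATH_GIVEN_DISEASE = 0.05  # transition node: disease-related death

def p_death_transition(age, intervened):
    """P(Death' = yes | Death = no): marginalize the event nodes of the IV."""
    p_nat = p_nat_death(age)
    p_dis = p_disease(intervened) * P_DISDEATH_GIVEN_DISEASE
    # Death occurs if either cause fires (assumed independent given the parents).
    return 1.0 - (1.0 - p_nat) * (1.0 - p_dis)
```

Varying Age changes only the context node NatDeath, while the chosen action (intervening or not) changes only the transition node Disease; this is the "local knowledge" an IV encodes.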


Solving MDP-IVs

• For each action, compute the transition matrix P of the MDP by propagating the probability distributions through the IV network.

• The MDP can then be solved with classical algorithms such as dynamic programming / value iteration (e.g. with the provided DT-Planner tool).

• IVs also offer the advantage of specifying “local knowledge” about the conditional dependencies of the associated events, rather than just giving “global” transition probabilities.
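The two steps above can be sketched end to end: given per-action transition matrices (here invented two-state examples, standing in for matrices propagated from IVs), value iteration returns the optimal stationary policy:

```python
import numpy as np

# Invented two-state, two-action example: states {0: healthy, 1: sick},
# actions {wait, treat}. P[a][s, s'] is the transition matrix for action a;
# R[a][s] is the expected one-step utility.
P = {
    "wait":  np.array([[0.90, 0.10],
                       [0.00, 1.00]]),
    "treat": np.array([[0.95, 0.05],
                       [0.60, 0.40]]),
}
R = {"wait": np.array([1.0, 0.2]), "treat": np.array([0.8, 0.1])}

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Classical value iteration: V(s) = max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]."""
    n = next(iter(P.values())).shape[0]
    V = np.zeros(n)
    while True:
        Q = {a: R[a] + gamma * P[a] @ V for a in P}
        V_new = np.maximum.reduce(list(Q.values()))
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = {s: max(P, key=lambda a: Q[a][s]) for s in range(n)}
    return V, policy
```

With these toy numbers the optimal policy is to wait while healthy and to treat when sick.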


Medical Problem: Prophylactic surgery in mild HS

Aim: to build a plan, according to the patient's conditions, that specifies when to perform splenectomy and/or cholecystectomy so as to maximize the patient's life expectancy

The decision model:

o Fix the Markov cycle at 1 year

o Four IVs, one for each possible choice: no surgery, C-surgery, S-surgery and SC-surgery

o The state space is factored into two state variables: Gallstones and Spleen. Gallstones represents the pathologic state (and presence) of the gallbladder, and Spleen represents the presence of the spleen.
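As a sketch, the factored state space is the Cartesian product of the two variables. The level names below are assumptions, loosely following the strategy slides later in the deck; the paper's exact levels (e.g. the year-by-year 'removed' spleen states for young patients) differ:

```python
import itertools

# Level names are assumptions based on the strategy slides; the paper also
# splits the spleen-absent state by years since removal for patients under 6.
GALLSTONES = ["absent", "asymptomatic", "occasional_colics",
              "recurrent_colics", "removed"]
SPLEEN = ["present", "absent"]
ACTIONS = ["no_surgery", "C_surgery", "S_surgery", "SC_surgery"]

# The joint MDP state is the Cartesian product of the factored variables.
STATES = [f"{g}/{s}" for g, s in itertools.product(GALLSTONES, SPLEEN)]
```

Factoring keeps the IVs small: each node depends on a few variables, while the flat MDP still ranges over all combinations.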


Gallstone history


Gallstone history model


Event nodes in IV


IV for no surgery


IV for C-Surgery


IV for S-Surgery


IV for CS-Surgery


Resulting Policy

• The utility function is based on QALDs (quality-adjusted life days)

• The time horizon is the patient's whole life

• For patients over 6, since sepsis is no longer a risk, the levels of the state node Spleen (removed 1 year, removed 2 years, ...) are all summarized as Absent
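A toy sketch of how an age-dependent ("dynamic") policy arises from backward induction over the lifetime horizon. All states, probabilities, and rewards below are invented; the age-6 surgery-risk threshold only loosely mirrors the sepsis remark above:

```python
AGES = range(0, 80)          # yearly Markov cycles over the patient's lifetime
QALD_YEAR = 365.0            # quality-adjusted life days per healthy year (toy value)

def step(state, action, age):
    """Return {next_state: prob}. All numbers invented: surgery is very risky
    before age 6 (sepsis), the untreated disease risk is constant."""
    if state == "dead":
        return {"dead": 1.0}
    if state == "post_op":
        return {"post_op": 0.999, "dead": 0.001}
    if action == "operate":
        p = 0.5 if age < 6 else 0.01
        return {"post_op": 1 - p, "dead": p}
    return {"pre_op": 0.95, "dead": 0.05}   # wait

def reward(state):
    return QALD_YEAR if state != "dead" else 0.0

def backward_induction():
    # Finite-horizon dynamic programming: sweep ages from last to first.
    V = {s: 0.0 for s in ("pre_op", "post_op", "dead")}
    policy = {}
    for age in reversed(AGES):
        V_new, policy[age] = {}, {}
        for s in ("pre_op", "post_op", "dead"):
            acts = ("wait", "operate") if s == "pre_op" else ("wait",)
            q = {a: reward(s) + sum(p * V[t] for t, p in step(s, a, age).items())
                 for a in acts}
            policy[age][s] = max(q, key=q.get)
            V_new[s] = q[policy[age][s]]
        V = V_new
    return policy
```

With these toy numbers the optimal action is to wait while surgery is risky and to operate afterwards: the recommended action changes with age, which a single fixed-time (static) comparison cannot express.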


Static Approach Strategy

• Without gallstones: perform splenectomy up to the age of 35; if the spleen has been removed, no cholecystectomy is required until gallstones appear

• With asymptomatic gallstones: perform both splenectomy and cholecystectomy up to the age of 45; cholecystectomy alone is not worthwhile

• With occasional colics: perform both splenectomy and cholecystectomy up to the age of 53; if the spleen has been removed, cholecystectomy alone is always required

• With recurrent colics: perform both splenectomy and cholecystectomy up to the age of 53


Dynamic Approach Result



Difference between Static and Dynamic Approach

Consider a 6-year-old male without gallstones and with the spleen present:

• The static approach performs splenectomy any time before the age of 35

• The dynamic approach postpones surgery until age 15 if no gallstones appear before then

• The difference is substantial, since surgery at a young age has relevant social and psychological implications


Conclusions

• MDP-IVs tackles health interventions involving time-dependent events by allowing decisions to be postponed and more complex strategies to be formed at each decision time

• The MDP lets us reconsider, at every time point, whether to intervene immediately or to delay intervention, so as to maximize the expected utility

• IVs let us represent the problem clearly and efficiently by specifying the local relationships among events, derived from domain knowledge