
Page 1: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Chaitanya Swamy
Caltech and U. Waterloo

Joint work with David Shmoys, Cornell University

Page 2: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Stochastic Optimization

• Way of modeling uncertainty.

• Exact data is unavailable or expensive – data is uncertain, specified by a probability distribution.

Want to make the best decisions given this uncertainty in the data.

• Applications in logistics, transportation models, financial instruments, network design, production planning, …

• Dates back to the 1950s and the work of Dantzig.

Page 3: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Stochastic Recourse Models

Given: A probability distribution over inputs.

Stage I: Make some advance decisions – plan ahead or hedge against uncertainty.

Uncertainty evolves through various stages.

Learn new information in each stage.

Can take recourse actions in each stage – can augment the earlier solution, paying a recourse cost.

Choose initial (stage I) decisions to minimize

(stage I cost) + (expected recourse cost).
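In symbols (my formalization; it matches the compact convex program given later for stochastic set cover): with stage I decision vector x ranging over a feasible region P, stage I cost vector w, and optimal recourse cost f_A(x) in scenario A occurring with probability p_A,

\[
  \min_{x \in P}\; h(x) \;=\; w \cdot x \;+\; \mathbb{E}_{A}\big[f_A(x)\big]
  \;=\; w \cdot x \;+\; \sum_{A} p_A\, f_A(x).
\]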

Page 4: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

2-stage problem ≡ 2 decision points

[Figure: a 2-level scenario tree – a stage I root whose children are the stage II scenarios, with branch probabilities such as 0.2, 0.3, 0.1, 0.02.]

k-stage problem ≡ k decision points

[Figure: a k-level scenario tree – a stage I root, stage II nodes with branch probabilities such as 0.5, 0.3, 0.2, 0.4, continuing down to the scenarios in stage k.]
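For concreteness, a scenario tree could be represented as follows (a sketch; the class name and fields are my invention, and the probabilities echo the figure's labels):

from dataclasses import dataclass, field
from typing import List

@dataclass
class ScenarioNode:
    """A node of a scenario tree: the root is the stage I decision point;
    each child is reached with the given conditional probability; the
    leaves are the final-stage scenarios."""
    prob: float                                  # P[reach this node | parent]
    children: List["ScenarioNode"] = field(default_factory=list)

# The 2-stage tree from the figure: a stage I root whose children are the
# stage II scenarios (only the branches labeled in the figure are shown).
root = ScenarioNode(1.0, [ScenarioNode(0.2), ScenarioNode(0.3),
                          ScenarioNode(0.1), ScenarioNode(0.02)])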

Page 5: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

2-stage problem ≡ 2 decision points; k-stage problem ≡ k decision points.

Choose stage I decisions to minimize expected total cost =

(stage I cost) + E_{all scenarios}[cost of stages 2, …, k].

[Figure: the same 2-stage and k-stage scenario trees as on the previous slide.]

Page 6: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Universe U = {e_1, …, e_n}, subsets S_1, S_2, …, S_m ⊆ U; set S has weight w_S.

Deterministic problem: Pick a minimum-weight collection of sets that covers each element.

Stochastic version: The set of elements to be covered is given by a probability distribution.

– The subset A ⊆ U to be covered (the scenario) is revealed after k stages.

– Choose some sets initially – stage I.

– Can pick additional sets in each stage, paying a recourse cost.

[Figure: stage I choices, then scenarios A_1 ⊆ U, …, A_k ⊆ U unfolding over the stages.]

Minimize expected total cost =

E_{scenarios A ⊆ U}[cost of sets picked for scenario A in stages 1, …, k].

Page 7: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Stochastic Set Cover (SSC)

Universe U = {e_1, …, e_n}, subsets S_1, S_2, …, S_m ⊆ U; set S has weight w_S.

Deterministic problem: Pick a minimum-weight collection of sets that covers each element.

Stochastic version: The set of elements to be covered is given by a probability distribution.

How is the probability distribution on subsets specified?

• A short (polynomial-size) list of possible scenarios.

• Independent probabilities that each element exists.

• A black box that can be sampled – see the sketch below.
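For example, the independent-activation model immediately yields such a black box (a minimal sketch; the function names are mine):

from random import Random

def independent_activation_sampler(probs, seed=None):
    """Black-box sampler for the independent-activation model: element e
    appears in the demand scenario independently with probability probs[e]."""
    rng = Random(seed)
    def sample_scenario():
        return frozenset(e for e, p in probs.items() if rng.random() < p)
    return sample_scenario

# The SAA algorithm later only ever calls sample_scenario(); it never
# inspects the underlying distribution.
sample_scenario = independent_activation_sampler({'e1': 0.5, 'e2': 0.1}, seed=7)
A = sample_scenario()  # a random subset of U to be covered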

Page 8: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Approximation Algorithm

Hard to solve the problem exactly; even special cases are #P-hard. Settle for approximate solutions: give a polytime algorithm that always finds near-optimal solutions.

A is a ρ-approximation algorithm if:

• A runs in polynomial time;

• A(I) ≤ ρ·OPT(I) on all instances I.

ρ is called the approximation ratio of A.

Page 9: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Previous Models Considered

• 2-stage problems

– Polynomial-scenario model: Dye, Stougie & Tomasgard; Ravi & Sinha; Immorlica, Karger, Minkoff & Mirrokni.

– Immorlica et al. also consider the independent-activation model with proportional costs: (stage II cost) = λ·(stage I cost), e.g., w_S^A = λ·w_S for each set S, in each scenario A.

– Gupta, Pál, Ravi & Sinha: black-box model, but also with proportional costs.

– Shmoys, S (SS04): black-box model with arbitrary costs; gave an approximation scheme for 2-stage LPs + a rounding procedure that “reduces” stochastic problems to their deterministic versions.

Page 10: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Previous Models (contd.)

• Multi-stage problems

– Hayrapetyan, S & Tardos: 2k-approximation algorithm for k-stage Steiner tree.

– Gupta, Pál, Ravi & Sinha: also other k-stage problems – a 2k-approximation algorithm for Steiner tree, and factors exponential in k for vertex cover and facility location.

Both only consider proportional, scenario-dependent costs.

Page 11: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Results from S, Shmoys ’05

• Give the first fully polynomial approximation scheme (FPAS) for a large class of k-stage stochastic linear programs, for any fixed k.

– Black-box model: arbitrary distribution.
– No assumptions on costs.
– The algorithm is the Sample Average Approximation (SAA) method; this is the first proof that SAA works for (a class of) k-stage LPs with a polynomially bounded sample size.

Shapiro ’05: k-stage programs, but with independent stages.
Kleywegt, Shapiro & Homem-de-Mello ’01: bounds for 2-stage programs.
Charikar, Chekuri & Pál ’05: another proof that SAA works for (a class of) 2-stage programs.

Page 12: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Results (contd.)

• The FPAS + the rounding technique of SS04 gives approximation algorithms for k-stage stochastic integer programs.

– No assumptions on distribution or costs.
– Improves upon various results obtained in more restricted models: e.g., O(k)-approx. for k-stage vertex cover (VC) and facility location. Munagala has improved the factor for k-stage VC to 2.

Page 13: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

A Linear Program for 2-stage SSC

x_S: 1 if set S is picked in stage I; y_{A,S}: 1 if set S is picked in scenario A.

Minimize ∑_S w_S x_S + ∑_{A⊆U} p_A ∑_S W_S y_{A,S}

s.t. ∑_{S: e∈S} x_S + ∑_{S: e∈S} y_{A,S} ≥ 1 for each A ⊆ U, e ∈ A

x_S, y_{A,S} ≥ 0 for each S, A.

Exponentially many variables and constraints.

Here p_A = probability of scenario A ⊆ U, w_S = stage I cost of set S, and the stage II cost is w_S^A = W_S for each set S and scenario A.

[Figure: a 2-level scenario tree – stage I, then stage II scenario A ⊆ U, with branch probabilities such as 0.2, 0.02, 0.3.]

Equivalent compact, convex program:

Minimize h(x) = ∑_S w_S x_S + ∑_{A⊆U} p_A f_A(x) s.t. 0 ≤ x_S ≤ 1 for each S, where

f_A(x) = min { ∑_S W_S y_{A,S} : ∑_{S: e∈S} y_{A,S} ≥ 1 − ∑_{S: e∈S} x_S for each e ∈ A; y_{A,S} ≥ 0 for each S }.
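To make the LP concrete, here is a minimal sketch (my code, not from the talk) that solves the extensive-form LP above in the polynomial-scenario model, where the scenarios and their probabilities are listed explicitly; it uses scipy.optimize.linprog, and all names are my own:

import numpy as np
from scipy.optimize import linprog

def solve_2stage_ssc_lp(sets, w, W, scenarios, p):
    """Solve the extensive-form 2-stage stochastic set-cover LP.
    sets: list of frozensets over the universe; w[j] / W[j]: stage I / II
    cost of set j; scenarios: list of element subsets A; p[k]: probability
    of scenarios[k]. Returns the fractional stage I solution and LP value."""
    m, K = len(sets), len(scenarios)
    nvar = m + K * m  # variables: x_S, then one y_{A,S} block per scenario
    c = np.zeros(nvar)
    c[:m] = w
    for k in range(K):
        c[m + k * m : m + (k + 1) * m] = p[k] * np.asarray(W)
    # Covering constraints as A_ub @ z <= b_ub:
    #   for each scenario A and each e in A:  -sum_{S∋e} (x_S + y_{A,S}) <= -1
    rows = []
    for k, A in enumerate(scenarios):
        for e in A:
            row = np.zeros(nvar)
            for j, S in enumerate(sets):
                if e in S:
                    row[j] = -1.0              # -x_S
                    row[m + k * m + j] = -1.0  # -y_{A,S}
            rows.append(row)
    res = linprog(c, A_ub=np.array(rows), b_ub=-np.ones(len(rows)),
                  bounds=[(0, None)] * nvar)
    return res.x[:m], res.fun

# Tiny example: two sets, two equally likely scenarios.
sets = [frozenset({1}), frozenset({1, 2})]
x, val = solve_2stage_ssc_lp(sets, w=[1, 3], W=[2, 6],
                             scenarios=[frozenset(), frozenset({1, 2})],
                             p=[0.5, 0.5])

In the black-box model the scenario list is unavailable, which is exactly what the SAA method on the next slide addresses.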

Page 14: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Sample Average Approximation

Sample Average Approximation (SAA) method:

– Sample the distribution N times.
– Estimate p_A by q_A = frequency of occurrence of scenario A = n_A/N.

True problem: min_{x∈P} h(x) = w·x + ∑_{A⊆U} p_A f_A(x)   (P)

Sample average problem: min_{x∈P} h'(x) = w·x + ∑_{A⊆U} q_A f_A(x)   (SA-P)

The size of (SA-P) as an LP depends on N – how large should N be?
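A minimal sketch of the SAA reduction (my code; sample_scenario is the assumed black-box interface from the earlier sketch):

from collections import Counter

def sample_average_distribution(sample_scenario, N):
    """Draw N scenarios from the black box; return the distinct scenarios
    and their empirical probabilities q_A = n_A / N."""
    counts = Counter(sample_scenario() for _ in range(N))
    scenarios = list(counts)
    q = [counts[A] / N for A in scenarios]
    return scenarios, q

# (SA-P) is then the LP of the previous sketch with {q_A} in place of {p_A}:
# scenarios, q = sample_average_distribution(sample_scenario, 10000)
# x_saa, val = solve_2stage_ssc_lp(sets, w, W, scenarios, q)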

Page 15: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Sample Average Approximation

Wanted result: With polynomially large N, x solves (SA-P) ⇒ h(x) ≈ OPT.

Possible approach: Try to show that h'(.) and h(.) take similar values.

Problem: Rare scenarios can significantly influence the value of h(.), but will almost never be sampled.

Key insight: Rare scenarios do not much affect the optimal solution x*:

⇒ instead of function values, look at how the functions vary with x;
⇒ show that the “slopes” of h'(.) and h(.) are “close” to each other.

Page 16: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

True problem: min_{x∈P} h(x) = w·x + ∑_{A⊆U} p_A f_A(x)   (P)

Sample average problem: min_{x∈P} h'(x) = w·x + ∑_{A⊆U} q_A f_A(x)   (SA-P)

Slope ≈ subgradient.

Closeness-in-subgradients: At “many” points u in P, ∃ a vector d'_u s.t.

(*) d'_u is a subgradient of h'(.) at u, AND an ε-subgradient of h(.) at u.

This holds with high probability for h(.) and h'(.).

Lemma: For any convex functions g(.), g'(.), if (*) holds then

x solves min_{x∈P} g'(x) ⇒ x is a near-optimal solution to min_{x∈P} g(x).

Definitions: d ∈ R^m is a subgradient of h(.) at u if ∀v, h(v) − h(u) ≥ d·(v−u); d is an ε-subgradient of h(.) at u if ∀v∈P, h(v) − h(u) ≥ d·(v−u) − ε·h(v) − ε·h(u).
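As a small numerical illustration (entirely my addition), one can spot-check the ε-subgradient inequality at random points of the box P = [0,1]^m:

import numpy as np

def is_eps_subgradient(h, d, u, eps, trials=1000, seed=0):
    """Empirically test h(v) - h(u) >= d.(v - u) - eps*h(v) - eps*h(u)
    at random points v of the box P = [0,1]^m. A True result is only
    evidence, not a proof, that d is an eps-subgradient of h at u."""
    rng = np.random.default_rng(seed)
    d, u = np.asarray(d), np.asarray(u)
    for _ in range(trials):
        v = rng.random(len(u))
        if h(v) - h(u) < d @ (v - u) - eps * h(v) - eps * h(u) - 1e-9:
            return False
    return True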

Page 17: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Closeness-in-subgradients

Closeness-in-subgradients: At “many” points u in P, ∃ a vector d'_u s.t.

(*) d'_u is a subgradient of h'(.) at u, AND an ε-subgradient of h(.) at u.

Lemma: For any convex functions g(.), g'(.), if (*) holds then

x solves min_{x∈P} g'(x) ⇒ x is a near-optimal solution to min_{x∈P} g(x).

[Figure: the polytope P with a point u and a subgradient d_u whose cut retains the region where g(x) ≤ g(u).]

Intuition:

• A subgradient determines the minimizer of a convex function.

• The ellipsoid-based algorithm of SS04 for convex minimization only uses (ε-)subgradients: it uses an (ε-)subgradient to cut the ellipsoid at a feasible point u in P.

(*) ⇒ can run the algorithm on both min_{x∈P} g(x) and min_{x∈P} g'(x) using the same vector d'_u at each u∈P

⇒ the algorithm will return an x that is near-optimal for both problems.

Page 18: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Proof for 2-stage SSC

True problem: min_{x∈P} h(x) = w·x + ∑_{A⊆U} p_A f_A(x)   (P)

Sample average problem: min_{x∈P} h'(x) = w·x + ∑_{A⊆U} q_A f_A(x)   (SA-P)

Let λ = max_S W_S/w_S, and let z_A be an optimal solution to the dual of f_A(x) at the point x = u ∈ P.

Facts from SS04:

A. The vector d_u = {d_{u,S}} with d_{u,S} = w_S − ∑_A p_A ∑_{e∈A∩S} z_{A,e} is a subgradient of h(.) at u; can write d_{u,S} = E[X_S], where X_S = w_S − ∑_{e∈A∩S} z_{A,e} in scenario A.

B. X_S ∈ [−W_S, w_S] ⇒ Var[X_S] ≤ W_S² for every set S.

C. If d' = {d'_S} is a vector such that |d'_S − d_{u,S}| ≤ ε·w_S for every set S, then d' is an ε-subgradient of h(.) at u.

A ⇒ the vector d'_u with components d'_{u,S} = w_S − ∑_A q_A ∑_{e∈A∩S} z_{A,e} = E_q[X_S] is a subgradient of h'(.) at u.

B, C ⇒ with poly(λ²/ε² · log(1/δ)) samples, d'_u is an ε-subgradient of h(.) at u with probability ≥ 1−δ

⇒ polynomially many samples ensure that, with high probability, d'_u is an ε-subgradient of h(.) at u at “many” points u∈P ⇒ property (*).
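In code, the sampled subgradient d'_u is just a sample mean (a sketch; X_of_scenario stands for the map A ↦ (X_S)_S derived from the dual solutions z_A, which I leave abstract):

import numpy as np

def estimate_subgradient(sample_scenario, X_of_scenario, n_samples):
    """Estimate d_u = E[X] componentwise by the sample average E_q[X], where
    X_of_scenario(A) returns the vector (X_S)_S for scenario A. Facts B and C
    say poly(lambda^2/eps^2 * log(1/delta)) samples suffice for the estimate
    to be an eps-subgradient of h(.) with probability at least 1 - delta."""
    draws = [np.asarray(X_of_scenario(sample_scenario()))
             for _ in range(n_samples)]
    return np.mean(draws, axis=0)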

Page 19: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

3-stage SSC

[Figure: two 3-level scenario trees. True distribution: stage I; a stage II scenario A reached with probability p_A, rooting a subtree T_A; a stage III scenario (A,B) reached with probability p_{A,B}, which specifies the set of elements to cover. Sampled distribution: the same tree with probabilities q_A and q_{A,B}.]

The true distribution {p_{A,B}} in T_A is only estimated by the distribution {q_{A,B}}

⇒ the true and sample average problems solve different recourse problems for a given scenario A.

True problem: min_{x∈P} h(x) = w·x + ∑_A p_A f_A(x)   (3-P)

Sample avg. problem: min_{x∈P} h'(x) = w·x + ∑_A q_A g_A(x)   (3SA-P)

f_A(x), g_A(x): 2-stage set-cover problems specified by the tree T_A.

Page 20: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Proof sketch for 3-stage SSC

True problem: min_{x∈P} h(x) = w·x + ∑_A p_A f_A(x)   (3-P)

Sample avg. problem: min_{x∈P} h'(x) = w·x + ∑_A q_A g_A(x)   (3SA-P)

Want to show that h(.) and h'(.) are close in subgradients.

Main difficulty: h(.) and h'(.) solve different recourse problems.

The subgradient of h(.) at u is d_u with d_{u,S} = w_S − ∑_A p_A·(dual soln. to f_A(u));
the subgradient of h'(.) at u is d'_u with d'_{u,S} = w_S − ∑_A q_A·(dual soln. to g_A(u)).

To show that d'_u is an ε-subgradient of h(.), we need that (dual soln. to g_A(u)) is a near-optimal (dual soln. to f_A(u)).

This is a sample average theorem for the dual of a 2-stage problem!

Page 21: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Proof sketch for 3-stage SSC

Idea: Show that the two dual objective functions are close in subgradients.

Problem: Cannot get closeness-in-subgradients by looking at the standard exponential-size LP dual of f_A(x), g_A(x).

– Formulate a new compact, non-linear dual of polynomial size.

– An (approximate) subgradient of the dual objective function comes from a (near-)optimal solution to a 2-stage primal LP: use the earlier SAA result.

Recursively apply this idea to solve k-stage stochastic LPs.

Page 22: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Summary of Results

• Give the first approximation scheme to solve a broad class of k-stage stochastic linear programs, for any fixed k.

– Prove that the Sample Average Approximation method works for our class of k-stage programs.

• Obtain approximation algorithms for k-stage stochastic integer problems – no assumptions on costs or distribution.

– (k·log n)-approx. for k-stage set cover.

– O(k)-approx. for k-stage vertex cover, multicut on trees, uncapacitated facility location (FL), and some other FL variants.

– (1+ε)-approx. for multicommodity flow.

The results generalize and/or improve previous results obtained in restricted k-stage models.

Page 23: Sampling-based Approximation Algorithms for Multi-stage Stochastic Optimization

Thank You.