Published on 16-Jan-2016

Transcript

Approximation Algorithms for Stochastic Optimization

Chaitanya Swamy, Caltech and U. Waterloo

Joint work with David Shmoys, Cornell University

Stochastic Optimization

A way of modeling uncertainty: exact data is unavailable or expensive, so the data is uncertain and specified by a probability distribution. We want to make the best decisions given this uncertainty in the data. Applications in logistics, transportation models, financial instruments, network design, production planning, and more. Dates back to the 1950s and the work of Dantzig.

Stochastic Recourse Models

Given: a probability distribution over inputs.
Stage I: make some advance decisions, i.e., plan ahead or hedge against uncertainty.
Uncertainty evolves through various stages; we learn new information in each stage.
Can take recourse actions in each stage, i.e., augment the earlier solution, paying a recourse cost.
Choose the initial (stage I) decisions to minimize (stage I cost) + (expected recourse cost).

2-stage problem = 2 decision points. [Scenario tree: stage I branches into stage II scenarios with probabilities 0.2, 0.02, 0.3, 0.1.]

2-stage problem = 2 decision points; k-stage problem = k decision points. Choose stage I decisions to minimize expected total cost = (stage I cost) + E_all scenarios[cost of stages 2, …, k]. [Scenario trees: stage I branches into stage II scenarios and on to scenarios in stage k, with branch probabilities such as 0.5, 0.2, 0.4, 0.3.]

2-Stage Stochastic Facility Location

A distribution over clients gives the set of clients to serve. [Figure: facilities and the client set D.]

Stage I: open some facilities in advance, paying cost f_i for facility i. Stage I cost = Σ_(i opened) f_i. [Figure: stage I facilities opened among the facility set.]

Want to decide which facilities to open in stage I.
Goal: minimize total cost = (stage I cost) + E_{A⊆D}[stage II cost for A].
How is the probability distribution specified?
- A short (polynomial) list of possible scenarios.
- Independent probabilities that each client exists.
- A black box that can be sampled.

Approximation Algorithm

Hard to solve the problem exactly; even special cases are #P-hard. Settle for approximate solutions: give a polytime algorithm that always finds near-optimal solutions. A is an α-approximation algorithm if:
- A runs in polynomial time;
- A(I) ≤ α·OPT(I) on all instances I, where A(I) is the cost of the solution returned by A on instance I.
α is called the approximation ratio of A.

Overview of Previous Work

Polynomial-scenario model: Dye, Stougie & Tomasgard; Ravi & Sinha; Immorlica, Karger, Minkoff & Mirrokni. Immorlica et al. also consider the independent-activation model with proportional costs: (stage II cost) = λ·(stage I cost), e.g., f_i^A = λ·f_i for each facility i, in each scenario A.
Gupta, Pál, Ravi & Sinha (GPRS04): black-box model, but also with proportional costs.
Shmoys, Swamy (SS04): black-box model with arbitrary costs; an approximation scheme for 2-stage LPs + a rounding procedure reduces stochastic problems to their deterministic versions; for some problems, improves upon previous results.

Boosted Sampling (GPRS04)

Proportional costs: (stage II cost) = λ·(stage I cost). Note: λ is the same as s in the previous talk.
Sample λ times from the distribution.
Use a suitable algorithm to solve the deterministic instance consisting of the sampled scenarios (e.g., all sampled clients); this determines the stage I decisions.
The analysis relies on the existence of cost shares that can be used to share the stage I cost among the sampled scenarios.
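
The sampling step above can be sketched in a few lines; `sample_scenario` and `det_solve` are hypothetical stand-ins for the black-box distribution and for any suitable deterministic algorithm, and the sketch assumes the proportional-cost setting with inflation factor `lam`.

```python
# Boosted sampling sketch (assumes proportional costs with factor lam):
# draw lam samples from the black box, take the union of the sampled
# scenarios, and solve the resulting deterministic instance to get the
# stage I decisions. `sample_scenario` and `det_solve` are hypothetical.
def boosted_sampling(sample_scenario, det_solve, lam):
    sampled = set()
    for _ in range(int(lam)):
        sampled |= set(sample_scenario())  # union of lam sampled scenarios
    return det_solve(sampled)              # stage I decisions
```

The cost-sharing analysis is what makes this simple recipe work; the code only captures the algorithmic skeleton.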

Shmoys–Swamy (SS04) vs. Boosted Sampling

SS04 (LP rounding): can handle arbitrary costs in the two stages; gives an algorithm to solve the stochastic LP; needs many more samples to solve the stochastic LP.
Boosted sampling (primal-dual): needs proportional costs, (stage II cost) = λ·(stage I cost), where λ can depend on the scenario; cost shares are obtained by exploiting structure via the primal-dual schema; needs only λ samples.
Both work in the black-box model with arbitrary distributions.

Stochastic Set Cover (SSC)

Universe U = {e_1, …, e_n}, subsets S_1, S_2, …, S_m ⊆ U; set S has weight w_S.
Deterministic problem: pick a minimum-weight collection of sets that covers each element.
Stochastic version: the set of elements to be covered is given by a probability distribution. Choose some sets initially, paying w_S for set S; then the subset A ⊆ U to be covered is revealed, and additional sets can be picked, paying w_S^A for set S.
Minimize (w-cost of sets picked in stage I) + E_{A⊆U}[w^A-cost of new sets picked for scenario A].
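
For the deterministic problem mentioned above, the classical greedy algorithm (pick the set with the best weight-per-newly-covered-element ratio) gives the ln n-type guarantee used later in the talk. A minimal sketch, with an illustrative toy instance:

```python
# Greedy weighted set cover: repeatedly pick the set minimizing
# weight / (number of newly covered elements). This is the classical
# H_n ~ ln n approximation for the deterministic problem.
def greedy_set_cover(universe, sets, weights):
    """universe: iterable of elements; sets: dict name -> frozenset;
    weights: dict name -> positive cost. Returns list of picked set names."""
    uncovered = set(universe)
    picked = []
    while uncovered:
        name = min(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: weights[s] / len(sets[s] & uncovered),
        )
        picked.append(name)
        uncovered -= sets[name]
    return picked

# Toy instance (names and numbers are illustrative only).
universe = {1, 2, 3, 4, 5}
sets = {"A": frozenset({1, 2, 3}), "B": frozenset({3, 4}), "C": frozenset({4, 5})}
weights = {"A": 1.0, "B": 1.0, "C": 1.0}
cover = greedy_set_cover(universe, sets, weights)
```

Here greedy first picks A (ratio 1/3), then C (ratio 1/2 on the remaining {4, 5}).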

A Linear Program for SSC

For simplicity, consider w_S^A = W_S for every scenario A.
w_S: stage I weight of set S; p_A: probability of scenario A ⊆ U; x_S: indicates if set S is picked in stage I; y_{A,S}: indicates if set S is picked in scenario A.

Minimize   Σ_S w_S x_S + Σ_{A⊆U} p_A Σ_S W_S y_{A,S}
subject to   Σ_{S: e∈S} x_S + Σ_{S: e∈S} y_{A,S} ≥ 1   for each A ⊆ U, e ∈ A;
x_S, y_{A,S} ≥ 0   for each S, A.

Exponential number of variables and exponential number of constraints.
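
In the polynomial-scenario model this LP can be written out and solved directly. A toy sketch (the instance, names, and numbers are illustrative, not from the talk): sets S1 = {e1}, S2 = {e2}, stage I weights w = 1, stage II weights W = 3 (proportional, λ = 3), and two equally likely scenarios A1 = {e1}, A2 = {e2}.

```python
# Solving the two-stage set-cover LP for an explicit scenario list,
# using scipy's LP solver. Variables, in order:
# x1, x2, y[A1,S1], y[A1,S2], y[A2,S1], y[A2,S2].
from scipy.optimize import linprog

c = [1.0, 1.0,            # stage I terms: w_S * x_S
     1.5, 1.5, 1.5, 1.5]  # stage II terms: p_A * W_S * y_{A,S} = 0.5 * 3
# Coverage constraints, written as <= for linprog:
#   -(x1 + y[A1,S1]) <= -1   (element e1 in scenario A1)
#   -(x2 + y[A2,S2]) <= -1   (element e2 in scenario A2)
A_ub = [[-1, 0, -1, 0, 0, 0],
        [0, -1, 0, 0, 0, -1]]
b_ub = [-1, -1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # default bounds give x, y >= 0
```

Since covering an element in stage II costs 1.5 in expectation versus 1.0 in stage I, the LP opens both sets in stage I, for optimum value 2. The point of the talk, of course, is that with exponentially many scenarios this direct approach is unavailable.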

A Rounding Theorem

Assume the LP can be solved in polynomial time. Suppose that for the deterministic problem we have an α-approximation algorithm wrt. the LP relaxation, i.e., an algorithm A such that A(I) ≤ α·(optimal LP value for I) for every instance I. E.g., the greedy algorithm for set cover is a log n-approximation algorithm wrt. the LP relaxation.
Theorem: Such an α-approximation algorithm can be used to get a 2α-approximation algorithm for stochastic set cover.

Rounding the LP

Assume the LP can be solved in polynomial time, and suppose we have an α-approximation algorithm wrt. the LP relaxation for the deterministic problem.
Let (x, y) be an optimal LP solution with cost OPT. Since Σ_{S: e∈S} x_S + Σ_{S: e∈S} y_{A,S} ≥ 1 for each A ⊆ U and e ∈ A, for every element e, either Σ_{S: e∈S} x_S ≥ ½, OR in each scenario A with e ∈ A, Σ_{S: e∈S} y_{A,S} ≥ ½.
Let E = {e : Σ_{S: e∈S} x_S ≥ ½}. Then 2x is a fractional set cover for E, so we can round it to get an integer set cover 𝒮 for E of cost Σ_{S∈𝒮} w_S ≤ α·(Σ_S 2w_S x_S). 𝒮 is the first-stage decision.
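
The decoupling step, identifying which elements stage I must cover, is simple enough to sketch. A minimal illustration, assuming a fractional solution x keyed by set name (the instance below is a made-up example):

```python
# Decoupling step of the rounding theorem: E is the set of elements whose
# fractional stage I coverage is already at least 1/2. Scaling x by 2 then
# gives a fractional set cover of E, which any alpha-approximation wrt the
# LP can round into the integer stage I cover.
def stage_one_elements(x, sets, universe, threshold=0.5):
    """E = {e : sum over sets S containing e of x[S] >= threshold}."""
    return {e for e in universe
            if sum(x[S] for S, elems in sets.items() if e in elems) >= threshold}

# Made-up fractional solution: element 1 is covered 0.6, element 2 is
# covered 0.6 + 0.1 = 0.7, element 3 only 0.1.
sets = {"A": {1, 2}, "B": {2, 3}}
x = {"A": 0.6, "B": 0.1}
E = stage_one_elements(x, sets, {1, 2, 3})
```

Here element 3 is left to the recourse stage, where its y-coverage must be at least ½ in every scenario containing it.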

Rounding (contd.)

[Figure: bipartite diagram of sets and elements, marking the sets in 𝒮 and the elements in E.]
Using 2y_A to cover the remaining elements of each scenario A and augment 𝒮, the expected cost is ≤ Σ_{S∈𝒮} w_S + 2α·Σ_{A⊆U} p_A (Σ_S W_S y_{A,S}) ≤ 2α·OPT.

Rounding (contd.)

An α-approximation algorithm for the deterministic problem gives a 2α-approximation guarantee for the stochastic problem. In the polynomial-scenario model, this gives simple polytime approximation algorithms for covering problems:
- 2 log n-approximation for SSC;
- 4-approximation for stochastic vertex cover;
- 4-approximation for stochastic multicut on trees.
Ravi & Sinha gave a log n-approximation algorithm for SSC and a 2-approximation algorithm for stochastic vertex cover in the polynomial-scenario model.


A Compact Convex Program

p_A: probability of scenario A ⊆ U; x_S: indicates if set S is picked in stage I.

Minimize   h(x) = Σ_S w_S x_S + Σ_{A⊆U} p_A f_A(x)   s.t. x_S ≥ 0 for each S   (SSC-P)
where   f_A(x) = min Σ_S W_S y_{A,S}   s.t. Σ_{S: e∈S} y_{A,S} ≥ 1 − Σ_{S: e∈S} x_S for each e ∈ A;  y_{A,S} ≥ 0 for each S.

Equivalent to the earlier LP. Each f_A(x) is convex, so h(x) is a convex function.

The General Strategy

1. Get a (1+ε)-optimal fractional first-stage solution x by solving the convex program.
2. Convert the fractional solution x to an integer solution: decouple the two stages near-optimally, and use an α-approximation algorithm for the deterministic problem to solve the subproblems.
Obtain a c·α-approximation algorithm for the stochastic integer problem.
Many applications: set cover, vertex cover, facility location, multicut on trees, …

Solving the Convex Program

Minimize h(x) subject to x ∈ P, with h(·) convex, via the ellipsoid method. Need a procedure that, at any point y: if y ∉ P, returns a violated inequality showing that y ∉ P. [Figure: the polytope P and a point y outside it.]

Solving the Convex Program (contd.)

Need a procedure that, at any point y: if y ∉ P, returns a violated inequality showing that y ∉ P; if y ∈ P, computes a subgradient of h(·) at y. Here d ∈ ℝ^m is a subgradient of h(·) at u if, ∀v, h(v) − h(u) ≥ d·(v − u).
Given such a procedure, the ellipsoid method runs in polytime and returns points x_1, x_2, …, x_k ∈ P such that min_{i=1..k} h(x_i) is close to OPT.
But: computing subgradients is hard, and evaluating h(·) is hard. [Figure: level sets of h and a subgradient cut at y.]

Solving the Convex Program (contd.)

Need a procedure that, at any point y: if y ∉ P, returns a violated inequality showing that y ∉ P; if y ∈ P, computes an approximate subgradient of h(·) at y. Here d′ ∈ ℝ^m is an ε-subgradient at u if, ∀v ∈ P, h(v) − h(u) ≥ d′·(v − u) − ε·h(u).

Given such a procedure, the ellipsoid method can compute a point x ∈ P such that h(x) ≤ OPT/(1 − ε) + ρ, without ever evaluating h(·)! And ε-subgradients can be computed by sampling. [Figure: an approximate subgradient cut at y.]
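
The sampling idea can be sketched as a Monte Carlo average: a subgradient of h(x) = w·x + E_A[f_A(x)] is w plus the expectation of per-scenario subgradients, so averaging over N samples gives an ε-subgradient with high probability for suitable N. In this sketch, `sample_scenario` and `recourse_subgrad` are hypothetical stand-ins for the black box and for a routine returning a subgradient of f_A at x.

```python
# Monte Carlo estimate of a subgradient of h(x) = w.x + E_A[f_A(x)]:
# return w + (1/N) * sum over N sampled scenarios A of g_A(x), where
# g_A(x) is a subgradient of the recourse cost f_A at x.
def sampled_subgradient(x, w, sample_scenario, recourse_subgrad, N):
    d = list(w)
    for _ in range(N):
        A = sample_scenario()
        g = recourse_subgrad(A, x)   # subgradient of f_A at x
        for i in range(len(d)):
            d[i] += g[i] / N         # running average of scenario terms
    return d
```

How large N must be for the ε-guarantee is exactly the technical content of the sampling lemma; this sketch only shows the estimator itself.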

Putting it all together

Get a solution x with h(x) close to OPT. Sample initially to detect whether OPT is large; this allows one to get a (1+ε)·OPT guarantee.
Theorem: (SSC-P) can be solved to within a factor of (1+ε) in polynomial time, with high probability.
Gives a (2 log n + ε)-approximation algorithm for the stochastic set cover problem.

A Solvable Class of Stochastic LPs

Minimize   h(x) = w·x + Σ_{A⊆U} p_A f_A(x)   s.t. x ∈ P ⊆ ℝ^m_{≥0},
where   f_A(x) = min w_A·y_A + c_A·r_A   s.t. D_A r_A + T_A y_A ≥ j_A − T_A x;   y_A ∈ ℝ^m, r_A ∈ ℝ^n, y_A, r_A ≥ 0.

Theorem: Can get a (1+ε)-optimal solution for this class of stochastic programs in polynomial time.
Includes covering problems (e.g., set cover, network design, multicut), facility location problems, and multicommodity flow.

Moral of the Story

Even though the stochastic LP relaxation has exponentially many variables and constraints, we can still obtain near-optimal fractional first-stage decisions.
Fractional first-stage decisions are sufficient to decouple the two stages near-optimally.
Many applications: set cover, vertex cover, facility location, multicommodity flow, multicut on trees, …
But we have to solve a convex program with many samples (not just λ)!

Sample Average Approximation

The Sample Average Approximation (SAA) method: sample N times from the scenario distribution initially, then solve the 2-stage problem estimating p_A by the frequency of occurrence of scenario A. How large should N be to ensure that an optimal solution to the sampled problem is a (1+ε)-optimal solution to the original problem?
Kleywegt, Shapiro & Homem-De-Mello (KSH01): bound N by the variance of a certain quantity, which need not be polynomially bounded even for our class of programs.
Swamy, Shmoys 05: show using ε-subgradients that for our class, N can be polynomially bounded.
Charikar, Chekuri & Pál 05: give another proof that for a class of 2-stage problems, N can be polynomially bounded.
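
The SAA recipe itself is short; the hard part is bounding N. A minimal sketch, where `sample_scenario` and `solve_two_stage` are hypothetical stand-ins for the black box and for any polynomial-scenario 2-stage solver:

```python
from collections import Counter

# SAA sketch: draw N samples from the black box, estimate each scenario
# probability p_A by its empirical frequency, and hand the explicit
# (scenario, probability) list to a polynomial-scenario solver.
def sample_average_approximation(sample_scenario, solve_two_stage, N):
    counts = Counter(sample_scenario() for _ in range(N))
    scenarios = sorted((A, c / N) for A, c in counts.items())  # empirical p_A
    return solve_two_stage(scenarios)
```

Scenarios must be hashable here (e.g., frozensets of clients); the results cited above say how large N must be for the sampled optimum to be (1+ε)-optimal for the true problem.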

Multi-stage Problems

Given: a distribution over inputs.
Stage I: make some advance decisions; hedge against uncertainty.
Uncertainty evolves in various stages; we learn new information in each stage.
Can take recourse actions in each stage, augmenting the earlier solution and paying a recourse cost.
A k-stage problem has k decision points. Choose stage I decisions to minimize expected total cost = (stage I cost) + E_all scenarios[cost of stages 2, …, k]. [Scenario tree: stage I branches into stage II and on to scenarios in stage k, with branch probabilities such as 0.5, 0.2, 0.4, 0.3.]

Multi-stage Problems (contd.)

Fix k = number of stages.
LP rounding (Swamy, Shmoys 05): the ellipsoid-based algorithm extends; the SAA method also works; black-box model, arbitrary costs. The rounding procedure of SS04 can be easily adapted, losing an O(k)-factor over the deterministic guarantee: O(k)-approx. for k-stage vertex cover, facility location, and multicut on trees; k·log n-approx. for k-stage set cover.
Gupta, Pál, Ravi & Sinha 05: boosted sampling extends, but with outcome-dependent proportional costs; 2k-approx. for k-stage Steiner tree (also Hayrapetyan, Swamy & Tardos); factors exponential in k for k-stage vertex cover and facility location.
Computing ε-subgradients is significantly harder and needs several new ideas.

Open Questions

Combinatorial algorithms in the black-box model and with general costs. What about strongly polynomial algorithms?
Incorporating risk into stochastic models.
Obtaining approximation factors independent of k for k-stage problems. The integrality gap for covering problems does not increase; Munagala has obtained a 2-approx. for k-stage VC.
Is there a larger class of doubly exponential LPs that one can solve with (more general) techniques?

Thank You.
