Hedging Uncertainty: Approximation Algorithms for Stochastic Optimization Problems

Digital Object Identifier (DOI) 10.1007/s10107-005-0673-5. Math. Program., Ser. A 108, 97–114 (2006)

R. Ravi · Amitabh Sinha

Hedging Uncertainty: Approximation Algorithms for Stochastic Optimization Problems

Received: August 28, 2003 / Accepted: September 24, 2005 / Published online: December 30, 2005. © Springer-Verlag 2005

Abstract. We study two-stage, finite-scenario stochastic versions of several combinatorial optimization problems, and provide nearly tight approximation algorithms for them. Our problems range from the graph-theoretic (shortest path, vertex cover, facility location) to the set-theoretic (set cover, bin packing), and contain representatives with different approximation ratios.

The approximation ratio of the stochastic variant of a typical problem is found to be of the same order of magnitude as its deterministic counterpart. Furthermore, we show that common techniques for designing approximation algorithms, such as LP rounding, the primal-dual method, and the greedy algorithm, can be adapted to obtain these results.

    1. Introduction

With the increasing success of optimization algorithms in process optimization, these methods are making inroads into earlier planning stages of large scale projects. The inherent difference between optimization at the planning stage and post-facto optimization is that in the former, data is not fully available. Yet costly decisions with wide-ranging implications need to be taken in the face of incomplete data. Nevertheless, quite often forecasts of future uncertainty are available that can be used in the planning model. Forecasts, by nature, are imprecise and provide at best a range of possible futures. The field of stochastic optimization is an attempt to model such situations. For a detailed introduction, please consult one of the recent texts on the topic [4, 25, 18].

In a parallel development, the field of approximation algorithms evolved to counter the prohibitive resource requirements for the exact solution of NP-hard combinatorial optimization problems. Informally, these algorithms run in polynomial time and deliver a guaranteed bound on the worst-case quality of the output solution over all instances. As the size of the models being solved increases in scale, this solution approach gains in importance. For a detailed exposition of this topic, the reader is referred to Vazirani [44] and Ausiello et al. [2].

R. Ravi: Tepper School of Business, Carnegie Mellon University, Pittsburgh PA 15213, USA. e-mail: ravi@cmu.edu
A. Sinha: Ross School of Business, University of Michigan, Ann Arbor MI 48109, USA. e-mail: amitabh@umich.edu

Mathematics Subject Classification (1991): 20E28, 20G40, 20C20

However, as approximation algorithms become more sophisticated in scope and technique, the refrain from real-world practitioners who have the need for such algorithms



is that the input data is seldom well-defined, thus diminishing the value of the solutions and guarantees provided by the algorithm. Conversely, while the field of stochastic optimization models the uncertainty in data fairly well, the running times of the exact algorithms developed in the stochastic optimization community often prove prohibitive. This paper combines the best of both worlds, by providing approximation algorithms for a stochastic version of several classical optimization problems.

    2. Background

    2.1. Two-stage model

Among the most popular models in stochastic optimization is the two-stage model with recourse. At the outset, some data may be known deterministically, whereas the uncertain future is characterized only by a probability distribution. The decisions made at this point are referred to as the first-stage decisions. Subsequently, the actual future is realized, and then there may be the opportunity to augment the first-stage solution in order to optimize for the realized scenario. This second stage of decision making is called the recourse stage. The goal is to optimize the first-stage decision variables so as to minimize the expected cost over both stages, where the expectation is over the probability distribution defining the second stage.
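As a toy illustration of this trade-off (not from the paper; all numbers and names below are hypothetical), consider buying capacity cheaply in the first stage versus buying it at an inflated price in the recourse stage, once demand is revealed:

```python
# Hypothetical two-stage example: buy capacity now at unit cost 1, or
# in the recourse stage at unit cost 3, after the demand scenario is known.
# Each scenario is (probability, demand).
SCENARIOS = [(0.5, 10), (0.3, 20), (0.2, 40)]
FIRST_STAGE_COST = 1.0
RECOURSE_COST = 3.0

def expected_total_cost(x0: float) -> float:
    """First-stage cost plus expected recourse cost over all scenarios."""
    recourse = sum(p * RECOURSE_COST * max(0.0, d - x0) for p, d in SCENARIOS)
    return FIRST_STAGE_COST * x0 + recourse

# The planner optimizes the first-stage variable x0 against the expectation;
# here we simply scan integer capacities.
best_x0 = min(range(0, 41), key=expected_total_cost)
```

In this instance it pays to pre-buy capacity 20: covering the rare high-demand scenario in advance costs more in expectation than paying the recourse premium when it occurs.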

    2.2. Mathematical formulation

Formally, a two-stage stochastic optimization problem with recourse is defined as follows. Vector $x_0$ is the set of decision variables which have to be fixed in the first stage, when only partial information is available. Later, full information is made available and we choose some second-stage (or recourse) variables $x_1$ to augment the first-stage solution $x_0$. Let $\omega$ represent the random vector which defines the constraint matrix $T_\omega$, cost vector $q_\omega$ and requirement vector $h_\omega$ when the full information is made available, and let $A$, $c$, and $b$ define the same for the first stage. Given a vector (or matrix) $a$, let $a^\top$ denote the transpose of $a$. Let $P$ denote additional constraints such as non-negativity or integrality which the components of $x_0$ and $x_1$ need to satisfy. The stochastic program can be written as:

$$\min\; c^\top x_0 + \mathbb{E}_\omega[Q(x_0, \omega)] \qquad \text{(IPS1)}$$
$$\text{s.t.}\quad A x_0 = b$$
$$x_0 \in P$$

where
$$Q(x_0, \omega) = \min\; q_\omega^\top x_1 \quad \text{s.t.}\quad T_\omega(x_0, x_1) = h_\omega, \;\; x_1 \in P.$$

Here $Q(x_0, \omega)$ represents the optimal cost of the second stage, conditioned on scenario $\omega = (q_\omega, T_\omega, h_\omega)$ having been realized and a first-stage setting of the variables $x_0$. The expectation is taken with respect to $\omega$.


    2.3. Stochastic optimization with finite scenarios

A popular restriction of the two-stage model is where the second stage is characterized by a finite set of $m$ scenarios. In scenario $k$, the constraint matrix, cost vector and requirement vector take on values $T^k$, $q^k$ and $h^k$ respectively, and scenario $k$ occurs with probability $p_k$. In this case, we can write IPS1 in extensive form as follows, where $x^k$ represents our choice for the variables $x_1$ if scenario $k$ materializes:

$$\min\; c^\top x_0 + \sum_{k=1}^{m} p_k (q^k)^\top x^k \qquad \text{(IPS2)}$$
$$\text{s.t.}\quad A x_0 = b$$
$$T^k(x_0, x^k) = h^k \qquad k = 1, 2, \ldots, m$$
$$(x_0, x^k) \in P \qquad k = 1, 2, \ldots, m$$

The interested reader may refer to any of the texts cited above for a more complete

description of models of stochastic optimization and their uses. Schultz, Stougie and Van der Vlerk [38] provide an excellent survey of two-stage stochastic integer programming, while Kong and Schaefer [28] recently provided approximation algorithms for a class of such problems. The relevance of the finite scenario model becomes more pronounced in light of the recent work on scenario reduction by Heitsch and Römisch [21].
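On very small instances, the extensive form can be solved directly by enumerating first-stage choices and, for each scenario, the cheapest recourse. The sketch below (a hypothetical toy instance of two-stage stochastic vertex cover; the costs, scenarios, and second-stage inflation factor are invented for illustration) makes the structure of IPS2 concrete:

```python
# Toy extensive-form instance: choose vertices in stage one at cost COST1,
# or in stage two at an inflated cost; each scenario reveals which edges
# must be covered. All data below is hypothetical.
from itertools import combinations, chain

VERTICES = ["a", "b", "c"]
COST1 = {"a": 1.0, "b": 0.9, "c": 0.8}   # first-stage vertex costs
INFLATION = 3.0                           # second-stage cost multiplier
# Each scenario: (probability, edges that must be covered if it occurs).
SCENARIOS = [(0.5, [("a", "b")]), (0.5, [("b", "c"), ("a", "c")])]

def subsets(items):
    """All subsets of a list, as tuples."""
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def cheapest_recourse(x0, edges):
    """Cheapest second-stage augmentation covering all edges, given x0."""
    best = float("inf")
    for extra in subsets([v for v in VERTICES if v not in x0]):
        cover = set(x0) | set(extra)
        if all(u in cover or v in cover for u, v in edges):
            best = min(best, sum(INFLATION * COST1[v] for v in extra))
    return best

def total_cost(x0):
    """Objective of IPS2: stage-one cost plus expected recourse cost."""
    stage1 = sum(COST1[v] for v in x0)
    stage2 = sum(p * cheapest_recourse(x0, edges) for p, edges in SCENARIOS)
    return stage1 + stage2

best = min(subsets(VERTICES), key=total_cost)
```

The enumeration is exponential in the number of vertices and scenarios, which is exactly why the paper develops polynomial-time approximation algorithms instead.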

    2.4. Approximation algorithms

The field of approximation algorithms provides one means of tackling the curse of dimensionality that afflicts several integer programming and combinatorial optimization problems, due to their NP-completeness. Since exact algorithms that run in time polynomial in the size of the input are not known (and unlikely to exist) for NP-complete problems, approximation algorithms trade off the precision of the solution for speed, by guaranteeing to run in polynomial time. At the same time, they provide guarantees in the form of approximation ratios. If $\mathcal{C}$ is the set of all possible inputs of a certain minimization problem and $OPT(I)$ and $A(I)$ respectively denote the value of an optimal solution and the solution output by algorithm $A$ for instance $I$ of that problem, then the approximation ratio $\rho(A)$ of algorithm $A$ is defined as follows:

$$\rho(A) = \max_{I \in \mathcal{C}} \frac{A(I)}{OPT(I)}$$
The interested reader is referred to two recent books [44, 2] for further pointers on approximation algorithms.
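The ratio $A(I)/OPT(I)$ can be computed directly for one small input. The sketch below (a hypothetical instance, with brute force standing in for $OPT$) runs the classical greedy set-cover heuristic mentioned in the abstract on an input where greedy is led astray:

```python
# Measure A(I)/OPT(I) for the greedy set-cover heuristic on one
# hypothetical instance I (unit set costs).
from itertools import combinations

UNIVERSE = {1, 2, 3, 4, 5, 6}
# The first set is a "trap": it is largest, but using it forces three picks.
SETS = [{1, 2, 4, 5}, {1, 2, 3}, {4, 5, 6}]

def greedy_cover(universe, sets):
    """Repeatedly pick the set covering the most uncovered elements."""
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

def optimal_cover_size(universe, sets):
    """Brute-force smallest number of sets covering the universe."""
    for r in range(1, len(sets) + 1):
        for combo in combinations(range(len(sets)), r):
            if set().union(*(sets[i] for i in combo)) >= universe:
                return r

ratio = len(greedy_cover(UNIVERSE, SETS)) / optimal_cover_size(UNIVERSE, SETS)
```

Here greedy uses three sets while two suffice, so $A(I)/OPT(I) = 3/2$ on this instance; the approximation ratio $\rho(A)$ is the maximum of this quantity over all inputs.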

    2.5. Literature review

While both the areas of approximation algorithms and stochastic optimization have been extremely active areas of optimization in the past decade (and longer), relatively little work exists on approximation algorithms for stochastic optimization. Among the earliest papers which provided an approximation algorithm for stochastic optimization was


the work on service provisioning for a telecommunication network by Dye, Stougie and Tomasgard [9]. By their very nature, scheduling algorithms often have to account for uncertainty in the sizes and arrival times of future jobs; some approximation algorithms which account for such uncertainty in various models of scheduling include the works of Möhring, Schulz and Uetz [33], Kleinberg, Tardos and Rabani [27], and Skutella and Uetz [43].

Karger and Minkoff [26] considered a Steiner tree problem with uncertainty in the terminal set, which falls under a slightly different model of stochastic optimization. Kong and Schaefer [28] provided approximation algorithms for a class of matching problems in a stochastic setting, and their work provided some of the initial framework on which our work was built. Immorlica et al. [22] also studied two-stage stochastic versions of some combinatorial optimization problems. Their model and results are distinct from ours, and they obtained their results independently of ours.

Subsequent to the first appearance of this work, other papers have provided further approximation algorithms for stochastic versions of combinatorial optimization problems. Gupta, Ravi and Sinha [16] considered the two-stage finite-scenario version of the rooted Steiner tree problem, and provided a constant factor approximation.

Shmoys and Swamy [39, 40] provide a sampling-based approach which uses interior point linear programming algorithms to provide approximation algorithms for two-stage as well as multi-stage stochastic versions of a certain class of combinatorial optimization problems. Gupta et al. [14, 15] also provide sampling-based approximation algorithms for two-stage and multi-stage stochastic problems, using cost-sharing functions to bound the cost of the solution. Both of these streams of work allow for exponentially many scenarios and have running times independent of the number of scenarios, which constitutes an improvement over our work.

Other recent work providing approximation algorithms for stochastic versions of combinatorial optimization problems includes that of Hayrapetyan, Swamy and Tardos [20] (information networks), Gupta and Pál [13] (Steiner trees), and Dhamdhere, Ravi and Singh [7] (minimum spanning trees).

    2.6. Our results

In this paper, we provide polynomial-time approximation algorithms for several classical combinatorial optimization problems, in a two-stage stochastic optimization setting wi...

