
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 88, No. 1, pp. 77-105, JANUARY 1996

Outcome-Based Algorithm for Optimizing over the Efficient Set of a Bicriteria Linear Programming Problem

H. P. BENSON¹ AND D. LEE²

¹Professor, College of Business Administration, University of Florida, Gainesville, Florida.
²Lecturer, College of Business Administration, Myung Ji University, Seoul, Korea.

Abstract. This article presents a finite, outcome-based algorithm for optimizing a lower semicontinuous function over the efficient set of a bicriteria linear programming problem. The algorithm searches the efficient faces of the outcome set of the bicriteria linear programming problem. It exploits the fact that the dimension of the outcome set of the bicriteria problem is at most two. As a result, in comparison to decision-based approaches, the number of efficient faces that need to be found is markedly reduced. Furthermore, the dimensions of the efficient faces found are always at most one. The algorithm can be implemented for a wide variety of lower semicontinuous objective functions.

Key Words. Multiple criteria decision making, efficient set, global optimization.

1. Introduction

The multiple objective mathematical programming problem involves the simultaneous maximization of two or more noncomparable criteria functions over a nonempty set. The concept of an efficient solution has played a useful role in the analysis and solution of this problem. In particular, many of the approaches for analyzing and solving this problem generate either all or at least some of the efficient solution set. In this way, inherent tradeoffs in the problem are revealed, and most-preferred solutions can be sought. Included among these types of approaches, for instance, are the vector maximization approach, interactive approaches, and several others; see, for instance, Cohon (Ref. 1), Evans (Ref. 2), Goicoechea, Hansen, and Duckstein (Ref. 3), Kuhn and Tucker (Ref. 4), Luc (Ref. 5), Ringuest (Ref. 6), Rosenthal (Ref. 7), Sawaragi, Nakayama, and Tanino (Ref. 8), Stadler (Ref. 9), Steuer (Ref. 10), Yu (Refs. 11-12), Zeleny (Ref. 13), and references therein.

Recently, researchers and practitioners have been increasingly interested in problems involving optimizations of functions over the efficient sets of multiple objective mathematical programs. Interest in these problems is motivated by many factors.

First, these problems arise in many applications. For instance, practical problems in production planning (Ref. 14), value theory (Ref. 15), portfolio management (Ref. 16), and a variety of other areas can be represented as optimizations over efficient sets.

Second, by optimizing a function over an efficient set, instead of using one of the standard algorithms to find and compare efficient solutions and their tradeoffs, the absolute importance, rather than the relative importance, of each criterion in the multiple objective mathematical program is taken into account. Thus, an efficient solution found via the optimization approach can be expected to be superior to those found by standard approaches (Ref. 17).

Third, the approach of optimizing over efficient sets is relatively easy for decision makers to work with. In particular, it relieves decision makers of the bulk of the burdensome work entailed by more standard approaches (Refs. 14, 18, 19).

Finally, the special case of optimization over an efficient set in which a criterion of a multiple objective mathematical program is minimized over the efficient set of the program has several distinct uses of its own. For instance, solutions for this special case aid decision makers in setting goals, ranking or eliminating objective functions, and comparing efficient solutions to one another (Refs. 20-24). In addition, these solutions are needed to ensure the effectiveness of several standard interactive algorithms for multiple objective mathematical programming, including STEM (Refs. 25-26), the Belenson-Kapur algorithm (Ref. 27), and the algorithm of Kok and Lootsma (Ref. 28).

Mathematically, problems of optimizing functions over efficient sets of multiple objective mathematical programming problems are difficult global optimization problems; i.e., they generally possess local optima that are not global. This is true regardless of the function to be optimized, because efficient sets of multiple objective mathematical programs are generally nonconvex sets. Even in the case of multiple objective linear programming, the efficient set is generally nonconvex. Furthermore, in problems of optimizing over efficient sets, as in most global optimization problems, the number of local optima that are not global can be very large (Refs. 29-32). Confounding the situation further is the fact that, since the feasible regions of these problems are efficient sets, they cannot be expressed in the traditional convex programming format as a system of functional inequalities.


The first studies of the nature and mathematical structure of problems of optimizing over efficient sets appeared in the 1980s and have continued to the present (see, e.g., Refs. 14, 33-34). Various algorithms for globally solving these problems have also appeared (Refs. 15-17, 21, 24, 34-40). Most of these algorithms are at least partially dependent upon difficult global optimization subroutines, cutting plane procedures, searches over nonconvex sets, solutions of nonconvex systems of inequalities, or branch-and-bound searches. Therefore, these algorithms generally entail heavy computational burdens. As a result, some heuristic methods for finding approximate global optimal solutions have also been proposed (Refs. 20 and 41-43).

When a multiple objective mathematical program has exactly two objective functions, it is often called a bicriteria mathematical programming problem. As in many other problems from mathematics and operations research, the theory, algorithms, and applications of multiple objective mathematical programming simplify considerably in the case where only two objective functions are present; see for example Aksoy (Ref. 44), Aneja and Nair (Ref. 45), Benson (Ref. 46), Benson and Morin (Ref. 47), Cohon, Church, and Sheer (Ref. 48), Gearhart (Ref. 49), Geoffrion (Ref. 50), Kiziltan and Yucaoglu (Ref. 51), Klein, Moskowitz, and Ravindran (Ref. 52), Shin and Allen (Ref. 53), Shin and Lee (Ref. 54), Steuer (Ref. 10), Walker (Ref. 55), and references therein. Yet, surprisingly, to our knowledge, no algorithms have yet been proposed for optimizing over the efficient sets of bicriteria mathematical programs. The only study of such optimizations of which we are aware is limited to showing how to detect and solve certain special cases that can be solved easily by nonglobal optimization methods (Ref. 37).

In this article, we present an outcome-based algorithm for optimizing a lower semicontinuous function over the efficient set of a bicriteria linear programming problem. The algorithm exploits the fact that the dimension of the outcome set of the underlying bicriteria linear program is at most two. The result is an algorithm with the following attractive characteristics:

(a) The algorithm can be implemented for any lower semicontinuous objective function f that can be minimized over a nonempty, compact polyhedron by practical optimization subroutines.

(b) The computational efficiency of the algorithm is enhanced significantly by the fact that, to find an optimal solution, it uses an indirect search of the set of efficient decisions. This search, unlike direct searches of the efficient decision set, is organized according to the set of efficient outcomes, rather than the set of efficient decisions. Generally, this outcome-based approach significantly reduces the number of efficient faces that need to be found. Furthermore, the efficient faces found always have dimensions of at most one (Refs. 45, 47, 56-63).

(c) Further computational efficiencies are obtained from the fact that, in each iteration, the algorithm optimizes f over a set of efficient decisions that corresponds to an entire face of the set of efficient outcomes, thereby typically eliminating large subsets of the set of efficient decisions from further consideration.

(d) A fathoming test is included in the algorithm. This test, when it is satisfied, terminates the algorithm without requiring any further generation of faces in the set of efficient outcomes, no matter how many such faces remain ungenerated.

(e) The algorithm is always finite, since the number of efficient faces in the outcome space is always finite.

(f) If the subroutine used for optimizing f over a polyhedron is exact, then the algorithm will terminate with an exact optimal solution.

The organization of this article is as follows. In Section 2, theoretical prerequisites for the algorithm are given. Included here are some important geometrical results on outcome sets in multiple objective programming as well as specialized results needed to develop the outcome-based algorithm. In Section 3, the algorithm is presented and a geometrically-motivated proof of its convergence is given. Section 4 illustrates the computational benefits to be expected from the algorithm by giving two examples. Some conclusions are given in the last section.

2. Theoretical Prerequisites

Let X ⊆ R^n be a nonempty, compact polyhedron. Assume without loss of generality that

X = {x ∈ R^n | Ax = a, x ≥ 0},

where A is an m × n matrix and a ∈ R^m. Let C denote a 2 × n matrix whose ith row equals c_i ∈ R^n, i = 1, 2. Then, the bicriteria linear programming problem may be written as

(BX) Vmax Cx,
     s.t. x ∈ X.

The concept of an efficient solution or decision has played a prominent role in analyzing and solving problem (BX), where an efficient solution is defined as follows.

Definition 2.1. A point x^0 is an efficient solution of problem (BX) when x^0 ∈ X and, whenever Cx ≥ Cx^0 for some x ∈ X, then Cx = Cx^0.

An efficient solution is also called a nondominated or Pareto-optimal solution. Let X_E denote the set of all efficient solutions for problem (BX).


The central problem of interest in this article is the problem of minimizing a lower semicontinuous function f: R^n → R over the efficient set X_E of problem (BX). This problem is given by

(P) min f(x),
    s.t. x ∈ X_E.

Let v denote the optimal value of problem (P). Notice that, since X is compact, X_E will also be compact (Ref. 12). This implies that problem (P) possesses at least one global optimal solution (e.g., Ref. 64). In fact, when f is linear or quasiconcave, at least one global optimal solution will coincide with an extreme point of X (Refs. 14, 34).

Let

Y = {Cx | x ∈ X}.

From Rockafellar (Ref. 65), Y is a nonempty, compact polyhedron. We will refer to Y as the outcome set or image of X under C (Ref. 65). Notice that, in this context, the matrix C represents a linear mapping of the feasible decision set X of problem (BX) onto the outcome set Y. Furthermore, it is easy to see that the image CX_E of X_E under C, denoted Y_E, is precisely the set of efficient points of the bicriteria linear programming problem

(BY) Vmax I_2 y,
     s.t. y ∈ Y,

where I_2 denotes the 2 × 2 identity matrix. The set Y_E is also called the set of admissible points of Y; see for instance Refs. 66-69.
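To make the mapping between problems (BX) and (BY) concrete, the following minimal sketch (in Python with scipy.optimize.linprog; the small instance is ours, not from this article) locates efficient extreme points of Y by maximizing strictly positive weightings of the two outcomes over X, a standard scalarization fact for linear problems that is distinct from the parametrization developed below.

```python
# Toy bicriteria instance (ours): X = {x >= 0 : x1 + 2*x2 + x3 = 8,
# 2*x1 + x2 + x4 = 8}, with outcomes y = Cx = (x1, x2).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
a = np.array([8.0, 8.0])
C = np.array([[1.0, 0.0, 0.0, 0.0],   # c1
              [0.0, 1.0, 0.0, 0.0]])  # c2

for lam in (0.1, 0.3, 0.5, 0.7, 0.9):
    w = lam * C[0] + (1.0 - lam) * C[1]   # strictly positive weighting
    res = linprog(-w, A_eq=A, b_eq=a, bounds=[(0, None)] * 4)  # linprog minimizes
    y = C @ res.x                          # an efficient (admissible) point of Y
    print(f"lambda = {lam:.1f}: y = {np.round(y, 3)}")
# The sweep visits the efficient extreme points (4, 0), (8/3, 8/3), (0, 4) of Y.
```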

To solve problem (P), the algorithm to be presented in this article will use an indirect search of X_E organized according to the faces of Y_E, rather than according to points or faces of X_E. As we shall see, in this way, generally far fewer efficient faces need to be generated than if X_E were directly searched. Furthermore, the algorithm will need to generate only faces of dimension zero and one in Y_E, i.e., efficient extreme points and edges of Y. In contrast to this, direct face searches in X_E could require searching faces and facets of much higher dimension. In several articles, Dauer, Benson, and other authors (Refs. 34-36, 43, 59-63) have previously demonstrated that focusing on the outcome sets, rather than the decision sets, in multiple objective mathematical programming can have major computational benefits of this type.

To motivate and later verify the geometrical workings of the algorithm, some general geometric results concerning the outcome sets of certain multiple objective convex programming problems must be reviewed. To help accomplish this, assume that p ≥ 2; let W ⊆ R^n be a nonempty convex set; let D be a p × n matrix; and let

Z = {Dw | w ∈ W}

be the image of W under D. In addition, let W_E denote the set of efficient points of the multiple objective convex programming problem

(M) Vmax Dw,
    s.t. w ∈ W;

and let Z_E denote the set of efficient points of Z. Notice that Z is a nonempty convex set in R^p (Ref. 65). Also notice that W_E and Z_E may be empty (e.g., if W is not compact).

Recall that a face of a convex set V is a convex subset F of V such that any closed line segment in V with at least one relative interior point in F must have both of its endpoints in F. The zero-dimensional faces of V are called the extreme points of V. The set V_ex of extreme points of V may be empty. However, the set of faces of V is always nonempty, since the empty set and V itself are always faces of V.

The following result will be used frequently in the sequel. It confirms the fact that Z_E is equal to the image of W_E under D and can be proven easily from the definitions.

Proposition 2.1.

(a) For any z^0 ∈ Z_E, if w^0 ∈ W satisfies Dw^0 = z^0, then w^0 ∈ W_E.
(b) For any w^0 ∈ W_E, if z^0 = Dw^0, then z^0 ∈ Z_E.

Benson (Ref. 63) has shown that W_E and Z_E consist of unions of efficient faces of W and Z, respectively. However, the mapping D in problem (M) does not necessarily map an efficient face of W onto an efficient face of Z (Ref. 63). For instance, Dauer (Ref. 59) and Benson (Ref. 63) have given several examples in which large numbers of efficient zero-dimensional and one-dimensional faces (i.e., extreme points and edges) of W are mapped by D into strict subsets of the relative interiors of efficient faces of Z. This phenomenon can occur even in the special case of multiple objective linear programming. Fortunately, however, the inverse mapping of D maps efficient faces of Z onto efficient faces of W, as shown by the following result.


Theorem 2.1. See Ref. 63. Let Z_F be an arbitrary efficient face of Z. Then, W_F = {w ∈ W | Dw ∈ Z_F} is an efficient face of W, and dim W_F ≥ dim Z_F.

Theorem 2.1 states in part that, given any efficient face Z_F in the outcome set Z, the set of all decisions in W that map into Z_F under D forms an efficient face of W. However, from the comments preceding the theorem, the reverse is not true; i.e., not every efficient face of W is mapped by D onto an efficient face of Z. In particular, while some efficient faces of W are mapped by D onto efficient faces of Z, there may easily be much larger numbers of efficient faces of W, strict subsets of faces of these types, that map into nonfacial convex subsets of Z (Ref. 63).

Theorem 2.1 also states that, for each efficient face Z_F in the outcome set Z, the corresponding preimage under D is an efficient face W_F of W of equal or greater dimension. From Ref. 63, the dimension of W_F can in fact often be much greater than the dimension of Z_F. Furthermore, many efficient faces that are strict subfaces of faces of the type W_F can exist that are mapped into nonfacial subsets of Z_F = D W_F, but also have dimensions far exceeding the dimension of Z_F (Ref. 63).

Taken altogether, Theorem 2.1 and the discussions accompanying it imply that, to solve problem (P) efficiently, it would be preferable to organize the search for an optimal solution according to the faces of Y_E rather than X_E. In particular, a search organized in this way would generally need to generate significantly fewer efficient faces than if the faces of X_E were searched directly. Furthermore, the dimensions of the faces in Y_E can be expected to be significantly smaller than those in X_E. These observations motivated the development of the outcome-based algorithm for problem (P) to be presented in this article.

In the remainder of this section, we present some more particular results that will be needed to develop the outcome-based algorithm. Toward this end, for each i = 1, 2, let

M_i = max ⟨c_i, x⟩, (1a)
      s.t. x ∈ X, (1b)

and let m2 equal the optimal value of the linear program

(Q) max ⟨c2, x⟩,
    s.t. ⟨c1, x⟩ = M1,
         x ∈ X.

In addition, let

I = {b ∈ R | m2 ≤ b ≤ M2},

and for each b ∈ I, consider the linear programming problem

(Pb) max ⟨c1, x⟩,
     s.t. ⟨c2, x⟩ ≥ b,
          x ∈ X.

The following result follows immediately from Benson (Ref. 46).

Theorem 2.2. A point x^0 ∈ R^n satisfies x^0 ∈ X_E if and only if x^0 is an optimal solution to problem (Pb) for some b ∈ I.
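As a numerical illustration of Theorem 2.2 (a minimal sketch; the instance is ours and scipy.optimize.linprog is assumed as the LP subroutine), the code below solves problem (Pb) for several b ∈ I; each returned solution is an efficient solution of (BX), and its outcome traces the efficient frontier of Y.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (ours): X = {x >= 0 : x1 + 2*x2 + x3 = 8, 2*x1 + x2 + x4 = 8},
# with objectives <c1, x> = x1 and <c2, x> = x2; here I = [m2, M2] = [0, 4].
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
a = np.array([8.0, 8.0])
c1 = np.array([1.0, 0.0, 0.0, 0.0])
c2 = np.array([0.0, 1.0, 0.0, 0.0])

def solve_Pb(b):
    """(Pb): max <c1,x> s.t. <c2,x> >= b, Ax = a, x >= 0; returns (w(b), x)."""
    res = linprog(-c1,                      # linprog minimizes, so negate c1
                  A_ub=-c2.reshape(1, -1),  # -<c2,x> <= -b  <=>  <c2,x> >= b
                  b_ub=[-b],
                  A_eq=A, b_eq=a, bounds=[(0, None)] * 4)
    return -res.fun, res.x

for b in (0.0, 2.0, 8.0 / 3.0, 3.5):
    wb, x = solve_Pb(b)
    print(f"b = {b:.3f}: w(b) = {wb:.3f}, efficient outcome y = ({x[0]:.3f}, {x[1]:.3f})")
```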

The algorithm will use problem (Pb) and the dual linear program to problem (Pb) to systematically generate faces of Y_E. The next result will be fundamental to this process. Although this result is well known, we give a short proof that will be useful later on. For each b ∈ I, let w(b) denote the optimal value of problem (Pb).

Theorem 2.3. The function w is continuous, concave, and piecewise linear on I.

Proof. By duality theory of linear programming (Ref. 70), for each b ∈ I, w(b) equals the optimal value of the dual linear program to problem (Pb). For each b ∈ I, this dual linear program is given by

(Qb) min −bu + ⟨a, q⟩,
     s.t. −c2 u + A^T q ≥ c1,
          u ≥ 0,

where u ∈ R and q ∈ R^m are the dual variables. For each b ∈ I, if we let Z denote the feasible region of problem (Qb), then, since w is finite on I, this implies that, for each b ∈ I,

w(b) = min{−bu + ⟨a, q⟩ | (u, q) ∈ Z_ex}.

Since Z_ex is a finite set, it follows that, on I, w is given by the minimum of a finite number of linear functions. From Theorem 8.7 in Ref. 70, this implies that w is concave and piecewise linear on I. From elementary functional analysis, this also implies that w is continuous on I. □
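The properties established in Theorem 2.3 can be checked numerically. The sketch below (same toy instance as above, ours) computes M1 and M2 as in (1), m2 from problem (Q), then samples w(b) on a uniform grid over I and verifies that w is nonincreasing with nonpositive second differences, as a concave piecewise linear function must be.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
a = np.array([8.0, 8.0])
c1 = np.array([1.0, 0.0, 0.0, 0.0])
c2 = np.array([0.0, 1.0, 0.0, 0.0])
bnds = [(0, None)] * 4

# M1, M2 as in (1); m2 as the optimal value of (Q).
M1 = -linprog(-c1, A_eq=A, b_eq=a, bounds=bnds).fun
M2 = -linprog(-c2, A_eq=A, b_eq=a, bounds=bnds).fun
m2 = -linprog(-c2, A_eq=np.vstack([A, c1]), b_eq=np.append(a, M1), bounds=bnds).fun

# Sample w(b) = optimal value of (Pb) on a uniform grid over I = [m2, M2].
grid = np.linspace(m2, M2, 21)
w = np.array([-linprog(-c1, A_ub=-c2.reshape(1, -1), b_ub=[-b],
                       A_eq=A, b_eq=a, bounds=bnds).fun for b in grid])

print(f"I = [{m2:.3f}, {M2:.3f}]")
print("w nonincreasing:", bool(np.all(np.diff(w) <= 1e-9)))
print("max second difference:", float(np.diff(w, 2).max()))  # <= 0 up to tolerance
```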

In some cases of problem (BX), X_E and X coincide; i.e., every feasible solution for problem (BX) is an efficient solution. Notice that X_E = X if and only if Y_E = Y. Problems (BX) in which X_E = X are called completely efficient. Although the issue of how common complete efficiency is has yet to be addressed, various tests have been formulated for detecting complete efficiency. One of the more convenient tests is contained in the following theorem. In this theorem, e denotes the vector in R^2 whose entries are each equal to one.

Theorem 2.4. See Ref. 71. Problem (BX) is completely efficient if and only if t = 0, where t is the optimal value of the linear program

(T) min ⟨a, q⟩,
    s.t. −C^T u + A^T q − z = C^T e,
         A^T q − z ≥ 0,
         u, z ≥ 0.

Since X is nonempty and compact, the linear program (T) in Theorem 2.4 always has an optimal solution, and t is always nonnegative and finite (Ref. 71). Notice that, when problem (BX) is completely efficient, problem (P) is identical to the problem

(PR) min f(x),
     s.t. x ∈ X.

3. Outcome-Based Algorithm

3.1. Statement of Algorithm. The outcome set

Y = {Cx | x ∈ X}

for problem (BX) is a nonempty, compact polyhedron of dimension two or less. Therefore, provided that problem (BX) is not completely efficient, each point in Y_E must lie on the relative boundary of Y (Ref. 63). From Refs. 10 and 65, since Y is polyhedral, this implies that Y_E is connected and, when problem (BX) is not completely efficient, that Y_E consists either of a single point in Y_ex, a single edge of Y, or a union of a finite number of edges of Y.

The general geometric idea of the outcome-based algorithm is to systematically generate each face of Y_E until an optimal solution for problem (P) has been found. From the paragraph above, we know that, barring complete efficiency, each face of Y_E is either a point or a line segment. As each face of Y_E is identified, the objective function f of problem (P) is minimized over the corresponding efficient face F in the decision set. This yields a single efficient point x^F in F. As the search proceeds and the points x^F are found, their objective function values f(x^F) in problem (P) are calculated. At each point in the search, a record of the smallest of these values f(x^F) and of the solution x^C (the incumbent solution) achieving this value is kept. After either all of the faces of Y_E have been identified, or it can be shown that no remaining faces in Y_E need to be identified, the algorithm indicates that the current incumbent solution is an optimal solution for problem (P) and stops. Since Y_E consists of a finite number of faces, the algorithm will always be finite.

The algorithm may be stated as follows.

Outcome-Based Algorithm.

Initialization Step. See Steps 1 through 5 below.

Step 1. Compute the optimal objective function value t of the linear program (T), and find any optimal solution x^R for problem (PR). If t = 0 or x^R ∈ X_E, stop: x^R is an optimal solution for problem (P). Otherwise, let LB = f(x^R) and continue.

Step 2. Use (1) to calculate M1 and M2, and find an optimal solution x^C to the linear program (Q). Set m2 = ⟨c2, x^C⟩ and UB = f(x^C).

Step 3. If LB ≥ UB, stop: x^C is an optimal solution for problem (P). Otherwise, continue.

Step 4. If m2 ≥ M2, go to Step 5. Otherwise, let b1 = m2 and a1 = M1, set k = 1, and go to iteration k.

Step 5. Find any optimal solution x^1 to the problem

(P1) min f(x),
     s.t. ⟨c_i, x⟩ = M_i, i = 1, 2,
          x ∈ X,

and stop: x^1 is an optimal solution for problem (P).

Iteration k, k ≥ 1. See Steps k1 through k6 below.

Step k1. Set b = b_k and find any optimal solution (u_k, q^k) to the linear program

(Pb,a) max u,
       s.t. −bu + ⟨a, q⟩ = a_k,
            −c2 u + A^T q ≥ c1,
            u ≥ 0.

Step k2. Find any optimal solution x^k to the problem

(Fk) min f(x),
     s.t. ⟨c1, x⟩ + u_k ⟨c2, x⟩ = ⟨a, q^k⟩,
          x ∈ X.

If f(x^k) ≥ UB, go to Step k4. Otherwise, go to Step k3.

Step k3. Let x^C = x^k and UB = f(x^C). If LB ≥ UB, stop: x^C is an optimal solution for problem (P). Otherwise, continue.

Step k4. Find any optimal solution x̄^{k+1} to the linear program

max ⟨c2, x⟩,
s.t. ⟨c1, x⟩ + u_k ⟨c2, x⟩ = ⟨a, q^k⟩,
     x ∈ X,

and set

b_{k+1} = ⟨c2, x̄^{k+1}⟩,  a_{k+1} = ⟨c1, x̄^{k+1}⟩.

If b_{k+1} ≥ M2, stop: x^C is an optimal solution for problem (P). Otherwise, continue.

Step k5. If ⟨c2, x^R⟩ ≥ b_{k+1}, set k = k + 1 and go to iteration k. Otherwise, with b = b_{k+1}, find any optimal solution x^R to the problem

(PRb) min f(x),
      s.t. ⟨c2, x⟩ ≥ b,
           x ∈ X,

and continue.

Step k6. Set LB = f(x^R). If LB ≥ UB, stop: x^C is an optimal solution for problem (P). Otherwise, set k = k + 1 and go to iteration k.

In Step 1 of the algorithm, the point x^R must be tested for membership in X_E. This can be done by solving a single linear program. In particular, it is easy to see that x^R ∈ X_E if and only if x^R is an optimal solution to the linear program (Pb) at b = ⟨c2, x^R⟩ ∈ I.
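A minimal sketch of this membership test follows (toy instance ours): a point x is accepted as efficient exactly when ⟨c1, x⟩ attains w(⟨c2, x⟩) up to a solver tolerance.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
a = np.array([8.0, 8.0])
c1 = np.array([1.0, 0.0, 0.0, 0.0])
c2 = np.array([0.0, 1.0, 0.0, 0.0])

def is_efficient(x, tol=1e-8):
    """x in X_E iff x is optimal in (Pb) at b = <c2, x>."""
    b = c2 @ x
    res = linprog(-c1, A_ub=-c2.reshape(1, -1), b_ub=[-b],
                  A_eq=A, b_eq=a, bounds=[(0, None)] * 4)
    return c1 @ x >= -res.fun - tol          # does <c1, x> attain w(b)?

print(is_efficient(np.array([4.0, 0.0, 4.0, 0.0])))   # vertex (4, 0): True
print(is_efficient(np.array([0.0, 0.0, 8.0, 8.0])))   # origin: False
```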

Notice that, if t = 0 or x^R ∈ X_E in Step 1 of the algorithm, the algorithm terminates. By Theorem 2.4, t = 0 if and only if problem (BX) is completely efficient. Therefore, in both of these cases, the optimal solution x^R computed for the relaxed problem (PR) satisfies x^R ∈ X_E; the algorithm returns the optimal solution x^R to problem (P) and terminates. Otherwise, f(x^R) is a lower bound for the optimal value v of problem (P), and the face search procedure must be invoked to solve problem (P).

Although Step 1 checks for complete efficiency of problem (BX), there are certain other special cases of problem (P) that should be checked by the user before the algorithm is invoked. When one of these cases arises, problem (P) can be solved by special methods that are more efficient than the algorithm. For a discussion of some of these special cases, the reader may consult Ref. 37.

In Steps 2 through 4, the interval

I = {b ∈ R | m2 ≤ b ≤ M2}

and an initial incumbent solution x^C are calculated. Step 5 is executed only in the special case where m2 = M2. In this case, from Theorems 2.1 and 2.2, Y_E consists of the single point y^1 given by

y^1 = (M1, M2)^T,

and the optimal solution to problem (P1) found in Step 5 minimizes f over X_E.

The iterative steps k ≥ 1 of the algorithm are executed when Y_E consists of one or more edges of Y. At the beginning of a typical iteration k, the incumbent solution minimizes f over all faces F_X of X_E for which x ∈ F_X implies that m2 ≤ ⟨c2, x⟩ ≤ b_k, and a_k equals the optimal value w(b_k) of the linear program (Pb) at b = b_k. In Step k1, as we shall see later, the values of u_k and q^k are calculated in such a way that

F_Y^k = {y ∈ Y | y_1 + u_k y_2 = ⟨a, q^k⟩} (2)

describes the face of Y_E given by the line segment connecting the points [w(b_k), b_k], [w(b_{k+1}), b_{k+1}] ∈ R^2, where b_{k+1} is defined as in Step k4. From Theorem 2.1, this implies that the point x^k calculated in Step k2 minimizes f over F_X^k, where F_X^k denotes the face of X_E given by

F_X^k = {x ∈ X | Cx ∈ F_Y^k}.

For each k, Step k4 finds the value b = b_{k+1} > b_k of b such that [w(b), b] is an endpoint of the efficient edge (2) of Y_E. If b_{k+1} ≥ M2, then all of Y_E has been identified and the search terminates. Otherwise, if possible, the lower bound LB for v is updated in Steps k5 and k6 and the search continues. Notice that, in any step of the algorithm, when the fathoming criterion LB ≥ UB is tested and found to hold, the search may terminate even though one or more faces of Y_E may as yet remain unidentified.
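Putting Section 3.1 together, the following self-contained sketch (ours) implements the algorithm for a linear objective f(x) = ⟨d, x⟩ on a small instance, so that every subproblem is a linear program. It omits test (T) and instead applies the Step 1 membership test directly, and it assumes m2 < M2, so that Step 5 never triggers.

```python
# A sketch of the outcome-based algorithm for linear f(x) = <d, x> (data ours).
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
a = np.array([8.0, 8.0])
c1 = np.array([1.0, 0.0, 0.0, 0.0])
c2 = np.array([0.0, 1.0, 0.0, 0.0])
d = np.array([0.0, 0.0, -1.0, 0.0])         # f(x) = <d, x> = -x3
n, m = 4, 2
bnds = [(0, None)] * n
EPS = 1e-8

def lp_max(c, extra_A=None, extra_b=None, ineq_A=None, ineq_b=None):
    """max <c,x> over X, plus optional extra equality/inequality rows."""
    A_eq = A if extra_A is None else np.vstack([A, extra_A])
    b_eq = a if extra_A is None else np.append(a, extra_b)
    res = linprog(-c, A_ub=ineq_A, b_ub=ineq_b, A_eq=A_eq, b_eq=b_eq, bounds=bnds)
    return -res.fun, res.x

# Step 1: solve (PR), then test x^R for membership in X_E via (Pb).
fR, xR = lp_max(-d)                          # max <-d,x> = -min f(x)
wb, _ = lp_max(c1, ineq_A=-c2.reshape(1, -1), ineq_b=[-(c2 @ xR)])
if c1 @ xR >= wb - EPS:
    print("x^R is already efficient:", xR)
    raise SystemExit
LB = -fR                                     # LB = f(x^R)

# Steps 2-4: M1, M2, problem (Q), incumbent x^C, and b1, a1.
M1, _ = lp_max(c1)
M2, _ = lp_max(c2)
m2, xC = lp_max(c2, extra_A=c1, extra_b=M1)
UB = d @ xC
bk, ak = m2, M1                              # assumes m2 < M2 (Step 5 not needed)
k = 0
while LB < UB - EPS and bk < M2 - EPS:
    k += 1
    # Step k1: (Pb,a) in the dual variables z = (u, q).
    obj = np.zeros(1 + m); obj[0] = -1.0     # maximize u
    res = linprog(obj,
                  A_ub=np.hstack([c2.reshape(-1, 1), -A.T]), b_ub=-c1,
                  A_eq=np.concatenate(([-bk], a)).reshape(1, -1), b_eq=[ak],
                  bounds=[(0, None)] + [(None, None)] * m)
    uk, qk = res.x[0], res.x[1:]
    # Step k2: minimize f over the face of X_E mapping onto edge (2).
    fk, xk = lp_max(-d, extra_A=c1 + uk * c2, extra_b=a @ qk)
    if -fk < UB - EPS:                       # Step k3: new incumbent
        UB, xC = -fk, xk
    # Step k4: far endpoint of the current efficient edge.
    bk1, xbar = lp_max(c2, extra_A=c1 + uk * c2, extra_b=a @ qk)
    print(f"iteration {k}: u_k = {uk:.3f}, next b = {bk1:.3f}, UB = {UB:.3f}")
    if bk1 >= M2 - EPS:
        break                                # all of Y_E identified
    # Steps k5-k6: try to raise the lower bound via (PRb).
    if c2 @ xR < bk1 - EPS:
        fRb, xR = lp_max(-d, ineq_A=-c2.reshape(1, -1), ineq_b=[-bk1])
        LB = -fRb
    bk, ak = bk1, c1 @ xbar

print("optimal value:", round(UB, 6), " solution:", np.round(xC, 4))
```

On this instance the run reports u_1 = 0.5 for the first edge, raises LB to −8/3 via (PRb), and stops with v = −4 at the efficient vertex (4, 0); the second efficient edge is fathomed without being generated, illustrating the LB ≥ UB test.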


3.2. Convergence. To understand the geometry and to show the convergence of the outcome-based algorithm, we must prove some preliminary results. These results are given by the following lemmas.

Lemma 3.1. In the outcome-based algorithm, for each k ≥ 1, x̄^{k+1} is an optimal solution to the linear program (Pb) at b = b_{k+1}. Furthermore, for each k ≥ 1 and for each b̄ ∈ {b ∈ R | b_k ≤ b ≤ b_{k+1}}, (u_k, q^k) is an optimal solution to the dual linear program (Qb) to problem (Pb) at b = b̄.

Proof. Recall from the proof of Theorem 2.3 that, for each b ∈ I, the dual linear program to problem (Pb) is given by

(Qb) min −bu + ⟨a, q⟩,
     s.t. −c2 u + A^T q ≥ c1,
          u ≥ 0,

and that the optimal value of problem (Qb) equals w(b). Assume that k ≥ 1. Then, since x̄^{k+1} is an optimal solution to the problem solved in Step k4 of the algorithm, it follows that

⟨c2, x̄^{k+1}⟩ = b_{k+1}, (3)
⟨c1, x̄^{k+1}⟩ + u_k ⟨c2, x̄^{k+1}⟩ = ⟨a, q^k⟩, (4)
x̄^{k+1} ∈ X. (5)

From (3) and (5), x̄^{k+1} is a feasible solution to problem (Pb) at b = b_{k+1}. From (3) and (4), it follows that

⟨c1, x̄^{k+1}⟩ = −u_k b_{k+1} + ⟨a, q^k⟩. (6)

From Step k1 and the definition of problem (Qb), (u_k, q^k) is a feasible solution to problem (Qb) at b = b_{k+1}. From (6), since x̄^{k+1} is a feasible solution to problem (Pb) at b = b_{k+1}, by duality theory of linear programming (Ref. 70), this implies that x̄^{k+1} and (u_k, q^k) are optimal solutions to problems (Pb) and (Qb) at b = b_{k+1}, respectively.

Notice from the definitions of M1 and m2, and from Step 4 of the algorithm, that a_1 = w(b_1). Furthermore, since x̄^{k+1} is an optimal solution to problem (Pb) at b = b_{k+1}, Step k4 implies that a_{k+1} = w(b_{k+1}). It follows that b = b_k and a_k = w(b_k) in the linear program (Pb,a) in Step k1. Since (u_k, q^k) is a feasible solution to this linear program with these values, this implies that (u_k, q^k) is a feasible solution to problem (Qb) at b = b_k which satisfies

−b_k u_k + ⟨a, q^k⟩ = w(b_k).


But from the proof of Theorem 2.3, w(b_k) is the optimal value of problem (Qb) at b = b_k. Therefore, (u_k, q^k) is an optimal solution to this problem.

Assume now that

b̄ ∈ {b ∈ R | b_k ≤ b ≤ b_{k+1}}.

Then, for some γ ∈ R satisfying 0 ≤ γ ≤ 1,

b̄ = γ b_k + (1 − γ) b_{k+1}.

From Theorem 2.3, w is a concave function on I. Therefore, it follows that

w(b̄) ≥ γ w(b_k) + (1 − γ) w(b_{k+1}). (7)

Since (u_k, q^k) is an optimal solution to problem (Qb) at both b = b_k and b = b_{k+1}, and since w(b) is the optimal value of problem (Qb) for b = b_k ∈ I and for b = b_{k+1} ∈ I, (7) implies that

w(b̄) ≥ γ(−b_k u_k + ⟨a, q^k⟩) + (1 − γ)(−b_{k+1} u_k + ⟨a, q^k⟩)
     = [−γ b_k − (1 − γ) b_{k+1}] u_k + ⟨a, q^k⟩
     = −b̄ u_k + ⟨a, q^k⟩.

If we let Z denote the feasible region of problem (Qb), then this implies that the optimal value of problem (Qb) at b = b̄ is bounded below by −b̄ u_k + ⟨a, q^k⟩. Since (u_k, q^k) ∈ Z, this implies that

w(b̄) = −b̄ u_k + ⟨a, q^k⟩,

and that (u_k, q^k) is an optimal solution for problem (Qb) at b = b̄. □

Lemma 3.2. For each k ≥ 1, there exists an ε_k > 0 such that the vector (u_k, q^k) computed in Step k1 of the outcome-based algorithm satisfies

w(b̄) = −b̄ u_k + ⟨a, q^k⟩,

for all b̄ such that b_k ≤ b̄ ≤ b_k + ε_k.

Proof. Suppose that k ≥ 1. From Step k1 and linear programming theory (Ref. 72), −u_k is the right-hand derivative of w at b_k. By Theorem 2.3, w is piecewise linear. Taken together, the latter two statements imply that, for some ε_k > 0,

w(b̄) = −b̄ u_k + ⟨a, q^k⟩,

for all b̄ satisfying b_k ≤ b̄ ≤ b_k + ε_k. □

Lemma 3.3. The numbers b_k, k ≥ 1, computed in the outcome-based algorithm satisfy b_k ≤ b_{k+1} for each k.


Proof. For each k ≥ 2, by Lemma 3.1, x̄^k is an optimal solution to problem (Pb) at b = b_k. Therefore, from Lemma 3.2 and the definition of w, it follows that

⟨c1, x̄^k⟩ = −b_k u_k + ⟨a, q^k⟩

for each k ≥ 2. For each k ≥ 2, since b_k = ⟨c2, x̄^k⟩ and x̄^k ∈ X, this implies that x̄^k is a feasible solution to the linear program solved in Step k4 of the algorithm. From Step k4, it follows that b_k ≤ b_{k+1} for all k ≥ 2. To show that b_1 ≤ b_2, the argument for k ≥ 2 can be repeated with k = 1 and with x̄^k replaced by the incumbent solution x^C computed in Step 2 of the algorithm. □

Lemma 3.4. For each k ≥ 1, the vector (u_k, q^k) computed in Step k1 of the outcome-based algorithm satisfies u_k > 0.

Proof. Suppose that k ≥ 1, and let b̄ be any number satisfying b_k < b̄ ≤ b_k + ε_k, where ε_k is as given in Lemma 3.2. Let x^b̄ be an optimal solution to the linear program (Pb) at b = b̄, and let x* maximize ⟨c1, x⟩ over X. Then, from Step k1 of the algorithm, Lemma 3.2, and the definition of w,

⟨c1, x^b̄⟩ = −b̄ u_k + ⟨a, q^k⟩, (8)

where

−c2 u_k + A^T q^k ≥ c1. (9)

From (9), q^k is a feasible solution to the linear program

min ⟨a, q⟩,
s.t. A^T q ≥ c1 + u_k c2.

Since x* ∈ X, x* is a feasible solution for the dual linear program to the latter problem, which may be written as

max ⟨c1, x⟩ + u_k ⟨c2, x⟩,
s.t. x ∈ X.

Therefore, by duality theory of linear programming (Ref. 70),

⟨c1, x*⟩ + u_k ⟨c2, x*⟩ ≤ ⟨a, q^k⟩. (10)


From (8) and (10),

[⟨c1, x^b̄⟩ − ⟨c1, x*⟩] + u_k [b̄ − ⟨c2, x*⟩] ≥ 0. (11)

Since b̄ > b_k and k ≥ 1, by Lemma 3.3, b̄ > b_1. Therefore, ⟨c2, x^b̄⟩ ≥ b̄ > b_1. From the definitions of M1, m2, and x*, since b_1 = m2, this implies that

[⟨c1, x^b̄⟩ − ⟨c1, x*⟩] < 0.

Furthermore, since x* ∈ X, ⟨c1, x*⟩ = M1, and b̄ > b_1 = m2, the definition of m2 implies that

[b̄ − ⟨c2, x*⟩] > 0.

From (11), the latter two statements imply that u_k > 0. □

Remark 3.1. It is easy to show that Lemma 3.4 implies that the optimal value function w(b) of problem (Pb) is strictly decreasing on I.

Lemma 3.5. For each k ≥ 1, the values of b_k and b_{k+1} computed in the outcome-based algorithm satisfy b_k < b_{k+1}. Furthermore, for each b̄ ∈ {b ∈ R | b_k ≤ b ≤ b_{k+1}}, w(b̄) = −b̄ u_k + ⟨a, q^k⟩.

Proof. Assume that k ≥ 1. By Lemma 3.2, we may choose a number b̄ > b_k such that

w(b̄) = −b̄ u_k + ⟨a, q^k⟩. (12)

Let x^b̄ be an optimal solution to problem (Pb) at b = b̄. Then, x^b̄ ∈ X and ⟨c2, x^b̄⟩ ≥ b̄. Furthermore, from (12) and the definition of w, x^b̄ also satisfies

⟨c1, x^b̄⟩ = −b̄ u_k + ⟨a, q^k⟩. (13)

From Step k1, (u_k, q^k) is a feasible solution to problem (Qb) at b = b̄. Since x^b̄ is an optimal solution to problem (Pb) at b = b̄, and since problems (Pb) and (Qb) at b = b̄ are linear programming duals of each other, by (13) and duality theory of linear programming (Ref. 70), (u_k, q^k) is an optimal solution to problem (Qb) at b = b̄.

From Lemma 3.4, u_k > 0. By the complementary slackness property of linear programming (Ref. 70), this implies that ⟨c2, x^b̄⟩ = b̄. By substituting ⟨c2, x^b̄⟩ for b̄ in (13), we obtain that

⟨c1, x^b̄⟩ + u_k ⟨c2, x^b̄⟩ = ⟨a, q^k⟩.

Since x^b̄ ∈ X, this equality implies that x^b̄ is a feasible solution for the linear program solved in Step k4 of the algorithm. From the definitions of x̄^{k+1} and b_{k+1}, this implies that ⟨c2, x^b̄⟩ ≤ b_{k+1}. Since ⟨c2, x^b̄⟩ = b̄ > b_k, it follows that b_{k+1} > b_k. The second statement in the lemma follows immediately from Lemma 3.1 and the proof of Theorem 2.3. □

Remark 3.2. It is evident from Lemmas 3.1 and 3.5 that, for each k ≥ 1, after iteration k of the outcome-based algorithm, one can find an additional linear piece of the graph of the function w. This piece of the graph of w is the line segment in R^2 that lies on the line with equation

w(b) = −b u_k + ⟨a, q^k⟩

and has endpoints [b_k, w(b_k)], [b_{k+1}, w(b_{k+1})]. Furthermore, from Lemma 3.4, the slope of this line segment is negative. The next result shows that this line segment corresponds to an efficient edge of Y of the form (2).

Theorem 3.1. For each k ≥ 1, let L_k denote the line segment in R^2 with endpoints [w(b_i), b_i], i = k, k+1, that is generated by the algorithm after iteration k. Then, L_k is an efficient face of Y, and L_k = F_Y^k, where F_Y^k is given by (2).

Proof. Assume that k ≥ 1. From Yu (Ref. 11) and linear programming theory (Ref. 70), since u_k > 0 (cf. Lemma 3.4), the optimal solution set F_k of the linear program (PF_k) given by

max y_1 + u_k y_2,
s.t. y ∈ Y,

is an efficient face of Y. To prove the theorem, we will show that F_k = L_k = F_Y^k, where F_Y^k is given by (2).

Toward this end, suppose that [w(b̄), b̄] ∈ L_k. Then, from Remark 3.2 and Lemma 3.5, b_k ≤ b̄ ≤ b_{k+1} and

w(b̄) + u_k b̄ = ⟨a, q^k⟩.

Let x^b̄ denote an optimal solution to problem (Pb) at b = b̄. Then, by definition of w, ⟨c1, x^b̄⟩ = w(b̄). Furthermore, from the proof of Lemma 3.5, ⟨c2, x^b̄⟩ = b̄. Therefore,

⟨c1, x^b̄⟩ + u_k ⟨c2, x^b̄⟩ = ⟨a, q^k⟩. (14)

Furthermore, in the proof of Lemma 3.4, the only property of x* that was used to derive (10) was x* ∈ X. Therefore,

⟨c1, x⟩ + u_k ⟨c2, x⟩ ≤ ⟨a, q^k⟩,

for all x ∈ X. From (14), since x^b̄ ∈ X, this implies that ⟨a, q^k⟩ is the optimal value of the linear program

max ⟨c1, x⟩ + u_k ⟨c2, x⟩,
s.t. x ∈ X.

It is easy to see that, as a result, ⟨a, q^k⟩ is the optimal value of problem (PF_k) and F_k = F_Y^k. Furthermore, since

w(b̄) + u_k b̄ = ⟨a, q^k⟩,

where

w(b̄) = ⟨c1, x^b̄⟩ and b̄ = ⟨c2, x^b̄⟩,

we also see that [w(b̄), b̄] ∈ F_Y^k. By the choice of [w(b̄), b̄], L_k ⊆ F_Y^k.

To conclude the proof, we must show that F_Y^k ⊆ L_k. To show this, we will first show that, if y = (y_1, y_2) ∈ Y and y_2 does not satisfy b_k ≤ y_2 ≤ b_{k+1}, then y ∉ F_Y^k. To accomplish this, assume that y = (y_1, y_2) ∈ Y and that y_2 fails to satisfy b_k ≤ y_2 ≤ b_{k+1}. Then, either y_2 ≥ m2 or y_2 < m2.

Case 1. y_2 ≥ m2. In this case, since y ∈ Y, y_2 ≤ M2 must also hold. Notice from Lemmas 3.1 and 3.5 that, for each j ≥ 1, since −u_{j+1} is the unique right-hand derivative of w at b_{j+1}, (u_j, q^j) is a feasible but nonoptimal solution to the linear program given in Step (j+1)1 of the algorithm. Therefore, for each j ≥ 1, Step (j+1)1 implies that u_j < u_{j+1}. Since the feasible region Z of problem (Qb) is invariant over b ∈ I and has a finite number of extreme points, and since, for each j ≥ 1, the optimal value of the linear program solved in Step j1 is achieved by one of these extreme points, by Remark 3.2, if the tests involving the inequality LB ≥ UB are eliminated from the outcome-based algorithm, then, for some finite number k̄, the algorithm will terminate in Step k̄4 after generating the entire graph of w over I. Assume in the remainder of this proof that the algorithm has been used to generate this graph. Then, since y_2 ∈ I, y_2 satisfies b_j ≤ y_2 ≤ b_{j+1} for some integer j ≥ 1. Therefore, j ≠ k.

By Lemma 3.5,

w(b̄) = −b̄ u_k + ⟨a, q^k⟩,

for any b̄ ∈ R that satisfies b_k ≤ b̄ ≤ b_{k+1}. Therefore, if we choose such a number b̄, w′(b̄) = −u_k. By Theorem 2.3, w is concave on I. The latter two statements imply from Ref. 64 that

w(y_2) ≤ w(b̄) − u_k (y_2 − b̄),


or equivalently,

w(y_2) + u_k y_2 ≤ w(b̄) + u_k b̄. (15)

Suppose that (15) holds with equality. Since, by the choice of b̄, [w(b̄), b̄] ∈ L_k, this implies by Remark 3.2 that

w(y_2) + u_k y_2 = ⟨a, q^k⟩.

By definition of L_k, y_2 must therefore satisfy b_k ≤ y_2 ≤ b_{k+1}, which is a contradiction. Hence, (15) must hold with strict inequality. Since [w(b̄), b̄] ∈ L_k by the choice of b̄, and L_k ⊆ F_Y^k by the first part of this proof, this implies that

w(y_2) + u_k y_2 < ⟨a, q^k⟩. (16)

By definition of w(y_2), if (ȳ_1, y_2) ∈ Y, then ȳ_1 ≤ w(y_2). Since y ∈ Y, this implies that y_1 ≤ w(y_2). Together with (16), this implies that

y_1 + u_k y_2 < ⟨a, q^k⟩,

so that y ∉ F_Y^k must hold.

Case 2. y_2 < m2. From Step 4 of the algorithm, m2 = b_1. We may thus apply Case 1 to y = [w(b_1), b_1] ∈ Y to obtain

w(b_1) + u_k b_1 ≤ ⟨a, q^k⟩. (17)

By Lemma 3.4, u_k > 0. Since y_2 < m2 and m2 = b_1, this implies that u_k y_2 < u_k b_1. Furthermore, from the definitions of w, M1, and m2, we obtain that w(b_1) = M1. Since y ∈ Y, this implies by the definition of M1 that y_1 ≤ w(b_1). Therefore, from (17) and the fact that u_k y_2 < u_k b_1, it follows that

y_1 + u_k y_2 < ⟨a, q^k⟩,

so that y ∉ F_Y^k must hold.

We have thus shown that, if y ∈ Y and y_2 fails to satisfy b_k ≤ y_2 ≤ b_{k+1}, then y ∉ F_Y^k. By the contrapositive of this statement, we conclude that, if y ∈ F_Y^k, then b_k ≤ y_2 ≤ b_{k+1}.

We now show that F_Y^k ⊆ L_k. Assume that y ∈ F_Y^k. Then, b_k ≤ y_2 ≤ b_{k+1} must hold. From Lemma 3.5, this implies that

w(y_2) = −u_k y_2 + ⟨a, q^k⟩.

Since y ∈ F_Y^k,

y_1 + u_k y_2 = ⟨a, q^k⟩.


The latter two statements imply that y_1 = w(y_2). Thus,

y_1 = w(y_2) = −u_k y_2 + ⟨a, q^k⟩.

Since b_k ≤ y_2 ≤ b_{k+1}, from Remark 3.2 and the definition of L_k, this implies that y ∈ L_k. Therefore, F_Y^k ⊆ L_k, and the theorem is proven. □

The convergence of the outcome-based algorithm can now be shown.

Theorem 3.2. The outcome-based algorithm is finite and always terminates with an optimal solution to problem (P).

Proof. If x^R ∈ X_E is detected in Step 1, then, from x^R ∈ X_E and the definition of problem (PR), v ≥ f(x^R) ≥ v. In this case then, v = f(x^R), and the algorithm appropriately terminates with the optimal solution x^R to problem (P).

If x^R ∉ X_E, then, as explained at the beginning of Section 3.1, either Y is completely efficient, Y_E is a singleton, or Y_E consists of one or more edges of Y. We deal with these three cases separately.

When Y is completely efficient, as explained in Section 3.1, Step 1 of the algorithm detects this case, and the algorithm computes the optimal solution x^R to problem (P) and terminates. If Y_E consists of a singleton, then it follows from Theorem 2.2 that m2 = M2 and that Y_E = {(M1, M2)^T}. From Steps 2 through 5 of the algorithm, in this case, if the algorithm does not terminate in Step 3, then it detects that Y_E is a singleton, finds a point in X that minimizes f over all x ∈ X such that Cx equals this singleton, and terminates. It is easy to see that (M1, M2)^T ∈ Y_ex, so that {(M1, M2)^T} is a face of Y. By Theorem 2.1, the latter two statements imply that, when Y_E is a singleton and the algorithm does not terminate in Step 3, it finds an optimal solution to problem (P) in the initialization step.

If the algorithm terminates in Step 3, then, from the values of LB and UB in that step, it follows that f(x^C) ≤ f(x^R), where x^C and x^R are optimal solutions to the linear programs (Q) and (PR), respectively. It is easy to see that this implies that x^C ∈ X_E and that f(x^R) ≤ v. Therefore, when the algorithm terminates in Step 3,

v ≤ f(x^C) ≤ f(x^R) ≤ v.

From the previous two statements, if the algorithm terminates in Step 3, then v = f(x^C), and the point x^C found by the algorithm is an optimal solution for problem (P). Notice that this holds regardless of whether Y_E is a singleton or not.


Assume now that Y ≠ Y_E, that Y_E is not a singleton, and that the algorithm does not terminate in Step 3. These assumptions and the arguments given so far in this proof imply that the algorithm proceeds beyond the initialization step. Thus, either the algorithm terminates during some iteration k̄ ≥ 1 by detecting that LB ≥ UB is satisfied, or it does not terminate in this way.

Suppose first that the algorithm does not terminate by detecting that LB ≥ UB is satisfied. Then, since the algorithm proceeds beyond the initialization step, Theorem 3.1 and its proof imply that, in each iteration k ≥ 1 that is executed, a distinct efficient face F_Y^k of Y of the form (2) is generated. By Theorem 2.1, it follows that, for each such k, the point x^k found in Step k2 of the algorithm minimizes f over the efficient face F_X^k of X that consists of all x ∈ X such that Cx ∈ F_Y^k. Since Y is polyhedral, it has a finite number of efficient faces (Ref. 11). From Theorem 2.2 and Step k4, since the algorithm does not terminate by detecting LB ≥ UB, it follows that, after some finite number of iterations k̄, the algorithm will terminate in Step k̄4 after identifying all of the efficient faces of Y_E. At that point, x^C will minimize f over X_E.

Now, suppose that, during some iteration, the algorithm detects that LB ≥ UB is satisfied. Then, by using arguments similar to those used for the case where the algorithm terminates in Step 3, it is easy to show that, at the point of termination, x^C will be an optimal solution for problem (P). □

4. Examples

Notice that the computational accuracy and effort involved in executing the outcome-based algorithm are determined largely by the types, numbers, and sizes of the optimization problems that must be solved. An examination of these problems reveals that, except for problems (PR), (P1), (Fk), and (PRb), all of these problems are linear programs. The remaining problems each involve minimizing f over a compact polyhedron. A number of quite accurate and relatively efficient algorithms for minimizing various types of lower semicontinuous functions f over compact polyhedra exist. Included among these, for example, are algorithms for cases where f is linear, quadratic, convex, concave, and quasiconcave (cf. Refs. 30-32, 64). Therefore, the outcome-based algorithm can be expected to efficiently find exact or approximate optimal solutions to problem (P) for a wide variety of lower semicontinuous functions f.

The number of iterations and subproblem optimizations required by the algorithm to solve a given problem (P) depends upon the objective function f and the number of efficient edges, if any, that exist in Y_E. These factors determine the iteration, if any, in which the fathoming criterion LB ≥ UB is satisfied. As we have explained, by searching edges of Y_E rather than faces of X_E and by including the fathoming test, it is expected that significant numbers of iterations and suboptimizations will often be avoided.

The following two examples help illustrate these points.

Example 4.1. Let m = 10 and n = 20, and let A be given by

A = [I10  I10],

where I10 denotes the 10 × 10 identity matrix. Let a ∈ R^10 be the vector whose entries each equal 1.0. For notational convenience, to define C, first let d^j ∈ R^2, j = 1, 2, 3, be given by

d^1 = (−1.0, 1.0)^T,  d^2 = (0.667, −0.333)^T,  d^3 = (−0.75, 0.25)^T,

and let d^4 ∈ R^2 be the vector of zeroes. Then, let the first four columns of C equal d^1, the next four equal d^2, the next two equal d^3, and the last 10 equal d^4.

Notice that, if we view x_j, j = 11, 12, ..., 20, as slack variables, then the feasible region X of problem (BX) in this example is a hypercube in R^10. It is not difficult to show that this hypercube has 34 efficient extreme points, 68 efficient edges, one two-dimensional efficient face, and two four-dimensional efficient faces. In contrast, the polyhedral image Y = CX of X under C is a two-dimensional polyhedron with only four efficient extreme points and three efficient edges. Therefore, for any objective function f: R^20 → R that can be minimized over X_E by the outcome-based algorithm, at most three iterations of the algorithm will be required.

Let f: R^20 → R be defined for each x ∈ R^20 by f(x) = ⟨d, x⟩, where d ∈ R^20 is given by

d_j = 1.0,  j = 1, 2, 4, 6, 8, 10,
d_j = −1.0, j = 3, 5, 7, 9,
d_j = 0.0,  j = 11, 12, ..., 20.

In this case, as we have seen, since f is linear, problem (P) will have an optimal solution that is an extreme point of X. Furthermore, all subproblems that must be solved when applying the outcome-based algorithm to this problem will be linear programming problems. What follows is a brief summary of the computations and conclusions that result from applying the algorithm to this example. All points in X are given with their slack variable values omitted.
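Before walking through the algorithm's run, the sketch below reconstructs this instance and recomputes its basic constants (the printed entries of d^1, d^2, d^3 are partly garbled in this copy; the values used here are our reading, chosen because they reproduce the constants M1 = 2.667, M2 = 4.500, m2 = −1.333, and LB = −4.00 reported next).

```python
import numpy as np
from scipy.optimize import linprog

I10 = np.eye(10)
A = np.hstack([I10, I10])          # x_j + x_{10+j} = 1: a hypercube in R^10
a = np.ones(10)
d1, d2, d3 = [-1.0, 1.0], [2/3, -1/3], [-0.75, 0.25]   # 2/3, -1/3 print as 0.667, -0.333
C = np.array([d1] * 4 + [d2] * 4 + [d3] * 2 + [[0.0, 0.0]] * 10).T   # 2 x 20
d = np.array([1, 1, -1, 1, -1, 1, -1, 1, -1, 1] + [0] * 10, dtype=float)  # f(x) = <d, x>

bnds = [(0, None)] * 20
M1 = -linprog(-C[0], A_eq=A, b_eq=a, bounds=bnds).fun
M2 = -linprog(-C[1], A_eq=A, b_eq=a, bounds=bnds).fun
m2 = -linprog(-C[1], A_eq=np.vstack([A, C[0]]), b_eq=np.append(a, M1),
              bounds=bnds).fun
LB = linprog(d, A_eq=A, b_eq=a, bounds=bnds).fun    # optimal value of (PR)
print(f"M1 = {M1:.3f}, M2 = {M2:.3f}, m2 = {m2:.3f}, LB = {LB:.2f}")
# -> M1 = 2.667, M2 = 4.500, m2 = -1.333, LB = -4.00
```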


Initialization Step. In this step, the algorithm detects that t > 0, so that problem (BX) is not completely efficient. It calculates LB = −4.00 as the initial lower bound for v, and detects that the optimal solution x^R to problem (PR) which provides this lower bound does not satisfy x^R ∈ X_E. Therefore, the minimum of f over X equals −4.00, v ≥ −4.00, and the algorithm must continue. It next finds that M1 = 2.667, M2 = 4.500, and that

(x^C)^T = (0, 0, 0, 0, 1, 1, 1, 1, 0, 0).

It then sets

m2 = ⟨c2, x^C⟩ = −1.333,  UB = f(x^C) = 0.00.

Since LB < UB and m2 < M2, the algorithm must proceed to iteration 1 with

b1 = −1.333,  a1 = 2.667,

because it has detected that Y has at least one efficient edge to be identified.

Iteration 1. In this iteration, the algorithm finds a minimizer x^1 of f over the efficient face of X consisting of all x ∈ X such that Cx is an element of the efficient edge F_Y^1 of Y given by

F_Y^1 = {y ∈ Y | y_1 + y_2 = 1.33}.

It finds the vector x^1 given by

(x^1)^T = (0, 0, 1, 0, 1, 1, 1, 1, 0, 0),

with f(x^1) = −1.0. Since f(x^1) < UB, the incumbent solution x^C is set equal to x^1, and UB is set equal to −1.0. Since LB < UB,

b2 = 2.667,  a2 = −1.333

are calculated. Since b2 < M2, Y has at least one more efficient edge to be potentially identified. Before proceeding to iteration 2 to do so, however, the algorithm checks to see if LB can now perhaps be increased. To do so, it calculates ⟨c2, x^R⟩ and finds that ⟨c2, x^R⟩ < b2. Therefore, LB can now perhaps be increased. The algorithm then solves the linear program (PRb) with b = b2 for an optimal solution x^R. Using x^R, it sets LB = f(x^R) = −1.917. Although this increases the value of LB as compared to its previous value, LB < UB still holds, so that the search must continue.

Iteration 2. In this iteration, the algorithm finds a minimizer x^2 of f over the efficient face of X consisting of all x ∈ X such that Cx is an element of the efficient edge F_Y^2 of Y given by

F_Y^2 = {y ∈ Y | y_1 + 2.00 y_2 = 4.00}.


It finds the vector x^2 given by

(x^2)^T = (1, 1, 1, 1, 0, 0, 1, 0, 0, 0),

with f(x^2) = 1. Since f(x^2) > UB, the algorithm calculates

b3 = 4.00,  a3 = −4.00

in preparation for identifying a third efficient edge of Y in iteration 3. It detects that b3 < M2, so that such an edge does indeed exist. Before proceeding to iteration 3, however, it detects that ⟨c2, x^R⟩ < b3. Therefore, it solves the linear program (PRb) with b = b3. The value of LB is thereby increased to LB = 0.25. Since now LB = 0.25 > UB = −1.0, no additional efficient edges of Y need to be identified, and the algorithm stops, having found the optimal solution x^C = x^1 with f(x^C) = UB = v = −1.0.

In this example, the algorithm required solving 14 linear programming problems to find an optimal solution to problem (P). Notice in the example that, although X contains over 100 efficient faces, the outcome-based algorithm needed to identify only five efficient faces of Y (three efficient extreme points and two efficient edges) to solve the problem. Also notice that, although Y contains a fourth efficient extreme point and a third efficient edge, the satisfaction of the fathoming test LB ≥ UB in iteration 2 precluded the need to identify this point and edge. Furthermore, as proven in Section 3, since complete efficiency does not hold in this example, the algorithm found faces of Y_E of dimension at most one, even though X_E contains faces of significantly higher dimension.

Example 4.2. Consider the same data as in Example 4.1, except let f: R^20 → R be the convex quadratic function defined for each x ∈ R^20 by

f(x) = Σ_{j=1}^{10} (11 − j)(x_j − 0.25)^2.

Since the only difference between Example 4.1 and this example is in the definition of f, X_E and Y_E are unchanged. In particular, as in Example 4.1, since Y_E consists of exactly three edges of Y, the outcome-based algorithm will require at most three iterations to solve problem (P). However, since f is now a convex function, problem (P) need not have an extreme point optimal solution. Furthermore, while most of the subproblems that must be solved when applying the outcome-based algorithm will again be linear programming problems, a minority of these problems will be convex quadratic programs.

In this case, as in Example 4.1, the outcome-based algorithm solves problem (P) in two iterations. In particular, it searches the same two edges of Y_E as it did in Example 4.1, and it fathoms the third efficient edge in iteration 2. The optimal point x* ∈ X_E that it finds is given by

(x*)^T = (0.25, 0.25, 0.25, 0.25, 1, 1, 1, 1, 0, 0),

with v = f(x*) = 10.3125, where the slack variables of x* have been omitted. Notice that x* is not an extreme point of X and that, in contrast to v = 10.3125, the minimum value of f over X is 0. To find x*, the algorithm solves nine linear programs and five convex quadratic programs.

5. Conclusions

The problem (P) of optimizing a lower semicontinuous function over the efficient set of a bicriteria linear programming problem lies in a class of problems with important uses in multiple criteria decision making. Mathematically, the problems in this class are generally difficult global optimization problems requiring computationally burdensome methods for their solution. In this article, by concentrating on the outcome set, rather than on the decision set, of the underlying bicriteria linear program, we have developed an efficient search algorithm for solving problem (P) that is organized according to the efficient edges of the outcome set. This approach, in comparison to decision set-based approaches, can markedly reduce the number of efficient faces that need to be identified to solve the problem. Furthermore, it always reduces the dimensions of the efficient faces that need to be identified to at most one.

References

1. COHON, J. L., Multiobjective Programming and Planning, Academic Press, New York, New York, 1978.

2. EVANS, G. W., An Overview of Techniques for Soloing Multiobjeetioe Mathemati- cal Programs, Management Science, Vol. 30, pp. 1268-1282, 1984.

3. GOICOECHEA, A., HANSEN, D. R., and DUCKSTZIN, L., Multiobjectioe Decision Analysis with Engineering and Business Applications, John Wiley and Sons, New York, New York, 1982.

4. KuH~, H. W., and TUCKER, A. W., Nonlinear Programming, Proceedings of the 2nd Berkeley Symposium on Mathematical Statistics and Probability, Edited by J. Neyman, University of California Press, Berkeley, California, pp. 481-492, 1950.

5. Luc, D. T., Theory of Vector Optimization, Springer Verlag, Berlin, Germany, 1989.

Page 26: BENSON Bi Criteria Linear Programming

102 JOTA: VOL. 88, NO. 1, JANUARY 1996

6. RINGUEST, J. L., Multiobjective Optimization: Behavioral and Computational Considerations, Kluwer Academic Publishers, Boston, Massachusetts, 1992.

7. ROSENTHAL, R. E., Principles of Multiobjective Optimization, Decision Sciences, Vol. 16, pp. 133-152, 1985.

8. SAWARAG1, Y., NAKAYAMA, H., and TANINO, T., Theory of Multiobjective Optimization, Academic Press, Orlando, Florida, 1985.

9. S'rADLER, W., A Survey of Multicriteria Optimization or the Vector Maximum Problem, Part 1:1776-1960, Journal of Optimization Theory and Applications, Vol. 29, pp. 1-52, 1979.

10. STEUER, R. E., Multiple Criteria Optimization: Theory, Computation, and Appli- cation, John Wiley and Sons, New York, New York, 1986.

11. Yu, P. L., Multiple Criteria Decision Making, Plenum, New York, New York, 1985.

12. Yu, P. L., Multiple Criteria Decision Making: Five Basic Concepts, Optimization, Edited by G. L. Nemhauser, A. H. G. Rinnooy Kan, and M. J. Todd, North Holland, Amsterdam, Holland, pp. 663-699, 1989.

13. ZELENY, M., Multiple Criteria Decision Making, McGraw-Hill, New York, New York, 1982.

14. BENSON, H. P., Optimization over the Efficient Set, Journal of Mathematical Analysis and Applications, Vol. 98, pp. 562-580, 1984.

15. Ft~LOP, J., A Cutting Plane Method for Linear Optimization over the Efficient Set, Generalized Convexity, Edited by S. Komlosi, T. Rapcsak, and S. Schaible, Springer Verlag, Berlin, Germany, pp. 374-385, 1994.

16. THACH, P. T., KONNO, H., and YOKOTA, D., A DuaIApproach to a Minimization on the Set of Pareto-Optimal Solutions, Working Paper, Institute of Human and Social Sciences, Tokyo Institute of Technology, Tokyo, Japan, 1994.

17. PHILIP, J., Algorithms for the Vector Maximization Problem, Mathematical Programming, Vol. 2, pp. 207-229, 1972.

18. GALLAGHER, R. J., and SALEH, O. A., A Representation of an Efficiency Equivalent Polyhedron for the Objective Set of a Multiple Objective Linear Program, European Journal of Operational Research (to appear).

19. SHIN, W. S., and RAVINDRAN, A., Interactive Multiple Objective Optimization: Survey, Part I: Continuous Case, Computers and Operations Research, Vol. 18, pp. 97-114, 1991.

20. DESSOUKY, M. I., GHIASSI, M., and DAVIS, W. J., Estimates of the Minimum Nondominated Criterion Values in Multiple Criteria Decision Making, Engineering Costs and Production Economics, Vol. 10, pp. 95-104, 1986.

21. ISERMANN, H., and STEUER, R. E., Computational Experience Concerning Payoff Tables and Minimum Criteria Values over the Efficient Set, European Journal of Operational Research, Vol. 33, pp. 91-97, 1987.

22. REEVES, G. R., and REID, R. C., Minimum Values over the Efficient Set in Multiple Objective Decision Making, European Journal of Operational Research, Vol. 36, pp. 334-338, 1988.

23. WEISTROFFER, H. R., Careful Use of Pessimistic Values Is Needed in Multiple Objectives Optimization, Operations Research Letters, Vol. 4, pp. 23-25, 1985.

24. BENSON, H. P., An All-Linear Programming Relaxation Algorithm for Optimization over the Efficient Set, Journal of Global Optimization, Vol. 1, pp. 83-104, 1991.

25. BENAYOUN, R., DE MONTGOLFIER, J., TERGNY, J., and LARITCHEV, O., Linear Programming with Multiple Objective Functions: Step Method (STEM), Mathematical Programming, Vol. 1, pp. 366-375, 1971.

26. BENSON, H. P., LEE, D., and McCLURE, J. P., Applying Multiple Criteria Decision Making in Practice: The Citrus Rootstock Selection Problem in Florida, Discussion Paper, Department of Decision and Information Sciences, University of Florida, Gainesville, Florida, 1992.

27. BELENSON, S., and KAPUR, K. C., An Algorithm for Solving Multicriterion Linear Programming Problems with Examples, Operational Research Quarterly, Vol. 24, pp. 65-77, 1973.

28. KOK, M., and LOOTSMA, F. A., Pairwise-Comparison Methods in Multiple Objective Programming, with Applications in a Long-Term Energy-Planning Model, European Journal of Operational Research, Vol. 22, pp. 44-55, 1985.

29. BENSON, H. P., Concave Minimization: Theory, Applications, and Algorithms, Handbook of Global Optimization, Edited by R. Horst and P. Pardalos, Kluwer Academic Publishers, Dordrecht, Holland, pp. 43-148, 1995.

30. HORST, R., Deterministic Global Optimization: Recent Advances and New Fields of Application, Naval Research Logistics, Vol. 37, pp. 433-471, 1990.

31. HORST, R., and TUY, H., Global Optimization: Deterministic Approaches, 2nd Edition, Springer Verlag, Berlin, Germany, 1993.

32. HORST, R., and PARDALOS, P., Editors, Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, Holland, 1995.

33. BOLINTINEANU, S., Optimality Conditions for Minimization over the (Weakly or Properly) Efficient Set, Journal of Mathematical Analysis and Applications, Vol. 173, pp. 523-541, 1993.

34. BOLINTINEANU, S., Minimization of a Quasi-concave Function over an Efficient Set, Mathematical Programming, Vol. 61, pp. 89-110, 1993.

35. BENSON, H. P., A Finite, Nonadjacent Extreme Point Search Algorithm for Optimization over the Efficient Set, Journal of Optimization Theory and Applications, Vol. 73, pp. 47-64, 1992.

36. BENSON, H. P., A Bisection Extreme Point Search Algorithm for Optimizing over the Efficient Set in the Linear Dependence Case, Journal of Global Optimization, Vol. 3, pp. 95-111, 1993.

37. BENSON, H. P., and SAYIN, S., Optimization over the Efficient Set: Four Special Cases, Journal of Optimization Theory and Applications, Vol. 80, pp. 3-18, 1994.

38. BENSON, H. P., An Algorithm for Optimizing over the Weakly-Efficient Set, European Journal of Operational Research, Vol. 25, pp. 192-199, 1986.

39. ECKER, J. G., and SONG, J. H., Optimizing a Linear Function over an Efficient Set, Working Paper, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York, 1993.

40. MUU, L. D., A Method for Optimizing a Linear Function over the Efficient Set, Working Paper, Institute of Mathematics, Hanoi, Vietnam, 1992.

41. DAUER, J. P., Optimization over the Efficient Set Using an Active Constraint Approach, Zeitschrift für Operations Research, Vol. 35, pp. 185-195, 1991.

42. BENSON, H. P., and SAYIN, S., A Face Search Heuristic for Optimizing over the Efficient Set, Naval Research Logistics, Vol. 40, pp. 103-116, 1993.

43. KORHONEN, P., SALO, S., and STEUER, R., A Heuristic for Estimating Nadir Criterion Values in Multiple Objective Linear Programming, Working Paper, Helsinki School of Economics and Business Administration, Helsinki, Finland, 1992.

44. AKSOY, Y., An Interactive Branch-and-Bound Algorithm for Bicriterion Nonconvex/Mixed Integer Programming, Naval Research Logistics, Vol. 37, pp. 403-417, 1990.

45. ANEJA, Y. P., and NAIR, K. P. K., Bicriteria Transportation Problem, Management Science, Vol. 25, pp. 73-78, 1979.

46. BENSON, H. P., Vector Maximization with Two Objective Functions, Journal of Optimization Theory and Applications, Vol. 28, pp. 253-257, 1979.

47. BENSON, H. P., and MORIN, T. L., A Bicriteria Mathematical Programming Model for Nutrition Planning in Developing Nations, Management Science, Vol. 33, pp. 1593-1601, 1987.

48. COHON, J. L., CHURCH, R. L., and SHEER, D. P., Generating Multiobjective Tradeoffs: An Algorithm for Bicriterion Problems, Water Resources Research, Vol. 15, pp. 1001-1010, 1979.

49. GEARHART, W. B., On the Characterization of Pareto-Optimal Solutions in Bicriteria Optimization, Journal of Optimization Theory and Applications, Vol. 27, pp. 301-307, 1979.

50. GEOFFRION, A. M., Solving Bicriterion Mathematical Programs, Operations Research, Vol. 15, pp. 39-54, 1967.

51. KIZILTAN, G., and YUCAOGLU, E., An Algorithm for Bicriterion Linear Programming, European Journal of Operational Research, Vol. 10, pp. 406-411, 1982.

52. KLEIN, G., MOSKOWITZ, H., and RAVINDRAN, A., Comparative Evaluation of Prior versus Progressive Articulation of Preferences in Bicriterion Optimization, Naval Research Logistics, Vol. 33, pp. 309-323, 1986.

53. SHIN, W. S., and ALLEN, D. B., An Interactive Paired Comparison Method for Bicriterion Integer Programming, Naval Research Logistics, Vol. 41, pp. 423-434, 1994.

54. SHIN, W. S., and LEE, J. J., A Multirun Interactive Method for Bicriterion Optimization Problems, Naval Research Logistics, Vol. 39, pp. 115-135, 1992.

55. WALKER, J., An Interactive Method as an Aid in Solving Bicriterion Mathematical Programming Problems, Journal of the Operational Research Society, Vol. 29, pp. 915-922, 1978.

56. HAIMES, Y. Y., WISMER, D. A., and LASDON, L. S., On the Bicriterion Formulation of Integrated System Identification and Systems Optimization, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 1, pp. 296-297, 1971.

57. COHON, J. L., SCAVONE, G., and SOLANKI, R., Multicriterion Optimization in Resources Planning, Multicriteria Optimization in Engineering and in the Sciences, Edited by W. Stadler, Plenum Press, New York, New York, pp. 117-160, 1988.

58. VAN WASSENHOVE, L. N., and GELDERS, L. F., Solving a Bicriterion Scheduling Problem, European Journal of Operational Research, Vol. 4, pp. 42-48, 1980.

59. DAUER, J. P., Analysis of the Objective Space in Multiple Objective Linear Programming, Journal of Mathematical Analysis and Applications, Vol. 126, pp. 579-593, 1987.

60. DAUER, J. P., On Degeneracy and Collapsing in the Construction of the Set of Objective Values in a Multiple Objective Linear Program, Annals of Operations Research, Vol. 47, pp. 279-292, 1993.

61. DAUER, J. P., and LIU, Y. H., Solving Multiple Objective Linear Programs in Objective Space, European Journal of Operational Research, Vol. 46, pp. 350-357, 1990.

62. DAUER, J. P., and SALEH, O. A., Constructing the Set of Efficient Objective Values in Multiple Objective Linear Programs, European Journal of Operational Research, Vol. 46, pp. 358-365, 1990.

63. BENSON, H. P., A Geometrical Analysis of the Efficient Outcome Set in Multiple-Objective Convex Programs with Linear Criterion Functions, Journal of Global Optimization, Vol. 6, pp. 231-251, 1995.

64. MANGASARIAN, O. L., Nonlinear Programming, McGraw-Hill Book Company, New York, New York, 1969.

65. ROCKAFELLAR, R. T., Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.

66. ARROW, K. J., BARANKIN, E. W., and BLACKWELL, D., Admissible Points of Convex Sets, Contributions to the Theory of Games, Edited by H. W. Kuhn and A. W. Tucker, Princeton University Press, Princeton, New Jersey, pp. 87-91, 1953.

67. BITRAN, G. R., and MAGNANTI, T. L., The Structure of Admissible Points with Respect to Cone Dominance, Journal of Optimization Theory and Applications, Vol. 29, pp. 573-614, 1979.

68. BENSON, H. P., Admissible Points of a Convex Polyhedron, Journal of Optimization Theory and Applications, Vol. 38, pp. 341-361, 1982.

69. BLACKWELL, D., and GIRSHICK, M. A., Theory of Games and Statistical Decisions, Dover Publications, New York, New York, 1954.

70. MURTY, K. G., Linear Programming, John Wiley and Sons, New York, New York, 1983.

71. BENSON, H. P., Complete Efficiency and the Initialization of Algorithms for Multiple Objective Programming, Operations Research Letters, Vol. 10, pp. 481-487, 1991.

72. BAZARAA, M. S., JARVIS, J. J., and SHERALI, H. D., Linear Programming and Network Flows, 2nd Edition, John Wiley and Sons, New York, New York, 1990.