
European Journal of Operational Research 34 (1988) 56-68, North-Holland

On the convergence of reference point methods in multiobjective programming

Peter BOGETOFT *, Odense University, Odense, Denmark

Åsa HALLEFJORD **, Linköping University, Linköping, Sweden

Matthijs KOK ***, Delft University of Technology, Delft, Netherlands

Abstract: The purpose of this paper is to present some results about the convergence of interactive reference point methods in multiobjective programming. In particular, we describe how dual information may guide the decision maker in his choice of the successive reference points.

In the literature different convergence models have been proposed. The analyst may induce convergence by selecting appropriate rules of communication. Or he may rely on the learning process of the decision maker to induce some kind of 'psychological' convergence. In neither case are the activities of the decision maker precisely described. Consequently, the quality of the final decision cannot be established, and the question of convergence remains an unsolved issue.

We describe different ways in which the decision maker may select his successive reference points, and we discuss the convergence of the resulting reference point procedures. Also, we comment on the relevance of these different assumptions about the decision maker's behavior. The procedures are illustrated by a small numerical example.

Keywords: Multiobjective programming, reference points, cutting planes, projections

1. Introduction

A number of methods have been suggested for solving problems with multiple criteria. For a survey of the area see e.g. Chankong and Haimes (1983) or Roy and Vincke (1981).

* Current affiliation: Yale University, New Haven, USA.
** Current affiliation: Chr. Michelsen Institute, Bergen, Norway.
*** Current affiliation: Delft Hydraulics Laboratory, Emmeloord, Netherlands.

Received June 1986; revised December 1986

0377-2217/88/$3.50 © 1988, Elsevier Science Publishers B.V. (North-Holland)

Some methods are close to incorporating the objectives into one single objective function, thus creating an ordinary single-objective problem, versions of which are solved in interaction with the decision maker. One family of such methods is based on the idea that there exists an overall utility function, actually converting the multiobjective problem into a problem with a single objective. The utility function, which is assumed to be unknown initially, is estimated based on the reactions of the decision maker. Examples of such methods have been given by Zionts and Wallenius (1976) and Geoffrion, Dyer and Feinberg (1972). On the other hand, the single-objective problem could be one based on the idea of minimizing a (pseudo-)distance from a given reference point (target point, aspiration level, displaced ideal point). Well-known versions of this method are the STEP method (Benayoun et al., 1971), goal programming (Charnes and Cooper, 1977) and the


achievement function approach (e.g. Wierzbicki, 1979). The attractive feature of reference point methods is that the decision maker works directly with 'primal' instead of 'dual' information, i.e. with real quantities instead of weights or tradeoffs.

With few exceptions the reference point methods have been designed as interactive procedures, where the decision maker is believed to 'learn' about the problem during the process. Since the decision maker is unable or unwilling to state his true tradeoffs between the various objectives, this learning process is rather unstructured and undefined. Therefore, in contrast to many other methods of multiple objective programming (Vincke, 1982), mathematical convergence in reference point methods is rarely discussed. Rather, some 'psychological' convergence is assumed to guide the decision maker toward the correct solution. No convergence is guaranteed in a strict mathematical sense.

One can of course argue that convergence in a mathematical sense is uninteresting, just as it is irrelevant in sensitivity analysis or in an ad hoc use of various scenarios to learn about the decision problem. But there are some cases where the decision maker can actually learn something from a convergent procedure that cannot be learned from a more unstructured procedure. One example is when one of the objectives turns out to dominate the others in the sense that the problem is actually a single-objective problem; in such a case the procedure should converge towards the solution of that single-objective problem. Another example was encountered in a study of long range forestry planning in which one of the authors was involved (Hallefjord et al., 1986). In this case, the decision maker's preference was apparently such that one of the objectives was not an objective to be maximized, but rather a desire to end up 'close' to some previously decided value. A systematic and convergent procedure for choosing new reference points would soon reveal this.

Some rather technical rules have been proposed to make the sequence of solutions converge, see for example Benayoun et al. (1971), Kallio et al. (1980) and Wierzbicki (1980). However, these rules are not related to the wishes of the decision maker. Consequently, the quality of the point of convergence cannot be established.

In this paper we suggest general procedures for 'guiding' the decision maker in his choice of new reference points: Having suggested a reference point, the decision maker obtains a feasible suggestion and tradeoffs between the objectives. The tradeoff information is, under certain assumptions, available as a vector of dual variables in the single-objective problem. With this information at hand the decision maker suggests a new reference point, and so on.

The outline of the paper is as follows. Section 2 contains the problem statement and some basic definitions. Also it contains a preliminary discussion about the relevance of dual information in reference point procedures. In the following sections we explicitly model how this kind of information could be used by the decision maker. In Section 3, we develop a cutting plane procedure making full use of the primal and dual information generated in previous interactions. In Section 4 we interpret the dual information in terms of partial tradeoffs, and we demonstrate how these may assist the decision maker in the selection of improved reference points. Simplified procedures using only the present primal and dual information are developed in Section 5. A numerical illustration of the different procedures is given in Section 6. Section 7 comments on computational complexity, and Section 8 contains some final remarks.

2. Problem statement and definitions

The multiobjective programming problem can be stated as

max [f_1(x), …, f_q(x)],                       (P1)
s.t. g_i(x) ≤ 0,   i = 1, …, m,

where x is an n-dimensional variable vector, the f_k's are objective functions, and the g_i's are constraint functions. Throughout this paper we will assume that the f_k's are concave and that the g_i's are convex.

In the objective function space, the problem (P1) can be restated as

max [y_1, …, y_q],                             (P2)
s.t. f_k(x) = y_k,   k = 1, …, q,
     g_i(x) ≤ 0,    i = 1, …, m.

Definition. The set X = {x | g_i(x) ≤ 0, i = 1, …, m} is the feasible set of (P1).


The set Y = {y | y_k = f_k(x), k = 1, …, q, x ∈ X} is called the attainable set.

The point ŷ = (ŷ_1, …, ŷ_q), where

ŷ_k = max{f_k(x) | x ∈ X},   k = 1, …, q,

is the ideal point (utopia point) of (P1).

An attainable solution y* is said to be efficient (Pareto optimal, nondominated) if there exists no other attainable y such that y ≥ y*, with y_k > y_k* for at least one k.

A reference point ȳ = (ȳ_1, …, ȳ_q) is a point in the objective function space expressing the decision maker's wish or optimistic guess concerning the outcomes of the q objectives.

Given a suggested reference point ȳ, an attainable solution y* which is 'close' to ȳ is computed by solving the problem

min d(y, ȳ),                                   (P3)
s.t. f_k(x) = y_k,   k = 1, …, q,
     g_i(x) ≤ 0,    i = 1, …, m,

where d is a (pseudo-)distance function. By this we mean that d is quasiconvex with a global minimum in y = ȳ. The pseudodistance d could for instance be any p-norm,

d(y, ȳ) = (Σ_{k=1}^q |y_k − ȳ_k|^p)^{1/p},

or the entropy function

d(y, ȳ) = Σ_{k=1}^q y_k log(y_k/ȳ_k) − Σ_{k=1}^q y_k,

see Hallefjord and Jörnsten (1986).

In the sequel we will use inequalities instead of equalities in the constraints coupling the variables x in the decision space with the variables y in the objective function space. So instead of f_k(x) = y_k we will write f_k(x) ≥ y_k, and we will assume that each constraint is strictly binding, i.e. binding with a strictly positive Lagrange multiplier. Under those assumptions the two formulations are equivalent, but each problem is convexified.
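For concreteness, the pseudodistances mentioned here (the p-norms, the entropy function as reconstructed above, and the Tchebycheff metric used in Section 6) can be sketched in a few lines of Python. This is our own illustration, not code from the paper:

```python
import numpy as np

def p_norm_distance(y, y_ref, p=2):
    """p-norm pseudodistance: (sum_k |y_k - y_ref_k|^p)^(1/p)."""
    return float(np.sum(np.abs(y - y_ref) ** p) ** (1.0 / p))

def entropy_distance(y, y_ref):
    """Entropy pseudodistance (assumes y, y_ref > 0 componentwise)."""
    return float(np.sum(y * np.log(y / y_ref)) - np.sum(y))

def tchebycheff_distance(y, y_ref):
    """Limiting case p -> infinity: the largest coordinate deviation."""
    return float(np.max(np.abs(y - y_ref)))
```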

The following theorem and corollary generalize the results in Kallio et al. (1980) and Nakayama and Sawaragi (1984) to a wider class of 'distance' minimization problems.

Theorem 1. The solution (x*, y*) to the problem

min d(y, ȳ),                                   (P4)
s.t. f_k(x) ≥ y_k,   (λ_k),   k = 1, …, q,
     g_i(x) ≤ 0,    i = 1, …, m,

is efficient if the multiplier λ_k associated with each constraint f_k(x) ≥ y_k is positive, under the assumptions that d and the g_i's are convex and the f_k's are concave.

Proof. The Lagrangean of (P4) is

L(x, y, λ, μ) = d(y, ȳ) − Σ_{k=1}^q λ_k (f_k(x) − y_k) + Σ_{i=1}^m μ_i g_i(x),

where λ, μ ≥ 0. According to the saddlepoint theorem,

L(x*, y*, λ*, μ*) ≤ L(x, y*, λ*, μ*)

for all (feasible) x. Here (x*, y*, λ*, μ*) denotes an optimal solution to (P4). Thus,

d(y*, ȳ) − Σ_{k=1}^q λ_k* (f_k(x*) − y_k*) + Σ_{i=1}^m μ_i* g_i(x*)
    ≤ d(y*, ȳ) − Σ_{k=1}^q λ_k* (f_k(x) − y_k*) + Σ_{i=1}^m μ_i* g_i(x)

⇔   −Σ_{k=1}^q λ_k* f_k(x*) + Σ_{i=1}^m μ_i* g_i(x*) ≤ −Σ_{k=1}^q λ_k* f_k(x) + Σ_{i=1}^m μ_i* g_i(x).

And since μ_i* g_i(x*) = 0 and μ_i* g_i(x) ≤ 0 for i = 1, …, m and x feasible,

−Σ_{k=1}^q λ_k* f_k(x*) ≤ −Σ_{k=1}^q λ_k* f_k(x)   ⇔   Σ_{k=1}^q λ_k* f_k(x*) ≥ Σ_{k=1}^q λ_k* f_k(x).

So, (x*, y*) is an optimal solution to the problem

max Σ_{k=1}^q λ_k* y_k,                        (P5)
s.t. f_k(x) ≥ y_k,   k = 1, …, q,
     g_i(x) ≤ 0,    i = 1, …, m.

Furthermore, if λ_k* > 0, k = 1, …, q, a well-known result (e.g. Philip, 1972) says that (x*, y*) is efficient. □

So basically the proof of Theorem 1 is a separation argument. Observe the following corollary:

Corollary 1. Let d, g_1, …, g_m be convex and let f_1, …, f_q be concave. If (x*, y*, λ*) is an optimal (primal and dual) solution of (P4) and λ* > 0, then

y* ∈ argmax_{y ∈ Y} Σ_{k=1}^q λ_k* y_k. □

Theorem 1 and Corollary 1 show how the Lagrange multipliers λ* provide information about the possibilities in a neighbourhood of the current solution point y*, i.e. about the tradeoffs between the different objectives near y*. We propose that this information be submitted to the decision maker in addition to the solution point (x*, y*). The idea is that this could assist the decision maker in his search for a best-compromise solution.

The proposal to expose dual information to the decision maker is far from new. In the literature on reference point methods, Lewandowski and Grauer (1982) and Nakayama and Sawaragi (1984) suggest the use of dual information, Wierzbicki (1979) suggests that a selection of efficient points around y* is exposed and, again, Nakayama and Sawaragi (1984) suggest using the multipliers λ* to obtain simple approximating feasibility checks of new reference points; see also Kok (1984) and Bogetoft (1986) for an overview of tradeoff information in different methods. However, exactly how dual information could assist the decision maker in his choice of successive reference points has not been modelled.

Below we discuss further the relevance of the tradeoff information in λ*, and we explicitly model how this kind of information could be used by the decision maker. In particular, we show that the Lagrange multipliers give sufficient information to make reference point methods converge. First, we develop a cutting plane scheme making full use of the primal (x*, y*) and dual (λ*) information generated in the present and previous iterations. Next we describe some heuristic procedures using only the present primal and dual information, i.e. with a smaller burden imposed on the decision maker.

3. A cutting-plane algorithm

Assume that there exists an overall concave and non-decreasing utility function U, incorporating the q objectives into a single objective. Thus, the well-defined basic decision problem is

max U(y_1, …, y_q),                            (P6)
s.t. y ∈ Y.

Now each time the distance minimization problem is solved with positive Lagrange multipliers, we get an outer approximation of the attainable set Y. This is clear from Corollary 1. The solution (y*, λ*) defines a feasibility cut

λ*y ≤ λ*y*   ∀ y ∈ Y,

delineating Y from outside.

A simple idea would be to select as the new reference point the best alternative in the present optimistic approximation of the attainable set. So the decision maker should successively adjust his reference point or aspiration levels in view of the infeasibilities exposed to him.

A cutting-plane algorithm based on these ideas may be formally defined as follows. Let AN denote the problem analyst and DM the decision maker.

Cutting-plane procedure

Step 0 (AN or DM). Let the objective function space be defined by a bounded set in R^q, e.g.

−M_k ≤ y_k ≤ M_k,   k = 1, …, q,

where the M_k's are sufficiently large numbers. Let ȳ^1 be the ideal point, j = 1, ȳ = ȳ^1 and go to Step 2.

Step 1 (DM). Solve

max U(y),
s.t. λ*^t y ≤ λ*^t y*^t,   t = 1, …, j − 1,
     −M_k ≤ y_k ≤ M_k,   k = 1, …, q.

Let ȳ^j be the solution and ȳ = ȳ^j.

Step 2 (AN). Solve

min d(y, ȳ),
s.t. f_k(x) ≥ y_k,   (λ_k),   k = 1, …, q,
     g_i(x) ≤ 0,    i = 1, …, m.

Let y* be an optimal solution. If ȳ is attainable, stop. In this case y* and the corresponding solution x* in the decision space are optimal. Otherwise, let λ* be the dual solution of the first q constraints, let (y*^j, λ*^j) = (y*, λ*), j = j + 1 and go to Step 1.

Technically the procedure above is a straightforward cutting-plane algorithm. If the attainable set Y is bounded from above, U is quasiconcave, f_1, …, f_q are concave and finite, and g_1, …, g_m are convex, the sequence of solutions (y*^j) and the sequence of reference points (ȳ^j) both converge to the optimal solution of the basic decision problem (P6). This is so since the requirements for a general cutting-plane algorithm, cf. Luenberger (1984), are fulfilled.
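To make the two-level structure concrete, here is a schematic driver for the procedure, with the decision maker's master step (Step 1) and the analyst's projection step (Step 2) passed in as callables. The names and the numerical stopping rule are our own simplifications, not the paper's:

```python
from typing import Callable, List, Tuple
import numpy as np

def cutting_plane(
    solve_master: Callable[[List[Tuple[np.ndarray, np.ndarray]]], np.ndarray],
    project: Callable[[np.ndarray], Tuple[np.ndarray, np.ndarray]],
    y_ref: np.ndarray,
    tol: float = 1e-6,
    max_iter: int = 100,
) -> np.ndarray:
    """Generic driver for the cutting-plane procedure.

    solve_master(cuts) -- DM's Step 1: maximize U(y) subject to the
        accumulated cuts lam @ y <= lam @ y_star and the box bounds.
    project(y_ref)     -- AN's Step 2: solve (P4); returns (y_star, lam).
    """
    cuts: List[Tuple[np.ndarray, np.ndarray]] = []
    for _ in range(max_iter):
        y_star, lam = project(y_ref)              # Step 2: projection + duals
        if np.linalg.norm(y_star - y_ref) < tol:  # reference point attainable
            return y_star
        cuts.append((lam, y_star))                # cut: lam @ y <= lam @ y_star
        y_ref = solve_master(cuts)                # Step 1: new reference point
    return y_star
```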

The cutting-plane procedure above shows how information about the Lagrange multipliers may guide the decision maker in his choice of the successive reference points. Now in this two-level scheme the decision maker still has to store a great amount of information and perform some nontrivial evaluations, as modelled by the sequence of relaxed master problems in Step 1. Consequently, the immediate normative or descriptive value of this procedure is probably restricted to cases where the decision maker is a well-equipped division of a large organization. In most cases, it remains an important issue to reduce the burden imposed on the decision maker. We now turn to such less demanding procedures.

4. Lagrange multipliers, tradeoff information, and search directions

In this section we clarify the interpretation of the Lagrange multipliers as tradeoff information. Also we show how the DM could use the last set of multipliers to check the optimality of the proposed decision and, in case of non-optimality, to select new reference points.

Let λ* be the optimal Lagrange multipliers associated with the solution y* to the distance minimization problem (P4). By Corollary 1 we know that

H(y*, λ*) = {y ∈ R^q | λ*y = λ*y*}

is a (q − 1)-dimensional supporting hyperplane to Y at y*.

Clearly, the vectors

v_k* = (1, 0, …, 0, −λ_1*/λ_k*, 0, …, 0),   k = 2, …, q,

with the entry −λ_1*/λ_k* in the k-th position, span the associated (q − 1)-dimensional subspace of H(y*, λ*). So, any feasible direction in H(y*, λ*) is of the form

v = Σ_{i=2}^q γ_i v_i*,   γ_i ∈ R,  i = 2, …, q.

The vectors v_2*, …, v_q* represent very simple tradeoff information. In short, they could be classified as partial tradeoff rates that give exaggerated impressions of the possibilities to substitute between the different pairs of objectives. Thus, for example, v_2* tells us that we have to give up at least λ_1*/λ_2* units of the second objective to gain one unit of the first objective. The more general vector v describes more complex tradeoffs. In short, v could be classified as a total tradeoff rate that gives an exaggerated impression of the possibilities around y*. Thus v indicates that we have to give up at least γ_2 λ_1*/λ_2* units of y_2, …, and γ_q λ_1*/λ_q* units of y_q to obtain γ_2 + … + γ_q units of y_1. A more extensive discussion of multipliers as tradeoff information is provided, in a slightly different setting, by Haimes and Chankong (1979).

Experience suggests that decision makers have limited information processing capabilities, cf. for example Saaty (1980). Therefore the simple partial tradeoffs v_2*, …, v_q* above are the natural pieces of information to focus on. Generally we would suggest that information about λ* be explained to the decision maker in terms of these vectors.
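Constructing these vectors from the multipliers is mechanical; a small helper (ours, not the paper's) makes the recipe explicit:

```python
import numpy as np

def partial_tradeoffs(lam: np.ndarray) -> list:
    """Build v_k* = (1, 0, ..., -lam_1/lam_k, ..., 0), k = 2, ..., q.

    Each vector lies in the supporting hyperplane H(y*, lam), since
    lam @ v_k = lam_1 - lam_k * (lam_1 / lam_k) = 0, and trades one unit
    of objective 1 against lam_1/lam_k units of objective k."""
    q = len(lam)
    vs = []
    for k in range(1, q):
        v = np.zeros(q)
        v[0] = 1.0
        v[k] = -lam[0] / lam[k]   # requires lam[k] > 0
        vs.append(v)
    return vs
```

With λ* = (0.375, 0.249, 0.375) from the numerical example in Section 6, this yields v_2* = (1, −1.506, 0) and v_3* = (1, 0, −1), the vectors quoted there.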

The partial tradeoffs v_2*, …, v_q* are not only simple to comprehend. They also contain sufficient information to check for optimality of a given proposal, cf. Theorem 3 below.

Theorem 3 (Optimality). Let U be nondecreasing pseudoconcave and (y*, λ*) an optimal solution to the distance minimization problem (P4). Now, if

U(y*) ≥ U(y* + γ_i v_i*)   ∀ γ_i ∈ R,  i = 2, …, q,

then y* is an optimal solution to the basic decision problem (P6).

Proof. By the assumption of the theorem,

±∇U(y*) v_i* ≤ 0,   i = 2, …, q,

cf. Theorem 2.1 in Zangwill (1969). Now, let y' ∈ H, i.e.

y' − y* = Σ_{i=2}^q γ_i' v_i*

for suitable values γ_i' ∈ R. By the inequalities above,

∇U(y*)(y' − y*) ≤ 0,

and U pseudoconcave implies

U(y') ≤ U(y*).

So, y* maximizes U on H, and by y* ∈ H ∩ Y and the fact that all points in Y are weakly dominated by a point in H, cf. Corollary 1 above, we get that y* maximizes U on Y. This proves the theorem. □

Theorem 3 is actually a special application of the theorem in Korhonen and Laakso (1986), in which the feasible set is supported from outside by a cone

C = {y | y = y* + Σ_{i=1}^p γ_i d_i,  γ_i ≥ 0}.

Letting (d_i, i = 1, …, p) = (±v_2*, …, ±v_q*) we get Theorem 3 above.

It should be noted that Theorem 3 provides sufficient but not necessary conditions for y* to be optimal. If H is not a unique supporting hyperplane to Y at y*, the proposed y* may be optimal without the conditions being fulfilled. This is illustrated in Figure 1.

An efficient solution having a unique supporting hyperplane in the objective space we call d-unique. So, the conditions of Theorem 3 need not be necessary if y* is not d-unique.

Figure 1. A non d-unique point

On the other hand, if y* is d-unique, the conditions are indeed necessary. To see this, assume y* is an optimal solution to the basic decision problem. In this case the two convex sets above the optimal indifference curve and below the efficient frontier are weakly separated. With H the unique hyperplane supporting the efficient frontier at y*, H must be the separating hyperplane. Consequently, U cannot be improved on H and we have the conditions of Theorem 3.

Also, in case a given d-unique proposal is non-optimal, the partial tradeoffs may be used in the selection of a new reference point or an improving feasible direction. This is emphasized by Theorem 4 below. This theorem generalizes well-known results like Theorem 10.3.3 in Bazaraa and Shetty (1979) about improving feasible directions, in the sense that we allow non-linear constraints and we apply more general projections in terms of distance minimization. On the other hand, we assume d-uniqueness of the point considered. This provides extensive simplifications.

In the following, let y*(ȳ) be the solution to the distance minimization problem (P4) when ȳ is the reference point.

Theorem 4 (Direction of improvement). Let U be strictly increasing pseudoconcave, and let f_1, …, f_q and g_1, …, g_m be strictly increasing differentiable concave and convex functions, respectively. Furthermore, let (y*, λ*) be an optimal solution to the distance minimization problem (P4).

Now, if y* is not optimal to the basic decision problem (P6), we have

∃ v ∈ H(y*, λ*), ε > 0:  U(y* + εv) > U(y*).

We could select v as any vector with ∇U(y*) v > 0, and it suffices to select from {±v_2*, …, ±v_q*}.

Also, assuming y* is d-unique and a unique solution to the distance minimization problem, we have that for any (v, ε) fulfilling the assumption above

∃ δ ∈ (0, ε]:  U(y*(y* + δv)) > U(y*),

i.e. by a curve search in the direction of v the utility will be strictly improved.

Proof. The first part is obvious, as y* would otherwise be an optimal solution to the basic decision problem (P6), cf. Theorem 3 above.

The second part follows as {y* + δv | δ ∈ R} would otherwise separate the convex set above the indifference curve through y* and the convex set below the efficient frontier, both restricted to the direction v, which would contradict the first part of the theorem. More precisely, the proof of the second part runs as follows.

First, we change the basis of R^q to make the first basis vector orthogonal to H. Coordinates with respect to this new basis will, to ease notation, still be called y_1, …, y_q.

Next, we parametrize the indifference curve through y* and the efficient frontier by y_2, …, y_q. Thus, let the efficient frontier be described as the graph of α defined by

α(y_2, …, y_q) := max f_1(x),
s.t. (f_2(x), …, f_q(x)) ≥ (y_2, …, y_q),
     g_i(x) ≤ 0,   i = 1, …, m.

Also, let the indifference curve through y* be defined as the graph of β given by

β(y_2, …, y_q) := min y_1,
s.t. U(y) ≥ U(y*).

Furthermore, by the implicit function theorem,

∇β(y*) = −(U_2'(y*)/U_1'(y*), …, U_q'(y*)/U_1'(y*)),

where U_i'(y*), i = 1, …, q, are the partial derivatives of U.

Now consider α and β restricted to movements from y* in the direction of v:

ᾱ(λ) := α((y_2*, …, y_q*) + λ(v_2, …, v_q)),   λ ≥ 0,
β̄(λ) := β((y_2*, …, y_q*) + λ(v_2, …, v_q)),   λ ≥ 0,

where v_2, …, v_q are the last q − 1 coordinates, within the new basis, of the v considered in the theorem. Clearly, by f_1, …, f_q concave and g_1, …, g_m convex, we have that ᾱ is concave. Similarly, by U concave, β̄ is convex.

By y* d-unique and the basis change performed we now have

ᾱ'(0) = 0.

Also,

β̄'(0) = ∇β(y*) v = −(U_1'(y*))^{-1} ∇U(y*) v < 0

by the assumption that v is a direction of improvement in H. Finally, remember that

ᾱ(0) = β̄(0) = y_1*.

The situation is summarized in Figure 2. Clearly, for λ sufficiently close to zero,

β̄(λ) < ᾱ(λ).

This proves the second part of the theorem if the distance minimization problem is equivalent to projection in the direction of the first basis vector.

Finally, in the general case, we need an additional argument. Let y** solve the distance minimization problem for ȳ = y* + λ(0, v_2, …, v_q). Then

y_i** ≤ ȳ_i,   i = 2, …, q,

as the opposite strict inequality for any i would, by α decreasing, imply that

(y_1**, …, y_{i−1}**, ȳ_i, y_{i+1}**, …, y_q**)

is feasible. So, the distance to ȳ could be weakly decreased, i.e. an alternative optimum would exist and we have a contradiction.

Figure 2. Wishes and possibilities in a search direction

Also, by the inequalities above,

y_1** ≥ α(ȳ_2, …, ȳ_q).

Finally, we get, by β increasing,

β(y_2**, …, y_q**) ≤ β(ȳ_2, …, ȳ_q) = β̄(λ) < ᾱ(λ) = α(ȳ_2, …, ȳ_q) ≤ y_1**,

i.e. the indifference curve through y* is strictly below y_1** in (y_2**, …, y_q**). Hereby, y** is a point of strictly improved utility.

This ends the proof of Theorem 4. □

5. Simplified procedures

In this section we present two simple reference point procedures. The burden imposed on the decision maker is limited. He does not have to remember all previous cuts nor to perform genuine q-dimensional optimization. Rather, it suffices to remember the present cut and, in the first procedure, to perform simple one-dimensional (sub-)optimizations.

The first procedure combines the ideas of the gradient projection and the coordinate ascent methods of nonlinear programming, cf. for example Zangwill (1969) or Luenberger (1984). It may be defined as follows:

Curve search procedure

Step 0 (AN). Select a point ȳ^1 ∈ R^q that is on or above the efficient frontier, for example the ideal point. Let j = 1, ȳ = ȳ^1.

Step 1 (AN). Solve the distance minimization problem (P4). Let y* be an optimal solution and λ* the dual solution to the first q constraints. Also, let (y*^j, λ*^j) = (y*, λ*). Present

(y*, v_2*, …, v_q*)

to the decision maker.

Step 2 (DM). Select

v* ∈ argmax{∇U(y*) v | v ∈ {±v_2*, …, ±v_q*}}.

If ∇U(y*) v* = 0, stop. In this case y* is optimal, cf. Theorem 3.

Step 3 (AN). Determine

a(γ) = y*(y* + γv*),   γ ∈ [0, ε]

(where ε is a given constant) and present it to the decision maker, for example by using computer graphics.

Step 4 (DM). Select

γ* ∈ argmax{U(a(γ)) | γ ∈ [0, ε]}.

Let ȳ = y* + γ*v*, ȳ^{j+1} = ȳ, j = j + 1 and go to Step 1.

The procedure above is tightly related to the visual interactive method of Korhonen and Laakso (1986). The only difference is that we present dual information to the decision maker to assist him in choosing a new reference point. Also, we make explicit assumptions about the behaviour of the decision maker.

As mentioned above, the cutting-plane algorithm converges in rather general non-linear cases. Recent counter-examples show that similar properties cannot be established for gradient projection methods, to which the present curve search procedure has many resemblances. Consequently, we do not seek general convergence theorems for the curve search procedure above. Nevertheless, gradient projection methods have been successfully implemented and have been found to be effective in solving general non-linear programs, cf. Luenberger (1984). Therefore, we expect the curve search procedure to be well-behaved in many cases. This expectation is supported by recent computer simulations of the Korhonen and Laakso (1986) procedure. It turns out that the procedure performs well even if new reference points are selected randomly.
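Under the (strong) assumption that the decision maker's reactions can be modelled by an explicit differentiable utility U, Steps 2-4 can be sketched as follows; `project` again stands for the distance minimization (P4), and all names are our own:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def curve_search_step(y_star, tradeoffs, grad_u, utility, project, eps=1.0):
    """One pass of Steps 2-4: pick the best direction among +/- v_k*,
    then line-search the projected curve a(gamma) = y*(y* + gamma v*)."""
    # Step 2: steepest-ascent direction among the partial tradeoffs
    candidates = [s * v for v in tradeoffs for s in (+1.0, -1.0)]
    v_best = max(candidates, key=lambda v: grad_u(y_star) @ v)
    if grad_u(y_star) @ v_best <= 1e-12:
        return y_star, None                      # optimal, cf. Theorem 3
    # Steps 3-4: curve search over gamma in [0, eps]
    res = minimize_scalar(
        lambda g: -utility(project(y_star + g * v_best)[0]),
        bounds=(0.0, eps), method="bounded",
    )
    gamma = res.x
    return y_star + gamma * v_best, gamma        # new reference point
```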

Numerous modifications of the curve search procedure are of course possible. The direction selection in Step 2 could be generalized to:

select v* ∈ {v ∈ H | ∇U(y*)v > 0, ‖v‖ ≤ K},

where K > 0 is a given constant. Furthermore, 'spacer steps' could be introduced, and we could use a cyclic coordinate ascent approach, successively considering tradeoffs between y_1 and y_2, y_1 and y_3, …, y_1 and y_q.

Also the curve search in Step 4 could be generalized to:

select γ* ∈ [0, ε] so that U(a(γ*)) > U(y*) + k, or γ* ∈ argmax{U(a(γ)) | γ ∈ [0, ε]},

where k > 0 is a given constant, cf. exercise 5.13 in Zangwill (1969) for a similar procedure. Again, it is only important to use this curve search infinitely often as long as the remaining steps do not imply a descent.

The most critical part of the procedure above is the curve search. First, the decision maker may not proceed as described. Second, the idea of exposing whole sections of the efficient frontier is not common in the literature on reference point procedures. Therefore, we end this section by describing one example of a procedure where the curve search is substituted by a search on a hyperplane.

Regress procedure

Step 0 (AN or DM). Select any ȳ^1 ∈ R^q on or above the efficient frontier, for example the ideal point. Let ȳ = ȳ^1, ŷ^1 = ȳ^1 and j = 2.

Step 1 (AN). Solve the distance minimization problem (P4). Let y* = y*^j be an optimal solution and λ* the dual solution to the first q constraints. Represent λ* by v_2*, …, v_q* and present

(y*, v_2*, …, v_q*)

to the decision maker.

Step 2 (DM). Solve

ŷ^j ∈ argmax U(ŷ),
s.t. ŷ ≤ y* + Σ_{i=2}^q γ_i v_i*,   γ_i ∈ R,  i = 2, …, q.

Let

ȳ^j = (1/j) ŷ^j + ((j − 1)/j) ȳ^{j−1} = (1/j) Σ_{t=1}^j ŷ^t.

Let ȳ = ȳ^j, j = j + 1 and go to Step 1.

The idea of the procedure above conforms with the notion of satisficing decision making. A decision maker realizing that his reference point is unattainable makes a certain regression in his aspirations. He adjusts his aspirations in the direction of the point he tentatively believes to be optimal.
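The aspiration update in Step 2 is just a running average of the best points found so far. A minimal sketch of this partial adjustment (the function name is ours):

```python
import numpy as np

def regress_update(y_ref_prev: np.ndarray, y_best_j: np.ndarray, j: int) -> np.ndarray:
    """Partial adjustment of the aspiration level:
    y_ref_j = (1/j) * y_best_j + ((j-1)/j) * y_ref_{j-1},
    i.e. the running average of the best points found so far."""
    return (y_best_j + (j - 1) * y_ref_prev) / j
```

Because the weight on new information shrinks like 1/j, the reference point settles down even if the individual best points jump around.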

We note that the Regress procedure has some resemblance to the Kornai and Liptak (1965) procedure. The decision maker only partly adjusts his reference point in view of the present dual information. On the other hand, it deviates from the Kornai-Liptak procedure in that we do not make a similar regression in the dual information submitted to the decision maker. In consequence, we cannot easily provide a general convergence result for the regress procedure above.

6. A numerical example

Consider the example of Zionts and Wallenius (1983) or Korhonen and Laakso (1986).

The multiobjective programming problem is

max [x_1, x_2, x_3],
s.t. 3x_1 + 2x_2 + 3x_3 ≤ 18,
     x_1 + 2x_2 + x_3 ≤ 10,
     9x_1 + 20x_2 + 7x_3 ≤ 96,
     7x_1 + 20x_2 + 9x_3 ≤ 96,
     x_1, x_2, x_3 ≥ 0.

Clearly, the ideal point is

ŷ = (6, 4.8, 6).

We furthermore assume the utility function to be

U(x_1, x_2, x_3) = −((25 − 3x_1)^h + (25 − 5x_2)^h + (25 − 3x_3)^h)^{1/h},

where h is very large. So, for most purposes, this pseudoconcave utility function is equivalent to the quasiconcave Leontief function

Ũ(x_1, x_2, x_3) = min{3x_1, 5x_2, 3x_3},

and the optimal solution is clearly

y^opt = (2.5, 1.5, 2.5),

as illustrated in Figure 3. Finally, in all the calculations below, the metric applied is

d(y, ȳ) = max{|y_i − ȳ_i| : i = 1, 2, 3},

i.e. we use the Tchebycheff metric as the distance function.
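With the Tchebycheff metric, the projection step (P4) for this linear example is itself a small LP: minimize z subject to ȳ_k − x_k ≤ z and the resource constraints. A sketch using scipy (our code, not the authors'); starting from the ideal point it should reproduce the projected point (2.55, 1.35, 2.55) and, up to the solver's sign convention for duals, multipliers close to λ* = (0.375, 0.25, 0.375):

```python
import numpy as np
from scipy.optimize import linprog

def tchebycheff_project(y_ref):
    """Solve min z  s.t.  y_ref_k - x_k <= z  plus the example's constraints.
    Variables: (x1, x2, x3, z)."""
    c = np.array([0.0, 0.0, 0.0, 1.0])          # minimize z
    A_ub = np.array([
        [3.0,  2.0,  3.0,  0.0],                # 3x1 + 2x2 + 3x3 <= 18
        [1.0,  2.0,  1.0,  0.0],                # x1 + 2x2 + x3 <= 10
        [9.0, 20.0,  7.0,  0.0],                # 9x1 + 20x2 + 7x3 <= 96
        [7.0, 20.0,  9.0,  0.0],                # 7x1 + 20x2 + 9x3 <= 96
        [-1.0, 0.0,  0.0, -1.0],                # y_ref_1 - x1 <= z
        [0.0, -1.0,  0.0, -1.0],                # y_ref_2 - x2 <= z
        [0.0,  0.0, -1.0, -1.0],                # y_ref_3 - x3 <= z
    ])
    b_ub = np.array([18.0, 10.0, 96.0, 96.0, -y_ref[0], -y_ref[1], -y_ref[2]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
    lam = -res.ineqlin.marginals[4:7]           # duals of the deviation rows
    return res.x[:3], lam

y_star, lam = tchebycheff_project(np.array([6.0, 4.8, 6.0]))
print(y_star, lam)   # approx. [2.55 1.35 2.55] and [0.375 0.25 0.375]
```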

Figure 3. Attainable set in numerical example

Table 1
Cutting-plane procedure in numerical example

j   Ref. point ȳ     Proj. point y*      Dual inf. λ*
1   0.0, 6.0, 0.0    0.00, 4.80, 0.00    0.000, 1.000, 0.000
2   8.0, 4.8, 8.0    3.00, 0.00, 3.00    0.500, 0.000, 0.500
3   3.0, 1.8, 3.0    2.55, 1.35, 2.55    0.375, 0.250, 0.375
4   2.5, 1.5, 2.5    2.50, 1.50, 2.50

Now, starting with the ideal point as the first reference point, the three procedures evolve as follows.

In the Cutting-plane procedure, Step 0, let M_k = 8, k = 1, 2, 3. Then in Step 2, the projected point and dual information are

y* = (2.550, 1.350, 2.550),
λ* = (0.375, 0.249, 0.375).

Next, in Step 1, we solve

max min{3y_1, 5y_2, 3y_3},
s.t. 0.375y_1 + 0.250y_2 + 0.375y_3 ≤ λ*y* = 2.250,
     −8 ≤ y_i ≤ 8,   i = 1, 2, 3.

The solution, i.e. the new reference point, is ȳ = (2.5, 1.5, 2.5). Since this is a feasible point, Step 2 simply confirms the optimality of this point. This very fast convergence of the cutting-plane procedure is partly explained by the chosen starting point. If, instead, we start with (0, 6, 0) as the first reference point, the procedure evolves as shown in Table 1.
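The master problem of Step 1 is likewise an LP once the Leontief utility is linearized with an auxiliary level variable u. A sketch consistent with the projection code above (again our own illustration):

```python
import numpy as np
from scipy.optimize import linprog

def solve_master(cuts, M=8.0):
    """Step 1 for U(y) = min{3y1, 5y2, 3y3}: maximize the level u subject to
    u <= 3y1, u <= 5y2, u <= 3y3 and the accumulated feasibility cuts.
    Variables: (y1, y2, y3, u); cuts is a list of (lam, y_star) pairs."""
    c = np.array([0.0, 0.0, 0.0, -1.0])             # maximize u
    A_ub = [[-3.0, 0.0, 0.0, 1.0],                  # u - 3y1 <= 0
            [0.0, -5.0, 0.0, 1.0],                  # u - 5y2 <= 0
            [0.0, 0.0, -3.0, 1.0]]                  # u - 3y3 <= 0
    b_ub = [0.0, 0.0, 0.0]
    for lam, y_star in cuts:
        A_ub.append([lam[0], lam[1], lam[2], 0.0])  # lam @ y <= lam @ y_star
        b_ub.append(float(lam @ y_star))
    bounds = [(-M, M)] * 3 + [(None, None)]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:3]                                 # new reference point

# With the single cut from above, 0.375y1 + 0.25y2 + 0.375y3 <= 2.25,
# solve_master([(np.array([0.375, 0.25, 0.375]), np.array([2.55, 1.35, 2.55]))])
# returns approximately (2.5, 1.5, 2.5), as in the text.
```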

Next, consider the Curve search procedure. In Step 0, we select the ideal point, and in Step 1 the distance minimization problem leads to

y* = (2.550, 1.350, 2.550),
v_2* = (1, −1.506, 0),   v_3* = (1, 0, −1).

Figure 4. Curve search information


In Step 2, the decision maker selects

v* = (−1, 1.506, 0),

and in Step 3, the picture in Figure 4(a) is presented to the decision maker.

In Step 4, he selects γ* = 0.085 and hereby the new reference point becomes

ȳ^2 = (2.464, 1.479, 2.550).

Now, since we are at the same face of the attainable set, the information submitted to the decision maker in the next Step 1 is repeated. However, this time the decision maker in Step 2 selects

v* = (1, 0, −1).

In Step 3, the analyst now produces the picture in Figure 4(b). In Step 4, the decision maker (using k = 10) selects γ* = 0.0428 and the new reference point becomes

ȳ^3 = (2.507, 1.479, 2.507).

Again, we remain at the same face and the information submitted to the decision maker in Step 1 is unchanged. Now, however, he selects

v* = (−1, 1.506, 0)

in Step 2, and in Step 3, Figure 4(a) is again presented to the decision maker. He selects γ* = 0.012 and the new reference point becomes

ȳ^4 = (2.495, 1.497, 2.507),

etc. Clearly, the procedure converges.

Finally, consider the Regress procedure. Starting in the ideal point, the reference point, projected point and the best point on the approximating hyperplane evolve as in Table 2. In all iterations, the dual information is of course

λ* = (0.375, 0.249, 0.375).

Again, convergence is obvious.

Table 2
Regress procedure in numerical example

j   Proj. point y*         Best point ŷ           Ref. point ȳ
1                                                 6.000, 4.800, 6.000
2   2.550, 1.350, 2.550    2.550, 1.350, 2.250    4.125, 3.075, 4.125
3   2.513, 1.463, 2.513    2.438, 1.463, 2.438    3.563, 2.528, 3.563
4   2.506, 1.481, 2.506    2.469, 1.481, 2.469    3.289, 2.273, 3.289
5   2.504, 1.488, 2.504    2.481, 1.488, 2.481    3.127, 2.116, 3.127
6   2.503, 1.492, 2.503    2.486, 1.492, 2.486    3.021, 2.012, 3.021
7   2.502, 1.494, 2.502    2.490, 1.494, 2.490    2.945, 1.938, 2.945
8   2.502, 1.495, 2.502    2.492, 1.495, 2.492    2.888, 1.883, 2.888

7. Computational complexity

We now consider the computational complexity of using the proposed procedures for solving a practical multiple objective programming problem. Whichever of the utility maximization problems is chosen, the dimension of the optimization problem in each step is more or less equal to that of the original multiple objective programming problem. If the utility function is known explicitly, then the computational burden in each step can be compared with solving the original multiple objective programming problem. If the utility function is unknown, we assumed that the decision maker chooses the solution with maximum utility, subject to the current set of constraints. Here, a distance minimization problem has to be solved in each step. If the decision maker is not consistent with some utility function, the procedure will not converge, and this fact is a valuable piece of information. So, to summarize, the problem complexity in each step is acceptable in a mathematical sense, but the number of iterations may be high, depending on the decision maker's ability to 'solve' his utility maximization problem.

8. Final remarks

In this paper we discuss how dual information may assist the decision maker in the choice of


appropriate reference points. Furthermore, we consider the convergence properties of some general reference point procedures based on this idea.

It turns out that tradeoffs are normally available once a distance minimization problem has been solved. Using this information the decision maker may check the optimality of a given proposal and, in case of non-optimality, select a new and improved reference point.

High information-processing capacity on the part of the decision maker must be assumed to ensure convergence of reference point procedures. As one example we describe a cutting plane algorithm where the decision maker must remember all previously generated constraints.

Still, well-behaved procedures with a smaller burden imposed on the decision maker can be constructed. Two procedures are suggested in which the decision maker only reacts to the last set of tradeoffs. Similarity with well-known nonlinear optimization procedures suggests that convergence will occur in many cases.

Acknowledgement

This work started during the Second Euro Summer Institute, Brussels 1985. We wish to thank Prof. J.P. Brans, Ph. Vincke and M. Despontin for organizing the three weeks of fruitful and friendly collaboration.

References

Bazaraa, M.S., and Shetty, C.M. (1979), Nonlinear Programming: Theory and Algorithms, Wiley, New York.

Benayoun, R., de Montgolfier, J., Tergny, J., and Laritchev, O.I. (1971), "Linear programming with multiple objective functions: Step method (STEM)", Mathematical Programming 1, 336-375.

Bogetoft, P. (1986), "General communication schemes for multiobjective decision making", European Journal of Operational Research 26, 108-122.

Chankong, V., and Haimes, Y.Y. (1983), Multiple Objective Decision Making: Theory and Methodology, North-Holland, Amsterdam.

Charnes, A., and Cooper, W.W. (1977), "Goal programming and multiple objective optimization", European Journal of Operational Research 1, 39-54.

Geoffrion, A.M., Dyer, J.S., and Feinberg, A. (1972), "An interactive approach for multi-criterion optimization", Management Science 19, 357-368.

Haimes, Y.Y., and Chankong, V. (1979), "Kuhn-Tucker multipliers as trade-offs in multiobjective decision-making analysis", Automatica 15, 59-72.

Hallefjord, Å., and Jörnsten, K. (1986), "An entropy target point approach to multiobjective programming", International Journal of Systems Science.

Hallefjord, Å., Eriksson, O., and Jörnsten, K. (1986), "A long range forestry planning problem with multiple objectives", European Journal of Operational Research 26, 123-133.

Kallio, M., Lewandowski, A., and Orchard-Hays, W. (1980), "An implementation of the reference point approach for multiobjective optimization", Proceedings of the IIASA Workshop on Large-Scale Linear Programming, 2-6 June 1980, Laxenburg, Austria.

Kok, M. (1984), "Tradeoff information in interactive multi-objective linear programming methods", WP-84-35, IIASA, Austria.

Korhonen, P., and Laakso, J. (1986), "A visual interactive method for solving the multiple criteria problem", European Journal of Operational Research 24, 277-287.

Kornai, J., and Liptak, Th. (1965), "Two-level planning", Econometrica 33, 141-169.

Lewandowski, A., and Grauer, M. (1982), "The reference point optimization approach: Methods of efficient implementation", in: M. Grauer, A. Lewandowski and A.P. Wierzbicki (eds.), Multiobjective and Stochastic Optimization, CP-82-S12, IIASA, Austria, 353-376.

Luenberger, D.G. (1984), Linear and Nonlinear Programming, Addison-Wesley, Reading, MA.

Nakayama, H., and Sawaragi, Y. (1984), "Satisficing tradeoff method for multiobjective programming and its applications", paper presented at the 9th IFAC World Congress, July 2-4, 1984, Budapest, Hungary.

Philip, J. (1972), "Algorithms for the vector maximization problem", Mathematical Programming 2, 207-229.

Roy, B., and Vincke, Ph. (1981), "Multicriteria analysis: Survey and new directions", European Journal of Operational Research 8, 207-218.

Saaty, T. (1980), The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation, McGraw-Hill, New York.

Vincke, Ph. (1982), "Présentation et analyse de neuf méthodes multicritères interactives", Cahier du LAMSADE 42, Paris.

Wierzbicki, A.P. (1979), "A methodological guide to multiobjective optimization", Proceedings of the 9th Conference on Optimization Techniques, Warsaw.

Wierzbicki, A.P. (1980), "The use of reference objectives in multiobjective optimization", in: G. Fandel and T. Gal (eds.), Multiple Criteria Decision Making Theory and Application, Springer, Berlin, 468-486.

Zangwill, W.I. (1969), Nonlinear Programming: A Unified Approach, Prentice-Hall, Englewood Cliffs, NJ.

Zionts, S., and Wallenius, J. (1976), "An interactive programming method for solving the multiple criteria problem", Management Science 22, 652-663.