Upload
others
View
4
Download
0
Embed Size (px)
Citation preview
0-1 Knapsack Problems with Random Budgets
by
Shubhabrata Das &Diptesh Ghosh*
February 2001
* Faculty of Economic SciencesUniversity of GroningenThe Netherlands
Please address all correspondence to:
Dr. Shubhabrata DasAssistant ProfessorIndian Institute of ManagementBannerghatta RoadBangalore - 560 076Fax: (080) 6584050E-mai l : shubh<xa>iimb.ernetin
Knapsack Problems with Random Budgets
Shubhabrata DasQM&IS Area, Indian Institute of Management, Bangalore, India
Diptesh GhoshFaculty of Economic Sciences, University of Groningen, The Netherlands
Abstract
Given a set of elements, each having a profit and cost associated with it, and a budget, the 0-1 knapsackproblem finds a subset of the elements with maximum possible combined profit subject to the combinedcost not exceeding the budget. In this paper we study a stochastic version of the problem in which thebudget is random. We propose two different formulations of this problem, based on different ways ofhandling infeasibilities, and propose exact and heuristic algorithms to solve the problems represented bythese formulations. We also present the results from some computational experiments.
Keywords: static stochastic knapsack problem, random budget, in feasibility, branch and bound.
Given a budget B and a set of elements E = (c i , . . . , cn) with a profit vector P = (pi , . . . , pn) eJHJ and a cost vector C = (c i , . . . , cn) € 1H+, the 0-1 knapsack problem (refer Martello and Toth [10]for a detailed introduction), denoted here by KNAP(P,CfB), is the problem of finding a subsetS C E which maximizes £ e € SPj while satisfying XLe.6sc) ^ ^. ^ w e identify a binary vectorx = (x i , . . . , x n ) ' with a set S C E by the relation xt = 1 4=£ ei e S, we can define KNAP(P, C, B)as
K N A P ( P , C , B ) = a r g m a x { P x : C x < B , x € { 0 , 1 } n } . (1)
The vector x € {0, l } n is called a feasible solution to KNAP(P,C,B) (or simply, a solution) if itsatisfies the budget constraint Cx < B. An optimal or maximum-profit solution to KNAP(P, C, B)is denoted by Xg.
Stochastic versions of KNAP(P,C,B) are of two forms, static and dynamic. Full informationregarding the randomness of the problem elements is assumed to be known before the start of thesolution process in the static version, while in the dynamic version, some of these parameters areunknown when the solution process starts. Steinberg and Park [13], Sniedovich [11, 12], Henig [7],Carraway et al. [4], and Yu [16] have studied static stochastic 0-1 knapsack problems in which theP vector is random. Cohn and Barnhart [5] studied a special case of these problems in whichthe P vector is a known linear function of the C vector and the C vector is random. Averbakh [1]studied probabilistic properties of a heuristic algorithm to solve integer linear programs with multipleknapsack constraints. Ghosh and Das [6] recently studied a general class of stochastic discreteoptimization problems (of which KNAP(P, C, B) with random P is a special case). Static stochasticknapsack problems can also be seen as a special case of general stochastic integer programmingproblems, especially in the literature on recourse-based approaches. Discussions on this topic canbe found in Birge and Louveaux [3], and Van der Vlerk [15]. Dynamic stochastic knapsack problemshave been studied by several authors, like Kleywegt and Papastavrou [8], Marchetti-Spaccamela andVercellis [9] and Szkatula [14].
1 Formulation of the Problem
In the current work we will study a static stochastic knapsack problem in which the budget israndom. In the next section, we will introduce the problem, expand the notion of feasibility, andsubsequently consider two different notions of optimality. In Section 2, we develop algorithms, bothexact and heuristic, to solve these problems, and report the results of preliminary computationalexperimentations with these algorithms in Section 3. We conclude the work with a summary inSection 4 and point out possible directions for future research in this area.
1 Formulation of the Problem
We consider a stochastic version of KNAP(P, C, B), denoted here by S-KNAP(P, C, B(F)), in whichthe budget B is a random variable with a density function f(-) and a survival function F(«) definedby
F(b)=Pr[B>b] .
We denote the support of B (equivalently F) by [BL,BU] , where BL is non-negative, and Bu isnot necessarily finite. The mean and upper a-th percentile of B are denoted by By. and B(a),respectively, i.e
Pr[B > B ( a )] = a.
The stochasticity of the budget implies that (some) solutions will meet the budget requirementsonly some of the time. This calls for appropriate modifications in the notion of feasibility, as wellas methods for dealing with the profits accrued from a solution when the random budget value fallsbelow the cost of the solution.
Redefining Feasibility: There are several alternative ways of defining a feasible solution to stochas-tic knapsack problems with random budgets. Few obvious choices are to require that the budgetconstraint be met
• for all values in the range of B,
• for some values in the range of B,
• for an average value of B
• at least with a specified probability.
Accordingly, we may define the following notions of feasibility in the current setup:
Definition 1: A solution x is said to be a strongly feasible solution to S-KNAP(P, C, B(F)) if Cx < BL,or equivalently, F(Cx) = 1.
Definition 2: A solution x is said to be a weakly feasible solution to S-KNAP(P, C, B(F)), if Cx < B u .
Definition 3: A solution x is said to be a mean feasible solution to S-KNAP(P, C, B(F)), if Cx < BH.
Definition 4: A solution x is said to be feasible with a reliability coefficient a to S-KNAP(P, C, B(F)),if Pr[Cx < B] > a, or equivalently, if Cx < B ( a ) . A feasible solution with a reliability coefficient0.5 is called a median feasible solution.
1 Formulation of the Problem
Remark 1: If Ts, Fw and T^. denote the class of strongly-, weakly-, and mean-feasible solutionsrespectively, while J ( a ) stands for the class of feasible solutions with reliability coefficient a, it iseasy to see that
The concepts of mean and median feasibility coincide (i.e., T^ = .T^o.s)) in many cases, includingwhen B has a symmetric distribution.
Remark 2: A sufficient (and almost necessary) condition for a solution x to be weakly feasible isF(Cx) > 0.
Remark 3: All the above-mentioned notions of feasibility in the stochastic knapsack problem reduceto the notion of feasibility in the deterministic problem when B is degenerate. This justifies oursubsequent search for a suitably defined optimal solution to be restricted to one of the above definedclass of solutions.
Remark 4: A binary vector x is a strongly feasible solution to S-KNAP(P, C, B(F)) if and only if itis a feasible solution of K N A P ( P , C , B L ) . Thus, the requirement of strong feasibility reduces theproblem to its deterministic counterpart. Hence, we will be mostly restricting ourselves to weaklyfeasible solutions. However, the methodologies described here should be valid if one confines oneselfto the classes T^) or T^ with minor modifications.
Redefining Optimality: The definition of weak feasibility necessitates appropriate methods fordealing with the profits accrued from a solution when the random budget value falls below its cost.We use two approaches to deal with such situations. In the first approach, we discard any profitaccrued from a solution if it is infeasible in a given scenario (i.e., for a given value b of B). In orderto distinguish it from the profit in the non-stochastic problem, we will refer this profit
nT(x,b) = P x x Icx<b
as the truncated profit of the solution. (I is the usual indicator function.) In the second approach, weaccept profits accrued from solutions whose costs exceed the budget, (but not Bu), but also includea penalty for the portion of the cost of the solution exceeding the budget. The profit value, thusobtained,
nP(x,b) = Px - «(Cx - b) x Icx>b
is called the penalized profit of the solution, d(-) is called a penalty (or recourse) function. In thiswork we restrict ourselves to linear penalty functions (i.e. -ft(t) = 0t), although some situations maywarrant more steep (viz. exponential) penalties. Note that both the truncated and penalized profitsof a solution, being functions of B, are themselves random variables.
The most direct way of defining optimality for static stochastic 0-1 knapsack problems is tomaximize the expected value of the truncated or penalized profits. The two profit criteria will, ingeneral, lead to different optimal solutions. Another common approach to optimization in stochasticproblems is in terms of the regret associated with a solution. Let us define the loss L(x|B = b)associated with a solution x as L(TTf(x^,b) — FTj(x,b)), where x£ is the maximum profit solutionwhen the budget B equals to b, J = T or P, and L() is a non-decreasing function on (0,oo). Anoptimal solution may be defined as the one having minimum expected loss or regret, the expectationbeing taken over b. A third approach could be to find a solution with the minimum maxb L value.This corresponds to the minmax regret solution studied in Averbakh [2].
We confine ourselves in this paper to the first two approaches to optimization. The followinglemma shows that the two are equivalent when L(-) is linear (i.e. L(t) = at, which is the mostcommon form of loss and adopted here).
1 Formulation of the Problem
Lemma 1: In S-KNAP problems, maximizing the expected truncated (or penalized) profit is equiva-lent to minimizing the regret associated with the truncated (or penalized) profit provided the regretis defined through a linear loss function.
Proof: Let L(t) = at, where a is a positive constant. Then
where J is either T for the truncated profit case, or P for the penalized profit case.
Note that the regret of solution x is
Rj(x) = E ta(nj(x;,b) -nj (x ,b) ) ] = a(E [TTjCx^b)] - E
the expectation being taken over b values. Since E [Tlj(x^,b)] does not involve x, finding a memberof argmin{Rj(x)} is equivalent to finding a member of argmin{—E [TT;(x, b)] i.e., a member ofargmax{E[TTj(x,b)]}. •
We now obtain the general expressions for E [FTj(x, b)]. It is easy to see that the expected valueof the truncated profit is
E lTTT(x,b)] = f U Px x ICx<b(-dF(b)) = (Px) - F(Cx), (2)JBL
which implies that if we adopt the policy of discarding the profit accrued from solutions when theyare infeasible in a scenario, then S-KNAP can be formulated as
T-KNAP(P,C,B(F)) =argmax{ZT(x) =Px-F(Cx) : C x < B u , x€{0 ,1} n } . (3)
Given a specific budget value b of B, the penalized profit, assuming a linear penalty d[t) = 8t,of a weakly feasible solution x is
rx i u\ f P x i f C x < b , , .nP(x,b) = | P x _ 0 t c x - b ] i f b < C x . (4>
Expression (4) implies that the expected penalized profit of a strongly feasible solution is the sameas its profit. For any other weakly feasible solution x
fCx
E[nP(x,b)] = Px-6 (Cx-b) (-dF(b))JBL
where ^i(t) = JgL b (—dF(b)). So adopting the linear penalty approach, S-KNAP can be formulatedas
P-KNAP(P,C,B(F)) =
C x < B U ) x e { 0 , 1 } n } . (6)
Remark 5: In the T-KNAP(P,C,B(F)), there is no loss of generality in restricting the solutionspace to Tw, because for Zy(x) = 0 for any x with Cx > Bu- That is not the case for P-KNAP(P, C, B(F)), where for some problem instances with a low enough value of 9, there may exista xo with CXQ > Bu such that
ZP(x0)<max{Zp(x): Cx < Bu, x€{0,1}n}.
The justification of excluding such solutions in our consideration lies with the fact that the optimal (inany formulation) of a stochastic knapsack problem should reduce to that of a maximum profit solutionof deterministic knapsack problem when the budget B has a degenerate (single-point) distribution.
1 Formulation of the Problem
Remark 6: Literature on general stochastic integer programming problems suggests mainly two ap-proaches to deal with randomness in constraint coefficients. The first is through recourse functions,a special case of which is essentially our P-KNAP formulation. The second practice is to limit tothe feasible solution belonging to ^(a)? however without any alterations in the objective function(unlike our P-KNAP or T-KNAP formulation).
We conclude this section with deriving the functional forms of ZT(X) and Zp(x) for some com-mon probability distributions. Since Zy(x) = Zp(x) = Px for all strongly feasible solutions, theexpressions given below are valid for other weakly feasible solutions only.
Example 1: Suppose B has a Uniform distribution on [BL, Bu]. Then
I i f b < B L
fc£ ifBL<b<Buifb>Bu
|*(b) = 1{\2~X) if BL < b < BU. Thus,
and
Example 2: Suppose B is normally distributed with mean p. and variance a2, i.e. its density is givenby f(b) = £<!>(*=*), where
<|>{t) = - J = exp( - l - ) , and <D(t) = f d>(u)du = 1 - O(-t)V27T 2 J_oo
are the density and the distribution of the standard Normal distribution. Note that the survivalfunction of B is F(b) = < D ( ^ ) , and n(b) = \i®(^) - c$>{^). Thus
(9)u
and
nx-v.^ (1Q)
Example 3: Suppose B has a shifted exponential distribution on [BL,OO], i.e.
•{Aexp[-A(b-BL)] if b > BL0 otherwise.
Hence
exp[—A(b — BL)] otherwise,
2 Solution Techniques
and
H(b) = f tAexp[-A(t - BL)] dt = (BL + 7) - (b + U exp[-A(b - BL)].JBL
A A
Thus,
ZT(x) = Px exp[-MCx - BL)], (11)
and
ZP(x) =Px-9Cx[ l + 2exp[-A(Cx-BL)]] + £[i -exp[-A(Cx-BL)]] + 6BL (12)
2 Solution Techniques
We devise exact algorithms for solving T-KNAP(P,C,B(F)) and P-KNAP(P,C,B(F)) in the Sub-section 2.1. While the general version of the latter problems are well-studied in the literature, andconsequently several efficient exact algorithms exist, we also present an exact algorithm not onlyfor the sake of completeness, but also because a presented variation of this algorithm can be usefulwhen one has only limited knowledge of the probability distribution. We also experience the worth ofknowing the exact form of the distribution in terms of computational speed. We present a heuristicof these problems in the Subsection 2.2.
2.1 The Exact Algorithm
The exact algorithm that we consider for solving both T-KNAP(P, C, B(F)) and R-KNAP(P, C, B(F))is a depth first branch and bound algorithm (DFBB). The pseudocode for the general DFBBprocedure is provided in Figure 1. There are two problem-specific functions in the procedure,CalculateObjective(-) and CalculateBound(-). The CalculateObjective(-) function is easily im-plemented using expression (2) for T-KNAP(P, C, B(F)) and expression (5) for P-KNAP(P, C, B(F))respectively. Therefore the remainder of this subsection is dedicated to the definition of the Calculate-Bound(-) function for both the problems.
We will use the following notation in this subsection. A vector x = (xi ,X2,... ,xa), a < n, x €{0, l} a is referred to as a partial solution. Any vector x = (xi ,X2,... ,xn) € {0,1}n such that Xi = xtfor i = 1,... , a is called a realization of x. We denote by pj the ratio £f for j = 1,... , n, assumethat pi > p2 > • • • > Pn> and define pn+i = 0. We also denote na = ££=1 V)% ba = 2ZjLi Pj*j>£=1 V)% ba = 2ZjLi7rm = £ jL, PjXj + Ljla+i cj, and b m = ^TjLi *&) + L j ^ a + i Cj, for a + 1 < m < n. Notice that
b a , and b m are all functions of x.
Computing CalculateBound(x) for T-KNAP(P,C,B(F)): We will find an upper bound forZy(x) for all realizations x of x. Let x be a realization of x with cost b in the interval [bm,bm+i].The objective function of a linear relaxation of KNAP(P, C, b) (see Martello and Toth [10]), whenb € [bm ,bm + i] , is:
ni(b) = 7tm + p m + 1 (b - bm) = Ai,mb + A2,m> (13)
where Ai,m = pm+i and A2,m = Ttm — Ai>mbm. Therefore an upper bound for the maximum valueof ZT(X) for all realizations x of x with Cx = b G [bm,bm+i] is given by (Ai,mb + A2,m)F(b), andthe overall upper bound for all weakly feasible solutions is given by
max f{(7 t m -p m + 1 b m ) + pm + ib}-(A l f mb + A2,m)F(b)] = max {¥(m)}, (14)b<Bu Lv J a<m<r
2 Solution Techniques
Note:n is the size of the problem, i.e the cardinality o/E;BestSoFor stores the best solution found, initialized to 0;BestPsi stores the objective function value 0/BestSoFar, initialized to —00;
CalculateObjective(x) calculates the objective function value of a given solution x.CalculateBound(x) calculates an upper bound to the objective function value achievable from a givenpartial solution x.
procedure DFBBParameters
x : Solution;a : Index up to which values have been assigned in x;
beginif (a = n) thenbegin
if CalculateObjective(x) > BestPsi) thenbegin
BestSoFar :=x;BestPsi := CalculateObjective(x);
end;end;XQ+I := 1;
if (Px < Bu) and (CalculateBound(x) > BestPsi)BranchAndBound(xt a •+• 1);
xa+i :=0;if (CalculateBound(x) > BestPsi)
BranchAndBound(x, a -f 1);end;
Fig. 1: Pseudocode for a depth first branch and bound algorithm
where r = max{k : bjc < Bu} and
V ( m ) = max (Ai>mb + A2,m)F(b). (15)b m <b<b m + i
Notice that X2,m is constant over the interval [bm, bm+i]. If one chooses to restrict oneself to meanor median feasible solutions, only the Bu in the definition of r need to be suitably replaced.
Upper bounds to W(m) can be calculated fast if we know in advance that F(b) is either convexor concave over a given interval. The shape of F(b) is known for all distributions. For instance, F(b)is linear for Uniform distributions, while for Normal distributions, F(b) is concave over [BL,^] andconvex over [jx, Bu], where \i is the mean of the distribution.
Let us first consider the case when F(b) is convex over [bm, bm+i]. Refer to Figure 2. The dashedline shows a linear approximation F^b) of F(b) such that T\[b) > F(b) in the interval of interest.Using elementary geometry we can represent T\{b) with the equation
where A3 m = F ( bb - ^ ) ^ ( b - ) and A4>m - F(bm) -
2 Solution Techniques
bm+1
Fig. 2: Linear approximation (upper bound) of F(b) when F(b) is convex
The maxima of the function n t(b) • Ft(b) occurs at b m = -bound Wx(m) of W(m) is
7tmF(bm) i f b ^ < b m
1". So an upper
(17)
Fig. 3: Linear approximation (upper bound) of F(b) when F(b) is concave
Next, let us assume that F(b) is concave over [b m ,b m + i ] . Refer to Figure 3. The dashed lineshows a linear approximation Fi(b) of F(b) such that Fi(b) > F(b) for b m < b < bm+i. Fi(b) istangential to F(b) at b = b m . So its equation can be written as
Fi(b) = A5 ,m
where A5,m = f (bm) and A6>m = F(bm) - A5,mbm.
(18)
2 Solution Techniques
Using arguments similar to those used for developing (17) we get an upper bound Yi(m) forW(m) when b m < b < bm+i given by
7*mF(bm) i fb |a<bm
(19)
where
bi=--
A pseudocode for the CalculateBound(-) function for T-KNAP(P,C,B(F)) using expressions(17) and (19) is given in Figure 4.
function CalculateBoundParameter
x : Partial Solutionbegin
r := max{k : bk < Bu};for i = a to r dobegin
if F(b) is convex over [bi,bt+i] thenuse (17) to calculate MMi);
else if F(b) is concave over [bi,bi+i] thenuse (19) to calculate ^i(i);
elseuse (15) to calculate W\{i) = Y(i);
end;return max MMi);
a<i<r
end;
Fig. 4: CalculateBound(-) function for T-KNAP(P,bC,B(F))
Computing CalculateBound (x) for P-KNAP(P, C, B(Fj): We will now find an upper bound forZp(x) for all realizations x of x. Let x be a realization of x with cost b in the interval [bm ,bm+i].From the linear relaxation of KNAP(P, C, b)
Px < 7tm + pm + i (b - bm) <
Also since u(-) is non-decreasing and F(-) is non-increasing,
i f b m < B L < b m . f l
- bm[l - F(bm)] if BL < b m .
Therefore an upper bound for the maximum value of Zp(x), for all realizations x of x is given by
max{¥(m)}, (20)
where
7rm+1 if b m + 1 < B L
= <( 7 c m + i + 6 n ( b m + i ) i f b m < B L < b m + 1 (21)7r m , 1 +e{^(b m + 1 ) -b m [ i -F(b m ) ]} B L <b m .
2 Solution Techniques 10
and r = max{k : bk < Bu}.
Prom the discussion above we can construct the function CalculateBound(x) for P-KNAP(P, C,B(F)). The pseudocode for this function is given in Figure 5.
function CalculateBoundParameter
x : Partial Solutionbegin
if Cx < BL thenl:=max{k:bk < BL},
else14-a;
r = max{k: bk < Bu};return max ¥ m (from (21));
end,
Fig. 5: CalculateBound(-) function for P-KNAP(P,C,B(F))
Notice that the bound calculation for P-KNAP(P,C, B(F)) described in this section does notuse any special structure of the survival function F(-). Therefore we would not expect it to outputgood upper bounds. In practice, a better bound would be obtained by searching for the maximumvalue of 7rm + p m + 1 (b - bm) + ^i(b) - b[1 - F(b)] in each of the relevant intervals [b m ,b m + i ] . Thisis illustrated in the examples below.
Example 4: Suppose that B is uniformly distributed on [BL, BU] . Then using expression (8), an upperbound for Zp(-) for any realization of x with a cost in the interval [bm,bm+ j], BL < b m is given by
"™ ~2 (Bu-B L )
This expression is concave in b and reaches a maximum value at
Thus, for the uniform distribution, we can modify the bound W{m) to
if bm+i < BL
?tm+1 + 8ju(bm+i) if b m < BL < bm+i7tm + 9{n(bm) - b m [1 - F(bm)]} BL < b m and b ^ < b m
7tm + pm + i (b* - bm) + e{^(bJrl) - b^[1 - F(b^)]} BL < b m and b m < b ^ < b m + 1
7tm+1 + e { j u ( b m + 1 ) - b m + i [ i - F ( b m + 1 ) ] } B L < b m a n d b ^ > b m + i ,(22)
using F(-) and fi(-) as defined in Example 1.
Example 5: Suppose that B follows a shifted exponential distribution supported on [BL,OO] withparameter A. Then using expression (12), an upper bound for Zp(-) for any realization of x with acost in the interval [bm ,bm+i], BL < b m is given by
+2exp[-A(b-BL)]] + ^[l -exp[-A(b - BL)]]
3 Computational Experience 11
This expression reaches a maximum value at
where LambertW(x) satisfies LambertW(x) • exp[LambertW(x)] = x.
Thus, for the shifted exponential distribution, we can modify the bound Y(m) to
i f b m + i < B L
if b m < BL < b m + i- b m [1 - F(bm)]} BL < b m and b ^ < b m
— b m ) + 8{n(b£J — b ^ [l — F(b^)]} BL < b m and b m < b ^ < bm+i0{H(bm+i) - bm+i [1 - F(b m + i ) ]} BL < b m and b ^ > b m + 1 ,
(23)
using F(-) and fi(-) as defined in Example 3.
2.2 The Heuristic
The heuristic developed for both T-KNAP(P,C, B(F)) and P-KNAP(P,C, B(F)) is based on localsearch using a 2-swap neighborhood structure. The initial solution for local search was obtainedfollowing a greedy procedure.
The greedy procedure considers the elements of E one by one in a non-increasing order of profitto cost ratios and adds them to the knapsack if such an addition improved the objective functionvalue (i.e. ZT(0 in the case of T-KNAP(P,C, B(F)) and ZP(«) in the case of P-KNAP(P,C, B(F))).It stops when all the elements in E have been considered.
Once the greedy solution is obtained, the local search procedure is started using the greedysolution as the current solution. The local search procedure performs iterations until a stoppingcondition is reached, after which it outputs the current solution at that stage and terminates. Ineach iterations, all 2-swap neighbors of the current solution, i.e. solutions obtained by either throwingout one of the elements in the current solution, or adding one element into the current solution, orboth, are examined to see if any of them have a better objective function value than the currentsolution (i.e. are better neighbors). If a better neighbor is found, then the neighbor with the bestobjective function value is denoted the current solution and the iteration is over. If no neighboris better than the current solution, then the stopping condition is said to have been reached. Apseudocode of our local search heuristic is presented in Figure 6.
In the next section, we report the results of computations using the algorithms developed in thissection.
3 Computational Experience
We performed some computational experiments to evaluate the performance of the algorithms de-veloped in the previous section. We report our observations here. It should be noted that theseobservations are preliminary in nature, and need to be validated by more extensive computationalexperiments.
We generated ten problems, two each of sizes 15, 20, 25, 40, and 60. The cost values (CJ 's) foreach of the problems were chosen from a discrete uniform distribution supported on {1,2 , . . . , 100}.In one set of problems, consisting of one problem of each size, the profit values (pj's) were generated
3 Computational Experience 12
Note:The elements of E are assumed to be ordered in non-increasing order of £*• ratios;n 15 the size of the problem, i.e the cardinality o /E ;
CalculateObjective(x) calculates the objective function value of a given solution x.
procedure DFBBbegin
/ * Greedy procedure */
for i:= 1 to n dobegin
x:=xgreedy U{ej};if (CalculateObjective(x) > CalculateObjective(xgrCcdy) then
Xgrecdy • = = X;
end;
/ * Local Search */XcuTTcnt • = = Xgrecdyj
Stopping Condition := FALSE;while (StoppingCondition = FALSE) dobegin
Xbest := the best 2-swap neighbor of xcUrrent;if (CalculateObjective(xbest) > CalculateObjective(xcurrcnt) then
XcuTTcnt * = XbestJ
elseStoppingCondition := TRUE;
end;return xCUrrcnt;
end;
Fig. 6: Pseudocode for a local search heuristic
independently from a discrete uniform distribution supported on {1,2,... ,100}. The problems inthis set were called the uncorrelated problems. The £f ratios in these problems were observed tovary between 0.02 and 50.0. In the remaining problems, referred to as strongly correlated problems,the profit values were chosen so that the £f ratios were from a uniform distribution supported on[0.9,1.1]. We label each of the problems using the nomenclature "xy" where x was V or "s"depending on whether the problem was uncorrelated or strongly correlated, and y denoted theproblem size. For example, the problem "u25" refers to the uncorrelated problem with 25 elements.
In our computation, we considered two probability distributions to model the randomness of thebudget B, namely the Uniform and the Normal distribution. For the ease of comparison, the effectivesupports of these distributions were taken to be [BL = bi Y-) ch ^u = b u £ \ Cj]. In the case of thenormal distribution we chose the mean (p,) and the variance (a2) such that p-±3a = (BL, BU).)For each problem, we experimented with three sets of [bi, bu] values, viz. [0.2, 0.8], [0.3, 0.7], and[0.4, 0.6], which were chosen to ensure that the means of B remained the same for all the problems.
The computations were conducted on a Pentium 200 MHz computer running the Linux operatingsystem. The maximum time allowed to solve a problem was set to 500 CPU seconds. If a run did notcomplete in the time allotted, then the corresponding entry is marked with a '—' in our tables. TheCalculateBound(-) function for T-KNAP problems was implemented using expression (17) when
3 Computational Experience 13
B was uniformly distributed. For normally distributed B, expression (19) was used in the interval[BL, £U+5L] and expression (17) in the interval (Bu^BL )Bu]. Preliminary experimentation with theCalculateBound(-) function for P-KNAP problems showed that expression (21) did lead to upperbounds of extremely poor quality. Therefore when B was uniformly distributed, expression (22) wasused. When B was normally distributed, the bound was obtained by using a search algorithm tofind the position of the maximum value of the relaxation of Zp(-) in the various intervals.
Characteristics of the Optimal Solution: The profit sums, costs, and the respective objectivefunction values of an optimal solution x* for T-KNAP problems and P-KNAP problems are presentedin Tables 1 and 2. Note that the optimal solution, in either approach, need not be unique, and theremay exist other optimal solutions with different profit and cost sums. In this part the optimalsolution that we refer to was generated by our algorithms described in the Subsection 2.1.
The costs of the optimal solutions were observed to be closer to BL in all cases than to By. Thiscloseness (to BL) was measured by computing the expression ^*~^ where C* was the cost of ouroptimal solution. For P-KNAP problems, increasing the 9 value amounts to increasing the penaltyfor infeasibility, and hence the costs of the optimal solutions were closer to BL at higher 9 valuesthan at lower 9 values as seen in Table 2. Disregarding profit accrued from an infeasible solutionis a way of (severely) penalizing infeasibility; hence the costs of the optimal solutions to T-KNAPproblems were even closer to BL. The closeness was also more pronounced for uniformly distributedB than for normally distributed B, since the heavier left tail of the uniform distribution imposed astronger penalty for exceeding the budget. Similar behaviour is expected for all distributions witha equal or heavier left tail. Not surprisingly, in a few of these problems, especially when the rangeof the Uniform distribution was taken to be relatively small, the optimal solution was observed tobe even strongly feasible. Another interesting observation regarding the measure of closeness ofoptimal solutions to BL values for P-KNAP problems was that it was not affected by the size of theproblems.
'The objective values of the optimal solutions were seen to increase with a decrease in the lengthof the interval [bi, bu]- This is actually a direct effect of an increase in bi, which by our choice isassociated with the reduction in length of the [bi, bu] intervals.
Note that for all the problems that we considered, the optimal solutions output, for T-KNAPas well as P-KNAP problems, were mean feasible (and consequently median feasible, since both theuniform and the normal distributions are symmetric). Hence, we could have restricted ourselvesto solutions in T^ or T^o.s) for T-KNAP problems, as well as for P-KNAP problems with at leastmoderately large 9 values. Such a restriction would have made the calculation of bounds faster. Webelieve that this observation would be valid for many common distributions encountered in real-lifeproblems.
Performance of the Exact Algorithm: Table 3 presents the number of nodes expanded by theDFBB algorithm and its execution time in CPU seconds for the case in which the budget B wasuniformly distributed. Table 4 presents the same observations for the case in which the budget Bfollowed a normal distribution.
In both tables we observe that the time taken to solve strongly correlated problems was muchhigher than the time taken to solve an uncorrelated problem of the same size. This observation is inline with similar observations for deterministic 0-1 knapsack problems (refer Martello and Toth [10]).
For P-KNAP problems, in general, the number of nodes expanded and the execution time of theDFBB algorithm increased with increasing 9 value. We also saw that the execution time and thenumber of nodes expanded in the DFBB tree increased with increasing length of the [bi, bu] intervalin case of T-KNAP problems but decreased in case of P-KNAP problems. Exceptions to this trend
4 Summary and Directions of Future Research 14
were noticed in strongly correlated T-KNAP problems in which B was uniformly distributed and inP-KNAP problems with 8 = 10 and normally distributed B.
The time needed to expand a node was much higher for P-KNAP problems in which B wasnormally distributed. This was because, in these problems, the calculation of upper bounds involveda search procedure, which took more time. The bounds found in this manner are however seento be very effective, since the number of nodes expanded in these problems were much lower thanthe number of nodes expanded in similar size problems in which B followed a uniform distributionsupported on the same interval.
Performance of the Heuristic: The local search heuristic developed in the previous section per-formed very well, both in terms of solution quality as well as execution times. Our computationalexperience with this heuristic is summarized in Tables 5 and 6. Notice that it took less than 0.05CPU seconds on each of the T-KNAP problems, and less than 0.15 CPU seconds on each of theP-KNAP problems. The average suboptimality was less than 0.07% for T-KNAP problems and0.048% for R-KNAP problems. Local search performed better when B was normally distributedthan when it was uniformly distributed.
4 Summary and Directions of Future Research
In this paper we consider a static stochastic 0-1 knapsack problem in which the budget is random.The relevant literature is very briefly surveyed in the introductory section. In Section 1, we formallydefine the knapsack problems that we study here. We extend the concept of feasibility of a solutionfor deterministic knapsack problems, to define strongly feasible solutions that are feasible for allpossible values that the budget may assume, and weakly feasible solutions that are feasible onlyfor a range of the possible values of the budget. We also define alternative concepts of solutionfeasibility like mean feasibility and feasible with a reliability coefficient. We revise the expressionof the objective function value of the deterministic knapsack problem to incorporate two differentmethods of penalizing infeasibilities. We show that maximizing the expected value of the objectivefunction is equivalent to minimizing the expected value of the regret associated with a solutionto the static stochastic knapsack problem under very reasonable assumptions. We conclude thesection by defining two problems, T-KNAP and P-KNAP, based on two different ways of handlinginfeasibilities.
In Section 2 we devise an exact algorithm and a heuristic to solve T-KNAP and P-KNAPproblems. The exact algorithm is based on depth first branch and bound (DFBB), and the heuristicis based on local search, starting with a greedy solution. Most of this section is devoted to methodsfor computing upper bounds for the DFBB algorithm. We do not need to use the functional form ofthe survival function to derive these bounds; consequently the bounds are useful even when the exactfunctional form of the survival function is unknown. While the computed bounds (consequently thealgorithm) are meant explicitly for weakly feasible solutions, they can be improved (in terms ofcomputational speed) for smaller classes of feasible solutions (like Tn or J*(a)), by adjusting thevalue of r in (14) and (21) appropriately.
Section 3 contains the results of preliminary computations with T-KNAP and P-KNAP problems.We see that the costs of the optimal solutions to both T-KNAP and P-KNAP problems are almostalways very close to the lowest possible value of the budget. This shows that considering meanor median feasible solutions instead of weakly feasible solutions does not affect the quality of theoutput. We also see that the time taken by the exact algorithm almost always increases when theextent of penalization of infeasibilities in P-KNAP problems increases. The execution times increasefor T-KNAP problems when the length of the support for the distribution of the budget increases,
References 15
but decrease in case of P-KNAP problems- The local search heuristic is seen to output solutionswith objectives within 2% of that of the optimal solution within 0.15 CPU seconds.
We believe that there is need for much more elaborate computational experiments with T-KNAPand P-KNAP problems. The results that we report in Section 3 are based on ten problems. Theseresults need to be verified for a much larger data set — containing problems in which the profitand cost values come from distributions other than uniform, and ones in which the budget followsdistributions other than uniform and normal. These results can be used, for example, to examinewhether our observation regarding the exceptional hardness of strongly correlated problems in whichthe budget follows a uniform distribution with b u = 0.3 and b\ = 0.7, and the easy solvability ofP-KNAP problems in which the budget follows a normal distribution with b u = 0.3 and bi = 0.7 isreally valid, or whether it appears so due to our small sample size of observations.
Note that both T-KNAP and P-KNAP problems penalize infeasibilities. An interesting questionis whether there exists a 6 value for P-KNAP problems, for which an optimal solution to the T-KNAP problem is also an optimal solution to the P-KNAP problem? Intuitively, it seems that sucha 6 value should exist, since for very low valus of 0, the costs of optimal solutions to P-KNAPproblems should be very close to x | u , the maximum profit solution to KNAP(P,C,Bu), while forvery high valus of G, the optimal solutions to P-KNAP problems are strongly feasible. Assuming thatsuch a 8 value exists, questions regarding how it depends on the size of the problem, the distributionof the profit and cost variables, and the randomness of the budget arise. We could not answerthese questions with our limited experimentation, but we think that more elaborate computationalexperiments would help.
Another possible direction of future research in these problems is in the area of algorithm de-velopment. In this paper, we have used a DFBB algorithm with rather simplistic bounds. Morecomplex, and possibly distribution specific bounds can markedly reduce the execution times for ex-act algorithms. Development of such bounds promises to be interesting. Apart from development ofnew bounds for DFBB, one can also look for ways to adapt other specialized algorithms for deter-ministic knapsack problems to solve the stochastic version of the problem. Another possible avenueof research in algorithm development could be to develop algorithms to solve dynamic stochasticknapsack problems with random budgets.
A third direction of research would be to analyze these problems under non-linear penalty as-sumptions. One could also analyze related problems, like the integer knapsack problem, and thebin-packing problem, under similar budget or capacity variations. It follows from Ghosh and Das [6]that if the profit vector, in addition to the budget, is random, then the optimization process isunaffected if the random elements in P are replaced by their expected values. In future, we planto generalize the study by Cohn and Barnhart [5] and analyze a more general class of 0-1 knapsackproblems with random cost elements.
References
[1] I.L. Averbakh. "On an algorithm for solving the m-dimensional knapsack problem withrandom coefficients" Diskretnaya Mathematica 2, (1990) 3-9.
[2] LL. Averbakh. "Minimax regret solutions for minmax optimization problems with uncer-tainty" Operations Reseach Letters 27, (2000) 57-£5.
[3] J.R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer Series inOperations Research, Springer, (1997).
[4] R.L. Carraway, R.L. Schmidt, and L.R. Weatherford. "An algorithm for maximizing tar-get achievement in the stochastic knapsack problem with normal returns" Naval ResearchLogistics 40, (1993), 161-173.
References 16
[5] A.M. Cohn and C. Barnhart. "The stochastic knapsack problem with random weights: Aheuristic approach to robust transportation planning" THennial Symposium on Transporta-tion Analysis (TRISTAN III), San Juan, Puerto Rico, June 17-23, (1998).
[6] D. Ghosh and S. Das. "Discrete optimization problems with random cost coefficients" Re-port # 00ASS, SOM Graduate School/Research Institute, University of Groningen, (2000).
[7] M.L Henig. "Risk criteria in a stochastic knapsack problem" Operations Research 38, (1990),820-835.
[8] A.J. Kleywegt and J.D. Papastavrou. "The dynamic and stochastic knapsack problem" Op-erations Research 46, (1998), 17-35.
[9] A. Marchetti-Spaccamela and C. Vercellis. "Stochastic on-line knapsack problems" Mathe-matical Programming 68 (Series A), (1995), 73-104.
[10] S. Martello and P. Toth. Knapsack Problems: Algorithms and Complexity. Kluwer AcademicPublishers, (1990).
[11] M. Sniedovich. "Preference order stochastic knapsack problems: Methodological issues"Journal of the Operational Research Society 31, (1980), 1025-1032.
[12] M. Sniedovich. "Some comments on preference order dynamic programming models" Journalof Mathematical analysis and applications 79, (1981) 489-501.
[13] E. Steinberg and M.S. Park. "A preference order dynamic program for a knapsack problemwith stochastic rewards" Journal of the Operational Research Society 30, (1979), 141-147.
[14] K. Szkatula. "The growth of multi-constraint random knapsacks with large right-hand sidesof the constraints" Operations Research Letters 21, (1997), 25-30.
[15] M.H. an der Vlerk. Stochastic Programming with Integer Recourse. Ph.D. thesis, Universityof Groningen, The Netherlands, (1995).
[16] G. Yu. "On the max-min 0-1 knapsack problem with robust optimization applications"Operations Research 44, (1996), 407-415.
Problemul5
u20
u25
u40
u60
sl5
s20
s25
s40
s60
[bi,bu][0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
BL142.4213.6284.8
156.6234.9313.2
234.6351.9469.2
390.4585.6780.8
579.6869.41159.2
4 148.0222.0296.0
217.0325.5434.0
242.2363.3484.4
404.8607.2809.6
540.0810.01080.0
Bu569.6498.4427.2
626.4548.1469.8
938.4821.1703.8
1561.61366.41171.2
2318.42028.61738.8
592.0518.0444.0
868.0759.5651.0
968.8847.7726.6
1619.21416.81214.4
2160.01890.01620.0
Px*511575607
607706767
8929651060
124412781449
191921492419
304269313
451390457
502455518
829727850
11479811151
T-KNAPCx* Objective177228266
159245309
291355465
542586776
6438661150
285251294
429370435
467422483
789689810
10779161081
469.61545.93607.00
603.9683.23767.00
820.52958.621060.00
1082.981277.351449.00
1849.032149.002419.00
210.20242.65313.00
304.13350.01454.89
346.69399.86518.00
566.73653.55849.16
766.79884.721148.87
P-KNAPPx* Cx*511575627
662723777
89210251083
12441343
2064
218269334
300388477
342443545
571
757
177228293
202266324
291416491
542649
772
202215314
281368456
316411509
537
702
(6-10)Objective496.99571.36624.64
640.06707.56773.28
869.4981.211072.87
1145.881317.26
1957.55
185.16254.79323.05
268.54367.19465.85
303.52419.51532.51
499.04
676.00
P-KNAPPx* Cx*452575627
612706767
8389651060
1086
167240321
249353457
279402523
E—
149228293
164245309
258355465
423
E154224302
231333435
257372489
E—
(9 = 50)Objective449.45556.8615.2
609.09697.86767.00
818.55964.491060.00
1063.31
—
164.97239.66314.92
241.47349.76456.89
271.46398.09520.82
E—
P-KNAPPx* Cx*452543607
607706767
8159651060
—
—
167240314
241347457
270398522
E—
149215266
159245309
232355465
_
—
154224296
223327435
248368487
—
—
( 9 - 100)Objective446.9542.66607.00
606.39689.71767.00
815.00963.981060.00
—
—
162.95239.32314.00
238.23346.74456.77
267.69395.72520.60
—-
—
Tab. 1: Characteristics of thn Optimal Solution. Uniform Distribution
Problemul5
u20
u25
u40
u60
sl5
s20
s25
s40
s60
[bubu][0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
BL142.4213.6284.8
156.6234.9313.2
234.6351.9469.2
390.4585.6780.8
579.6869.41159.2
148.0222.0296.0
217.0325.5434.0
242.2363.3484.4
404.8607.2809.6
540.0810.01080.0
Bu569.6498.4427.2
626.4548.1469.8
938.4821.1703.8
1561.61366.41171.2
2318.42028.61738.8
592.0518.0444.0
868.0759.5651.0
968.8847.7726.6
1619.21416.81214.4
2160.01890.01620.0
Px*575607627
711762777
102510481083
139314251502
227723552473
304326343
451457495
502528567
823863
111411681260
T-KNAPCx* Objective228266293
250304324
416442491
703741831
99310761230
285306323
429435*474
467493531
783823
104510981188
554.22589.03624.44
685.79725.72773.19
949.351013.651074.93
1280.161374.351482.34
2144.392290.662443.35
265.62293.93333.06
383.98425.21480.28
438.07484.77548.33
716.00793.38
968.731073.681214.73
P-KNAPPx* Cx*607627636
762767797
102510831102
139314701540
235524192523
291326356
423469516
482539598
789877
107711881307
266293320
304309354
416491522
703793882
107611501298
272306336
402449494
449503560
750837
100911171234
(9-10)Objective572.55606.99629.46
710.94754.42788.28
987.401041.251094.06
1321.251423.031518.68
2221.832369.242498.91
259.62303.83346.60
373.99436.16501.18
426.99500.43572.17
697.49816.32
946.041106.611265.81
P-KNAPPx* Cx*575595627
706733787
96510251083
12641393
21242332
232291343
337417488
392471561
635771
8601042
228255293
245281339
355416491
569703
8331050
216272323
317396467
362438525
599732
800976
(9=50)Objective523.97581.28625.59
660.51717.45776.31
912.491005.111078.44
1200.301352.10
2039.742265.08
207.57269.5329.96
300.88388.65476.29
342.30443.83543.67
559.35725.64
757.98981.97
P-KNAPPx* Cx*511575627
667723777
91510251083
12441357
2064
218269334
306396479
358455549
577
781
177228293
207266324
317416491
542675
772
202251314
287376458
331422513
543
725
(9 = 100)Objective497.97570.16624.18
643.52709.02773.07
873.41985.231073.87
1155.821312.98
1972.29
189.39256.16324.46
273.34370.69467.56
310.63424.05534.41
509.38
690.16
Tab. 2: Characteristics of the Optimal Solution. Normal Distribution
Problemul5
u20
u25
u40
u60
sl5
s20
s25
s40
s60
[b l fbu][0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
T-KNAPNodes Time
675333
16910962
16112689
761657406
13811097558
127142169
1682182148
107826911120
42276433544381
1316531462312564
<0.01<0.01<0.01
0.010.010.01
0.01<0.01<0.0l
0.070.050.03
0.180.120.04
0.01<0.01<0.01
<0.010.140.01
0.070.200.05
0.0878.290.27
1.7365.201.17
P-KNAPNodes
627183833
162257383227
5566659
854320
176863127
82
67635287583
252838864166893
5659197615
2237358
848
9701
(9 -10)Time0.010.030.11
0.010.102.80
0.030.3233.61
0.0270.37
0.01
0.020.110.21
0.111.445.53
0.298.5687.86
0.11
1.94
P-KNAPNodes1269554711612
862733008233950
59850152176
6919235
5200212
—
120142027777
1256779208190361
15141010465953308797
—
—
(9 = 50)Time0.040.170.31
0.331.167.51
2.996.26
257.93
424.41
E0.040.120.22
0.492.686.24
6.5543.02129.00
—
—
P-KNAPNodes2039656712103
1906341301265364
168840695996
8886362
—
—
122642117805
1684490505190361
18519111941763308794
—
—
( 9 = 100)Time0.060.180.32
0.741.398.42
7.9728.02327.35
—
—
0.030.130.21
0.623.146.28
7.9148.39129.36
—
—
Tab. 3: Performance of the Exact Algorithm. Uniform Distribution
Problemul5
u20
u25
u40
u60
sl5
s20
s25
s40
s60
[bubu][0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
T-KNAPNodes Time
564324
11211055
1239468
619541312
964741536
995289292
4223143672376
126571609123
725321218899
3324535705
357683
0.010.01
<0.01
0.010.01
<0.01
0.02<0.010.01
0.080.060.03
0.180.120.06
0.010.240.01
0.541.430.18
0.150.930.83
18.88245.81
12.1211.0673.08
P-KNAPNodes
3024
1043
56110
3789
8065
493
3783041163
552503742
6763
1435
112124
5621
743675985
31102773
1116998747558
( 9 - 1 0 )Time0.040.030.79
0.080.203.71
0.190.151.07
1.000.744.10
2.602.114.17
0.070.050.78
0.170.214.45
1.090.901.32
8.417.36
40.2732.8222.47
P-KNAPNodes
3936
2504
70908
29819
87420
61640
3581308
5271634
11020786824
1805232
90494
5363716
460812
218283253
748619040
(0 = 50)Time0.060.071.50
0.131.32
19.31
0.261.22
75.23
1.157.64
2.9415.07
0.161.252.56
0.535.86
42.94
0.875.17
204.91
5.99279.27
28.25136.72
P-KNAPNodes
87315
3759
1611276
61986
2742692
433406
53694995
539
53730637373
118121703145536
60141489770976
2046
6593
(0 = 100)Time0.190.362.05
0.351.56
33.35
1.186.51
406.94
2.97412.40
3.39
0.491.742.69
2.4018.1168.27
1.3236.33304.98
5.88
25.36
Tab 4: Performance of the Exact Algorithm. Normal Distribution
Problemul5
u20
u25
u40
116O
si 5
s20
s25
s40
s60
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
T-KNAPObjective Time
469.61545.93607.00
603.90683.23767.00
820.52958.621060.00
1082.981275.001436.00
1849.032149.002419.00
210 20242.65306.80
304.13350.00454.89
346.69399.86518.00
566.73653.45848.16
766.79884.721148.00
<0.01<0.01<0.01
0.01<0.01<0.01
<0.01<0.010.01
0.010.010.02
0.020.020.02
<().() 10.01
<0.01
<0.01<0.01<0.01
<0.01<0.01<0.01
0.020.010.01
0.020.020.04
P-KNAP (9Objective
496.99571.36624.64
640.06707.56773.28
869.40981.211072.87
1145.881317.261470.32
1957.552212.592440.33
185.16254.79323.05
267.55367.19465.42
303.52419.43532.51
499.04684.68870.84
676.00929.001177.00
= 10)Time<0.01<0.01<0.01
<0.010.010.01
<0.01<0.01<0.01
0.010.010.02
0.030.030.03
<0.01<0.01<0.01
<0.010.010.01
0.01<0.01<0.01
0.010.050.02
0.050.030.06
P-KNAP (9Objective
449.45556.80615.20
609.09697.86767.00
818.55964.491060.00
1063.311290.341460.47
1866.682169.852422.79
164.97239.66312.78
241.47349.76456.89
271.46398.09520.82
444.94651.28852.54
603.19881.801154.40
= 50)Time0.01
<0.010.01
0.010.010.01
0.01<0.010.01
0.010.020.01
0.030.040.06
<0.010.010.01
0.010.010.01
<0.010.010.01
0.020.030.03
0.050.060.06
P-KNAP (9Objective
446.9542.66607.00
606.39689.71767.00
815.00963.981060.00
1049.101276.261451.08
1844.362160.702419.00
162.95237.32311.00
238.23346.74456.77
267.67395.72519.63
438.87647.22847.57
593.39875.401152.47
= 100)Time0.010.010.01
0.01<0.010.01
0.010.010.02
0.010.020.02
0.040.030.03
<0.010.010.01
0.010.01
<0.01
0.010.010.01
0.010.010.04
0.040.050.03
Tab 5: Performance of Local Search. Uniform Distribution
Problemul5
u20
u25
u40
u60
sl5
s20
s25
s40
s60
[bubj[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
[0.2,0.8][0.3,0.7][0.4,0.6]
T-KNAPObjective Time
554.22589.03624.44
685.79725.72773.19
949.351013.651074.93
1280.161374.351482.34
2144.392290.662443.35
265.62293.93331.96
383.98425.21480.28
438.07483.85547.22
716.00793.29898.00
968.101072.761214.73
0.010.01
<0.0l
<0.01<0.01<0.01
0.01<0.010.01
0.010.010.01
0.050.020.05
<0.010.01
<0.01
<0.01<0.01<0.01
0.010.010.01
0.010.010.01
0.050.050.05
P-KNAP (9Objective
572.55606.99626.72
710.94754.42788.28
987.401041.251090.21
1321.251423.031518.68
2221.832369.242498.91
259.62303.83346.60
373.97436.14501.18
426.99500.43572.17
697.49816.32936.15
946.041106.611265.81
= 10)Time<0.01<0.01<0.01
<0.01<0.01<0,01
0.01<0.01<0.01
0.020.020.03
0.040.040.04
0.01<0.01<0.01
0.01<0.01<0.01
<0.01<0.01<0.01
0.040.040.04
0.030.080.04
P-KNAPObjective
523.97581.28625.59
660.51717.45775.52
912.491005.111078.44
1200.301352.101487.55
2039.742265.082450.65
207.57269.50329.96
300.88388.34475.29
342.30443.83543.67
559.35725.64889.30
757.98981.971204.99
(9 -50)Time<0.01<0.01<0.01
<0.01<0.01<0.01
<0.01<0.01<0.01
0.030.020.01
0.040.040.07
<0.01<0.01<0.01
0.010.01
<0.01
0.010.01
<0.01
0.010.010.03
0.030.150.04
P-KNAP (9Objective
497.97570.16624.18
643.52707.95773.07
873.41985.231073.87
1155.821312.981473.11
1972.292219.562439.51
189.39256.16324.46
273.34370.68467.50
308.56424.05534.41
509.38692.25874.13
690.16936.771181.38
= 100)Time<0.01<0.01<0.01
<0.01<0.01<0.01
<0.01<0.01<0.01
0.010.010.01
0.050.050.04
<0.01<0.01<0.01
<0.010.01
<0.01
0.01<0.01<0.01
0.020.030.01
0.070.070.08
Tab. 6: Performance of Local Search. Normal Distribution