University of Groningen
The binary knapsack problem
Ghosh, Diptesh; Goldengorin, Boris

IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document version below.
Document version: Publisher's PDF, also known as Version of record
Publication date: 2001

Citation for published version (APA): Ghosh, D., & Goldengorin, B. (2001). The binary knapsack problem: solutions with guaranteed quality. s.n.
Downloaded from the University of Groningen/UMCG research database (Pure): http://www.rug.nl/research/portal
The Binary Knapsack Problem: Solutions with Guaranteed Quality
Diptesh Ghosh and Boris Goldengorin
SOM-theme A Primary Processes within Firms
Abstract
Binary knapsack problems are among the most widely studied problems in combinatorial optimization. Several algorithms, both exact and approximate, are known for this problem. In this paper, we embed heuristics within a branch and bound framework to produce an algorithm that generates solutions with guaranteed quality within very short times. We report computational experiments showing that for the more difficult strongly correlated problems, our algorithm can generate solutions within 0.01% of the optimal solution in less than 10% of the time required by exact algorithms.
Keywords: binary knapsack, accuracy parameter, branch and bound, local search
Also downloadable in electronic version: http://som.rug.nl
1. Introduction
In a binary knapsack problem (BKP), we are given a set E = {e_j} of n elements and a knapsack of 'weight capacity' c. Each element e_j has a 'profit' p_j and a 'weight' w_j. Our objective is to find the most profitable solution, i.e. the subset of elements of E that can be put in the knapsack without violating its weight capacity. The profitability of a subset of E is defined as the sum of the profits of the elements in the subset. If we denote the decision of including (or excluding) an element e_j in the knapsack by setting a variable x_j to 1 (0, respectively), then each solution can be represented as a vector x = (x_1, x_2, ..., x_n), and BKP can be represented by the following mathematical program:
BKP: z* = max { P(x) = Σ_{j=1}^{n} p_j x_j  |  C(x) = Σ_{j=1}^{n} w_j x_j ≤ c,  x_j ∈ {0, 1},  j = 1, ..., n }.
z* is called the optimal profit for the instance, and any solution x* = (x*_1, x*_2, ..., x*_n) with P(x*) = z* and C(x*) ≤ c is called an optimal solution. In this paper, we will assume that p_j, w_j, and c are positive, w_j < c for j = 1, ..., n, and C(1) > c. We also assume, without loss of generality, that the elements in E are ordered according to non-increasing profit-to-weight ratios, i.e. for any two elements e_i, e_j ∈ E, p_i/w_i > p_j/w_j ⟹ i < j, ties being broken arbitrarily.
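As a concrete illustration of this ordering convention, the following C sketch sorts element indices by non-increasing profit-to-weight ratio. The type and function names are ours, not from the paper; this is only a minimal sketch of the preprocessing step.

```c
#include <stdlib.h>

/* Hypothetical instance layout: parallel profit/weight arrays.
   ord[] receives element indices in non-increasing p/w order. */
typedef struct { int n; double *p, *w; double c; } bkp_instance;

static const bkp_instance *g_inst;   /* used by the qsort comparator */

static int by_ratio(const void *a, const void *b) {
    int i = *(const int *)a, j = *(const int *)b;
    double ri = g_inst->p[i] / g_inst->w[i];
    double rj = g_inst->p[j] / g_inst->w[j];
    if (ri > rj) return -1;          /* larger ratio first */
    if (ri < rj) return  1;
    return 0;                        /* ties broken arbitrarily */
}

void sort_by_ratio(const bkp_instance *inst, int *ord) {
    for (int j = 0; j < inst->n; j++) ord[j] = j;
    g_inst = inst;
    qsort(ord, inst->n, sizeof(int), by_ratio);
}
```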
BKP is among the most widely studied problems of discrete optimization, being of interest to both practitioners and theoreticians. Practical interest in this problem stems from the fact that many practical situations are either modelled as binary knapsack problems (for example, capital budgeting and cargo loading) or solve such problems as subproblems (for example, cutting stock problems). Theoretical interest arises from the fact that, although it is among the simplest discrete optimization problems to state, it is often quite difficult to solve.
It is well known that the optimization version of the BKP is NP-hard (refer, for example, to Garey and Johnson [1]). Exact algorithms to solve it are therefore based on branch and bound, dynamic programming, or a hybrid of the two. Comprehensive overviews of exact solution techniques for the binary knapsack problem are available in Martello, Pisinger, and Toth [3] and Martello and Toth [5]. Algorithms based on branch and bound have traditionally been preferred to those based on dynamic programming because the latter require a lot of computer memory. Recently, however, algorithms using dynamic programming have been used to solve large and difficult instances of the BKP (see, for example, algorithm combo in Martello, Pisinger, and Toth [4]). Heuristics for BKP, which generate feasible (and usually suboptimal) solutions within short execution times, are also an active area of research (refer to Martello and Toth [5] and Ghosh [2] for a treatment of heuristics for the BKP). These heuristics usually perform very well in practice, and output solutions that are very close to optimal. However, their performance guarantee is usually based on their worst-case performance ratios, which form a very weak bound on the deviation of the profit of the heuristic solution from that of the optimal solution for BKP instances. Moreover, the bounds obtained from such ratios are not instance-specific.
In this paper we present an algorithm α-MT1 that aims to rectify this situation. It embeds a heuristic inside a branch and bound framework. This allows us to compute, a posteriori, an upper bound on the deviation of the heuristic solution from an optimal solution, in terms of profits. If the observed deviation is more than an allowable limit, a backtracking operation allows us to use the heuristic with additional constraints and generate better solutions. Thus, in addition to the profit vector, the weight vector, and the knapsack capacity, α-MT1 takes a prescribed accuracy parameter α as input. The algorithm then guarantees that the profit zα of the solution it outputs satisfies the expression
z* − zα ≤ α.
The term z* − zα will henceforth be called the achieved accuracy. The branch and bound framework that we use is the well-known mt1 algorithm of Martello and Toth [6]. We use this algorithm instead of more sophisticated ones because it is a typical example of branch and bound based algorithms for the BKP, and is one of the simplest among such algorithms. Moreover, mt1 is sufficient to demonstrate the behavior we are interested in. Notice that the prescribed accuracy parameter in α-MT1 is not expressed as a percentage (as is common in ε-approximate algorithms), but as an absolute value. This ensures that the deviation from the optimal profit can be controlled irrespective of the actual value of the optimal profit.
The remainder of the paper is organized as follows. In the next section we describe the algorithm α-MT1. Section 3 presents results from computations carried out on randomly generated BKP instances belonging to well-known classes studied in the literature. We conclude the paper in Section 4, where we summarize the findings of the paper and suggest directions for future research.
2. The α-MT1 Algorithm
The α-MT1 algorithm that we propose in this paper embeds a local search heuristic in the branch and bound framework of the mt1 algorithm (refer to Martello and Toth [6] for a detailed description of mt1). In order to achieve this, we make two modifications to mt1. First, we incorporate the prescribed accuracy parameter in the fathoming procedure. Consider a subset S of the set of all feasible solutions. We first use a good and relatively fast heuristic H to obtain a good solution in S. We also compute an upper bound ubS on the profit of the solutions in S. If the profit zH of the heuristic solution satisfies the condition ubS − zH ≤ α, then we know that we have found a solution whose profit is within α of the profit of the best solution in S. Second, we use a stopping rule that can stop the algorithm before it considers all of the subsets that it generates. Let ubinit be a global upper bound on the optimal profit of the instance. If at any subset S, the heuristic H produces a solution with profit zH satisfying the condition zH ≥ ubinit − α, then we can stop the computations immediately. This is because ubinit ≥ z*, which immediately implies that z* − zH ≤ α.
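The two rules above reduce to simple comparisons. A minimal C sketch of the fathoming and stopping predicates (all function names are ours):

```c
/* z_H: profit of the heuristic solution found in subproblem S,
   ub_S: upper bound on the profit over S,
   ub_init: global upper bound, alpha: prescribed accuracy. */

/* Fathoming: the best solution in S is within alpha of z_H,
   so S need not be explored further. */
int fathom_by_accuracy(double ub_S, double z_H, double alpha) {
    return ub_S - z_H <= alpha;
}

/* Stopping: z_H >= ub_init - alpha, together with ub_init >= z*,
   implies z* - z_H <= alpha, so the whole search can stop. */
int stop_early(double ub_init, double z_H, double alpha) {
    return z_H >= ub_init - alpha;
}
```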
The mt1 algorithm proposed in Martello and Toth [6] uses a depth-first search strategy to explore subproblems. However, in our implementation of α-MT1, we follow a best-first search strategy. For exact algorithms, this strategy is known to produce an optimal solution after evaluating the least number of subproblems. However, it requires more memory than algorithms using depth-first search strategies. Our best-first search strategy requires us to maintain a list of subproblems (which we call LIST), and to terminate the algorithm when the list is empty or when our stopping rule is triggered. In the pseudocode of α-MT1, a subproblem is denoted by a partial solution x = (x_1, ..., x_n) defined on the alphabet {0, 1, •}. x_j = 0 or 1 has its usual connotation regarding e_j, and x_j = • denotes that no decision has been made on whether or not to include e_j in the solution. A partial solution in which each of the components is either 0 or 1 is called a complete solution. We present a pseudocode of the α-MT1 algorithm in Figure 1. It assumes the presence of three procedures: ub(x), which returns an upper bound on the profit of the best solution from subproblem x; H(x), which returns a feasible solution to the subproblem x; and forward-move(x), which performs a 'forward move' as described in mt1. We describe these procedures in detail in the remaining portion of this section.
INSERT FIGURE 1 HERE
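The best-first strategy described above amounts to keeping LIST ordered by upper bound, so that the subproblem with the largest ub is always chosen next. The paper does not specify a data structure for LIST; the following is a hypothetical minimal sketch using a binary max-heap keyed on ub (the subproblem payload is elided, only an id is carried; no capacity growth is implemented).

```c
#include <stdlib.h>

/* A max-heap entry: upper bound used for ordering, plus a subproblem id. */
typedef struct { double ub; int id; } node;
typedef struct { node *a; int size, cap; } heap;

heap *heap_new(int cap) {
    heap *h = malloc(sizeof *h);
    h->a = malloc((size_t)cap * sizeof(node));
    h->size = 0; h->cap = cap;
    return h;
}

void heap_push(heap *h, double ub, int id) {       /* assumes size < cap */
    int i = h->size++;
    h->a[i].ub = ub; h->a[i].id = id;
    while (i > 0 && h->a[(i - 1) / 2].ub < h->a[i].ub) {   /* sift up */
        node t = h->a[i]; h->a[i] = h->a[(i - 1) / 2]; h->a[(i - 1) / 2] = t;
        i = (i - 1) / 2;
    }
}

node heap_pop(heap *h) {                           /* largest ub first */
    node top = h->a[0];
    h->a[0] = h->a[--h->size];
    int i = 0;
    for (;;) {                                     /* sift down */
        int l = 2 * i + 1, r = l + 1, m = i;
        if (l < h->size && h->a[l].ub > h->a[m].ub) m = l;
        if (r < h->size && h->a[r].ub > h->a[m].ub) m = r;
        if (m == i) break;
        node t = h->a[i]; h->a[i] = h->a[m]; h->a[m] = t;
        i = m;
    }
    return top;
}
```

Popping from this heap always yields the open subproblem with the highest upper bound, which is exactly the best-first selection used at line 08 of Figure 1.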
• ub(x): Upper bounds are computed in α-MT1 in the following manner (based on Martello and Toth [6]):
Let Π(x) = Σ_{j: x_j=1} p_j, cr(x) = c − Σ_{j: x_j=1} w_j, s(x) = min{ j : Σ_{i=1, x_i=•}^{j} w_i > cr(x) }, s−(x) = max{ j : x_j = •, j < s(x) }, s+(x) = min{ j : x_j = •, j > s(x) }, and c̄(x) = cr(x) − Σ_{i=1, x_i=•}^{s−(x)} w_i. Then

ub(x) = Π(x) + Σ_{i=1, x_i=•}^{s−(x)} p_i + max{ c̄(x) · p_{s+(x)} / w_{s+(x)},  p_{s(x)} − (w_{s(x)} − c̄(x)) · p_{s−(x)} / w_{s−(x)} }

is an upper bound on the optimal profit from the given instance.

• H(x): The heuristic H(x) is simply a local search heuristic with a 2-exchange neighborhood. It involves four steps. In the first step, it puts all the elements e_j ∈ E with x_j = 1 in a set S, computes cr = c − Σ_{j: x_j=1} w_j, and constructs a set Er of elements e_j ∈ E with x_j = •. In the second step, it computes a greedy solution SG by considering the elements e_j ∈ Er in the natural order and including them in a knapsack with weight capacity cr whenever possible. The third step is a local search step, which starts with SG and improves it with local search using a 2-exchange neighborhood structure defined on the elements of Er. The last step constructs the feasible solution output by H(x) by combining S and SG.
• forward-move(x): The forward-move procedure in α-MT1 is identical to that in mt1. Let j = min{ j : x_j = • }. We first construct the set N of the largest number of consecutive elements with x_j = • that we can include in the knapsack without exceeding the residual weight capacity cr = c − Σ_{j: x_j=1} w_j. We set x_j = 1 for each j such that e_j ∈ N. If Π(x) + P(N) = ub(x), and this value is better than the profit of the best solution found so far, then we replace the best solution by x and direct α-MT1 to discard further search in the current subproblem. Otherwise, if c − Σ_{j: x_j=1} w_j is less than the weight of any element in E, we carry out a dominance step, in which we try to replace the last element in N by two elements that are not in N. If the result of this dominance step is more profitable than the best solution found so far, then the best solution is updated. The forward-move procedure returns the modified x vector.
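The greedy-then-2-exchange structure of H(x) can be sketched in C as follows. This is a condensed, hypothetical version that operates directly on ratio-sorted arrays rather than on the sets S and Er of the paper, and uses a first-improvement exchange of one packed element for one unpacked element; the function name is ours.

```c
/* Greedy fill of elements (assumed sorted by non-increasing p/w ratio),
   then first-improvement 2-exchange: swap one packed element for one
   unpacked element when the swap fits the capacity and raises profit. */
double greedy_2exchange(int n, const double *p, const double *w,
                        double cap, int *in /* out: 0/1 per element */) {
    double used = 0.0, profit = 0.0;
    for (int j = 0; j < n; j++) {            /* greedy pass */
        in[j] = (used + w[j] <= cap);
        if (in[j]) { used += w[j]; profit += p[j]; }
    }
    int improved = 1;
    while (improved) {                       /* 2-exchange local search */
        improved = 0;
        for (int i = 0; i < n && !improved; i++) {
            if (!in[i]) continue;
            for (int j = 0; j < n && !improved; j++) {
                if (in[j]) continue;
                if (used - w[i] + w[j] <= cap && p[j] > p[i]) {
                    in[i] = 0; in[j] = 1;    /* profitable, feasible swap */
                    used   += w[j] - w[i];
                    profit += p[j] - p[i];
                    improved = 1;
                }
            }
        }
    }
    return profit;
}
```

Each accepted swap strictly increases the profit, so the loop terminates at a local optimum of the 2-exchange neighborhood.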
Notice that running α-MT1 with α = 0.0 results in an optimal solution but makes α-MT1 run like mt1, while running α-MT1 with a large value of α makes it run like H(x), which in this case is local search with a 2-exchange neighborhood.
3. Computational Experiments
In this section we report our computational experience with the α-MT1 algorithm. We coded the algorithm in C, compiled it using the LCC compiler for Windows NT (due to Navia [7]), and ran it on a 733MHz Intel Pentium III machine with 128MB RAM.
We experimented with five different types of instances, viz.
Uncorrelated (UC): p_j and w_j values are uniformly and independently distributed in the interval [L, H].

Weakly Correlated (WC): w_j values are uniformly and independently distributed in the interval [L, H]. For each j, p_j is uniformly random in [w_j − 200, w_j + 200], so that p_j > 0.

Strongly Correlated (SC): w_j values are uniformly and independently distributed in the interval [L, H]. p_j = w_j + 10.

Inverse Strongly Correlated (ISC): p_j values are uniformly and independently distributed in the interval [L, H]. w_j = p_j + 10.

Almost Strongly Correlated (ASC): w_j values are uniformly and independently distributed in the interval [L, H]. For each j, p_j is uniformly random in [w_j + 98, w_j + 102].
The weight capacity c for each of the instances was chosen to be 0.5 Σ_{j=1}^{n} w_j. In Pisinger [8] it is shown that for UC type instances, setting c = 0.35 Σ_{j=1}^{n} w_j generates instances that are most difficult for mt1. However, preliminary computations showed that the behavior of α-MT1 with c = 0.35 Σ_{j=1}^{n} w_j was identical to its behavior with c = 0.5 Σ_{j=1}^{n} w_j with respect to changes in α values. Thus, to maintain uniformity over all problem types, we chose c = 0.5 Σ_{j=1}^{n} w_j for all problem types.
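Under the stated distributions, the five instance types can be generated as in the following C sketch. The enum tags and function name are ours; draws are real-valued uniform on [lo, hi], matching the paper's use of real-valued data.

```c
#include <stdlib.h>

/* Instance generators for the five experimental types (tags are ours). */
typedef enum { UC, WC, SC, ISC, ASC } inst_type;

static double unif(double lo, double hi) {
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

/* Fills p[], w[] for n elements; returns c = 0.5 * (sum of weights). */
double gen_instance(inst_type t, int n, double lo, double hi,
                    double *p, double *w) {
    double wsum = 0.0;
    for (int j = 0; j < n; j++) {
        switch (t) {
        case UC:  w[j] = unif(lo, hi); p[j] = unif(lo, hi);  break;
        case WC:  w[j] = unif(lo, hi);                       /* keep p > 0 */
                  do { p[j] = unif(w[j] - 200.0, w[j] + 200.0); }
                  while (p[j] <= 0.0);                       break;
        case SC:  w[j] = unif(lo, hi); p[j] = w[j] + 10.0;   break;
        case ISC: p[j] = unif(lo, hi); w[j] = p[j] + 10.0;   break;
        case ASC: w[j] = unif(lo, hi);
                  p[j] = unif(w[j] + 98.0, w[j] + 102.0);    break;
        }
        wsum += w[j];
    }
    return 0.5 * wsum;
}
```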
These instance types are similar to the ones used in Martello, Pisinger and Toth [4]. The only major class of instances mentioned in Martello, Pisinger and Toth [4] that we chose not to use in our computations is the class of even-odd problems. This is because we experiment with real-valued data, and 'even' and 'odd' concern integers only. Also, since we are concerned with approximate solutions, even-odd problems would invariably degenerate to instances very similar to those of the strongly correlated class of problems.
As mentioned earlier, the data for each of the instances in our experiments are real-valued. This makes the instances more difficult to solve (since BKP is #P-complete). For the UC and WC problem instances, we varied the size of the instances from 500 to 2000. For the other problems we varied the instance sizes from 50 to 1000. For each instance size and instance type, we generated forty instances, divided into two sets of twenty instances each. In the first set, L = 1 and H = 1000. In the second set, L = 1001 and H = 2000. Thus the spread of data in both sets was the same, but the second set of problems did not contain any elements with small weights, the presence of which often makes the solution process easier.
We examined the behavior of α-MT1 in terms of the profit of the solution that it output, and the size of the BnB tree that it generated in order to solve instances corresponding to different instance sizes and values of α. (The size of a branch and bound tree is the number of nodes it contains.) mt1 is known to be able to solve moderate to large sized UC and WC type instances, and is also known not to be able to solve any but small SC type instances (see, for example, Martello and Toth [5]). We took this behavior into account while designing our experiments. For the UC and WC type instances, we allowed α-MT1 to solve the instances exactly, and with α values of 5.0, 10.0, 15.0, 20.0, and 25.0. Given the data range for the instances, these α values each amount to less than 0.01% of the profit of an optimal solution. For SC, ISC, and ASC, we divided the instances into two categories, small and large. The small instances were of sizes varying from 50 to 150, and the large instances were of sizes varying from 200 to 1000. We allowed α-MT1 to solve the small instances exactly, and with α values of 5.0, 10.0, 15.0, 20.0, and 25.0. For the large instances, we computed the upper bound ubinit = ub((•, •, ..., •)) and used α values of 0.02%, 0.04%, 0.06%, 0.08%, 0.10%, and 0.12% of ubinit. α-MT1 was allowed a maximum execution time of 10 CPU minutes for each instance and each α value. We report the behavior of α-MT1 only for those sets in which at least ten of the instances were solved within the given time for each α value.
For UC and WC type instances and small sized SC, ISC, and ASC instances, we could use α-MT1 to obtain optimal solutions. Thus we could compute the actual deviation of the solution output by α-MT1 from the optimal solution. But for large sized SC, ISC, and ASC instances, we could not obtain optimal solutions using α-MT1 within reasonable times. For these problems, therefore, we measured deviations as a percentage of ubinit. These values therefore form an upper bound on the actual deviations from the optimal profit for these instances. Different instances in the same problem set give rise to widely different sizes of the BnB tree generated by α-MT1. So it is logical to present the size of the BnB tree generated for a certain α value as a percentage of the size of the tree generated for α = 0.0. This is possible for UC and WC type instances and small sized SC, ISC, and ASC instances. For large sized SC, ISC, and ASC instances, we could not solve the instances with α = 0.0; therefore we express the sizes of the BnB trees as a percentage of the size of the trees generated when α is 0.02% of ubinit. Note, however, that this difference makes it impossible to compare the percentage reductions in the sizes of the BnB trees for small and large instances of the SC, ISC, and ASC types. Tables 4.1 through 4.10 present the results of our computational experiments.
INSERT TABLES 4.1 THROUGH 4.10 HERE
We now define two notations that will help to make the analysis of the results more readable. The first is Γ(n, α), which is the ratio of the achieved accuracy to the prescribed accuracy for instances of size n and a prescribed accuracy parameter α. The second is Φ(n, α, α0), which is the ratio of the size of the BnB tree for instances of size n and a prescribed accuracy parameter α to the size of the BnB tree for instances of size n and α = α0.
For UC type instances (refer to Tables 4.1 and 4.2), the value of Γ(n, α) was always seen to be less than 0.5. Instances with data in the range [1, 1000] had Γ(n, α) values that were almost constant for a given n, but that increased when n increased. Φ(n, α, 0.0) was also an almost linearly decreasing function of α for most of the instances that we studied. The slope of the decrease was initially steeper with increasing n, but when α = 25.0 the slope was seen to decrease. At this stage, the sizes of the BnB trees for the largest instances were, on average, approximately 6% of the size of the BnB tree when α = 0.0. For instances with data in the range [1001, 2000], Γ(n, α) increased with increasing α values as well as with increasing n values. The reduction in the size of the BnB trees for these problems was seen to be about half of the reduction observed for UC instances with the same n and α but with data in [1, 1000].
For WC type instances (refer to Tables 4.3 and 4.4), the value of Γ(n, α) was seen to be almost constant with respect to changing α values. It was also seen to be less sensitive to changes in problem sizes. This behavior held when the data was in the range [1, 1000] as well as when it was in the range [1001, 2000]. Φ(n, α, 0.0) dropped rapidly to less than 20% when α was increased from 0.0 to 15.0. The Φ(n, α, 0.0) values for instances with data in the range [1001, 2000] were seen to be about twice the Φ(n, α, 0.0) values for instances with data in the range [1, 1000]. An interesting feature is that the Φ values for both ranges were not sensitive to increases in α values when α ≥ 15.0. This, combined with the fact that Γ(n, α) values kept increasing when α ≥ 15.0, implies that increasing α values beyond 15.0 is not likely to improve the performance of α-MT1 on WC type instances.
The behavior of Γ(n, α) and Φ(n, α, 0.0) for SC and ISC type instances (refer to Tables 4.5–4.8) was very similar to that for WC type instances. However, both these types of instances were much more difficult to solve to optimality than WC type instances. In large instances of SC problems, the size of the BnB tree was fairly insensitive to increases in α values for α ≥ 15.0. The ISC type instances were observed to be extremely easy for α-MT1. Firstly, the Γ(n, α) values were close to 0.3 for these instances, where they were close to 0.5 for all the others that we have discussed so far. Also, when α ≥ 10.0, ISC instances led to very small BnB trees. For the large ISC instances that we experimented with, in which data was drawn from the range [1, 1000], we obtained a solution within the prescribed accuracy parameter at the root of the BnB tree for each of the instances with n ≥ 300. Large ISC instances where the data was drawn from the range [1001, 2000] were also easy to solve, and the sizes of the BnB trees for these instances did not vary with increasing α when α ≥ 0.1 ubinit.
The behavior of α-MT1 on ASC instances (refer to Tables 4.9 and 4.10) was seen to be very different from that on the other instances that we experimented with. ASC instances where the data was in the range [1001, 2000] were seen to be easier to solve than instances where the data was in the range [1, 1000]. Γ(n, α) values were seen to be more sensitive to n than for the WC, SC, and ISC type instances. Also, Φ(n, 25.0, 0.0) was seen to be more than 0.6 for most of these instances. (Other problem types usually had Φ(n, 25.0, 0.0) values close to 0.1.) The Φ(n, α, 0.0) values were seen to increase as n increased, and for larger instances, the size of the BnB tree was insensitive to increases in α when α ≥ 0.05 ubinit. This means that α-MT1 would be less effective for larger sized ASC instances.
In summary, α-MT1 proved to be very efficient for most BKP instances, both in terms of the quality of the solutions that it output and in terms of the reduction in the size of the BnB tree during its execution. For most problems, the deviation of the solution output by α-MT1 was less than half the prescribed accuracy. In terms of the size of the BnB tree, for most of the problems, α-MT1 produced trees with around 10% of the number of nodes present in the BnB tree for mt1 when α ≥ 15.0. Considering the data ranges, α ≥ 15.0 implies a prescribed accuracy within 0.01%, making the reduction in the size of the BnB tree very impressive. The best results from α-MT1 were seen for ISC type problems. This is partly because local search with 2-exchange neighborhoods is very effective for such problems. The worst results from α-MT1 were seen for ASC type problems.
4. Summary and Discussions
In this paper, we present α-MT1, an algorithm to generate near-optimal solutions to binary knapsack problems, with bounds on the sub-optimality of the solution output. This algorithm embeds a local search based heuristic procedure within a branch and bound framework. As a result, α-MT1 is capable of producing a solution whose profit is within a pre-specified amount of the profit of an optimal solution. Thus, the solutions generated by α-MT1 are insensitive to the actual numbers in the instance data for the problem. We tested the performance of α-MT1 on a wide variety of randomly generated knapsack instances belonging to types well known in the literature (refer to Martello, Pisinger and Toth [4]). We observed that the algorithm performs well for all except the almost strongly correlated problem instances. In most cases we found that the deviation achieved by α-MT1 was less than half the allowed deviation, and, when allowed a deviation of less than 0.01% of the profit of an optimal solution, α-MT1 solved problems in times that were an order of magnitude lower than the times required by exact algorithms. We chose the mt1 algorithm due to Martello and Toth [6] as the branch and bound algorithm on which to base our α-MT1 algorithm, since it is a typical branch and bound algorithm for binary knapsack problems. There are more sophisticated branch and bound algorithms, which could be used to solve larger problems more efficiently, and α-MT1-type algorithms could be devised based on them.
One of our current directions of research in this area is the following. In recent times, dynamic programming based algorithms have been proposed to solve instances of binary knapsack problems (refer, for example, to Martello, Pisinger and Toth [4]). These algorithms have been shown to be able to solve several classes of strongly correlated knapsack problems, which are traditionally difficult for pure branch and bound based algorithms. We are examining ways to incorporate ideas similar to the ones we propose here into such dynamic programming based algorithms, and thereby obtain powerful algorithms for generating near-optimal solutions for a wider variety of binary knapsack problems.
Our other main area of research in this class of algorithms is to try to apply the concepts described here to find high-quality solutions to other hard combinatorial optimization problems, such as facility location problems, quadratic cost partitioning, traveling salesperson problems, and scheduling problems.
References
[1] Garey MR, Johnson DS. Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco: Freeman, 1979.

[2] Ghosh D. Heuristics for Knapsack Problems: Comparative Survey and Sensitivity Analysis. Fellowship Dissertation, IIM Calcutta, India, 1997.

[3] Martello S, Pisinger D, Toth P. New trends in exact algorithms for the 0-1 knapsack problem. European Journal of Operational Research 2000;123:325–332.

[4] Martello S, Pisinger D, Toth P. Dynamic programming and strong bounds for the 0-1 knapsack problem. Management Science 1999;45:414–424.

[5] Martello S, Toth P. Knapsack Problems: Algorithms and Computer Implementations. Chichester: John Wiley & Sons, 1990.

[6] Martello S, Toth P. An upper bound for the zero-one knapsack problem and a branch and bound algorithm. European Journal of Operational Research 1977;1:169–175.

[7] Navia J. LCC-Win32: a compiler system for Windows 95 – NT, http://www.cs.virginia.edu/~lcc-win32/.

[8] Pisinger D. Core problems in knapsack algorithms. Operations Research 1999;47:570–575.
Algorithm α-MT1
Input: Instance I = {(p1, ..., pn), (w1, ..., wn), c}, prescribed accuracy α.
Output: A solution to I within the prescribed accuracy α.
Code:
01 begin
02   ubinit ← ub((•, •, ..., •));
03   BestSolutionSoFar ← ∅;
04   BestSolutionValue ← −∞;
05   LIST ← {(•, •, ..., •)};
06   while LIST ≠ ∅ do
07   begin
08     Choose a subproblem x = (xj) from LIST;
09     xH = (xHj) ← H(x);
10     if ubinit − P(xH) ≤ α then   (* New stopping rule *)
11       return xH and stop;
12     ubx ← ub(x);
13     if ubx ≤ BestSolutionValue then   (* Discard this subproblem *)
14       goto 30;
15     if ubx − P(xH) ≤ α then   (* xH is within the prescribed accuracy α *)
16     begin
17       Update BestSolutionSoFar and BestSolutionValue if necessary;
18       goto 30;
19     end;
20     x ← forward-move(x);
21     (* Create new subproblems by branching *)
22     k ← min{j : xj = •};
23     xnew ← x;
24     xnew_k ← 0;
25     LIST ← LIST ∪ {xnew};
26     xnew ← x;
27     xnew_k ← 1;
28     if C(xnew) ≤ c then
29       LIST ← LIST ∪ {xnew};
30   end;
31   return BestSolutionSoFar;
32 end.
Figure 1: Pseudocode of α-MT1.
Tabl
e4.
1:P
erfo
rman
ceofα
-MT
1on
UC
knap
sack
inst
ance
s(D
ata
rang
e[1
,1000])
Dev
iatio
nsfr
omth
eop
timal
solu
tion
Per
cent
age
ofnu
mbe
rof
subp
robl
ems
requ
ired
nα
valu
esα
valu
es0.
05.
010
.015
.020
.025
.00.
05.
010
.015
.020
.025
.050
00.
000
0.00
00.
389
0.38
90.
727
2.64
710
0.00
088
.321
86.5
5076
.625
54.1
9918
.689
750
0.00
00.
000
0.62
51.
212
3.29
46.
431
100.
000
97.7
7586
.667
67.8
1847
.373
29.1
3310
000.
000
0.00
00.
094
1.91
94.
119
4.23
110
0.00
094
.908
60.8
9629
.807
22.5
0621
.326
1250
0.00
00.
000
1.22
23.
747
6.56
612
.972
100.
000
94.1
9568
.679
42.9
8830
.814
25.7
4515
000.
000
0.00
01.
719
2.73
19.
944
12.2
6210
0.00
087
.058
60.6
1641
.562
20.0
8614
.721
1750
0.00
00.
148
1.51
63.
180
6.23
48.
656
100.
000
79.9
1342
.182
32.8
0322
.787
20.5
4220
000.
000
0.05
02.
938
4.58
88.
588
12.5
3810
0.00
089
.807
34.0
0021
.927
12.3
336.
071
Tabl
e4.
2:P
erfo
rman
ceofα
-MT
1on
UC
knap
sack
inst
ance
s(D
ata
rang
e[1
001,2
000])
Dev
iatio
nsfr
omth
eop
timal
solu
tion
Per
cent
age
ofnu
mbe
rof
subp
robl
ems
requ
ired
nα
valu
esα
valu
es0.
05.
010
.015
.020
.025
.00.
05.
010
.015
.020
.025
.050
00.
000
0.00
00.
000
0.00
30.
084
0.08
410
0.00
099
.511
98.0
0095
.049
82.4
8570
.642
750
0.00
00.
000
0.00
00.
181
0.93
83.
138
100.
000
99.2
7396
.125
85.4
4880
.657
61.3
1810
000.
000
0.00
00.
300
0.45
62.
281
5.60
010
0.00
098
.855
99.5
9771
.475
55.5
9744
.650
1250
0.00
00.
000
0.11
32.
400
4.71
37.
350
100.
000
98.6
9188
.002
51.9
2543
.996
41.2
6615
000.
000
0.00
00.
313
2.22
53.
875
8.43
810
0.00
098
.388
85.5
4353
.237
45.2
4037
.245
1750
0.00
00.
000
0.40
02.
513
6.21
312
.363
100.
000
98.2
7675
.723
58.0
4347
.007
34.1
3220
000.
000
0.00
01.
143
3.03
64.
875
5.89
310
0.00
095
.136
73.7
4365
.095
63.5
5861
.081
13
Tabl
e4.
3:P
erfo
rman
ceofα
-MT
1on
WC
knap
sack
inst
ance
s(D
ata
rang
e[1
,1000])
Dev
iatio
nsfr
omth
eop
timal
solu
tion
Per
cent
age
ofnu
mbe
rof
subp
robl
ems
requ
ired
nα
valu
esα
valu
es0.
05.
010
.015
.020
.025
.00.
05.
010
.015
.020
.025
.050
00.
000
0.27
72.
778
7.21
98.
659
10.6
0910
0.00
073
.182
27.1
428.
123
6.12
54.
267
750
0.00
00.
644
3.38
66.
534
7.41
28.
567
100.
000
49.3
2113
.058
3.73
93.
046
2.63
310
000.
000
1.54
43.
881
6.90
67.
484
9.61
910
0.00
038
.820
11.2
814.
201
2.89
00.
335
1250
0.00
01.
613
4.07
27.
444
7.71
97.
719
100.
000
27.3
978.
249
0.54
50.
137
0.09
815
000.
000
1.97
55.
659
7.33
411
.294
13.3
5910
0.00
022
.893
3.54
81.
392
0.38
40.
002
1750
0.00
02.
103
5.08
18.
197
9.51
612
.144
100.
000
19.5
342.
333
0.78
40.
418
0.00
120
000.
000
2.01
94.
475
7.66
28.
588
8.28
710
0.00
020
.164
1.82
01.
017
0.41
60.
261
Tabl
e4.
4:P
erfo
rman
ceofα
-MT
1on
WC
knap
sack
inst
ance
s(D
ata
rang
e[1
001,2
000])
Dev
iatio
nsfr
omth
eop
timal
solu
tion
Per
cent
age
ofnu
mbe
rof
subp
robl
ems
requ
ired
nα
valu
esα
valu
es0.
05.
010
.015
.020
.025
.00.
05.
010
.015
.020
.025
.050
00.
000
0.12
21.
422
6.66
67.
556
11.7
1310
0.00
085
.973
44.9
3418
.356
15.5
4013
.972
750
0.00
00.
231
4.13
87.
025
7.93
19.
137
100.
000
72.2
4125
.480
14.3
0712
.832
12.6
3910
000.
000
0.85
02.
694
3.31
33.
313
6.07
510
0.00
031
.695
10.3
089.
683
9.48
19.
415
1250
0.00
00.
719
2.81
95.
588
8.01
28.
012
100.
000
32.2
2016
.141
13.3
8412
.889
12.8
8915
000.
000
1.56
33.
750
6.95
06.
950
9.17
510
0.00
049
.092
32.6
2629
.163
29.1
6328
.318
1750
0.00
01.
431
4.04
26.
778
10.5
1410
.694
100.
000
24.5
3412
.345
11.7
178.
967
7.62
520
000.
000
1.72
93.
667
7.75
08.
938
10.2
5010
0.00
030
.524
14.4
8313
.526
13.3
9512
.266
14
Table 4.5: Performance of α-MT1 on SC knapsack instances (Data range [1,1000])

(a) Smaller sized instances with accuracy provided in absolute values.

        Deviations from the optimal solution              Percentage of number of subproblems required
  n     α values                                          α values
        0.0    5.0    10.0   15.0   20.0   25.0           0.0      5.0     10.0    15.0    20.0    25.0
  50    0.000  0.406  1.819  4.552  7.629  9.376          100.000  72.004  14.800  4.930   4.427   4.364
  75    0.000  0.479  1.905  3.924  6.223  10.744         100.000  59.251  2.338   2.108   1.792   1.769
  100   0.000  0.550  3.098  4.416  6.096  9.132          100.000  51.935  9.085   8.849   8.329   7.933
  125   0.000  0.409  2.790  4.517  7.345  10.177         100.000  40.994  1.781   1.427   1.424   1.422
  150   0.000  0.668  2.938  4.774  6.329  10.025         100.000  16.199  2.873   2.815   2.807   0.892

(b) Larger sized instances with accuracy provided in percentage values.

        Percentage deviation from upper bound ub_init     Percentage of number of subproblems required (*)
  n     α values (%)                                      α values (%)
        0.02   0.04   0.06   0.08   0.10   0.12           0.02     0.04    0.06    0.08    0.10    0.12
  200   0.014  0.023  0.031  0.043  0.053  0.065          100.000  81.823  80.623  74.566  71.950  69.685
  300   0.015  0.031  0.046  0.060  0.076  0.089          100.000  92.837  75.865  63.941  51.392  39.616
  400   0.014  0.027  0.040  0.053  0.063  0.079          100.000  85.221  66.528  35.582  25.161  20.310
  500   0.011  0.019  0.029  0.039  0.051  0.057          100.000  71.532  54.443  41.022  19.523  5.773
  600   0.013  0.026  0.036  0.048  0.057  0.064          100.000  89.530  72.915  49.894  25.707  4.816
  700   0.017  0.033  0.048  0.061  0.072  0.093          100.000  51.787  39.106  12.422  4.952   3.617
  800   0.012  0.020  0.032  0.045  0.049  0.059          100.000  19.986  10.493  8.102   6.458   4.840
  900   0.010  0.019  0.029  0.039  0.048  0.068          100.000  10.794  9.001   6.777   4.832   2.879
  1000  0.014  0.028  0.037  0.043  0.048  0.068          100.000  20.200  13.845  9.629   5.150   2.494

(*) Percentages computed on the number of subproblems generated when α = 0.02% of ub_init.
Table 4.6: Performance of α-MT1 on SC knapsack instances (Data range [1001,2000])

(a) Smaller sized instances with accuracy provided in absolute values.

        Deviations from the optimal solution              Percentage of number of subproblems required
  n     α values                                          α values
        0.0    5.0    10.0   15.0    20.0    25.0         0.0      5.0     10.0   15.0   20.0   25.0
  50    0.000  0.209  2.084  5.194   7.709   9.992        100.000  68.458  0.287  0.264  0.249  0.223
  75    0.000  0.761  4.533  5.908   10.181  11.401       100.000  7.966   1.510  1.215  1.141  1.139
  100   Less than half of the instances could be solved within time
  125   Less than half of the instances could be solved within time
  150   Less than half of the instances could be solved within time

(b) Larger sized instances with accuracy provided in percentage values.

        Percentage deviation from upper bound ub_init     Percentage of number of subproblems required (*)
  n     α values (%)                                      α values (%)
        0.02   0.04   0.06   0.08   0.10   0.12           0.02     0.04    0.06    0.08    0.10    0.12
  200   0.015  0.031  0.046  0.062  0.073  0.083          100.000  79.927  70.859  61.100  47.402  33.319
  300   0.015  0.032  0.047  0.063  0.081  0.094          100.000  80.010  70.215  60.925  51.008  38.815
  400   0.015  0.030  0.043  0.059  0.082  0.101          100.000  62.258  53.246  41.670  29.215  16.428
  500   0.016  0.032  0.045  0.056  0.067  0.083          100.000  77.290  63.118  35.635  17.358  4.494
  600   0.011  0.025  0.037  0.057  0.065  0.082          100.000  73.484  52.871  25.396  4.198   0.880
  700   0.013  0.026  0.041  0.058  0.066  0.081          100.000  69.697  32.952  1.181   0.842   0.547
  800   Less than half of the instances could be solved within time
  900   0.016  0.031  0.044  0.047  0.057  0.077          100.000  12.015  3.919   0.806   0.164   0.082
  1000  Less than half of the instances could be solved within time

(*) Percentages computed on the number of subproblems generated when α = 0.02% of ub_init.
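The SC, ISC, and ASC labels in these tables denote strongly correlated, inverse strongly correlated, and almost strongly correlated knapsack instances. As a hedged illustration only: the generator below follows the offset-by-a-fixed-constant construction common in the knapsack test-generation literature; the exact offset and capacity rule used in these experiments are assumptions, not confirmed by the tables.

```python
import random

def sc_instance(n, lo, hi, seed=0):
    """Strongly correlated (SC) sketch: profit = weight + fixed offset.
    The offset (hi - lo + 1) // 10 and the half-total-weight capacity
    are conventional assumptions, not necessarily the paper's choices."""
    rng = random.Random(seed)
    weights = [rng.randint(lo, hi) for _ in range(n)]
    offset = (hi - lo + 1) // 10
    profits = [w + offset for w in weights]
    capacity = sum(weights) // 2  # assumed capacity rule
    return profits, weights, capacity

def isc_instance(n, lo, hi, seed=0):
    """Inverse strongly correlated (ISC) sketch: weight = profit + fixed offset."""
    rng = random.Random(seed)
    profits = [rng.randint(lo, hi) for _ in range(n)]
    offset = (hi - lo + 1) // 10
    weights = [p + offset for p in profits]
    capacity = sum(weights) // 2  # assumed capacity rule
    return profits, weights, capacity
```

Strong correlation between profits and weights is what makes these instances hard: linear-relaxation bounds stay tight across many subproblems, so branch and bound prunes poorly without an accuracy tolerance.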
Table 4.7: Performance of α-MT1 on ISC knapsack instances (Data range [1,1000])

(a) Smaller sized instances with accuracy provided in absolute values.

        Deviations from the optimal solution              Percentage of number of subproblems required
  n     α values                                          α values
        0.0    5.0    10.0   15.0   20.0   25.0           0.0      5.0     10.0   15.0   20.0   25.0
  50    0.000  0.020  0.950  3.126  6.059  7.648          100.000  50.350  3.108  0.210  0.143  0.035
  75    0.000  0.343  2.103  3.216  5.067  6.844          100.000  35.252  1.067  0.185  0.142  0.075
  100   0.000  0.519  1.649  6.192  7.844  8.108          100.000  26.698  2.883  0.101  0.026  0.026
  125   0.000  0.775  2.983  4.239  4.846  4.846          100.000  22.295  3.108  0.050  0.012  0.012
  150   0.000  0.678  1.125  3.395  5.267  5.267          100.000  5.120   0.236  0.040  0.039  0.039

(b) Larger sized instances with accuracy provided in percentage values.

        Percentage deviation from upper bound ub_init     Percentage of number of subproblems required (*)
  n     α values (%)                                      α values (%)
        0.02   0.04   0.06   0.08   0.10   0.12           0.02     0.04   0.06   0.08   0.10   0.12
  200   0.013  0.017  0.018  0.018  0.018  0.018          100.000  6.818  0.455  0.455  0.455  0.455
  300   0.011  0.011  0.011  0.011  0.011  0.011          α achieved at the root of the BnB tree
  400   0.008  0.008  0.008  0.008  0.008  0.008          α achieved at the root of the BnB tree
  500   0.005  0.005  0.005  0.005  0.005  0.005          α achieved at the root of the BnB tree
  600   0.005  0.005  0.005  0.005  0.005  0.005          α achieved at the root of the BnB tree
  700   0.004  0.004  0.004  0.004  0.004  0.004          α achieved at the root of the BnB tree
  800   0.003  0.003  0.003  0.003  0.003  0.003          α achieved at the root of the BnB tree
  900   0.002  0.002  0.002  0.002  0.002  0.002          α achieved at the root of the BnB tree
  1000  0.003  0.003  0.003  0.003  0.003  0.003          α achieved at the root of the BnB tree

(*) Percentages computed on the number of subproblems generated when α = 0.02% of ub_init.
Table 4.8: Performance of α-MT1 on ISC knapsack instances (Data range [1001,2000])

(a) Smaller sized instances with accuracy provided in absolute values.

        Deviations from the optimal solution              Percentage of number of subproblems required
  n     α values                                          α values
        0.0    5.0    10.0   15.0   20.0   25.0           0.0      5.0     10.0   15.0   20.0   25.0
  50    0.000  0.146  1.205  3.159  3.975  4.551          100.000  35.624  1.533  0.723  0.723  0.723
  75    0.000  1.190  2.541  3.913  5.171  6.808          100.000  6.100   0.916  0.870  0.866  0.860
  100   Less than half of the instances could be solved within time
  125   Less than half of the instances could be solved within time
  150   Less than half of the instances could be solved within time

(b) Larger sized instances with accuracy provided in percentage values.

        Percentage deviation from upper bound ub_init     Percentage of number of subproblems required (*)
  n     α values (%)                                      α values (%)
        0.02   0.04   0.06   0.08   0.10   0.12           0.02     0.04     0.06     0.08     0.10    0.12
  200   0.004  0.004  0.004  0.004  0.004  0.004          100.000  100.000  100.000  100.000  100.000  100.000
  300   0.002  0.003  0.003  0.006  0.006  0.006          100.000  57.414   57.414   21.332   21.332  21.332
  400   0.002  0.004  0.006  0.006  0.006  0.006          100.000  37.950   11.573   11.573   11.573  11.573
  500   0.002  0.002  0.002  0.002  0.011  0.021          100.000  100.000  100.000  100.000  32.393  4.743
  600   0.001  0.006  0.008  0.012  0.020  0.025          100.000  33.714   18.272   7.906    0.703   0.588
  700   0.001  0.004  0.010  0.013  0.027  0.027          100.000  46.568   9.823    1.480    0.691   0.691
  800   0.001  0.002  0.007  0.010  0.019  0.031          100.000  52.677   6.850    1.612    1.257   0.900
  900   0.002  0.002  0.002  0.009  0.014  0.014          100.000  100.000  100.000  39.086   26.294  26.294
  1000  0.001  0.002  0.004  0.004  0.018  0.028          100.000  23.050   1.922    1.922    1.104   0.559

(*) Percentages computed on the number of subproblems generated when α = 0.02% of ub_init.
Table 4.9: Performance of α-MT1 on ASC knapsack instances (Data range [1,1000])

(a) Smaller sized instances with accuracy provided in absolute values.

        Deviations from the optimal solution              Percentage of number of subproblems required
  n     α values                                          α values
        0.0    5.0    10.0   15.0   20.0   25.0           0.0      5.0     10.0    15.0    20.0    25.0
  50    0.000  0.000  0.000  0.220  0.486  1.006          100.000  96.786  93.223  85.545  73.081  66.226
  75    0.000  0.000  0.002  0.570  0.768  1.328          100.000  98.202  96.680  86.009  80.642  77.728
  100   0.000  0.000  0.013  0.103  0.509  1.198          100.000  99.154  97.500  92.676  83.620  78.664
  125   0.000  0.000  0.000  0.057  0.784  1.236          100.000  99.561  97.031  96.821  87.800  83.785
  150   0.000  0.000  0.470  0.470  0.780  1.438          100.000  99.245  98.039  94.234  86.348  84.104

(b) Larger sized instances with accuracy provided in percentage values.

        Percentage deviation from upper bound ub_init     Percentage of number of subproblems required (*)
  n     α values (%)                                      α values (%)
        0.02   0.04   0.06   0.08   0.10   0.12           0.02     0.04    0.06    0.08    0.10    0.12
  200   0.004  0.005  0.053  0.056  0.058  0.061          100.000  98.438  69.531  37.257  0.027   0.025
  300   Less than half of the instances could be solved within time
  400   Less than half of the instances could be solved within time
  500   Less than half of the instances could be solved within time
  600   Less than half of the instances could be solved within time
  700   0.014  0.019  0.026  0.027  0.041  0.059          100.000  1.300   0.483   0.382   0.286   0.191
  800   0.011  0.022  0.028  0.033  0.038  0.046          100.000  10.770  8.131   7.637   7.284   6.836
  900   0.014  0.024  0.030  0.045  0.050  0.062          100.000  18.609  4.646   3.083   2.775   2.155
  1000  0.010  0.013  0.021  0.026  0.035  0.047          100.000  44.669  17.415  5.253   4.016   2.768

(*) Percentages computed on the number of subproblems generated when α = 0.02% of ub_init.
Table 4.10: Performance of α-MT1 on ASC knapsack instances (Data range [1001,2000])

(a) Smaller sized instances with accuracy provided in absolute values.

        Deviations from the optimal solution              Percentage of number of subproblems required
  n     α values                                          α values
        0.0    5.0    10.0   15.0   20.0   25.0           0.0      5.0     10.0    15.0    20.0    25.0
  50    0.000  0.048  0.048  0.133  0.364  0.882          100.000  99.713  98.570  93.362  89.745  81.438
  75    0.000  0.000  0.449  0.685  1.391  1.624          100.000  95.831  89.127  84.120  74.935  64.289
  100   0.000  0.000  0.872  3.135  3.374  5.173          100.000  88.489  69.778  44.559  43.570  26.098
  125   0.000  0.064  1.043  2.109  2.591  5.036          100.000  87.211  64.109  53.272  48.722  32.310
  150   Less than half of the instances could be solved within time

(b) Larger sized instances with accuracy provided in percentage values.

        Percentage deviation from upper bound ub_init     Percentage of number of subproblems required (*)
  n     α values (%)                                      α values (%)
        0.02   0.04   0.06   0.08   0.10   0.12           0.02     0.04    0.06    0.08    0.10    0.12
  200   Less than half of the instances could be solved within time
  300   0.011  0.016  0.022  0.033  0.047  0.062          100.000  2.652   0.557   0.294   0.245   0.209
  400   0.014  0.028  0.037  0.054  0.061  0.065          100.000  72.876  50.928  18.643  3.864   2.404
  500   0.012  0.025  0.045  0.059  0.075  0.085          100.000  58.878  35.572  16.915  9.626   3.522
  600   0.012  0.024  0.037  0.052  0.060  0.066          100.000  62.175  45.802  26.826  5.014   0.707
  700   0.012  0.020  0.034  0.052  0.063  0.072          100.000  66.276  20.585  1.201   0.699   0.337
  800   0.014  0.024  0.039  0.047  0.061  0.073          100.000  10.780  3.995   0.644   0.184   0.115
  900   0.011  0.025  0.035  0.037  0.049  0.057          100.000  5.697   2.141   0.290   0.144   0.091
  1000  0.009  0.013  0.041  0.046  0.046  0.051          100.000  71.083  16.269  0.683   0.683   0.347

(*) Percentages computed on the number of subproblems generated when α = 0.02% of ub_init.
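The α values in these tables control how aggressively the branch-and-bound tree is pruned: a subproblem is discarded as soon as its upper bound cannot improve the incumbent by more than the tolerance, so the solution finally returned is guaranteed to lie within α of the optimum. A minimal sketch of this pruning idea follows; it is not the α-MT1 implementation evaluated above, and the Dantzig upper bound and depth-first traversal are assumptions made for illustration.

```python
def upper_bound(items, cap, idx, value):
    """Dantzig bound: fill remaining capacity greedily, fractional last item."""
    for p, w in items[idx:]:
        if w <= cap:
            cap -= w
            value += p
        else:
            value += p * cap / w
            break
    return value

def alpha_bnb(profits, weights, capacity, alpha=0.0):
    """Depth-first branch and bound with an absolute tolerance alpha.
    Returns (incumbent value, number of subproblems explored); the
    incumbent is guaranteed to be within alpha of the optimal value."""
    # Sort by profit/weight ratio, as Martello-Toth style algorithms do.
    items = sorted(zip(profits, weights),
                   key=lambda pw: pw[0] / pw[1], reverse=True)
    best, nodes = 0, 0
    stack = [(0, capacity, 0)]  # (next item index, remaining capacity, value)
    while stack:
        idx, cap, val = stack.pop()
        nodes += 1
        best = max(best, val)  # every partial solution is feasible
        if idx == len(items):
            continue
        # alpha-pruning: drop subtrees that cannot beat best by more than alpha
        if upper_bound(items, cap, idx, val) <= best + alpha:
            continue
        p, w = items[idx]
        if w <= cap:
            stack.append((idx + 1, cap - w, val + p))  # take item idx
        stack.append((idx + 1, cap, val))              # skip item idx
    return best, nodes
```

With alpha = 0 this is an exact branch and bound; raising alpha trades a bounded loss in solution value for fewer explored subproblems, which is exactly the trade-off the "percentage of number of subproblems required" columns measure.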