




APPENDIX : LIST OF RESEARCH PAPERS PUBLISHED

Raja Balachandar, S., and Kannan, K. A meta-heuristic algorithm for set covering problem based on gravity. International Journal of Computational and Mathematical Sciences, 4(5) (2010) 223-228.

Raja Balachandar, S., and Kannan, K. A meta-heuristic approach for the large-scale generalized assignment problem. International Journal of Computational and Mathematical Sciences, 3(8) (2009) 418-423.

Raja Balachandar, S., and Kannan, K. A meta-heuristic algorithm for vertex covering problem based on gravity. International Journal of Computational and Mathematical Sciences, 3(7) (2009) 330-336.

Raja Balachandar, S., and Kannan, K. A new polynomial time algorithm for 0-1 multiple knapsack problem based on dominant principles. Applied Mathematics and Computation, 202(1) (2008) 71-77.

Raja Balachandar, S., and Kannan, K. Randomized gravitational emulation search algorithm for symmetric traveling salesman problem. Applied Mathematics and Computation, 192(2) (2007) 413-421.


A Meta-Heuristic Algorithm for Set Covering Problem Based on Gravity

S. Raja Balachandar and K. Kannan

Abstract—A new meta-heuristic approach called "Randomized Gravitational Emulation Search algorithm (RGES)" for solving large-size set covering problems has been designed. This algorithm is founded upon introducing a randomization concept along with two of the four primary parameters in physics, 'velocity' and 'gravity'. A new heuristic operator is introduced in the domain of RGES to maintain feasibility, specifically for the set covering problem, so as to yield best solutions. The performance of this algorithm has been evaluated on a large set of benchmark problems from the OR-Library. Computational results show that the randomized gravitational emulation search algorithm based heuristic is capable of producing high-quality solutions. The performance of this heuristic, when compared with other existing heuristic algorithms, is found to be excellent in terms of solution quality.

Keywords—Set Covering Problem, Velocity, Gravitational Force, Newton's Law, Meta-heuristic, Combinatorial Optimization.

I. INTRODUCTION

There is a class of problems whose exponential complexities have been established theoretically; these are known as NP problems. Designing polynomial-time algorithms for such a class of problems is still an open question. Due to the demand for solving such problems, researchers are constantly attempting to provide heuristic solutions, one after the other, focusing on optimality by introducing several operators with salient features such as (i) reducing the computational complexity, (ii) randomization, etc. Some NP problems are the set covering problem, the traveling salesman problem, the problem of Hamiltonian paths, the knapsack problem, and the problem of optimal graph coloring. If a polynomial-time solution could be found for any of these problems, then all of the NP problems would have polynomial solutions. NP-complete problems are described in more detail in [15].

The set covering problem (SCP) is a main and fundamental

model for several important applications. The crew scheduling problem in bus, railway and mass-transit transportation companies, where a given set of trips has to be covered by a minimum-cost set of pairings, a pairing being a sequence of trips that can be performed by a single crew [9], is worth mentioning. Though both exact (optimal) and heuristic approaches have been presented in the literature, this problem is still a difficult NP-complete problem. The set covering problem (SCP) is the problem of covering the rows of an m-row, n-column, zero-one matrix (a_{ij}) by

S. Raja Balachandar is with the Department of Mathematics, SASTRA University, Thanjavur, India, e-mail: [email protected]

K. Kannan is with the Department of Mathematics, SASTRA University, Thanjavur, India, e-mail: [email protected]

a subset of the columns at minimal cost. Define x_j = 1 if column j (with cost c_j > 0) is in the solution and x_j = 0 otherwise. The SCP can be formulated as a binary integer program as follows: minimize

\sum_{j=1}^{n} c_j x_j   (1)

subject to

\sum_{j=1}^{n} a_{ij} x_j \ge 1, \quad i = 1, 2, \ldots, m   (2)

x_j \in \{0, 1\}, \quad j = 1, 2, \ldots, n   (3)
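To make the formulation concrete, here is a minimal Python sketch (ours, not part of the paper; the toy instance and helper names are invented for illustration) that evaluates objective (1) and checks the cover constraints (2)-(3) for a candidate 0-1 vector:

```python
# Illustrative sketch: a[i][j] = 1 if column j covers row i; c[j] = cost of
# column j; x is a candidate 0-1 solution vector (constraint (3)).

def scp_cost(c, x):
    # Objective (1): total cost of the selected columns.
    return sum(cj * xj for cj, xj in zip(c, x))

def is_cover(a, x):
    # Constraint (2): every row is covered by at least one chosen column.
    return all(sum(aij * xj for aij, xj in zip(row, x)) >= 1 for row in a)

# Toy 3-row, 4-column instance (invented for illustration).
a = [[1, 0, 1, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 1]]
c = [2, 3, 4, 1]
x = [0, 0, 1, 1]           # select the third and fourth columns
print(is_cover(a, x))      # True: column 3 covers rows 1-2, column 4 row 3
print(scp_cost(c, x))      # 5
```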

Equation (2) ensures that each row is covered by at least one column, and (3) is the integrality constraint. If the cost coefficients c_j are all equal to 1, the problem is referred to as the unicost SCP; otherwise, the problem is called the weighted or non-unicost SCP. The SCP has been proved to be NP-complete [15].

In this paper, a new optimization algorithm based on the law

of gravity, namely the Randomized Gravitational Emulation Search algorithm (RGES), is proposed. This algorithm is based on Newtonian gravity: "Every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them." This article demonstrates that the RGES technique is capable of producing better-quality results for the large-size set covering problem than other heuristic approaches.

This paper is organized as follows: a brief survey of various approaches pertaining to this problem is given in Section II. In Section III, we introduce the basic concepts of our algorithm. The proposed RGES is presented in Section IV. The algorithm's utility is illustrated with the help of benchmark problems in Section V, where we include an extensive comparative study of the results of our heuristic against existing state-of-the-art heuristics. Salient features of this algorithm are enumerated in Section VI, and finally concluding remarks are given in Section VII.

II. PREVIOUS WORK

The SCP with arbitrary positive costs is NP-hard [15]. Several exact and heuristic approaches to solve SCPs have been reported in the literature.

International Journal of Computational and Mathematical Sciences 4:5 2010



Existing exact algorithms are essentially based on branch-and-bound and branch-and-cut. Fisher and Kedia [14] proposed an exact branch-and-bound algorithm based on a dual heuristic, able to solve instances with 200 constraints and up to 2000 variables. Beasley combined a Lagrangian-based heuristic, feasible-solution exclusion constraints and Gomory f-cuts to improve the branching strategy and strengthen his algorithm [4]. This algorithm could solve instances with constraint matrices up to the order of 400 x 4000 [6]. Harche and Thompson [19] developed an exact algorithm, called "column subtraction", which is capable of solving sparse instances of set covering problems. These optimal algorithms are based on tree search. Since exact methods have limitations such as suffering for optimality, very heavy computational effort and very large search spaces, researchers have turned to approximation algorithms to meet the need for less computation with high quality. The heuristic approaches can be divided into two main categories.

The first one exploits problem characteristics and specific features of each instance. Examples include Lagrangian relaxation-based procedures, subgradient optimization methods [5], relaxed dual model exploitation [10], and surrogate optimization [25]. Greedy algorithms were the first natural heuristic approach for solving large-size combinatorial problems. As for the SCP, the simplest approach is the greedy algorithm [12]. Although simple, fast and easy to code, greedy algorithms rarely generate solutions of good quality as a result of their myopic and deterministic nature. Researchers have attempted to improve greedy algorithms by introducing a randomization concept. These randomized or probabilistic greedy algorithms [32, 13, 18] often generate better results than pure greedy ones.

The second category includes local search procedures and the adaptation of meta-heuristics to the SCP, such as genetic algorithms [7, 31, 1], ant colony algorithms [24], simulated annealing algorithms [21, 22], neural network algorithms [27], as well as specifically tailored local search procedures [34]. The quality of meta-heuristic approaches using some features from the first category of heuristics, and the late appearance of a highly effective local search procedure, make this category a competitive approach. Due to the specific characteristics of the unicost version, some specific heuristics have been developed for the unicost case, and some general heuristics have been adapted to it, viz. the adaptation by Almiñana and Pastor [2] of a Lopes and Lorena proposal [25]: heuristics based on Lagrangian relaxations and surrogate problem solutions, tested on 60 newly generated random instances and 5 literature-based instances. Grossman and Wool [16] designed a neural network (ANN) architecture to solve the unicost SCP and showed the superiority of the ANN over the other heuristic algorithms available in the literature. In this paper, we have designed a meta-heuristic algorithm based on gravity, and we enhance the performance of RGES through a feasibility operator to obtain best solutions at less computational cost.

III. THE LAW OF GRAVITY

Gravitation is the tendency of masses to accelerate toward each other. It is one of the four fundamental interactions in nature [29] (the others are the electromagnetic force, the weak nuclear force, and the strong nuclear force). Every particle in the universe attracts every other particle. Gravity is everywhere, and its inescapability makes it different from all other natural forces. The way Newton's gravitational force behaves is called "action at a distance": gravity acts between separated particles without any intermediary and without any delay. In Newton's law of gravity, each particle attracts every other particle with a 'gravitational force' [3, 20, 28, 29, 30, 33]. The gravitational force between two particles is directly proportional to the product of their masses and inversely proportional to the square of the distance between them [20]:

F = \frac{G M_1 M_2}{R^2}   (4)

where F is the magnitude of the gravitational force, G is the gravitational constant, M_1 and M_2 are the masses of the first and second particles respectively, and R is the distance between the two particles. Newton's second law says that when a force, F, is applied to a particle, its acceleration, a, depends only on the force and its mass, M [20]:

a = \frac{F}{M}   (5)

Based on (4) and (5), there is an attracting gravity force among all particles of the universe, where the effect of a bigger and closer particle is higher. An increase in the distance between two particles means a decreasing gravity force between them. In addition, due to the effect of decreasing gravity, the actual value of the "gravitational constant" depends on the actual age of the universe. Eq. (6) gives the decrease of the gravitational constant, G, with age [26]:

G(t) = G(t_0) \times \left(\frac{t_0}{t}\right)^{\beta}, \quad \beta < 1,   (6)

where G(t) is the value of the gravitational constant at time t, and G(t_0) is the value of the gravitational constant at the first cosmic quantum-interval of time t_0 [26]. Three kinds of masses are defined in theoretical physics:

Active gravitational mass, M_a, is a measure of the strength of the gravitational field due to a particular object. The gravitational field of an object with small active gravitational mass is weaker than that of an object with more active gravitational mass.

Passive gravitational mass, M_p, is a measure of the strength of an object's interaction with the gravitational field. Within the same gravitational field, an object with a smaller passive gravitational mass experiences a smaller force than an object with a larger passive gravitational mass.

Inertial mass, M_i, is a measure of an object's resistance to changing its state of motion when a force is applied. An object with large inertial mass changes its motion more slowly, and an object with small inertial mass changes it rapidly.

Now, considering the above-mentioned aspects, we rewrite Newton's laws. The gravitational force, F_{ij}, that acts on mass i from mass j, is proportional to the product of the active gravitational mass of mass j and the passive gravitational mass of mass i, and inversely proportional to the square of the distance between them. a_i is proportional to F_{ij} and inversely proportional to the inertia



mass of i. More precisely, one can rewrite Eqs. (4) and (5) asfollows:

F_{ij} = \frac{G M_{aj} M_{pi}}{R^2},   (7)

a_i = \frac{F_{ij}}{M_{ii}},   (8)

where M_{aj} represents the active gravitational mass of particle j, M_{pi} the passive gravitational mass of particle i, and M_{ii} the inertial mass of particle i.

Although inertial mass, passive gravitational mass, and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. The theory of general relativity rests on the assumption that inertial and passive gravitational mass are equivalent; this is known as the weak equivalence principle [23]. Standard general relativity also assumes the equivalence of inertial mass and active gravitational mass; this equivalence is sometimes called the strong equivalence principle [23].

IV. RANDOMIZED GRAVITATIONAL EMULATION SEARCH ALGORITHM (RGES)

In this section, we introduce our optimization algorithm based on the law of gravity [28]. In the proposed algorithm, agents are considered as objects and their performance is measured by their masses. All these objects attract each other by the gravity force, and this force causes a global movement of all objects towards the objects with heavier masses. Hence, masses cooperate using a direct form of communication, through gravitational force. The heavy masses, which correspond to good solutions, move more slowly than lighter ones; this guarantees the exploitation step of the algorithm. In RGES, each mass (agent) has four specifications: position, inertial mass, active gravitational mass, and passive gravitational mass. The position of a mass corresponds to a solution of the problem, and its gravitational and inertial masses are determined using a fitness function. In other words, each mass presents a solution, and the algorithm is navigated by properly adjusting the gravitational and inertial masses. With the lapse of time, we expect the masses to be attracted by the heaviest mass, which will present an optimum solution in the search space. The RGES can be considered as an isolated system of masses, like a small artificial world of masses obeying the Newtonian laws of gravitation and motion. More precisely, masses obey the following laws:

Law of gravity: each particle attracts every other particle, and the gravitational force between two particles is directly proportional to the product of their masses and inversely proportional to the distance between them, R. We use R here instead of R^2 because, according to our experimental results, R provides better results than R^2 in all experimental cases.

Law of motion: the current velocity of any mass is equal to the sum of a fraction of its previous velocity and the variation in the velocity. The variation in the velocity, or acceleration, of any mass is equal to the force acting on the system divided by the mass of inertia.

A. Initiation

Now, consider a system with N agents (masses). We define the position of the ith agent by:

X_i = (x_i^1, x_i^2, \ldots, x_i^d, \ldots, x_i^n) \quad \text{for } i = 1, 2, \ldots, N,   (9)

where x_i^d represents the position of the ith agent in the dth dimension. At a specific time t, we define the force acting on mass i from mass j as follows:

F_{ij}^{d}(t) = G(t)\,\frac{M_{pi}(t) \times M_{aj}(t)}{R_{ij}(t) + \epsilon}\,\left(x_j^{d}(t) - x_i^{d}(t)\right),   (10)

where M_{aj} is the active gravitational mass related to agent j, M_{pi} is the passive gravitational mass related to agent i, G(t) is the gravitational constant at time t, \epsilon is a small constant, and R_{ij}(t) is the Euclidean distance between the two agents i and j:

R_{ij}(t) = \|X_i(t), X_j(t)\|_2,   (11)

To give a stochastic characteristic to our algorithm, we suppose that the total force acting on agent i in dimension d is a randomly weighted sum of the dth components of the forces exerted by the other agents:

F_i^{d}(t) = \sum_{j=1, j \neq i}^{N} rand_j\, F_{ij}^{d}(t),   (12)

where rand_j is a random number in the interval [0, 1]. Hence, by the law of motion, the acceleration of agent i at time t in the dth direction, a_i^d(t), is given as follows:

a_i^{d}(t) = \frac{F_i^{d}(t)}{M_{ii}(t)},   (13)

where M_{ii} is the inertial mass of the ith agent. Furthermore, the next velocity of an agent is considered as a fraction of its current velocity added to its acceleration. Therefore, its position and its velocity can be calculated as follows:

v_i^{d}(t+1) = rand_i \times v_i^{d}(t) + a_i^{d}(t),   (14)

x_i^{d}(t+1) = x_i^{d}(t) + v_i^{d}(t+1),   (15)

where rand_i is a uniform random variable in the interval [0, 1]. We use this random number to give a randomized characteristic to the search. The gravitational constant, G, is initialized at the beginning and is reduced with time to control the search accuracy. In other words, G is a function of the initial value G_0 and time t:

G(t) = G(G_0, t),   (16)
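The movement equations (10)-(15), together with a decaying gravitational constant in the spirit of Eq. (6), can be sketched as one update step in Python (an assumed illustration on our part, not the authors' C implementation; all function names are ours):

```python
import math
import random

def g_decay(G0, t0, t, beta=0.5):
    # Eq. (6)-style decay of the gravitational "constant" over time (beta < 1).
    return G0 * (t0 / t) ** beta

def euclidean(xi, xj):
    # R_ij of Eq. (11): Euclidean distance between agents i and j.
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(xi, xj)))

def update_agents(X, V, M, G, eps=1e-9):
    # One RGES movement step. X, V: N x d positions/velocities; M: the
    # normalized masses of Eq. (19) (active = passive = inertial, Eq. (17)).
    N, d = len(X), len(X[0])
    newX = [row[:] for row in X]
    for i in range(N):
        for k in range(d):
            # Eq. (12): randomly weighted sum of the pairwise forces of
            # Eq. (10); note R, not R^2, in the denominator, as prescribed.
            F = sum(random.random() * G * M[i] * M[j]
                    * (X[j][k] - X[i][k]) / (euclidean(X[i], X[j]) + eps)
                    for j in range(N) if j != i)
            acc = F / (M[i] + eps)                     # Eq. (13)
            V[i][k] = random.random() * V[i][k] + acc  # Eq. (14)
            newX[i][k] = X[i][k] + V[i][k]             # Eq. (15)
    return newX, V
```

With two agents, for example, the lighter one drifts toward the heavier one, since the force pulling it is weighted by the heavier agent's mass.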



B. Evaluation of Fitness and Updating

Gravitational and inertial masses are simply calculated by the fitness evaluation. A heavier mass means a more efficient agent. This means that better agents have higher attractions and walk more slowly. Assuming the equality of the gravitational and inertial mass, the values of the masses are calculated using the map of fitness. We update the gravitational and inertial masses by the following equations:

Mai = Mpi = Mii = Mi, i = 1, 2, 3, ..., N, (17)

m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)},   (18)

M_i(t) = \frac{m_i(t)}{\sum_{j=1}^{N} m_j(t)},   (19)

where fit_i(t) represents the fitness value of agent i at time t, and worst(t) and best(t) are defined as follows for a minimization problem:

best(t) = \min_{j \in \{1, \ldots, N\}} fit_j(t),   (20)

worst(t) = \max_{j \in \{1, \ldots, N\}} fit_j(t),   (21)

One way to achieve a good compromise between exploration and exploitation is to reduce the number of agents in Eq. (12) with the lapse of time. Hence, we propose that only a set of agents with bigger mass apply their force to the others. However, we should be careful when using this policy, because it may reduce the exploration power and increase the exploitation capability. We note that, in order to avoid becoming trapped in a local optimum, the algorithm must use exploration at the beginning. With the lapse of iterations, exploration must fade out and exploitation must fade in. To improve the performance of RGES by controlling exploration and exploitation, only the Kbest agents attract the others. Kbest is a function of time, with the initial value K_0 at the beginning, decreasing with time. In such a way, at the beginning, all agents apply force, and as time passes Kbest is decreased linearly, until at the end there is just one agent applying force to the others. Therefore, Eq. (12) can be modified as:

F_i^{d}(t) = \sum_{j \in Kbest, j \neq i} rand_j\, F_{ij}^{d}(t),   (22)

where Kbest is the set of the first K agents with the best fitness values and the biggest masses.
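Eqs. (17)-(22) translate directly into code. The sketch below (an assumed Python illustration on our part, not the authors' implementation) computes the normalized masses for a minimization problem and selects the Kbest heaviest agents allowed to exert force:

```python
def update_masses(fit):
    # Eqs. (20)-(21): best/worst fitness for a minimization problem,
    # then Eq. (18) (raw masses m_i) and Eq. (19) (normalization to M_i).
    best, worst = min(fit), max(fit)
    if best == worst:                 # degenerate case: all agents equally fit
        return [1.0 / len(fit)] * len(fit)
    m = [(f - worst) / (best - worst) for f in fit]
    total = sum(m)
    return [mi / total for mi in m]

def kbest_indices(M, K):
    # The set Kbest of Eq. (22): indices of the K heaviest agents.
    return sorted(range(len(M)), key=lambda i: M[i], reverse=True)[:K]

fits = [429, 435, 512, 430]           # SCP costs: smaller is better
M = update_masses(fits)
print(kbest_indices(M, 2))            # [0, 3]: the two cheapest solutions
```

Note that the worst agent receives mass exactly zero, so it contributes no attraction in that iteration.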

C. Repair Operator

The solutions (agents) may violate the constraints. To make all the solutions feasible, an additional operator is needed. The proposed heuristic operator consists of two phases, an ADD phase and a DROP phase, which maintain the feasibility of the solutions in the neighborhood being generated. The steps required to make each solution feasible involve the identification of all uncovered rows and the addition of columns such that all rows are covered. This is done by the ADD phase; once columns are added, a solution becomes feasible. The DROP phase (a local optimization procedure) is then applied to remove any redundant column, i.e., a column that can be removed while the solution still remains feasible. In the algorithm below, steps (i) and (ii) identify the uncovered rows and add least-cost columns to the solution vector, and step (iii) identifies the redundant columns with high cost and drops them from the solution. The time complexity of this repair operator is O(mn). The different steps of the repair operator are the following:

S_{1xn} = solution vector
B_{nxm} = transpose of the constraint matrix
D_{1xn} = temporary solution vector
C_{1xm} = counter vector (a 0 entry at any position is used to identify an uncovered row)

(i) C = S x B (matrix multiplication)
(ii) ADD phase:
  (a) For each 0 entry in C, find the first column j (costs of j in increasing order)
  (b) Add j to S, i.e., S(j) = 1
  (c) D = S (temporary)
(iii) DROP phase:
  (a) Identify the column j (costs in decreasing order)
  (b) Remove j from D if C = D x B has no zero entry, i.e., set D(j) = 0
  (c) S = D is a feasible solution for the SCP that contains no redundant columns

The different steps of the proposed RGES algorithm are the following:

(a) Search space identification.
(b) Randomized initialization.
(c) Repair operator.
(d) Fitness evaluation of agents.
(e) Update G(t), best(t), worst(t) and M_i(t) for i = 1, 2, ..., N.
(f) Calculation of the total force in different directions.
(g) Calculation of acceleration and velocity.
(h) Updating agents' positions.
(i) Repeat steps (c) to (h) until the stopping criterion is reached.
(j) End.
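The ADD/DROP repair operator of step (c) can be sketched as follows (a minimal Python sketch under assumed data structures, where rows[j] is the set of rows covered by column j; this is not the authors' matrix-multiplication implementation):

```python
def repair(S, rows, cost, m):
    # S: 0-1 solution list; rows[j]: set of rows covered by column j;
    # cost[j]: cost of column j; m: number of rows. Assumes every row is
    # coverable by at least one column (a feasible SCP instance).
    n = len(S)
    covered = set().union(*[rows[j] for j in range(n) if S[j]])
    # ADD phase: cover each uncovered row with the cheapest covering column.
    for i in range(m):
        if i not in covered:
            j = min((j for j in range(n) if i in rows[j]),
                    key=lambda j: cost[j])
            S[j] = 1
            covered |= rows[j]
    # DROP phase: remove redundant columns, most expensive first.
    for j in sorted([j for j in range(n) if S[j]], key=lambda j: -cost[j]):
        rest = set().union(*[rows[k] for k in range(n) if S[k] and k != j])
        if rest >= covered:           # still a full cover without column j
            S[j] = 0
    return S

rows = [{0}, {1}, {0, 1}, {2}]        # toy instance (invented)
cost = [2, 3, 4, 1]
print(repair([1, 1, 1, 0], rows, cost, 3))   # [1, 1, 0, 1]: col 3 is redundant
```

Recomputing the residual cover for each candidate drop, as here, costs more than the O(mn) counter-vector scheme of the paper; the counter C makes the redundancy test a constant-time decrement check per row.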

V. EXPERIMENTAL RESULTS AND ANALYSIS

This heuristic was tested on 65 SCP test instances from Beasley's OR-Library. The instances are divided into 11 sets as in Table I, in which 'Density' is the percentage of non-zero entries in the SCP matrix. Each of types 4 and 5 has 10 instances; each of types 6 and A-H has five instances. Problem sets 4-6 and A-D are the ones for which optimal solution values are known. Problem sets E, F, G and H are large-size SCPs for which optimal solution values are not known.

In our experimental study, 10 trials of the RGES heuristic were made for each of the test problems. In all cases, the population size and dimension are set to n and the maximum number of iterations is 1000. The algorithm was implemented in C and tested on a P-IV,



TABLE I
TEST PROBLEM DETAILS

Problem set   Number of rows   Number of columns   Density (%)   Number of problems
4             200              1000                2             10
5             200              2000                2             10
6             200              1000                5             5
A             300              3000                2             5
B             300              3000                5             5
C             400              4000                2             5
D             400              4000                5             5
E             500              5000                10            5
F             500              5000                20            5
G             1000             10000               2             5
H             1000             10000               5             5

3.2 GHz processor with 512 MB RAM running under Windows XP. Table II exhibits the computational results with the following details:

Instance: The name of the test problem as it appears in Beasley's OR-Library, the first digit/letter indicating the name of the problem set and the following digit the problem number.

Opt: The number of trials out of 10 in which the RGES found the optimum solution / best known value.

Best: The number of trials out of 10 in which the RGES found its best solution. It is worth mentioning that, for the problems for which optimal solutions are available in the literature, the best solutions of the proposed algorithm are equal to the optimum solutions.

Average Execution Time: Average execution time of the RGES algorithm over the 10 trials.

Mean, Min, Max: The mean, minimum and maximum objective values returned in the 10 trials (the Val columns) and the respective percentages above the optimal value (the pct columns).

We can observe that the RGES found optimal solutions for all the instances. For 55 of the problems the RGES found the optimal solution / best known solution in every trial. The heuristic returns consistent solutions for smaller-size problems; for large-size problems it returns solutions that vary slightly, but are close to each other in objective value. The average gap between the RGES solution and the optimum / best known value is 0.015.

In order to bring out the efficiency of the proposed RGES algorithm, the solutions of the same set of test instances have been compared with those of other heuristic and meta-heuristic algorithms (simulated annealing, genetic algorithm, Lagrangian heuristic, greedy, 3-flip neighborhood). Table III provides a summary of the solution quality for these different heuristics, namely the average gap (average = (solution - BKS)/BKS x 100), the number of optimum solutions and the number of best solutions. RGES found the optimal / best-known solutions for all 65 test instances. From this table, we can observe that RGES, CFT, and Meta-RaPS have zero deviation from the best-known or optimal solutions for these test problems. The abbreviations mentioned in Table III stand for:

BJT: simulated annealing by Brusco, Jacobs and Thompson [8]; BC: genetic algorithm by Beasley and Chu [7]; Be: the Lagrangian heuristic by Beasley [5]; Grdy: greedy heuristic for the set-covering problem [12]; CNS: Lagrangian-based heuristic by Ceria, Nobili and Sassano [11]; CFT: Lagrangian-based heuristic by Caprara, Fischetti and Toth [10]; MMT: 3-flip neighborhood local search by Mutsunori Yagiura, Masahiro Kishida and Toshihide Ibaraki [34]; Meta-RaPS: an effective and simple heuristic approach by Guanghui, Gail and Gary [17].

TABLE II
PERFORMANCE OF RGES ALGORITHM

Ins    opt  best  Avg Exec  Mean Val  pct    Min Val  pct  Max Val  pct
                  Time
4.1    10   10    189.5     429       0.0    429      0.0  429      0.0
4.2    10   10    182.0     512       0.0    512      0.0  512      0.0
4.3    10   10    179.6     516       0.0    516      0.0  516      0.0
4.4    10   10    188.2     494       0.0    494      0.0  494      0.0
4.5    10   10    183.8     512       0.0    512      0.0  512      0.0
4.6    10   10    185.0     560       0.0    560      0.0  560      0.0
4.7    10   10    185.9     430       0.0    430      0.0  430      0.0
4.8    10   10    181.1     492       0.0    492      0.0  492      0.0
4.9    10   10    189.3     641       0.0    641      0.0  641      0.0
4.10   10   10    184.6     514       0.0    514      0.0  514      0.0
5.1    10   10    195.7     253       0.0    253      0.0  253      0.0
5.2    10   10    194.0     302       0.0    302      0.0  302      0.0
5.3    10   10    198.3     226       0.0    226      0.0  226      0.0
5.4    10   10    192.0     242       0.0    242      0.0  242      0.0
5.5    10   10    199.2     211       0.0    211      0.0  211      0.0
5.6    10   10    193.8     213       0.0    213      0.0  213      0.0
5.7    10   10    194.7     293       0.0    293      0.0  293      0.0
5.8    10   10    197.9     288       0.0    288      0.0  288      0.0
5.9    10   10    198.0     279       0.0    279      0.0  279      0.0
5.10   10   10    191.6     265       0.0    265      0.0  265      0.0
6.1    10   10    190.4     138       0.0    138      0.0  138      0.0
6.2    10   10    187.5     146       0.0    146      0.0  146      0.0
6.3    10   10    193.7     145       0.0    145      0.0  145      0.0
6.4    10   10    194.0     131       0.0    131      0.0  131      0.0
6.5    10   10    188.8     161       0.0    161      0.0  161      0.0
A1     10   10    207.8     253       0.0    253      0.0  253      0.0
A2     10   10    210.0     252       0.0    252      0.0  252      0.0
A3     10   10    204.1     232       0.0    232      0.0  232      0.0
A4     10   10    208.9     234       0.0    234      0.0  234      0.0
A5     10   10    206.6     236       0.0    236      0.0  236      0.0
B1     10   10    211.1     69        0.0    69       0.0  69       0.0
B2     10   10    207.2     76        0.0    76       0.0  76       0.0
B3     10   10    209.8     80        0.0    80       0.0  80       0.0
B4     10   10    213.0     79        0.0    79       0.0  79       0.0
B5     10   10    205.4     72        0.0    72       0.0  72       0.0
C1     10   10    222.2     227       0.0    227      0.0  227      0.0
C2     10   10    226.0     219       0.0    219      0.0  219      0.0
C3     10   10    215.9     243       0.0    243      0.0  243      0.0
C4     10   10    228.6     219       0.0    219      0.0  219      0.0
C5     10   10    224.8     215       0.0    215      0.0  215      0.0
D1     10   10    219.4     60        0.0    60       0.0  60       0.0
D2     10   10    225.0     66        0.0    66       0.0  66       0.0
D3     10   10    227.7     72        0.0    72       0.0  72       0.0
D4     10   10    224.1     62        0.0    62       0.0  62       0.0
D5     10   10    228.0     61        0.0    61       0.0  61       0.0
E1     10   10    229.9     29        0.0    29       0.0  29       0.0
E2     10   10    237.9     30        0.0    30       0.0  30       0.0
E3     10   10    228.5     27        0.0    27       0.0  27       0.0
E4     10   10    231.4     28        0.0    28       0.0  28       0.0
E5     10   10    234.8     28        0.0    28       0.0  28       0.0
F1     10   10    230.6     14        0.0    14       0.0  14       0.0
F2     10   10    234.1     15        0.0    15       0.0  15       0.0
F3     10   10    235.8     14        0.0    14       0.0  14       0.0
F4     10   10    231.5     14        0.0    14       0.0  14       0.0
F5     10   10    236.0     13        0.0    13       0.0  13       0.0
G1     8    10    268.7     176.4     0.004  176      0.0  179      0.03
G2     9    10    255.3     154.1     0.001  154      0.0  155      0.01
G3     9    10    264.9     166.2     0.002  166      0.0  168      0.02
G4     10   10    268.5     168       0.0    168      0.0  168      0.0
G5     9    10    272.4     168.1     0.001  168      0.0  169      0.01
H1     10   10    266.0     63        0.0    63       0.0  63       0.0
H2     10   10    259.6     63        0.0    63       0.0  63       0.0
H3     8    10    261.8     59.2      0.002  59       0.0  60       0.01
H4     9    10    267.4     58.1      0.001  58       0.0  59       0.01
H5     10   10    273.0     55        0.0    55       0.0  55       0.0

VI. FEATURES OF THE ALGORITHM

To see how the proposed algorithm is efficient, some remarks are noted. Since each agent can observe the performance of the others, the gravitational force is an information-transferring tool. Due to the force acting on an agent from its neighborhood agents, it can see the space around itself. A heavy mass has



TABLE III
SUMMARIZED RESULTS FOR THE SOLUTION QUALITY

Prob set          BJT    BC     Be     Grdy   CFT    Meta-RaPS  RGES
4                 0.00   0.00   0.06   3.78   0.00   0.00       0.00
5                 0.00   0.09   0.18   5.51   0.00   0.00       0.00
6                 0.00   0.00   0.56   7.72   0.00   0.00       0.00
A                 0.00   0.00   0.82   5.61   0.00   0.00       0.00
B                 0.00   0.00   0.81   5.57   0.00   0.00       0.00
C                 0.00   0.00   1.93   6.88   0.00   0.00       0.00
D                 0.00   0.00   2.75   9.79   0.00   0.00       0.00
E                 0.00   0.00   3.5    12.75  0.00   0.00       0.00
F                 0.00   0.00   7.16   12.98  0.00   0.00       0.00
G                 0.13   0.13   4.83   8.49   0.00   0.00       0.00
H                 0.32   0.63   8.12   11.78  0.00   0.00       0.00
Overall gap       0.04   0.07   2.36   8.21   0.00   0.00       0.00
Total             65     65     65     65     65     65         65
Opt/best in at
least one trial   65     61     22     0      65     65         65

a large effective attraction radius and hence a great intensity of attraction. Therefore, agents with a higher performance have a greater gravitational mass. As a result, the agents tend to move toward the best agent. The inertial mass acts against the motion and makes the mass movement slow. Hence, agents with heavy inertial mass move slowly and search the space more locally; it can thus be considered as an adaptive learning rate. The gravitational constant adjusts the accuracy of the search, so it decreases with time (similar to the temperature in a simulated annealing algorithm). RGES is a memory-less algorithm; however, it works as efficiently as algorithms with memory. Our experimental results show the good convergence rate of the RGES. Here, we assume that the gravitational and the inertial masses are the same; however, for some applications different values for them can be used. A bigger inertial mass provides a slower motion of agents in the search space and hence a more precise search. Conversely, a bigger gravitational mass causes a higher attraction of agents; this permits a faster convergence.

VII. CONCLUSION

A feasibility-operator-based heuristic for the large-size set covering problem, built on RGES, has been developed. Randomization enables the algorithm to escape from local optima and paves a way to finding optimal solutions. Computational results indicate that our heuristic is able to generate optimal solutions for small-size problems in less time. For large-size problems the deviations from the optimal solutions are very small, well below the deviations obtained by other existing algorithms.


International Journal of Computational and Mathematical Sciences 4:5 2010


A new heuristic approach for the large-scale generalized assignment problem

S. Raja Balachandar and K. Kannan

Abstract—This paper presents a heuristic approach to solve the Generalized Assignment Problem (GAP), which is NP-hard. Many researchers have developed algorithms for identifying the redundant constraints and variables in linear programming models; some of these algorithms use the intercept matrix of the constraints to identify redundant constraints and variables prior to the start of the solution process. Here a new heuristic approach, based on the dominance property of the intercept matrix, is proposed to find optimal or near optimal solutions of the GAP. In this heuristic, redundant variables of the GAP are identified by applying the dominance property of the intercept matrix repeatedly. The heuristic is tested on 90 benchmark problems of sizes up to 4000, taken from the OR-Library, and the results are compared with the optimum solutions. The computational complexity of solving the GAP using this approach is proved to be O(mn^2). The performance of our heuristic is compared with the best state-of-the-art heuristic algorithms with respect to the quality of the solutions. The encouraging results, especially for the relatively large test problems, indicate that this heuristic approach can successfully be used for finding good solutions of highly constrained NP-hard problems.

Keywords—Combinatorial Optimization Problem, Generalized Assignment Problem, Intercept Matrix, Heuristic, Computational Complexity, NP-Hard Problems.

I. INTRODUCTION

The generalized assignment problem (GAP) is a well-known NP-hard [28] combinatorial optimization problem. It finds the maximum profit or minimum cost assignment of n jobs to m agents such that each job is assigned to exactly one agent and the capacity of no agent is exceeded. Many real life applications can be modeled as a GAP, e.g., resource scheduling, allocation of memory spaces, design of communication networks with capacity constraints for each network node, assigning software development tasks to programmers, assigning jobs to computers in a network, vehicle routing problems, and others. Several algorithms (exact and heuristic) that can effectively solve the GAP have been cited and compared as benchmarks many times in the literature. In this paper, we propose a heuristic algorithm based on the dominance principle to solve the GAP. A dominance principle based heuristic algorithm has previously been implemented successfully to solve the 0-1 multi-constrained knapsack problem [37]. This heuristic is used here in the first stage to find an optimal or near optimal solution to the GAP; the second stage improves the near

S. Raja Balachandar is with the Department of Mathematics, SASTRA University, Thanjavur, India, e-mail: [email protected]

K. Kannan is with the Department of Mathematics, SASTRA University, Thanjavur, India, e-mail: [email protected]

optimal solution by using another heuristic, namely the column dominant principle and the row dominant principle.

This paper is organized as follows: Section II explains

the definition of the GAP. A brief survey of work by various researchers pertaining to this problem is given in Section III. The dominant principle based heuristic and its computational complexity are presented in Section IV. The algorithm's utility is illustrated with the help of benchmark problems in Section V, where we also furnish the results obtained for all the benchmark problems. An extensive comparative study of our heuristic with other heuristic approaches and the salient features of this algorithm are enumerated in Section VI; finally, concluding remarks are given in Section VII.

II. GENERALIZED ASSIGNMENT PROBLEM (GAP)

Let I = {1, 2, ..., m} be a set of agents, and let J = {1, 2, ..., n} be a set of jobs. For i in I, j in J, define c_ij as the cost (profit) of assigning job j to agent i, a_ij as the resource required by agent i to perform job j, and b_i as the resource availability (capacity) of agent i. Also, x_ij is a 0-1 variable that equals 1 if agent i performs job j and 0 otherwise. The mathematical formulation of the GAP is:

Maximize  sum_{i in I} sum_{j in J} c_ij x_ij        (1)

subject to the constraints

sum_{j in J} a_ij x_ij <= b_i,   i in I              (2)

sum_{i in I} x_ij = 1,   j in J                      (3)

x_ij in {0, 1},   i in I, j in J                     (4)

(3) ensures that each job is assigned to exactly one agent, and (2) ensures that the total resource requirement of the jobs assigned to an agent does not exceed the capacity of that agent.
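To make the formulation (1)-(4) concrete, here is a small illustrative Python sketch (ours, not from the paper) that enumerates every assignment of a toy 2-agent, 3-job instance: choosing exactly one agent per job enforces (3), and the capacity check enforces (2).

```python
from itertools import product

def solve_gap_brute(profit, resource, capacity):
    """Enumerate all 0-1 assignments with each job given to exactly one
    agent (constraint (3)) and agent capacities respected (constraint (2));
    return the maximum total profit and the winning assignment."""
    m, n = len(profit), len(profit[0])
    best, best_assign = None, None
    # assign[j] = the agent that performs job j, which enforces (3).
    for assign in product(range(m), repeat=n):
        load = [0] * m
        for j, i in enumerate(assign):
            load[i] += resource[i][j]
        if all(load[i] <= capacity[i] for i in range(m)):   # constraint (2)
            value = sum(profit[i][j] for j, i in enumerate(assign))
            if best is None or value > best:
                best, best_assign = value, assign
    return best, best_assign

# Toy instance (invented for illustration).
profit   = [[6, 4, 5],
            [3, 7, 2]]
resource = [[2, 3, 2],
            [3, 2, 4]]
capacity = [4, 3]
print(solve_gap_brute(profit, resource, capacity))
```

For m = 2 and n = 3 this enumerates only 2^3 = 8 candidate assignments; the exponential growth of this enumeration with n is exactly why the paper pursues a polynomial heuristic instead.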

III. PREVIOUS WORK

There are many exact algorithms and heuristics developed to solve the GAP. Existing exact algorithms include branch and bound, branch-and-cut and branch-and-price algorithms [38,33,39]. Ross and Soland [38] proposed a new branch and bound algorithm in 1975. Savelsbergh [39] introduced the branch and price approach in 1997. To improve the Lagrangian lower bound in his

International Journal of Computational and Mathematical Sciences 3:8 2009


algorithm, Nauss [33] combined several ideas with the cuts suggested by Gottlieb and Rao [11] and demonstrated the performance of this algorithm on GAP instances of up to 3000 binary variables. In 2006, he discussed the latest integer programming based algorithms for the GAP [34]. However, exact algorithms require substantial computation time to find optimal solutions of large GAP instances. To circumvent this computational difficulty, several researchers started designing computationally attractive heuristic algorithms to find optimal or near optimal solutions. Heuristic algorithms are designed to produce near optimal solutions for larger problem instances. Some heuristics use the linear programming relaxation [44]. In the last decade, several algorithms including the Lagrangian relaxation (LR) method [9] have been developed. Narciso and Lorena [32] proposed combining LR with surrogate relaxation in 1999. Haddadi [14] applied the Lagrangian decomposition method to solve the GAP in 1999. Haddadi and Ouzia [15,16] integrated LR and subgradient methods in branch and bound schemes in 2001 and 2004. M.A.S. Monfared [31] established in 2006 that the augmented Lagrangian method (neural based combinatorial optimization) can produce superior results with respect to feasibility and integrality. V. Jeet and E. Kutanoglu [20] combined LR, subgradient optimization, and problem space search techniques to solve the GAP in 2007. Others use search techniques such as genetic algorithms, tabu search algorithms and simulated annealing in their meta-heuristic approaches to solve the large benchmark GAP instances [3] available in the literature. Osman [35] introduced simulated annealing methods to solve the GAP. Tabu search based heuristic algorithms have been used by various researchers to solve the GAP [18,8,41,25]. Chu and Beasley [7] presented a genetic algorithm (GA) based heuristic for solving the GAP in 1996 and showed that the genetic algorithm heuristic performs well. Harald Feltl [17] introduced a hybrid genetic algorithm, an improved version of the Chu and Beasley algorithm, in 2004. Yagiura et al. [45,46] designed an algorithm based on path relinking combined with an ejection chain neighbourhood approach and solved a class of GAP instances. Cattrysse and Wassenhove [5] present an extensive survey of algorithms for the GAP published until 1992. Amini and Racer [1] present a computational comparison of alternative solution methods. More examples of heuristic algorithms for the GAP can be found in [7,5,1,13,21,23,26,27,28]. A comprehensive review of exact and heuristic algorithms is given in [20].

IV. DOMINANCE PRINCIPLE (DP)

Linear programming (LP) is one of the most important techniques used in modeling and solving practical optimization problems that arise in industry, commerce and management. Linear programming problems are mathematical models used to represent real life situations in the form of a linear objective function and constraints. Various methods are available to solve linear programming problems. When formulating an LP model, systems analysts and researchers often include all possible constraints and variables, although some of them may not be binding at the optimal solution. The presence of redundant constraints and variables does not alter

the optimum solution(s), but may consume extra computational effort. Many researchers have proposed algorithms for identifying the redundant constraints and variables in LP models [2,4,12,19,22,24,29,30,40,42,43]. Paulraj et al. [36] used the intercept matrix of the constraints to identify redundant constraints prior to the start of the solution process in their heuristic approach to solving an LP model.

The GAP is a well known 0-1 integer programming problem.

Since it is possible to use the dominance principle in integer programming problems also, we use the intercept matrix of the constraints (2) to identify the variables of value 1 and 0. The variables of value 0 are known as redundant variables. If the elements of the intercept matrix are arranged in decreasing order, the leading element becomes the dominant variable with value 1, and it provides an optimum or near optimum solution of the GAP. This process of identifying the leading element of the intercept matrix is known as the dominant principle. The dominant principle lets the resource rows with lower requirements come forward to maximize the profit. The intercept matrix of the constraints (2) plays a vital role in achieving this goal in a heuristic manner.

The dominant principle for this problem can be divided into

three categories, namely constraint dominance, column dominance and row dominance. For the constraint dominant principle, an intercept matrix is constructed by dividing the right hand side of the constraints by the corresponding coefficients of the constraints in (2) of Section II. First, we initialize the solution vector with zero values for all the unknowns (step (a)). Next we construct the intercept matrix and identify the redundant variables through steps (b) to (e). The values corresponding to the column minima (d_kj) are multiplied by the corresponding cost coefficients (c_kj), and the maximum of these products, max_j { c_kj d_kj }, is chosen.
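The intercept matrix itself is a one-line computation. The following Python fragment is an illustrative sketch (ours, not the paper's C code); BIG_M stands in for the "large value" M, and the tiny data are invented for the example.

```python
BIG_M = 1e9  # stands in for the large value M assigned when a_ij = 0

def intercept_matrix(a, b):
    """d[i][j] = b[i] / a[i][j] when a[i][j] > 0, else a large M.
    An intercept below 1 means job j no longer fits agent i's
    remaining capacity, so that variable is redundant (value 0)."""
    return [[b[i] / a[i][j] if a[i][j] > 0 else BIG_M
             for j in range(len(a[0]))]
            for i in range(len(a))]

# Invented 2-agent, 3-job data: rows are agents, columns are jobs.
a = [[2, 3, 2],
     [3, 2, 4]]
b = [4, 3]
D = intercept_matrix(a, b)
print(D)
```

Here D[1][2] = 3/4 < 1, so x_23 would be flagged redundant: job 3 cannot fit agent 2's capacity.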

If the maximum product falls at the (r, k) entry of the intercept matrix, then the corresponding x_rk assumes the value 1. Next we update the availability (the right hand side column vector) using the relation b_r = b_r - a_rk, and then the coefficients a_ik of the constraints are replaced by 0 for all i. This process is repeated n times.

The column dominant principle is used to improve the objective

function value by reassigning row i to column j with higher profit. Step (h) searches, for each column, for the dominant i that satisfies constraints (2) and (3) while maximizing the profit.

The row dominant principle is used to improve the objective

function value by reassigning column j to row i with higher profit. Step (i), for each i, searches for the maximum profit that satisfies constraints (2) and (3).

We present below the heuristic algorithm for solving the GAP using the dominance principle approach.

(a) Initialize the solution by assigning 0 to all x_ij.

(b) Intercept matrix D: d_ij = b_i / a_ij if a_ij > 0; d_ij = M, a large value, otherwise.

(c) Identify 0 value (redundant) variables: if any column has an entry < 1 in D, then the corresponding variable is identified as a redundant variable.


(d) Dominant variable: identify the smallest element (dominant variable) in each column of D.

(e) Multiply the smallest elements by the corresponding cost coefficients. If the product is maximum in the rth row and kth column, then set x_rk = 1 and update the objective function value: Z = Z + c_rk.

(f) Update the constraint matrix: b_r = b_r - a_rk, and set a_ik = 0 for i = 1 to m.

(g) If a_ij = 0 for all i and j, then go to step (h); otherwise go to step (b).

(h) Column dominant principle (to improve the current solution): for each j (1 to n), let i* be the row with x_{i*,j} = 1; identify a row i, i = 1 to m, satisfying both c_ij > c_{i*,j} and sum_{l=1..n} a_il x_il + a_ij <= b_i. If such an i can be found, then set x_ij = 1 and x_{i*,j} = 0.

(i) Row dominant principle (to improve the current solution): for each i (m to 1), let j* be a column with x_{i,j*} = 1; identify a column j satisfying both c_ij > c_{i,j*} and sum_{l=1..n} a_il x_il + a_ij - a_{i,j*} <= b_i. If such a j can be found, then set x_ij = 1 and x_{i,j*} = 0 (column j* is then reassigned so that sum_i x_{i,j*} = 1 remains satisfied).

(j) The objective function value is Z = sum_{i=1..m} sum_{j=1..n} c_ij x_ij.

Theorem 1. DPH runs in O(mn^2) time, polynomial in the number of item types and constraints.

Proof: The worst-case complexity of solving a GAP using DPH can be obtained as follows. Assume that there are n jobs and m constraints. The initialization (step (a)) requires O(mn) running time. The formation of the D matrix involves, in each of n iterations, the identification of less-than-one entries in each column, finding the smallest intercept in each column, identifying rows which contain more than one smallest intercept, and updating the constraint matrix A. Since there are m constraints, steps (b), (c), (d), (e) and (g) require O(mn) running time each. Step (f) requires O(n) operations to multiply the costs by the corresponding smallest intercepts and to update the corresponding row of the constraint matrix. The number of iterations required for carrying out all these operations in DPH is n. Steps (h) and (i) are performed only once, in that order; they attempt to improve the objective function value by reassigning jobs to agents with greater profit, and take O(mn) operations each. Hence, the heuristic has a complexity of O(mn^2).

We illustrate this procedure by solving the generalized assignment problem given in [10] with m = 3 and n = 8. The iteration-wise report is presented in Table I. The DPH algorithm terminates the iterative process at the 8th iteration, since all the entries in the constraint matrix are then equal to zero. The objective function value is 232. Since steps (h) and (i) have no effect here, the corresponding columns of Table I are not shown.

TABLE I
ITERATION-WISE REPORT FOR THE GAP GIVEN IN [10]

Iteration | variables that assume 1 | variables that assume 0 | Objective function value
1 | x17 | x27, x37 | 41
2 | x25 | x15, x35 | 77
3 | x28 | x18, x38 | 111
4 | x14 | x24, x34 | 127
5 | x26 | x16, x36 | 152
6 | x32 | x12, x22 | 186
7 | x31 | x11, x21 | 220
8 | x13 | x23, x33 | 232

Consider the first problem in GAP 1 [3], with m = 5 and n = 15. The solution to the GAP follows.

Stage 1 (steps (a) to (g)): objective function value = 302. The variables x1,5 = x1,7 = x1,13 = x1,14 = x2,2 = x2,8 = x2,11 = x3,3 = x3,6 = x3,15 = x4,10 = x4,12 = x5,1 = x5,4 = x5,9 = 1 and all other variables = 0.

Stage 2 (step (h)): Table II shows the changes of the values of the variables under the column dominant principle. At the end of this stage DPH returns an objective function value of 316.

Stage 3 (step (i)): Table III shows the changes of the variables under the row dominant principle. Finally DPH gives the objective function value 336, the optimum one. The total number of iterations required is thus 22.
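The constraint dominant stage, steps (a) to (g), can be condensed into a short loop. The following Python sketch reflects our reading of those steps (the paper's own implementation is in C and is not reproduced here); the 2-agent, 3-job instance and all names are invented for illustration, and the improvement steps (h) and (i) are omitted.

```python
import copy

def dph_stage1(profit, resource, capacity):
    """Steps (a)-(g), constraint dominant principle (illustrative sketch).

    In each round: compute the intercepts b_i / a_ij, take the smallest
    feasible intercept in every open column (step (d)), pick the (row,
    column) whose intercept * profit is largest (step (e)), fix that
    variable to 1, then shrink the capacity and close the column (step (f)).
    """
    m, n = len(profit), len(profit[0])
    a = copy.deepcopy(resource)      # step (f) zeroes columns, so work on a copy
    b = list(capacity)
    x = [[0] * n for _ in range(m)]
    value = 0
    for _ in range(n):               # at most one assignment per job
        best = None                  # (score, row, col)
        for j in range(n):
            cand = [i for i in range(m) if a[i][j] > 0 and b[i] >= a[i][j]]
            if not cand:
                continue             # column closed, or job j fits nowhere
            i_min = min(cand, key=lambda i: b[i] / a[i][j])      # step (d)
            score = (b[i_min] / a[i_min][j]) * profit[i_min][j]  # step (e)
            if best is None or score > best[0]:
                best = (score, i_min, j)
        if best is None:
            break                    # step (g): nothing assignable remains
        _, r, k = best
        x[r][k] = 1
        value += profit[r][k]
        b[r] -= resource[r][k]       # step (f): update availability
        for i in range(m):
            a[i][k] = 0              # close column k
    return value, x

# Toy instance (ours, for illustration): 2 agents, 3 jobs.
profit   = [[6, 4, 5], [3, 7, 2]]
resource = [[2, 3, 2], [3, 2, 4]]
capacity = [4, 3]
print(dph_stage1(profit, resource, capacity))
```

On this toy instance the stage-1 loop assigns every job feasibly and returns an objective value of 18, so steps (h) and (i) would have nothing left to improve.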

TABLE II
CHANGES MADE BY COLUMN DOMINANT PRINCIPLE

No. | value 1 to 0 | value 0 to 1
1 | X5,1 | X2,1
2 | X2,2 | X5,2
3 | X3,3 | X4,3

TABLE III
CHANGES MADE BY ROW DOMINANT PRINCIPLE

No. | value 1 to 0 | value 0 to 1
1 | X5,2, X5,4 | X5,6
2 | X4,6 | X4,10
3 | X3,10 | X3,4
4 | - | X2,2

V. COMPUTATIONAL RESULTS

The DPH has been coded in the C language (DELL Core 2 Duo CPU, 1.60 GHz). The heuristic was first tested on a set of 12 small instance sets, namely GAP instances 1 to 12 (each set consists of 5 problems), used in [3], with sizes m x n for m = 5, 8, 10 and n = 15, 20, 24, 25, 30, 32, 40, 48, 50, 60. The heuristic was then tested on a set of 30 large scale instances, coded in MATLAB 7, with sizes m = 5, 10, 20 and n = 100, 200, 400. The data, divided into five classes A, B, C, D and E (each class consists of 6 problems), were obtained from the OR-Library [3]. Problems of classes A, B and C present increasing knapsacks. Classes D and E are specially designed minimization problems, the most difficult correlated ones; we considered types D and E as maximization problems to test our algorithm's performance. The results of this heuristic for the small instances [3] are listed in Table IV, and a detailed comparative study with other state-of-the-art algorithms is presented in Section VI. The results of DPH, TSDL, and RH for the large instances [32]


are presented in Table V. The percentage deviation of the DPH solution from the optimum/near optimum solutions has been calculated using the formula

PD = 100 * (optimal/best solution - DPH solution) / (optimal/best solution)    (5)
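As a quick arithmetic check of (5), using the stage-1 value 9010 and the optimum 9147 quoted for instance D1 elsewhere in this section:

```python
def percent_deviation(optimum, found):
    """Equation (5): percentage deviation of a solution from the optimum."""
    return 100.0 * (optimum - found) / optimum

# D1: stage-1 solution 9010 against the optimum 9147 -> about 1.5 percent.
print(round(percent_deviation(9147, 9010), 2))
```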

TABLE IV
DPH RESULTS FOR SMALL SIZE GAP

Problem set | m | n | Number of problems | N.O.P.T | A.P.O.D | average solution time
GAP 1 | 5 | 15 | 5 | 4 | 0.35 | 0.1
GAP 2 | 5 | 20 | 5 | 5 | 0 | 0.2
GAP 3 | 5 | 25 | 5 | 4 | 0.03 | 0.31
GAP 4 | 5 | 30 | 5 | 3 | 0.09 | 0.46
GAP 5 | 8 | 24 | 5 | 5 | 0 | 0.42
GAP 6 | 8 | 32 | 5 | 4 | 0.03 | 0.62
GAP 7 | 8 | 40 | 5 | 4 | 0.4 | 0.71
GAP 8 | 8 | 48 | 5 | 3 | 0.03 | 0.76
GAP 9 | 10 | 30 | 5 | 3 | 0.61 | 0.74
GAP 10 | 10 | 40 | 5 | 4 | 0.02 | 0.83
GAP 11 | 10 | 50 | 5 | 5 | 0 | 0.88
GAP 12 | 10 | 60 | 5 | 5 | 0 | 0.91

A.P.O.D = average percentage of deviation
N.O.P.T = number of problems for which the DPH finds the optimal solution

TABLE V
DPH RESULTS FOR LARGE SIZE GAP

prob | optimum*/best solution | DPH solution | TSDL | RH | PD | time
A 1 | 4456* | 4456* | 4456* | 4456* | 0 | 1.36
A 2 | 8788* | 8788* | 8788* | 8788* | 0 | 1.74
A 3 | 4700* | 4700* | 4700* | 4700* | 0 | 2.52
A 4 | 9413* | 9413* | 9413* | 9413* | 0 | 2.02
A 5 | 4857* | 4857* | 4857* | 4857* | 0 | 1.97
A 6 | 9666* | 9666* | 9666* | 9666* | 0 | 2.86
B 1 | 4026 | 4026 | 4026 | 4008 | 0 | 1.69
B 2 | 8502 | 8502 | 8505 | 8502 | 0 | 1.19
B 3 | 4633* | 4633* | 4633* | 4633* | 0 | 1.94
B 4 | 9255 | 9255 | 9255 | 9255 | 0 | 2.33
B 5 | 4817* | 4817* | 4817* | 4817* | 0 | 2.53
B 6 | 9682 | 9682 | 9682 | 9670 | 0 | 2.76
C 1 | 4411* | 4389 | 4411* | 4411* | 0.05 | 1.79
C 2 | 8347 | 8346 | 8346 | 8347 | 0.01 | 1.46
C 3 | 4535 | 4535 | 4535 | 4528 | 0 | 2.12
C 4 | 9258 | 9258 | 9258 | 9247 | 0 | 1.9
C 5 | 4790 | 4790 | 4790 | 4784 | 0 | 1.94
C 6 | 9625 | 9625 | 9625 | 9611 | 0 | 2.89
D 1 | 9147* | 9147* | 9147* | 9147* | 0 | 1.63
D 2 | 18750* | 18750* | 18750* | 18750* | 0 | 1.83
D 3 | 10349* | 10349* | 10349* | 10349* | 0 | 2.32
D 4 | 20562* | 20562* | 20562* | 20562* | 0 | 2.32
D 5 | 10839* | 10839* | 10839* | 10839* | 0 | 2.43
D 6 | 21733* | 21733* | 21733* | 21733* | 0 | 2.77
E 1 | 63228* | 63228* | - | - | 0 | 1.55
E 2 | 128648* | 128648* | - | - | 0 | 1.73
E 3 | 81054* | 81054* | - | - | 0 | 2.21
E 4 | 164317* | 164317* | - | - | 0 | 2.46
E 5 | 316844* | 316844* | - | - | 0 | 2.57
E 6 | 94432* | 94432* | - | - | 0 | 2.45

The first four columns of Table IV indicate the problem set, the number of constraints, the number of variables and the number of problems in the set. The next three columns report the DPH algorithm's performance: the total number of optimum solutions found by DPH, the average percentage deviation from the optimum solution, and the average solution time. It is clear from Table IV that DPH finds optimal or near optimal solutions in all 60 test problems and that the average solution time required by DPH is 0.13 seconds. The results of DPH, TSDL, and RH for the large instances [32] are presented in Table V. The first two columns of Table V indicate the name of the problem and the optimum or best solution. The next three columns give the DPH, TSDL, and RH solutions respectively. The percentage deviation of the DPH

solution from the optimum/best solution is presented in the fifth column. The last column indicates the solution time of DPH. Out of 30 large sized problems, 28 have reached the optimum/best solution. The DPH has given near optimum solutions for the remaining 2 problems, with errors of 0.05 and 0.01 percent. So the DPH has identified high quality solutions for the large instances also. The application of DPH to GAP D 5 x 100 (D1) is

shown in Fig. 1, with iterations versus objective function value. The maximum number of iterations is fixed at 100, and the algorithm is found to reach the best solution (9010). In further iterations the remaining two steps, (h) and (i), are executed, and the results are depicted in Fig. 2. The improved objective function value is 9147, the optimum one.

Fig. 1. Performance of the DPH algorithm on GAP D 5 x 100 (from step (a) to step (g))

Fig. 2. Performance of the DPH algorithm on GAP D 5 x 100 (steps (h) and (i))

As both the tables and the figures clearly demonstrate, the DPH is able to locate the optimal or near optimal point for all the test problems quickly. Our approach reduces the search space in finding the optimal/near optimal solutions of the GAP. The computational complexity is cubic and the space complexity is O(mn). DPH reaches the optimum or near optimum point in a small number of iterations, where the


maximum number of iterations is the number of variables. Our heuristic algorithm identifies the zero value variables quickly.

VI. COMPARISON WITH OTHER HEURISTICS

The comparative study of DPH with other existing heuristic algorithms (GA, FJ, MTB, RS, TS1, SPH) is furnished in Table VI in terms of the average deviation for each problem set, the average percentage deviation over all problems, the average percentage deviation of the best solutions, and the number of optimal/best solutions obtained (out of a total of 60) for each of the small-size problem sets. The computational times for the other algorithms are not given here, since it is difficult to compare different codes, CPU times on different hardware platforms, and algorithms with different stopping criteria. Our DPH obtains its results in a single execution, like FJ, MTB and SPH, and takes at most 1 second for the small-sized GAP problems; the other heuristics obtain their solutions from multiple executions. It can be observed that the proposed DPH heuristic performs best among all the heuristics in terms of solution quality, being capable of finding the optimal solutions for 49 out of 60 problems.

TABLE VI
SUMMARIZED RESULTS FOR THE SOLUTION QUALITY (SMALL SIZE)

prob set | DPH | GA | FJ | MTB | RS | TS1 | SPH
Gap 1 | 0.35 | 0 | 0 | 0 | 0 | 0 | 0.08
Gap 2 | 0 | 0 | 0 | 0 | 0 | 0.1 | 0.11
Gap 3 | 0.03 | 0 | 0 | 0 | 0 | 0 | 0.09
Gap 4 | 0.09 | 0 | 0.83 | 0.18 | 0 | 0.03 | 0.04
Gap 5 | 0 | 0 | 0.07 | 0 | 0 | 0 | 0.35
Gap 6 | 0.02 | 0.01 | 0.58 | 0.52 | 0.05 | 0.03 | 0.15
Gap 7 | 0.4 | 0 | 1.58 | 1.32 | 0.02 | 0 | 0
Gap 8 | 0.03 | 0.05 | 2.48 | 1.32 | 0.1 | 0.09 | 0.23
Gap 9 | 0.61 | 0 | 0.61 | 1.06 | 0.08 | 0.06 | 0.12
Gap 10 | 0.02 | 0.04 | 1.29 | 1.15 | 0.14 | 0.08 | 0.25
Gap 11 | 0 | 0 | 1.32 | 2.01 | 0.05 | 0.02 | 0
Gap 12 | 0 | 0.01 | 1.37 | 1.55 | 0.11 | 0.04 | 0.1
Average | 0.13 | 0.01 | 0.84 | 0.78 | 0.04 | 0.03 | 0.13
No. of opt and best known | 60 | 60 | 26 | 24 | 39 | 45 | 40

GA: genetic algorithm [7]; FJ: Fisher, Jaikumar and Van Wassenhove [10], branch-and-bound procedure with an upper CPU limit; MTB: Martello and Toth [28], branch-and-bound procedure with an upper CPU limit; RS: Osman [35], hybrid simulated annealing/tabu search; TS1: Osman [35], long-term tabu search with best-admissible selection; SPH: set partitioning heuristic [6].

For the large-size problem set, Narciso [32] and Tai-Hsi Wu [41] have run their programs on the maximization problems. The comparison between the DPH, TSDL and RH heuristics is reported in Table VII. It can be seen from Table VII that the RH and TSDL algorithms found 15 optimal solutions each, whereas DPH obtains 20 optimal solutions out of 30. Our DPH takes at most 3 seconds to reach optimum or near optimum solutions for both the large and the small GAP problems, whereas TSDL takes up to 172.911 units of CPU time to reach the best solution for B (20 x 200) [41] and RH takes up to 278.83 units of CPU time to complete some of the large problems reported in [32].

Features of DPH: the heuristic is used to reduce the search

space to find the near-optimal solutions of the GAP. The computational complexity is O(mn^2) and the space complexity

TABLE VII
SUMMARIZED RESULTS FOR THE SOLUTION QUALITY (LARGE SIZE)

problem set | DPH | TSDL | RH
Gap A | 0 | 0 | 0
Gap B | 0.00 | 0 | 0.1
Gap C | 0.01 | 0.01 | 0.09
Gap D | 0 | 0 | 0
Gap E | 0 | - | -
Average | 0.0003 | 0.002 | 0.05
Total no. of problems | 30 | 25 | 25
No. of optimum | 20 | 15 | 15
No. of best known | 8 | 6 | 2

TSDL: dynamic tabu tenure with long-term memory mechanism [41]; RH: Lagrangian/surrogate relaxation heuristic for generalized assignment problems [32].

is O(mn). It reaches the optimum or near optimum point in n + k (k < n) iterations, where n is the number of jobs (variables) and k is an integer with 0 <= k < n. Due to the dominance principles, this heuristic identifies the zero value variables instantaneously. The maximum CPU time is 1 second for the small problems and 3 seconds for the large problems. This shows that the DPH algorithm is an effective one.

VII. CONCLUSION

In this paper, a dominant principle based approach for tackling the NP-hard generalized assignment problem (GAP) has been presented. The heuristic has been tested on 90 benchmark instances and found to produce optimal or near optimal solutions for all the problems given in the literature. For the near optimal instances, the average percentage deviation of the DPH solution from the optimum solution is very small. The heuristic has complexity O(mn^2) and requires n + k (k < n) iterations to solve the GAP. The wide range of experimental data shows that the optimality achieved by this heuristic is almost 100 percent. The basic idea behind the proposed scheme may be explored to tackle other NP-hard problems as well.

ACKNOWLEDGMENT

The authors would like to thank the late Prof. T.R. Natesan, Anna University, Chennai, India, for his motivation towards the improvement of this paper.

REFERENCES[1] Amini, M.M., Racer, M, A rigorous comparison of alternative solution

methods for the generalized assignment problem, Management Science40, 868-890, 1994.

[2] Anderson, E.D. and K.D. Andersen, Presolving in linear programming.Math. Prog. Series B.,71:221-245, 1995.

[3] Beasley JE. OR-Library; Distributing Test Problems by Electronic Mail,Journal of Operational Research Society 41, 1069-1072, 1990.

[4] Brearley, A.L., G.Mitra and H.P Williams, Analysis of mathematicalprogramming problem prior to applying the simplex algorithm . Math.Prog., 8: 54-83, 1975.

[5] Cattrysse, D.G., Wassenhove, L.N.V., A survey of algorithms for the gen-eralized assignment problem. European Journal of Operational Research60, 260-272, 1992.

[6] Cattrysse, D., Salomon, M and Van Wassenhove, L.N., A set partitioningheuristic for the generalized problem. European Journal of OperationalResearch., 72, 167-174, 1994.

[7] Chu, P.C., Beasley, JE., A genetic algorithm for the generalized assign-ment problem. Computers and Operations Research 24, 17-23, 1997.

[8] Diaz, J.A., Fernandez, E., A tabu search heuristic for the generalizedassignment problem, European Journal of Operational Research 132, 22-38, 2001.

International Journal of Computational and Mathematical Sciences 3:8 2009

422

Page 13: LIST OF RESEARCH PAPERS PUBLISHED Raja Balachandar, S

[9] Fisher, M.L., The Lagrangian relaxation method for solving integer programming problems. Management Science 27, 1-18, 1981.
[10] Fisher, M.L., Jaikumar, R. and Van Wassenhove, L.N., A multiplier adjustment method for the generalized assignment problem. Mgmt Sci., 32, 1095-1103, 1986.
[11] Gottlieb, E.S., Rao, M.R., The generalized assignment problem: Valid inequalities and facets. Mathematical Programming 46, 31-52, 1990.
[12] Gondzio, J., Presolve analysis of linear programs prior to applying an interior point method. INFORMS J. Comput., 9:73-91, 1997.
[13] Guignard, M., Rosenwein, M.B., An improved dual based algorithm for the generalized assignment problem, Operations Research 37(4), 658-663, 1989.
[14] Haddadi, S., Lagrangian decomposition based heuristic for the generalized assignment problem. INFOR 37, 392-402, 1999.
[15] Haddadi, S., Ouzia, H., An effective Lagrangian heuristic for the generalized assignment problem. INFOR 39, 351-356, 2001.
[16] Haddadi, S., Ouzia, H., Effective algorithm and heuristic for the generalized assignment problem. European Journal of Operational Research 153, 184-190, 2004.
[17] Harald Feltl and Gunther R. Raidl, An improved hybrid genetic algorithm for the generalized assignment problem, SAC '04, Nicosia, Cyprus, March 14-17, 2004.
[18] Higgins, A.J., A dynamic tabu search for large-scale generalized assignment problems, Computers and Operations Research 28(10), 1039-1048, 2001.
[19] Ioslovich, I., Robust reduction of a class of large scale linear programs. SIAM J. Optimization, 12:262-282, 2002.
[20] Jeet, V., Kutanoglu, E., Lagrangian relaxation guided problem space search heuristics for generalized assignment problems, European Journal of Operational Research 182, 1039-1056, 2007.
[21] Jornsten, K., Nasberg, M., A new Lagrangian relaxation approach to the generalized assignment problem, European Journal of Operational Research 27, 313-323, 1986.
[22] Karwan, M.H., V. Lotfi, J. Telgen and S. Zionts, Redundancy in Mathematical Programming: A State-of-the-Art Survey (Berlin: Springer-Verlag), 1983.
[23] Klastorin, T.D., An effective subgradient algorithm for the generalized assignment problem, Computers and Operations Research 6, 155-164, 1979.
[24] Kuhn, H.W. and R.E. Quandt, An experimental study of the simplex method. In: Metropolis, N. et al. (Eds.), Proceedings of Symposia in Applied Mathematics. Providence, RI: Am. Math. Soc., 15:107-124, 1962.
[25] Laguna, M., Kelly, J.P., Gonzalez Velarde, J.L., Glover, F., Tabu search for the multilevel generalized assignment problem, European Journal of Operational Research 82, 176-189, 1995.
[26] Lorena, L.A.N., Narciso, M.G., Relaxation heuristics for a generalized assignment problem, European Journal of Operational Research 91, 600-610, 1996.
[27] Martello, S., Toth, P., An algorithm for the generalized assignment problem, Operational Research '81, ed. J.P. Brans. North-Holland, 589-603, 1981.
[28] Martello, S., Toth, P., Knapsack Problems: Algorithms and Computer Implementations, Wiley, New York, 1990.
[29] Mattheiss, T.H., An algorithm for determining irrelevant constraints and all vertices in systems of linear inequalities. Operat. Res., 21:247-260, 1973.
[30] Meszaros, C. and U.H. Suhl, Advanced preprocessing techniques for linear and quadratic programming, OR Spectrum, 25:575-595, 2003.
[31] Monfared, M.A.S. and M. Etemadi, The impact of energy function structure on solving generalized assignment problem using Hopfield neural network, European Journal of Operational Research 168, 645-654, 2006.
[32] Narciso, M.G., Lorena, L.A.N., Lagrangian/surrogate relaxation for generalized assignment problems. European Journal of Operational Research 114(1), 165-177, 1999.
[33] Nauss, R.M., Solving the generalized assignment problem: An optimizing and heuristic approach. INFORMS Journal on Computing 15(3), 249-266, 2003.
[34] Nauss, R.M., The generalized assignment problem. In: Karlof, J.K. (Ed.), Integer Programming: Theory and Practice. CRC Press, Boca Raton, FL, 39-55, 2006.
[35] Osman, I.H., Heuristics for the generalized assignment problem: Simulated annealing and tabu search approaches. OR Spektrum 17, 211-225, 1995.
[36] Paulraj, S., C. Chellappan and T.R. Natesan, A heuristic approach for identification of redundant constraints in linear programming models. Int. J. Comp. Math., 83(8):675-683, 2006.
[37] Raja Balachandar, S., Kannan, K., A new polynomial time algorithm for 0-1 multiple knapsack problems based on dominant principles, Applied Mathematics and Computation, 202, 71-77, 2008.
[38] Ross, G.T., Soland, R.M., A branch and bound algorithm for the generalized assignment problem, Mathematical Programming 8, 91-103, 1975.
[39] Savelsbergh, M., A branch-and-price algorithm for the generalized assignment problem. Operations Research 45(6), 831-841, 1997.
[40] Stojkovic, N.V. and P.S. Stanimirovic, Two direct methods in linear programming. European J. Oper. Res., 131:417-439, 2001.
[41] Tai-Hsi Wu, Jinn-Yi Yeh, and Yu-Ru Syau, A tabu search approach to the generalized assignment problem, Journal of the Chinese Institute of Industrial Engineers, vol. 21, no. 3, pp. 301-311, 2004.
Telgen, J., Identifying redundant constraints and implicit equalities in systems of linear constraints. Manage. Sci., 29:1209-1222, 1983.
[42] Tomlin, J.A. and J.S. Welch, Finding duplicate rows in a linear programming model. Oper. Res. Let., 5:7-11, 1986.
[43] Trick, M.A., A linear relaxation heuristic for the generalized assignment problem. Naval Research Logistics 39, 137-152, 1992.
[44] Yagiura, M., Ibaraki, T., Glover, F., An ejection chain approach for the generalized assignment problem. INFORMS Journal on Computing 16(2), 133-151, 2004.
[45] Yagiura, M., Ibaraki, T., Glover, F., A path relinking approach with ejection chains for the generalized assignment problem. European Journal of Operational Research, 169, 548-549, 2006.
[46] Yagiura, M. and T. Ibaraki, Generalized assignment problem, in: T.F. Gonzalez, ed., Handbook of Approximation Algorithms and Metaheuristics, Chapman and Hall/CRC Computer and Information Science Series, Chapter 48 (18 pages), 2007.


A Meta-Heuristic Algorithm for Vertex Covering Problem Based on Gravity

S. Raja Balachandar and K. Kannan

Abstract—A new meta-heuristic approach called the "Randomized Gravitational Emulation Search algorithm (RGES)" for solving vertex covering problems has been designed. This algorithm is founded upon introducing a randomization concept along with two of the four primary parameters in physics, 'velocity' and 'gravity'. A new heuristic operator is introduced in the domain of RGES to maintain feasibility specifically for the vertex covering problem and to yield best solutions. The performance of this algorithm has been evaluated on a large set of benchmark problems from the OR-Library. Computational results show that the randomized gravitational emulation search based heuristic is capable of producing high-quality solutions. When compared with other existing heuristic algorithms, its performance is found to be excellent in terms of solution quality.

Keywords—Vertex Covering Problem, Velocity, Gravitational Force, Newton's Law, Meta-Heuristic, Combinatorial Optimization.

I. INTRODUCTION

There is a class of problems, known as NP problems, whose exponential complexities have been established theoretically. Designing polynomial time algorithms for such a class of problems is still open. Due to the demand for solving such problems, researchers are constantly attempting to provide heuristic solutions one after the other, focusing on optimality by introducing several operators with salient features such as (i) reducing the computational complexity, (ii) randomization, etc. Some NP problems are the set covering problem, the traveling salesman problem, the problem of Hamiltonian paths, the knapsack problem, and the problem of optimal graph coloring. If a polynomial time solution can be found for any of these problems, then all of the NP problems would have polynomial solutions. NP-complete problems are described in more detail in [8]. In 1972, in a landmark paper, Karp [19] showed that the vertex cover problem is NP-complete, meaning that it is exceedingly unlikely to find an algorithm with polynomial worst-case running time. The minimum vertex cover problem remains NP-complete even for certain restricted graphs, for example, the bounded degree graphs [9].

The vertex cover problem (VCP) has attracted researchers and practitioners not only because of its NP-completeness but also because many difficult real-life problems can be formulated as instances of the minimum weighted vertex cover. Examples of such areas where the minimum weighted vertex

S. Raja Balachandar is with the Department of Mathematics, SASTRA University, Thanjavur, INDIA, e-mail: [email protected]

K. Kannan is with the Department of Mathematics, SASTRA University, Thanjavur, INDIA, e-mail: [email protected]

cover problem occurs in real-world applications are communications, particularly wireless telecommunications, civil and electrical engineering, circuit design, network flow, and the problem of placing guards [32], which are worth mentioning. Though both exact (optimal) and heuristic approaches have been presented in the literature, this problem is still a difficult NP-complete problem. A vertex cover for an undirected graph G = (V, E) is a set of vertices such that all the edges in the graph are incident upon at least one vertex in the cover. The minimum cardinality vertex cover for a graph is a vertex cover with the least number of vertices. A weighted vertex cover problem (WVCP) is defined as follows: given G = (V, E) and a weight function w : V → R, find a cover of minimum total weight. Thus the problem can be mathematically transformed into the following optimization problem:

minimize Σ_{j=1}^{n} w_j v_j  (1)

subject to v_i + v_j ≥ 1, ∀(v_i, v_j) ∈ E  (2)

v_j ∈ {0, 1}, j = 1, 2, 3, ..., n  (3)

Equation (2) ensures that each edge is covered by at least one vertex, and (3) is the integrality constraint. When the cost coefficients w_j are all equal to 1, the problem is referred to as the unicost VCP; otherwise, the problem is called the weighted VCP.
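As an illustration of the formulation above, a candidate 0/1 solution vector can be checked against constraint (2) and scored by objective (1). The sketch below is our own, not from the paper; the edge-list representation and function names are illustrative:

```python
# Illustrative sketch (ours, not from the paper): evaluating a candidate
# 0/1 solution vector v against the WVCP formulation of Eqs. (1)-(3).
# 'edges' lists vertex-index pairs; 'weights' gives w_j per vertex.

def is_cover(v, edges):
    # Constraint (2): every edge needs at least one selected endpoint.
    return all(v[i] + v[j] >= 1 for i, j in edges)

def cover_weight(v, weights):
    # Objective (1): total weight of the selected vertices.
    return sum(w for w, x in zip(weights, v) if x == 1)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle
weights = [2, 1, 2, 1]
v = [0, 1, 0, 1]                           # vertices 1 and 3 cover every edge
assert is_cover(v, edges)
assert cover_weight(v, weights) == 2
```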

The minimum weighted vertex cover problem is closely related to many other hard problems, and it is of interest to researchers in the field of the design of optimization and approximation algorithms. The minimum weighted vertex cover problem is a special case of the set covering problem [5][12][14], and the independent set problem [2][9][19] is similar to the minimum vertex cover problem because a minimum vertex cover defines a maximum independent set and vice versa. Another interesting problem closely related to the minimum vertex cover is the edge cover problem, which seeks the smallest set of edges such that each vertex is included in one of the edges.

In this paper, a new optimization algorithm based on the law of gravity, namely the Randomized Gravitational Emulation Search algorithm (RGES), is proposed. This algorithm is based on Newtonian gravity: "Every particle in the universe attracts every other particle with a force that is directly proportional to the product of their masses and inversely proportional to

International Journal of Computational and Mathematical Sciences 3:7 2009


the square of the distance between them."

This article demonstrates that the RGES technique is capable of producing better quality results for the large-size vertex covering problem than other heuristic approaches.

This paper is organized as follows: a brief survey of various approaches pertaining to this problem is elucidated in Section II. In Section III, we introduce the basic concepts of our algorithm. The proposed RGES is presented in Section IV. The algorithm's utility is illustrated with the help of benchmark problems in Section V, where we include an extensive comparative study of the results of our heuristic against existing state-of-the-art heuristics. Salient features of this algorithm are enumerated in Section VI, and finally concluding remarks are given in Section VII.

II. PREVIOUS WORK

The WVCP is known to be NP-hard, even if all the weights are 1 and the graph is planar [8]. Due to the computational intractability of the MWVC problem, many researchers have instead focused their attention on the design of heuristic/approximation algorithms for delivering quality solutions in a reasonable time.

Johnson [16] gave the first (greedy) logarithmic-ratio approximation for the unweighted uncapacitated cover problem. Consider the case where all vertices have the same weight. Since the goal becomes the minimization of the cardinality of a subset of V such that for each edge (u, v) in E at least one of u and v is in the subset, it is intuitive to successively select the vertex with the largest degree until all of the edges are covered by the vertices in the subset of V. This straightforward heuristic can be further generalized and applied to the MWVCP. The generalization proposed and analyzed by Chvatal [3] collects at each stage a vertex with the smallest ratio between its weight and current degree. Clarkson [4] presented a heuristic algorithm that exhibits a performance guarantee of 2.
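The Chvatal-style greedy rule just described can be sketched as follows. This is our own minimal illustration of the rule, not code from any of the cited papers:

```python
# A minimal sketch (ours) of the Chvatal-style greedy rule described
# above: repeatedly pick the vertex with the smallest ratio of weight to
# current degree until every edge is covered.

def greedy_wvc(n, edges, weights):
    uncovered = set(map(tuple, edges))
    cover = set()
    while uncovered:
        deg = [0] * n                  # degree w.r.t. still-uncovered edges
        for i, j in uncovered:
            deg[i] += 1
            deg[j] += 1
        v = min((u for u in range(n) if deg[u] > 0),
                key=lambda u: weights[u] / deg[u])
        cover.add(v)
        uncovered = {e for e in uncovered if v not in e}
    return cover

# on a 4-cycle with weights [2, 1, 2, 1] the light vertices 1 and 3 win
assert greedy_wvc(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [2, 1, 2, 1]) == {1, 3}
```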

Pitt [27] gave a randomized algorithm which randomly selects an end vertex of an arbitrary edge with a probability inversely proportional to its weight. For a comprehensive survey on the analysis of approximation algorithms for MWVC, the reader is referred to Monien and Speckenmeyer [30], Motwani [25], Hastad [13], Shyu, Yin and Lin [31], and Likas and Stafylopatis [22]. The first fixed-parameter tractable algorithm for the k-vertex cover problem was given by Fellows [7]. Recently, Dehne et al. [6] have reported that they used a fixed-parameter tractable algorithm to solve the minimum vertex cover problem on coarse-grained parallel machines successfully. Niedermeier and Rossmanith [26] presented an efficient fixed-parameter algorithm for the minimum weighted vertex cover problem. Shyu [31] presented a meta-heuristic approach, the Ant Colony Optimization (ACO) algorithm, for the WVCP and compared the performance of ACO with other heuristics and meta-heuristics, such as the genetic algorithm, tabu search, and simulated annealing, on random graphs.

In this paper, we have designed a meta-heuristic algorithm based on gravity, and we enhanced the performance of RGES through a feasibility operator to obtain best solutions at less computational cost.

III. THE LAW OF GRAVITY

Gravitation is the tendency of masses to accelerate toward each other. It is one of the four fundamental interactions in nature [29] (the others are the electromagnetic force, the weak nuclear force, and the strong nuclear force). Every particle in the universe attracts every other particle. Gravity is everywhere. The inescapability of gravity makes it different from all other natural forces. The way Newton's gravitational force behaves is called "action at a distance". This means gravity acts between separated particles without any intermediary and without any delay. In Newton's law of gravity, each particle attracts every other particle with a 'gravitational force' [29][15][28]. The gravitational force between two particles is directly proportional to the product of their masses and inversely proportional to the square of the distance between them [15]:

F = G M1 M2 / R²,  (4)

where F is the magnitude of the gravitational force, G is the gravitational constant, M1 and M2 are the masses of the first and second particles, respectively, and R is the distance between the two particles. Newton's second law says that when a force, F, is applied to a particle, its acceleration, a, depends only on the force and its mass, M [15]:

a = F / M.  (5)

Based on (4) and (5), there is an attracting gravity force among all particles of the universe, where the effect of the bigger and closer particle is higher. An increase in the distance between two particles means a decrease in the gravity force between them, as illustrated in Fig. 1. In this figure, F1j is the force acting on M1 from Mj, and F1 is the overall force that acts on M1 and causes the acceleration vector a1. In addition, due to the effect of decreasing gravity, the actual value of the "gravitational constant" depends on the actual age of the universe. Eq. (6) gives the decrease of the gravitational constant, G, with age [23]:

G(t) = G(t₀) × (t₀ / t)^β, β < 1,  (6)

where G(t) is the value of the gravitational constant at time t, and G(t₀) is the value of the gravitational constant at the first cosmic quantum-interval of time t₀ [23]. Three kinds of masses are defined in theoretical physics:

Active gravitational mass, Ma, is a measure of the strength of the gravitational field due to a particular object. The gravitational field of an object with small active gravitational mass is weaker than that of an object with more active gravitational mass.

Passive gravitational mass, Mp, is a measure of the strength of an object's interaction with the gravitational field. Within the same gravitational field, an object with a smaller passive gravitational mass experiences a smaller force than an object with a larger passive gravitational mass.

Inertial mass, Mi, is a measure of an object's resistance to changing its state of motion when a force is applied. An object with large inertial mass changes its motion more slowly, and an object with small inertial mass changes it rapidly.


Fig. 1. Every mass accelerates toward the resultant force that acts on it from the other masses.

Now, considering the above-mentioned aspects, we rewrite Newton's laws. The gravitational force, Fij, that acts on mass i from mass j is proportional to the product of the active gravitational mass of mass j and the passive gravitational mass of mass i, and inversely proportional to the square of the distance between them. ai is proportional to Fij and inversely proportional to the inertial mass of i. More precisely, one can rewrite Eqs. (4) and (5) as follows:

F_ij = G M_aj M_pi / R²,  (7)

a_i = F_ij / M_ii,  (8)

where M_aj and M_pi represent the active gravitational mass of particle j and the passive gravitational mass of particle i, respectively, and M_ii represents the inertial mass of particle i.

Although inertial mass, passive gravitational mass, and active gravitational mass are conceptually distinct, no experiment has ever unambiguously demonstrated any difference between them. The theory of general relativity rests on the assumption that inertial and passive gravitational mass are equivalent. This is known as the weak equivalence principle [20][23]. Standard general relativity also assumes the equivalence of inertial mass and active gravitational mass; this equivalence is sometimes called the strong equivalence principle [20].

IV. RANDOMIZED GRAVITATIONAL EMULATION SEARCH ALGORITHM (RGES)

In this section, we introduce our optimization algorithm based on the law of gravity [28]. In the proposed algorithm, agents are considered as objects and their performance is measured by their masses. All these objects attract each other by the gravity force, and this force causes a global movement of all objects towards the objects with heavier masses. Hence, masses cooperate using a direct form of communication, through gravitational force. The heavy masses, which correspond to good solutions, move more slowly than lighter ones; this guarantees the exploitation step of the algorithm. In RGES, each mass (agent) has four specifications: position, inertial mass, active gravitational mass, and passive gravitational mass. The position of the mass corresponds to a solution of the problem, and its gravitational and inertial masses are determined using a fitness function. In other words, each mass presents a solution, and the algorithm is navigated by properly adjusting the gravitational and inertial masses. With the lapse of time, we expect that masses will be attracted by the heaviest mass. This mass will present an optimum solution in the search space. The RGES can be considered as an isolated system of masses. It is like a small artificial world of masses obeying the Newtonian laws of gravitation and motion. More precisely, masses obey the following laws:

Law of gravity: each particle attracts every other particle, and the gravitational force between two particles is directly proportional to the product of their masses and inversely proportional to the distance between them, R. We use R here instead of R², because according to our experimental results, R provides better results than R² in all experimental cases.

Law of motion: the current velocity of any mass is equal to the sum of a fraction of its previous velocity and the variation in the velocity. The variation in velocity, or acceleration, of any mass is equal to the force acting on the system divided by the mass of inertia.

A. Initiation

Now, consider a system with N agents (masses). We define the position of the ith agent by:

X_i = (x^1_i, x^2_i, ..., x^d_i, ..., x^n_i) for i = 1, 2, 3, ..., N,  (9)

where x^d_i presents the position of the ith agent in the dth dimension. At a specific time t, we define the force acting on mass i from mass j as follows:

F^d_ij(t) = G(t) (M_pi(t) × M_aj(t)) / (R_ij(t) + ε) (x^d_j(t) − x^d_i(t)),  (10)

where M_aj is the active gravitational mass related to agent j, M_pi is the passive gravitational mass related to agent i, G(t) is the gravitational constant at time t, ε is a small constant, and R_ij(t) is the Euclidean distance between the two agents i and j:

R_ij(t) = ||X_i(t), X_j(t)||₂.  (11)

To give a stochastic characteristic to our algorithm, we suppose that the total force that acts on agent i in dimension d is a randomly weighted sum of the dth components of the forces exerted by the other agents:

F^d_i(t) = Σ_{j=1, j≠i}^{N} rand_j F^d_ij(t),  (12)

where rand_j is a random number in the interval [0, 1]. Hence, by the law of motion, the acceleration of agent i at time t in direction d, a^d_i(t), is given as follows:


a^d_i(t) = F^d_i(t) / M_ii(t),  (13)

where M_ii is the inertial mass of the ith agent. Furthermore, the next velocity of an agent is considered as a fraction of its current velocity added to its acceleration. Therefore, its position and its velocity can be calculated as follows:

v^d_i(t + 1) = rand_i × v^d_i(t) + a^d_i(t),  (14)

x^d_i(t + 1) = x^d_i(t) + v^d_i(t + 1),  (15)

where rand_i is a uniform random variable in the interval [0, 1]. We use this random number to give a randomized characteristic to the search. The gravitational constant, G, is initialized at the beginning and is reduced with time to control the search accuracy. In other words, G is a function of the initial value G₀ and time t:

G(t) = G(G₀, t).  (16)

B. Evaluation of fitness and updating

Gravitational and inertial masses are simply calculated from the fitness evaluation. A heavier mass means a more efficient agent. This means that better agents have higher attractions and walk more slowly. Assuming the equality of the gravitational and inertial masses, the values of the masses are calculated using the map of fitness. We update the gravitational and inertial masses by the following equations:

M_ai = M_pi = M_ii = M_i, i = 1, 2, 3, ..., N,  (17)

m_i(t) = (fit_i(t) − worst(t)) / (best(t) − worst(t)),  (18)

M_i(t) = m_i(t) / Σ_{j=1}^{N} m_j(t),  (19)

where fit_i(t) represents the fitness value of agent i at time t, and worst(t) and best(t) are defined as follows for a minimization problem:

best(t) = min_{j∈{1,...,N}} fit_j(t),  (20)

worst(t) = max_{j∈{1,...,N}} fit_j(t).  (21)

One way to achieve a good compromise between exploration and exploitation is to reduce the number of agents in Eq. (12) with the lapse of time. Hence, we propose that only a set of agents with bigger mass apply their force to the others. However, we should be careful in using this policy because it may reduce the exploration power and increase the exploitation capability. We remind the reader that, in order to avoid trapping in a local optimum, the algorithm must use exploration at the beginning. With the lapse of iterations, exploration must fade out and exploitation must fade in. To improve the performance of RGES by controlling exploration and exploitation, only the Kbest agents will attract the others. Kbest is a function of time, with the initial value K₀ at the beginning, decreasing with time. In this way, at the beginning all agents apply force, and as time passes Kbest is decreased linearly, so that at the end there will be just one agent applying force to the others. Therefore, Eq. (12) can be modified as:

F^d_i(t) = Σ_{j∈Kbest, j≠i} rand_j F^d_ij(t),  (22)

where Kbest is the set of the first K agents with the best fitness values and the biggest masses.

C. Repair operator

The solutions (agents) may violate constraints. To make all the solutions feasible, an additional operator is needed. Here, the proposed heuristic operator consists of two phases, namely an ADD phase and a DROP phase, that maintain the feasibility of the solutions in the neighborhood being generated. The steps required to make each solution feasible involve the identification of all uncovered rows and the addition of columns such that all rows are covered. This is done by the ADD phase. Once columns are added, a solution becomes feasible. The DROP phase (a local optimization procedure) is then applied to remove any redundant column, i.e., a column such that, by removing it from the solution, the solution still remains feasible. In the algorithm, steps (i) and (ii) identify the uncovered rows and add the least-cost columns to the solution vector. Steps (iii) and (iv) identify the redundant columns with high cost, which are dropped from the solution. The time complexity of this repair operator is O(mn). The different steps of the repair operator are the following:

S_{1×n} = solution vector
B_{n×m} = transpose of the adjacency matrix
D_{1×n} = temporary solution vector
C_{1×m} = counter vector (a 0 entry at any position is used to identify the uncovered rows)
(i) C = S × B (matrix multiplication)
(ii) ADD phase:
(a) For each 0 entry in C, find the first column j (with the cost of j in increasing order).
(b) Add j to S, i.e., S(j) = 1.
(c) D = S (temporary).
(iii) DROP phase:
(a) Identify a column j (with cost in decreasing order).
(b) Remove j from D, i.e., D(j) = 0, if C = D × B has no zero entry.
(c) S = D is a feasible solution for the VCP that contains no redundant columns.
The different steps of the proposed RGES algorithm are the following:

(a) Search space identification.
(b) Randomized initialization.
(c) Repair operator.
(d) Fitness evaluation of agents.


(e) Update G(t), best(t), worst(t) and Mi(t) for i = 1, 2, ..., N.

(f) Calculation of the total force in different directions.
(g) Calculation of acceleration and velocity.
(h) Updating agents' positions.
(i) Repeat steps (c) to (h) until the stop criterion is reached.
(j) End.
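The repair operator of step (c) can be sketched as below. This is our own simplified translation into vertex/edge terms, not the authors' matrix implementation, and the helper names are hypothetical: the ADD step covers each uncovered edge with its cheaper endpoint, and the DROP step scans vertices from most to least expensive and removes any whose deletion keeps the solution feasible.

```python
# A simplified sketch (ours) of the two-phase ADD/DROP repair operator
# in vertex/edge terms; the paper states it in matrix form. ADD covers
# each uncovered edge with its cheaper endpoint; DROP removes redundant
# vertices, most expensive first, restoring any that turn out to be needed.

def repair(S, edges, weights):
    S = list(S)
    # ADD phase: make the solution feasible
    for i, j in edges:
        if not (S[i] or S[j]):
            S[i if weights[i] <= weights[j] else j] = 1
    # DROP phase: remove redundant vertices, most expensive first
    for v in sorted(range(len(S)), key=lambda v: -weights[v]):
        if S[v]:
            S[v] = 0
            if any(not (S[i] or S[j]) for i, j in edges):
                S[v] = 1          # v was still needed; restore it
    return S

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert repair([0, 0, 0, 0], edges, [2, 1, 2, 1]) == [0, 1, 0, 1]
```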

V. EXPERIMENTAL RESULTS AND ANALYSIS

The RGES has been coded in MATLAB 7. The heuristic was tested on 44 instances corresponding to two groups: the first group contains the test instances mvcp1-12 with weight 1, tested by Khuri [21], and the second group consists of 32 test instances wvcp1-30 with weights that are randomly generated [1][33] and used by Shyu [31].

In our experimental study, 10 trials of the RGES heuristic were made for each of the test problems, with n (the number of vertices) random solutions. Each trial was terminated once 1000 iterations were completed or the velocity became equal to zero. This algorithm was implemented in C and tested on a P-IV 3.2 GHz processor with 512 MB RAM running under Windows XP.

Experiment 1: Minimum Vertex Cover

In order to bring out the efficiency of the proposed RGES algorithm, the solutions of the same set of test instances have been compared with those of the other approaches (Tabu Search, Genetic Algorithm, Simulated Annealing). TABLE I provides a summary of the solutions obtained by these methods and the solution quality for these different heuristics, namely the average gap (average = (solution − BKS)/BKS × 100), the number of optimum solutions, and the best solutions. RGES found the optimal/best-known solutions for all 12 test instances. From this table, we can observe that RGES has zero deviation from the best-known or optimal solutions for these test problems.

TABLE I
SOLUTIONS OF MINIMUM VERTEX COVER PROBLEM

Instance   Opt Obj Value   GA     SA     TS     RGES
100-01     53              54     55     55     53
100-02     50              53     53     54     50
100-03     55              57     57     55     55
100-04     54              55     55     54     54
100-05     55              55     57     57     55
PS100      34              34     34     34     34
200-01     110             113    113    131    110
200-02     110             120    120    132    110
200-03     110             128    120    130    110
200-04     110             140    136    140    110
200-05     110             110    110    110    110
PS202      68              68     68     68     68
Avg Error  -               5.74   4.56   8.33   0.00

Experiment 2: Weighted Minimum Vertex Cover

In this experiment, the parameter set covers small to large scale problems: the number of vertices n is 50, 100, 150, 200, 250 or 300. For each setting of n, we let m range from 50 to 5000. For practical considerations, we assume that the weight on each vertex is proportional to the degree of the vertex, since more transportation benefits on a vertex might induce more weight (more running costs) on it. We let the weight w(i) on vertex i be randomly distributed over the interval [1, d(i)²], where d(i) is the degree of vertex i, with ten randomly generated data instances for each pair of n and m.

We implemented the following heuristics:

REP (Pitt, 1985) [27]. The method randomly selects an end vertex of an arbitrary edge, considering the probability inversely proportional to its weight.
GM (Chvatal, 1979; Motwani, 1992) [3][24]. The method greedily selects the vertex with the minimum ratio between its weight and current degree.
MGM (Clarkson, 1983) [4]. The method greedily selects the vertex with the minimum ratio between its weight and current degree, where the weight is modified as the heuristic progresses.
ACO (Shyu, 2004) [31]. Ant Colony Optimization algorithm.
SA (Johnson) [17][18]. Simulated Annealing.
TS (Glover) [10][11]. Tabu Search.
GA (Khuri, 1994) [21]. Genetic Algorithm.
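The REP rule listed above can be sketched as follows. This is our own illustration with hypothetical names; normalizing the inverse weights over the two endpoints of the chosen edge is our reading of "probability inversely proportional to its weight":

```python
import random

# Sketch (ours) of the REP rule described above: pick an arbitrary
# uncovered edge, then choose one of its endpoints with probability
# inversely proportional to the endpoint's weight (normalized over the
# two endpoints, which is our assumption).

def rep_cover(edges, weights):
    uncovered = list(edges)
    cover = set()
    while uncovered:
        i, j = uncovered[0]                  # an arbitrary uncovered edge
        p_i = (1.0 / weights[i]) / (1.0 / weights[i] + 1.0 / weights[j])
        v = i if random.random() < p_i else j
        cover.add(v)
        uncovered = [e for e in uncovered if v not in e]
    return cover
```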

The results of RGES, REP, GM, MGM, ACO, TS, GA and SA for the second set are presented in TABLE II. The first two columns indicate the number of vertices (N) and the number of edges (M). The next seven columns give the best solutions of the different algorithms. It is clear that RGES found minimum solutions for all 32 test instances, so the RGES has identified high-quality solutions for large instances also. Note that, in terms of solution quality, GM, SA and TS give deviations of 7.98, 6.50 and 6.53 percent from RGES, respectively. Since the weights on vertices in this experiment were randomly generated over the interval [1, d(i)²], a vertex with a larger degree would be inclined to have a heavier weight. The heuristic that GM deploys would be less competitive in such a situation compared to ACO and RGES, even when GM was further improved by simulated annealing or tabu search. Hence, we can see that the quality of the solutions delivered by RGES is much better than that of the other heuristics involved in this experiment, even though the weights on vertices are proportional to the degrees.

VI. FEATURES OF ALGORITHM

To see how the proposed algorithm is efficient, some remarks are noted. Since each agent can observe the performance of the others, the gravitational force is an information-transferring tool. Due to the force that acts on an agent from its neighborhood agents, it can see the space around itself. A heavy mass has a large effective attraction radius and hence a great intensity of attraction. Therefore, agents with a higher performance have a greater gravitational mass. As a result, the agents tend to move toward the best agent. The inertial mass acts against the motion and makes the mass movement slow. Hence, agents with heavy inertial mass move slowly and search the space more locally. So, it can be considered as an adaptive learning rate. The gravitational constant adjusts the accuracy of the search, so it decreases with time (similar to the temperature in a Simulated Annealing algorithm). RGES is a memory-less algorithm. However, it works efficiently like the algorithms with memory.


Our experimental results show the good convergence rate of the RGES. Here, we assume that the gravitational and the inertial masses are the same. However, for some applications, different values for them can be used. A bigger inertial mass provides a slower motion of agents in the search space and hence a more precise search. Conversely, a bigger gravitational mass causes a higher attraction of agents. This permits a faster convergence.

VII. CONCLUSION

A feasibility-operator-based heuristic for the vertex covering problem based on RGES has been developed. Randomization enables the algorithm to escape from local optima and paves a way toward finding optimal solutions. Computational results indicate that our heuristic is able to generate optimal solutions for small problems in less time. For large problems the deviations from the optimal solutions are very small, well below the deviations obtained by other existing algorithms. The successful application of the RGES approach to complex optimization problems extends the study of meta-heuristics. For further research, it is of potential

TABLE II
SOLUTIONS OF MINIMUM WEIGHTED VERTEX COVER PROBLEM

N M REP GM MGM SA TS ACO RGES
50 50 113.1 95.1 98.7 93.5 93.5 83.9 83.9
50 100 355 305.2 312.7 299.9 299.9 276.2 276.2
50 250 2319 2051.5 2138.4 1990.9 2006.7 1886.8 1886.4
50 500 9189.4 8196.6 8635.6 8115.5 8115.5 7915.9 7914.5
50 750 22246.7 20604.9 21676 20574.6 20604.9 20134.1 20134.1
100 50 90.3 73.2 73.8 71.7 71.7 67.4 67.4
100 100 224.9 186.1 198.9 183.7 184.1 169.1 169.1
100 250 1150.2 995.5 1053.9 986.7 983.9 901.7 890.4
100 500 4740.8 3991.8 4307.7 3937.8 3937.8 3726.7 3725.3
100 750 10236.1 9256.9 9771.9 9172.4 9172.4 8754.5 8745.5
150 50 88.6 71.2 72.6 70.6 70.6 65.8 65.8
150 100 196.5 159.6 165.3 157.9 157.9 144.7 144.7
150 250 848.5 692.5 735.1 679.5 679.1 625.7 624.4
150 500 3148.9 2577.6 2734.9 2519.4 2526.2 2375 2365.2
150 750 7441 6236.1 6628.7 6090.8 6105.9 5799.2 5798.6
200 50 74.3 62 63.2 61.2 61.2 59.6 59.6
200 100 173.5 146.8 151 145.1 145.2 134.7 132.6
200 250 658.2 543.2 576.4 537 537 488.7 488.4
200 500 2368.3 2004.6 2157.8 1989 1989 1843.6 1843.6
200 750 5165.7 4422.9 4727.4 4376.4 4383.7 4112.8 4112.8
250 250 602.7 469.1 492.2 462.1 463.1 423.2 423.2
250 500 1933.5 1602.6 1697.9 1591.8 1591.8 1457.4 1457.4
250 750 4332.2 3564.1 3888.6 3512.1 3513.3 3315.9 3315.9
250 1000 7723.3 6554.6 6954.4 6438.7 6436 6058.2 6058.2
250 2000 31475.8 27360.2 29130.6 26925.4 26864 26149.1 26149.1
250 5000 193232.3 176245.2 183612.8 174037.5 173902.9 171917.2 171917.2
300 250 534 447.3 469.9 441.2 441.3 403.9 403.9
300 500 1648.1 1361.3 1451.1 1348.6 1347.4 1239.1 1239.1
300 750 3596 2924.2 3121.2 2878.7 2879.4 2678.2 2678.2
300 1000 6428 5274.3 5718.4 5229.9 5227.4 4895.5 4895.5
300 2000 26106.6 22432.5 23997.8 22061.2 21983.6 21295.2 21295.2
300 5000 163003.3 147406.4 154929.6 145276.6 145121.4 143243.5 143243.5

interest to apply the RGES approach to other problems that are not necessarily based on graphs. Extending the generic RGES model by incorporating specific behaviours of physical parameters or computer technologies, such as parallel processing, to enhance its problem-solving capability may be another research direction.


REFERENCES

[1] Bollobas, B.: Random Graphs (2nd ed.). Cambridge, UK: Cambridge University Press, 2001.
[2] Berman, P. and Fujito, T.: On approximation properties of the independent set problem for low degree graphs, Theory of Computing Systems, vol. 32, pp. 115-132, 1999.
[3] Chvatal, V. (1979). "A Greedy Heuristic for the Set Cover Problem." Mathematics of Operations Research 4, 233-235.
[4] Clarkson, K.L. (1983). "A Modification of the Greedy Algorithm for Vertex Cover." Information Processing Letters 16, 23-25.
[5] Cormen, T.H., Leiserson, C.E., Rivest, R.L., and Stein, C.: Introduction to Algorithms, 2nd ed., McGraw-Hill, New York, 2001.
[6] Dehne, F., et al.: Solving large FPT problems on coarse grained parallel machines. Available: http://www.scs.carleton.ca/fpt/papers/index.htm.
[7] Fellows, M.R.: On the complexity of vertex cover problems, Technical report, Computer Science Department, University of New Mexico, 1988.
[8] Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco: Freeman, 1979.
[9] Garey, M.R., Johnson, D.S., and Stockmeyer, L.: Some simplified NP-complete graph problems, Theoretical Computer Science, vol. 1563, pp. 561-570, 1999.
[10] Glover, F.: Tabu Search - Part I, ORSA Journal on Computing, vol. 1, no. 3, pp. 190-206, 1989.
[11] Glover, F.: Tabu search: A tutorial, Interfaces 20, pp. 74-94, 1990.
[12] Gomes, F.C., Meneses, C.N., Pardalos, P.M. and Viana, G.V.R.: Experimental analysis of approximation algorithms for the vertex cover and set covering problems, Computers and Operations Research, vol. 33, pp. 3520-3534, 2006.
[13] Hastad, J.: Some optimal inapproximability results, Journal of the ACM, vol. 48, no. 4, pp. 798-859, 2001.
[14] Hochbaum, D.S.: Approximation algorithms for the set covering and vertex cover problems, SIAM Journal on Computing, 11(3), 555-556, 1982.
[15] Halliday, D., Resnick, R., Walker, J., Fundamentals of Physics, John Wiley and Sons, 1993.
[16] Johnson, D.S.: Approximation algorithms for combinatorial problems, J. Comput. System Sci. 9 (1974) 256-278.
[17] Johnson, D.S., Aragon, C.R., McGeoch, L.A., and Schevon, C. (1989). "Optimization by Simulated Annealing: An Experimental Evaluation, Part I: Graph Partitioning." Operations Research 37, 875-892.
[18] Johnson, D.S., Aragon, C.R., McGeoch, L.A., and Schevon, C. (1989b). "Optimization by Simulated Annealing: An Experimental Evaluation, Part II: Graph Coloring and Number Partitioning." Operations Research 39, 378-406.
[19] Karp, R.M.: Reducibility among combinatorial problems, Plenum Press, New York, pp. 85-103, 1972.
[20] Kenyon, I.R.: General Relativity, Oxford University Press, 1990.
[21] Khuri, S., Baeck, Th.: An evolutionary heuristic for the minimum vertex cover problem. 18th Deutsche Jahrestagung fuer Kuenstliche. Max-Planck-Institut fuer Informatik Journal 1994; MPI-I-94-241: 86-90.
[22] Likas, A. and Stafylopatis, A.: A parallel algorithm for the minimum weighted vertex cover problem, Information Processing Letters, vol. 53, pp. 229-234, 1995.
[23] Mansouri, R., Nasseri, F., Khorrami, M.: Effective time variation of G in a model universe with variable space dimension, Physics Letters 259 (1999) 194-200.
[24] Motwani, R. (1992). "Lecture Notes on Application." Technical Report STAN-CS-92-1435, Department of Computer Science, Stanford University.
[25] Motwani, R.: Lecture Notes on Approximation Algorithms, Technical Report STAN-CS-92-1435, Department of Computer Science, Stanford University, 1992.
[26] Niedermeier, R. and Rossmanith, P.: On efficient fixed-parameter algorithms for weighted vertex cover, Journal of Algorithms, vol. 47, pp. 63-77, 2003.
[27] Pitt, L.: A Simple Probabilistic Approximation Algorithm for Vertex Cover, Technical Report YaleU/DCS/TR-404, Department of Computer Science, Yale University, 1985.
[28] Rashedi, E.: Gravitational Search Algorithm, M.Sc. Thesis, Shahid Bahonar University of Kerman, Kerman, Iran, 2007 (in Farsi).
[29] Schutz, B.: Gravity from the Ground Up, Cambridge University Press, 2003.
[30] Monien, B. and Speckenmeyer, E.: Ramsey numbers and an approximation algorithm for the vertex cover problem, Acta Informatica, vol. 22, pp. 115-123, 1985.
[31] Shyu, S.J., Yin, P.Y. and Lin, B.M.T.: An ant colony optimization algorithm for the minimum weight vertex cover problem, Annals of Operations Research, vol. 131, pp. 283-304, 2004.
[32] Weigt, M. and Hartmann, A.K.: The number of guards needed by a museum: a phase transition in vertex covering of random graphs, Phys. Rev. Lett., 84, 6118, 2000.
[33] Weigt, M. and Hartmann, A.K.: Minimal vertex covers on finite connectivity random graphs: a hard-sphere lattice-gas picture, Phys. Rev. E, 63, 056127.




A new polynomial time algorithm for 0–1 multiple knapsack problem based on dominant principles

S. Raja Balachandar *, K. Kannan

Department of Mathematics, SASTRA University, Thanjavur 613 402, India

Abstract

This paper presents a heuristic to solve the 0–1 multi-constrained knapsack problem (01MKP), which is NP-hard. In this heuristic the dominance property of the constraints is exploited to reduce the search space to find near optimal solutions of 01MKP. This heuristic is tested on 10 benchmark problems of sizes up to 105 and on seven classical problems of sizes up to 500, taken from the literature, and the results are compared with optimum solutions. Space and computational complexity of solving 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used for finding good solutions for highly constrained NP-hard problems.
© 2008 Elsevier Inc. All rights reserved.

Keywords: 0–1 Multi-constrained knapsack problem; Heuristic; Computational complexity; NP-hard problems

1. Introduction

The popularity of knapsack problems stems from the fact that they have attracted researchers from both camps: the theoreticians as well as the practitioners [15]. Theoreticians enjoy the fact that these simply structured problems can be used as subproblems to solve more complicated ones; practitioners, on the other hand, enjoy the fact that these problems can model many industrial opportunities such as cutting stock and cargo loading.

The multiple constraints 0–1 knapsack problem (01MKP) has applications in various fields, e.g. economics: consider a set of projects (variables) (j = 1, 2, 3, ..., n) and a set of resources (constraints) (i = 1, 2, 3, ..., m). Each project is assigned a profit c_j and resource consumption values a_ij. The problem is to find a subset of all projects leading to the highest possible profit while not exceeding the given resource limits b_i. Other important applications include cargo loading, cutting stock problems, and processor allocation in distributed systems.

The special case of 01MKP with m = 1 is the classical knapsack problem (01KP), whose usual statement is the following. Given a knapsack of capacity b and n objects, each associated with a profit and a volume

0096-3003/$ - see front matter © 2008 Elsevier Inc. All rights reserved.

doi:10.1016/j.amc.2007.10.045

* Corresponding author. E-mail addresses: [email protected] (S. Raja Balachandar), [email protected] (K. Kannan).


Applied Mathematics and Computation 202 (2008) 71–77



occupation, one wants to select k (k ≤ n and k not fixed) objects such that the total profit is maximized and the capacity of the knapsack is not exceeded. It is well known that 01KP is NP-hard, although there are polynomial approximation algorithms to solve it. This is not the case for the general 01MKP. Various algorithms to obtain exact solutions of such problems have been designed and are well documented in the literature [1,4,9,10,16,19,21].

2. 0–1 Knapsack problem

The 0–1 multi-constrained knapsack problem (01MKP) is a well-known NP-hard combinatorial optimization problem [8] which can be formulated as follows:

Maximize

$$f(x_1, x_2, \ldots, x_n) = \sum_{j=1}^{n} c_j x_j \qquad (1)$$

subject to the constraints

$$\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1, 2, \ldots, m, \qquad (2)$$

$$x_j \in \{0, 1\}, \quad j = 1, 2, 3, \ldots, n, \qquad (3)$$

$$\text{with } c_j > 0,\; a_{ij} \ge 0,\; b_i \ge 0. \qquad (4)$$

The objective function f(x_1, x_2, ..., x_n) is to be maximized subject to the constraints given by (2). For 01MKP problems the variable x_j can take only the two values 0 and 1. Note that in a 01MKP it is necessary that a_ij is non-negative; this necessary condition paves a way for better heuristics to obtain near optimal solutions.
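As a concrete reading of formulation (1)–(4), the sketch below (our illustrative code; the function name and the small test instance are assumptions, not from the paper) evaluates a candidate 0–1 vector: it returns the profit (1) when every constraint in (2) holds, and flags infeasibility otherwise.

```c
/* Profit of a 0-1 vector x for the 01MKP of Eqs. (1)-(4), or -1 if any
   constraint (2) is violated.  The m x n matrix a is stored row-major. */
int mkp_value(int n, int m, const int c[], const int a[],
              const int b[], const int x[]) {
    for (int i = 0; i < m; i++) {          /* check each knapsack constraint */
        int load = 0;
        for (int j = 0; j < n; j++)
            load += a[i * n + j] * x[j];
        if (load > b[i])
            return -1;                     /* resource limit b_i exceeded */
    }
    int profit = 0;
    for (int j = 0; j < n; j++)
        profit += c[j] * x[j];             /* objective (1) */
    return profit;
}
```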

3. A brief survey of some exact and heuristic algorithms

Exact and heuristic algorithms have been developed for the 01MKP, as for many NP-hard combinatorial optimization problems.

3.1. Exact algorithms

Existing exact algorithms are essentially based on the branch and bound method, dynamic programming, systematic enumeration, and 01MKP relaxation techniques such as Lagrangian, surrogate and composite relaxations [1,4,9,10,16,19,21].

Balas's [1] algorithm is a systematically designed one that begins with a null solution and successively assigns certain variables to 1 in such a way that, after testing part of all the 2^n combinations, we obtain either an optimal solution or a proof that no feasible solution exists. The algorithms by Gilmore and Gomory [10] and Weingartner and Ness [21] use a dynamic programming approach; the dynamic programming method solved two problems of size (n = 28, m = 2) and (n = 105, m = 2) in forward and backward approaches. Shih [19] proposed a branch and bound algorithm in which an upper bound is obtained by computing the linear relaxation upper bound of each of the m single-constraint knapsack problems separately and selecting the minimum objective function value among them as the upper bound. Computational results showed that this algorithm performed better than the general 0–1 additive algorithm of Balas [1]. Better algorithms have been proposed by using tighter upper bounds, obtained with other 01MKP relaxation techniques such as Lagrangian, surrogate and composite relaxations, developed by Gavish and Pirkul [9]; this algorithm was compared with Shih's method [19] and found to be faster by at least one order of magnitude. Osorio et al. [16] used surrogate analysis and constraint pairing to fix some variables to zero and to separate the rest of the variables into groups (those that tend to zero and those that tend to one); they use an initial feasible integer solution to generate logical cuts, based on this analysis, at the root of a branch and bound tree. Due to their exponential time complexity, exact algorithms are limited to small instances (n = 200 and m = 5).


3.2. Heuristic algorithms

Heuristic algorithms are designed to produce near-optimal solutions for larger problem instances. The first heuristic approaches for the 01MKP were largely greedy methods. These algorithms construct a solution by adding, according to a greedy criterion, one object at a time into the current solution without violating the constraints.

The second class of heuristics is based on linear programming, solving various relaxations of the 01MKP. Balas and Martin [2] introduced a heuristic algorithm for the 01MKP which utilizes linear programming (LP) by relaxing the integrality constraint x_j = 0 or 1 to 0 ≤ x_j ≤ 1. Linear programming problems are not NP-hard and can be solved efficiently, e.g. with the well-known simplex algorithm. The fractional x_j are then set to 0 or 1 according to a heuristic which maintains feasibility.

In the last decade, several algorithms based on metaheuristics have been developed, including simulated annealing [7], tabu search [11,12] and genetic algorithms [5,13].

More examples of heuristic algorithms for the 01MKP can be found in [14,15,17,20]. Sahni [18] presented a sequence of approximate algorithms, and time estimates of such algorithms were also presented.

A comprehensive review on exact and heuristic algorithms is given in [5,6].

4. Dominance principle (DP)

The intercept matrix is obtained by replacing each entry of the constraint coefficient matrix with the corresponding right-hand-side availability divided by that entry. Each column minimum of this matrix represents the intercept on the respective axis. The axis intercepts are corner points of the feasible region (polyhedron) and hence dominate the region; the other values of the variables are automatically ruled out by this minimal choice, which avoids extra computation. These modifications of the constraint matrix, updated at each iteration as shown in the algorithm (Fig. 1), maintain the dominance property throughout the computation. This salient principle is known as the "Dominance Principle".

The algorithm is as follows. The row vector containing the coefficients of the variables appearing in the objective function is taken first. The values corresponding to the column minima (e_ij) are multiplied by the corresponding cost coefficients (c_j) in the objective function, and the maximum among these products is chosen (i.e., max_j {e_ij c_j}). The corresponding x_j assumes the value 1. Next we update the availability (the right-hand-side column vector) using the relations b_i = b_i - a_ij for all i, and the constraint coefficients by replacing a_ij by 0 for all i. This process is repeated until the coefficient matrix becomes the null matrix. The values of the updated variables x_j are substituted into the objective function to obtain the optimum value.

Step 1: Initialize the solution set (x_1, x_2, ..., x_n) = (0, 0, ..., 0).
Step 2: Form the constraint matrix A (the coefficient matrix of the constraints) and the matrix D = [d_ij], for i = 1, 2, ..., m and j = 1, 2, ..., n, as follows: d_ij = b_i / a_ij if a_ij > 0, and a large value M otherwise.
Step 3: If any entry of D is less than 1 (say in the l-th column), set all values of the l-th column to large values so that x_l does not enter the solution set, and set x_l = 0.
Step 4: Encircle the smallest intercept in each column of D.
Step 5: Put a tick mark before those rows which contain more than one circle.
Step 6: Multiply the encircled entries (e_ij) by the corresponding cost coefficients (c_j) in the objective function.
Step 7: If the product is maximum in the k-th column, set x_k = 1.
Step 8: Update the constraint matrix as follows: b_i = b_i - a_ik for i = 1 to m, and set a_ik = 0 for i = 1 to m.
Step 9: If a_ij = 0 for all i and j, set max z = sum_{k=1}^{n} c_k x_k and stop; otherwise go to Step 2.

Fig. 1. DPH algorithm.

5. The proposed dominance principle based heuristic algorithm (DPH)

We present below the heuristic algorithm for solving 01MKP using the dominance principle approach.

5.1. Computational complexity

The worst-case complexity of finding a solution of a 01MKP using DPH can be obtained as follows. Assume that there are n variables and m constraints. The initialization (Step 1) requires O(n) running time. Each iteration involves the formation of the D matrix, identification of entries less than one in each column, finding the smallest intercept in each column, identification of rows which contain more than one smallest intercept, and updating of the constraint matrix A. Since there are m constraints, Steps 2, 3, 4, 5 and 9 each require O(mn). Steps 6 and 8 require O(n) operations to multiply costs by the corresponding smallest intercepts and to update the corresponding row of the constraint matrix. Step 7 requires O(1). The maximum number of iterations required by DPH is n, so the overall running time of DPH can be deduced as follows:

$$n\,\bigl(3\,O(n) + 5\,O(mn) + O(1)\bigr) = O(n^2 m + n^2 + n).$$

6. Illustration

We illustrate this procedure by solving knap 28_10, a multidimensional knapsack problem with 28 variables and 10 knapsacks, available in the OR-Library of Beasley [3]. The DPH begins with an initial cost of zero; at the end of each iteration it updates the value x_i (0 or 1) and the cost. Table 1 gives the iteration-wise report of DPH.

Table 1
Iteration-wise report of DPH for knap 28_10

Iteration  Variable that assumes 1  Variables that assume 0  f(x_1, x_2, ..., x_n)
1   21  –  3100
2   20  –  5650
3   23  –  6600
4   19  –  6660
5   17  –  7140
6   16  –  7460
7   2   –  7680
8   18  –  7760
9   15  –  8410
10  27  –  8610
11  22  –  9710
12  28  –  10,230
13  1   –  10,330
14  26  –  10,550
15  25  –  10,850
16  14  –  12,150
17  3   4  12,240
18  9   5  12,400
19  –   6,7,8,10,11,12,24,13  12,400


The DPH algorithm terminates the iterative process at the 19th iteration, since all the entries in the constraint matrix are then equal to 0. At the end of the 19th iteration the DPH reaches the optimum.

7. Computational results

All the procedures of DPH have been coded in the C language. We first tested our method on the 10 classical problems used in [3]. The size of these problems varies from n = 15 to 105 items and from m = 4 to 30 constraints. Our results and the optimum ones reported in [3] are compared in Table 2.

The second set of tested instances consists of the first seven (also the largest, with n = 100 to 500 items and m = 15 to 25 constraints) of the 11 benchmarks (MK_GK problems) proposed very recently by Glover and Kochenberger (available at: http://hces.bus.olemiss.edu/tools.html). Table 3 compares our results with the best known results taken from the above website.

The application of DPH to knap_105-2 Weing-7 is shown in Fig. 2. The maximum number of iterations is fixed at 105 (the number of items), and the algorithm reaches the best result at the 90th iteration. As both tables and the figure clearly demonstrate, DPH is able to locate the global optimum or a near-optimal point for all the test problems in quick time.

8. Features of algorithm

Our approach reduces the search space to find near-optimal solutions of the 01MKP. The computational complexity is quadratic and the space complexity is O(mn). DPH reaches the optimum or a near-optimum point in a small number of iterations, where the maximum number of iterations is the number of projects (variables). Our heuristic algorithm identifies the zero-valued variables quickly.

Table 2
DPH results vs optimum solutions

S.no.  Problem  n  m  Optimum  DPH solution  Percentage of deviation
1   Knap 15-4           15   4   301        301        0
2   Knap 15-10          15   10  4015       4015       0
3   Knap 20-10          20   10  6120       6120       0
4   Knap 28-10          28   10  12,400     12,400     0
5   Knap 39-5           39   5   10,618     10,429     1.77
6   Knap 50-5           50   5   16,537     16,323     1.29
7   Knap 60-30 Santo-1  60   30  7772       7706       0.84
8   Knap 60-30 Santo-2  60   30  8722       8672       0.57
9   Knap 105-2 Weing-7  105  2   1,095,445  1,094,806  0.06
10  Knap 105-2 Weing-8  105  2   624,319    621,086    0.52

Table 3
Comparison on the latest seven MK_GK problems

S.no.  n    m   Best feasible solution  DPH solution  Percentage of deviation
1      100  15  3766    3766    0
2      100  25  3958    3958    0
3      150  25  5650    5656    0.26 (improved)
4      150  50  5764    5767    0.46 (improved)
5      200  25  7557    7557    0.23 (improved)
6      200  50  7672    7672    0
7      500  25  19,215  19,215  0


9. Conclusion

In this paper, we have presented a dominance principle based approach for tackling the NP-hard 0–1 multi-constrained knapsack problem (01MKP). This approach has been tested on more than 50 state-of-the-art benchmark instances and has given near optimal solutions for most of the tested instances. Our approach is a heuristic with O(mn²) complexity, and it requires a maximum of n iterations to solve the 01MKP. The experimental data show that the optimality achieved by this heuristic lies between 98% and 100%. The basic idea behind the proposed approach may be explored to tackle other NP-hard problems.

References

[1] E. Balas, An additive algorithm for solving linear programs with zero–one variables, Operations Research 13 (1965) 517–546.
[2] E. Balas, C.H. Martin, Pivot and complement – a heuristic for 0–1 programming, Management Science 26 (1) (1980).
[3] J.E. Beasley, OR-Library: distributing test problems by electronic mail, Journal of the Operational Research Society 41 (1990) 1069–1072.
[4] A.V. Cabot, An enumeration algorithm for knapsack problems, Operations Research 18 (1970) 306–311.
[5] P.C. Chu, J.E. Beasley, A genetic algorithm for the multidimensional knapsack problem, Journal of Heuristics 4 (1998) 63–86.
[6] P.C. Chu, A Genetic Algorithm Approach for Combinatorial Optimization Problems, Ph.D. Thesis, Management School, Imperial College of Science, London, 1997.
[7] A. Drexl, A simulated annealing approach to the multiconstraint zero–one knapsack problem, Computing 40 (1988) 1–8.
[8] M.R. Garey, D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman and Company, San Francisco, 1979.
[9] B. Gavish, H. Pirkul, Efficient algorithms for solving multiconstraint zero–one problems to optimality, Mathematical Programming 31 (1985) 78–105.
[10] P.C. Gilmore, R.E. Gomory, The theory and computation of knapsack functions, Operations Research 14 (1966) 1045–1075.
[11] F. Glover, G.A. Kochenberger, Critical event tabu search for multidimensional knapsack problems, in: I.H. Osman, J.P. Kelly (Eds.), Metaheuristics: The Theory and Applications, Kluwer Academic Publishers, 1996, pp. 407–427.
[12] S. Hanafi, A. Freville, An efficient tabu search approach for the 0–1 multidimensional knapsack problem, European Journal of Operational Research 106 (1998) 659–675.
[13] S. Khuri, T. Baeck, J. Heitkoetter, The zero/one multiple knapsack problem and genetic algorithms, in: Proceedings of the 1994 ACM Symposium on Applied Computing, SAC'94, Phoenix, Arizona, ACM Press, 1994, pp. 188–193.
[14] M.J. Magazine, O. Oguz, A heuristic algorithm for the multidimensional zero–one knapsack problem, European Journal of Operational Research 16 (1984) 319–326.
[15] S. Martello, P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley and Sons, Chichester, West Sussex, England, 1990.
[16] M.A. Osorio, F. Glover, P. Hammer, Cutting and surrogate constraint analysis for improved multidimensional knapsack solutions, Technical Report HCES-08-00, Hearin Center for Enterprise Science, 2000.

Fig. 2. Iteration vs. obj fn value (cost at each iteration).


[17] H. Pirkul, A heuristic solution procedure for the multiconstrained zero–one knapsack problem, Naval Research Logistics 34 (1987) 161–172.
[18] S. Sahni, Approximate algorithms for the 0–1 knapsack problem, Journal of the ACM 22 (1) (1975) 115–124.
[19] W. Shih, A branch and bound method for the multiconstraint zero–one knapsack problem, Journal of the Operational Research Society 30 (1979) 369–378.
[20] A. Volgenant, J.A. Zoon, An improved heuristic for multidimensional 0–1 knapsack problems, Journal of the Operational Research Society 41 (1990) 963–970.
[21] H.M. Weingartner, D.N. Ness, Methods for the solution of the multidimensional 0/1 knapsack problem, Operations Research 15 (1967) 83–103.


Randomized gravitational emulation search algorithm for symmetric traveling salesman problem

S. Raja Balachandar *, K. Kannan

Department of Mathematics, SASTRA University, Thanjavur 613 402, India

Abstract

This paper presents a new heuristic method called the randomized gravitational emulation search (RGES) algorithm for solving symmetric traveling salesman problems (STSP). The algorithm is founded upon introducing a randomization concept, along with two of the four primary parameters of physics, 'velocity' and 'gravity', through swapping in terms of groups by using random numbers in the existing local search algorithm GELS, in order to avoid local minima and thus yield the global minimum for the STSP. To validate the proposed method, numerous simulations were conducted to compare the quality of its solutions with other existing algorithms such as the genetic algorithm (GA), simulated annealing (SA) and hill climbing (HC), using a range of STSP benchmark problems. According to the results of the simulations, the performance of RGES is significantly enhanced, providing optimal solutions in almost all test problems of sizes up to 76. A comparative computational study of 11 benchmark problems chosen from the TSPLIB library also shows that the RGES algorithm is an efficient tool for solving the STSP and that this heuristic is competitive with other heuristics.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Symmetric traveling salesman problem; Velocity; Gravitational force; Newton’s law; Swapping

1. Scope and purpose

Many and varied are the applications of the STSP. To name a few: scheduling problems, vehicle routing problems [9], mixed Chinese postman problems, the integrated circuit board chip insertion problem [4], printed circuit board punching sequence problems [12] and a wing-nozzle design problem in aircraft design are worth mentioning. Although several heuristic procedures and branch and bound algorithms exist for it, this problem is still a difficult NP-complete problem. The main purpose of this paper is to enhance the GELS algorithm to a global algorithm by introducing a randomization concept in the available parameters of GELS and to show its effectiveness. A justification is also given for the improvement of GELS through the results, by solving STSPs of various sizes from 5 to 76. The improvement of GELS through randomization has also been

0096-3003/$ - see front matter © 2007 Elsevier Inc. All rights reserved.

doi:10.1016/j.amc.2007.03.019

* Corresponding author. E-mail addresses: [email protected] (S. Raja Balachandar), [email protected] (K. Kannan).

Applied Mathematics and Computation 192 (2007) 413–421



established by making a comparative study with other available heuristic algorithms. The table showing the results of 11 benchmark problems chosen (for brevity) from the TSPLIB library ensures the efficiency of RGES.

2. Introduction

2.1. NP complete problems

There is a class of problems which have not been theoretically proved to have exponential complexity, but for which no polynomial algorithms have been designed. Some of these problems (known as NP-complete problems) are: the traveling salesman problem, the problem of Hamiltonian paths, the knapsack problem, and the problem of optimal graph coloring. If a polynomial solution could be found for any of these problems, then all NP problems would have polynomial solutions. NP-complete problems are described in more detail in [8,10,11,13].

2.2. A brief survey of algorithms for solving traveling salesman problem

The traveling salesman problem is a well-known combinatorial optimization problem in which the salesman wants to minimize the total distance traveled to visit all cities in his visiting list.

Direct search algorithms of exponential complexity, which give optimal solutions but are applicable only for a small number of nodes, are available in the literature. Not many heuristic algorithms of polynomial complexity, which give near-optimal solutions but are applicable to a large number of nodes, are found in the literature. The symmetric version of the traveling salesman problem, in which the bidirectional distances between a pair of cities are identical, has been studied by many researchers, and the understanding of the polyhedral structure of the problem has made it possible to optimally solve large problems [10]. Nonetheless, efforts to develop good heuristic procedures have continued because of their practical importance in quickly obtaining approximate solutions of large problems. Heuristic methods such as the k-opt heuristic by Lin and Kernighan [11] and the Or-opt heuristic by Or [13] are well known for the symmetric traveling salesman problem. There have also been a few attempts to solve it by various meta-heuristics such as tabu search [8,19] and genetic algorithms [3,5,14,15,17]. Interestingly, Brest and Zerovnik [2] report that a heuristic algorithm based on 'arbitrary insertion' performs remarkably well on benchmark test problems in terms of both solution quality and computation time. Karp [7] also presents a heuristic patching algorithm and shows that the heuristic solution asymptotically converges to the optimal solution of the asymmetric TSP.

2.3. Optimum solution algorithms and heuristic methods

Backtracking can search all possible tours. This method has small memory requirements but very large execution time; for a 20-node graph, execution would take several millennia. Other well-known exact approaches in the literature are dynamic programming and the branch-and-bound method. A minimum-spanning-tree heuristic based on Prim's or Kruskal's algorithm gives an STSP solution no more than twice the optimal tour length; its applicability is limited to the Euclidean TSP, and it runs in polynomial time.
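The exhaustive search described above can be illustrated with a short Python sketch (a hedged illustration, not the paper's code): fixing the starting city, it enumerates every remaining permutation, so it examines (n-1)! tours and is usable only for very small n.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Exhaustively enumerate all tours starting at city 0 and return the shortest.

    dist: symmetric n x n distance matrix.  Runtime grows factorially with n,
    which is why exact enumeration is hopeless beyond a handful of cities.
    """
    n = len(dist)
    best_tour, best_len = None, float("inf")
    for perm in permutations(range(1, n)):  # fix city 0 to avoid counting rotations
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len
```

For the 4-city instance used in the test below, only 3! = 6 tours exist; at 20 cities the count is 19! (about 1.2 x 10^17), matching the "several millennia" remark above.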

This paper is organized as follows. In Section 3, the mathematical description of the STSP is discussed briefly. In Section 4, a brief description of the GELS algorithm proposed by Webster is provided. In Section 5, the RGES algorithm, based on GELS with the concept of randomization, is described, and a rationale for its anticipated good performance is provided. In Section 6, results of computational experiments are given: solution quality is tabulated, the efficiency of the algorithm is established through a comparative study, the parameters used in the various algorithms are tabulated, and the algorithm's features are elaborated. Finally, Section 7 presents merits and concluding remarks.

3. Description of the STSP

Suppose a salesman has to visit n cities. He wishes to start from a particular city, visit each city only once and then return to his starting point. Minimizing the total travelling distance by selecting a sequence of all n cities (the distance between cities i and j being the same as between j and i) is known as the symmetric travelling salesman problem.

414 S. Raja Balachandar, K. Kannan / Applied Mathematics and Computation 192 (2007) 413–421


Consider a symmetric traveling salesman problem with n cities to visit. Let Cij be the distance from city i to city j (Cij = Cji). Moreover, let xij be 1 if the salesman travels from city i to city j and 0 otherwise. Then, the symmetric traveling salesman problem can be formulated as follows:

\[
\min \sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}\, x_{ij}
\]

such that

\[
\sum_{i=1}^{n} x_{ij} = 1, \qquad j = 1,\ldots,n, \tag{1}
\]

\[
\sum_{j=1}^{n} x_{ij} = 1, \qquad i = 1,\ldots,n, \tag{2}
\]

\[
\sum_{i \in P}\, \sum_{j \in P,\, i \neq j} x_{ij} \le |P| - 1, \qquad \forall P \subset \{1,2,\ldots,n\},\ 2 \le |P| \le n-2 \quad \text{(subtour elimination constraint)}, \tag{3}
\]

\[
x_{ij} \in \{0,1\}, \qquad i = 1,\ldots,n,\ j = 1,\ldots,n. \tag{4}
\]
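The constraints of this formulation can be checked programmatically. The following Python sketch (the function name is an assumption for illustration) verifies that a 0-1 matrix x satisfies the degree constraints (1)-(2) and forms a single Hamiltonian cycle, i.e. contains no subtour of the kind ruled out by constraint (3):

```python
def is_valid_tour_matrix(x):
    """Check that a 0-1 matrix x encodes a single Hamiltonian tour:
    each city has exactly one outgoing and one incoming arc, and the
    arcs form one cycle covering all n cities (no subtours)."""
    n = len(x)
    # degree constraints (1) and (2): unit row and column sums
    if any(sum(row) != 1 for row in x):
        return False
    if any(sum(x[i][j] for i in range(n)) != 1 for j in range(n)):
        return False
    # follow successors from city 0; a single cycle must cover all n cities
    succ = [row.index(1) for row in x]
    seen, city = set(), 0
    while city not in seen:
        seen.add(city)
        city = succ[city]
    return len(seen) == n  # a subtour would visit fewer than n cities
```

The degree constraints alone admit assignments consisting of several disjoint cycles; the final cycle-following check is what the subtour elimination constraints (3) enforce in the integer program.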

4. Gravitational emulation local search (GELS) algorithm

4.1. Preliminary work done with GELS

For the study of GELS, a conceptual framework was first developed by Voudouris and Tsang [20], and that preliminary algorithm was called the gravitational local search algorithm (GLSA). As the abbreviation GLS is used in a number of places in the literature to refer to guided local search [20], this name was later changed to GELS. Two separate versions of GLSA were implemented in the C language, differing in two key respects. The first version, dubbed GLSA1, used as its heuristic Newton's equation for the gravitational force between two objects, and the pointer object moved through the search space one position at a time; the second, dubbed GLSA2, used as its heuristic Newton's method for gravitational field calculation, and the pointer object was allowed to move multiple positions at a time. These issues are elaborated precisely in [1].

Both procedures had operational parameters that the user could set to tune performance: density (DENS), drag (DRAG) [18], friction (FRIC), gravity (GRAV) [18], initial velocity (IVEL), iteration limit (ITER), mass (MASS), radius (RADI), silhouette (SILH) and threshold (THRE). Of these, GELS uses only four parameters, namely velocity, iteration limit, radius and direction of movement. Barry Lynn Webster [1] designed this robust algorithm (GELS) to overcome the following difficulties: (i) the increased number of iterations caused by the hypothetical search space, and (ii) poor solutions caused by random determination of the objective function values of neighboring solutions. Webster treated the objective function values of neighboring solutions as functions of earlier, randomly determined solutions; because of this, the objective function values of invalid solutions were tactically avoided from the beginning. He also removed the sensitive and redundant parameters of GLSA, because with these parameters finding the points of equilibrium was extremely difficult.

4.2. GELS algorithm

4.2.1. Parameters used in GELS algorithm

(a) Max velocity: defines the maximum value that any element of the velocity vector can have; used to prevent velocities that become too large to use.

(b) Radius: sets the radius value in the gravitational force formula; used to determine how quickly the gravitational force can increase or decrease.



(c) Iterations: defines the number of iterations of the algorithm that will be allowed to complete before it is automatically terminated (used to ensure that the algorithm terminates).

(d) Pointer: It is used to identify the direction of movement of the elements in the vectors.

4.2.2. Gravitational force (GF)

The GELS algorithm uses the following formula for the gravitational force between two solutions:

\[
F = \frac{G\,(\mathrm{CU} - \mathrm{CA})}{R^{2}},
\]

where

G = 6.672 (universal constant of gravitation) [18],
CU = objective function value of the current solution,
CA = objective function value of the candidate solution,
R = value of the radius parameter.
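Under the definitions above, the force between the current and a candidate solution can be computed directly. A minimal sketch (the function name is illustrative, not from the source):

```python
G = 6.672  # universal constant of gravitation as used by GELS [18]

def gravitational_force(cu, ca, radius):
    """F = G * (CU - CA) / R**2.

    For a minimization problem the force is positive when the candidate
    improves on (has a lower objective value than) the current solution,
    so better candidates exert an attractive pull.
    """
    return G * (cu - ca) / radius ** 2
```

Note that the radius parameter enters squared, which is why it controls how quickly the force grows or decays, as stated in the parameter list above.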

4.2.3. Webster's algorithm

B. L. Webster designed the GELS algorithms using two methods and two stepping modes, as follows:

GELS 11: (i) find the GF between a single solution and the current solution; (ii) movement within the current local search.

GELS 12: (i) find the GF between a single solution and the current solution; (ii) movement outside the neighbourhood.

GELS 21: (i) find the GF between each solution within the neighbourhood; (ii) movement within the current local search.

GELS 22: (i) find the GF between each solution within the neighbourhood; (ii) movement outside the neighbourhood.

The GELS algorithms have various features distinguishing them from other algorithms such as GA, SA and HC, in terms of search space, number of iterations, etc.

During the development of GELS, the settings of the aforesaid parameters were arrived at through trial and error; some settings caused repetitive visits within a neighbourhood, while others produced large values, causing the algorithm to behave erratically. After a number of tests, Webster settled on 10 for maximum velocity, 4 for radius and 10,000 for the iteration limit. After a number of tests we have settled the RGES algorithm with 7 for maximum velocity, 7 for radius and 1000 for iterations. The RGES algorithm and its explanation are given in Section 5.

4.3. Distinguishing properties of GELS algorithm

1. Introduction of the velocity vector and the relative gravitational force in each direction of movement emulates the acceleration of the process towards its goal.

2. Multiple stepping procedures help the solution pointer to indicate the next direction leading to other solutions.



3. The algorithm is designed to terminate on either of two conditions: all elements of the velocity vector have gone to zero, or the maximum allowable number of iterations has been completed.

5. Proposed ‘RGES’ algorithm

The RGES search technique starts out with a guess of one likely candidate, chosen at random in the search space. This candidate is called the current solution (CU). The RGES algorithm then generates a neighbourhood; the algorithm is given in Fig. 1. The basic mechanisms are similar to those of GELS in the construction of the neighbourhood.

For an n-city STSP, the cities are divided into l groups by means of the parameter Radius (R), with a random number ri assigned to each of the l groups such that ri ≤ R, i = 1 to l. Within group i, the remaining (R − ri) cities do not undergo any swapping; the number of cities in which swapping takes place is Σ ri, and the total number of candidate solutions in the neighbourhood is Σ ri. During swapping within a group, a number of solutions are generated and the objective values of the corresponding tours are computed.

The current solution for the next iteration is obtained by updating the objective function value as follows. Find the lowest objective value in the neighbourhood (the algorithm is given in Fig. 2), and update the velocity of each group of the current solution with the GF between the lowest objective value found (LOVF) and the objective value of the current solution (CS). If the LOVF found in the iteration is less than the CS, the LOVF solution is taken as the CS for the next iteration; if not, one candidate solution is chosen at random as the CS from the Σ ri candidate solutions, as discussed earlier. The algorithm is given in Fig. 3. We repeat the process until

Fig. 1. Neighboring points generation.

Fig. 2. Best candidate solution.



either the velocity vanishes or the number of iterations exceeds the assigned value. The complete algorithm for RGES is given in Fig. 4.
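The procedure described in this section can be summarized in the following hedged Python sketch. The group-swapping neighbourhood, the acceptance rule, and all names are simplified assumptions based on the description above, not the authors' exact code; in particular, the velocity-vector bookkeeping and the GF-weighted update are reduced here to a greedy acceptance with a randomized escape move.

```python
import random

def rges(dist, radius=7, iterations=1000, seed=0):
    """Randomized gravitational emulation search for the symmetric TSP (sketch).

    A current tour is perturbed group by group: the city positions are split
    into groups of size `radius`, and within each group up to r_i <= radius
    random position pairs are swapped to generate a candidate tour.  An
    improving candidate replaces the current tour; otherwise a random
    candidate is occasionally accepted to escape local optima.
    """
    rng = random.Random(seed)
    n = len(dist)

    def length(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    current = list(range(n))
    rng.shuffle(current)
    best, best_len = current[:], length(current)
    for _ in range(iterations):
        candidate = current[:]
        # swap r_i <= radius random position pairs inside each group
        for start in range(0, n, radius):
            group = list(range(start, min(start + radius, n)))
            for _ in range(rng.randint(1, radius)):
                i, j = rng.choice(group), rng.choice(group)
                candidate[i], candidate[j] = candidate[j], candidate[i]
        cand_len = length(candidate)
        if cand_len < best_len:          # attraction toward the better tour
            current, best, best_len = candidate, candidate[:], cand_len
        elif rng.random() < 0.1:         # randomized escape from local optima
            current = candidate
    return best, best_len
```

The randomized acceptance of non-improving candidates is what distinguishes this sketch from plain hill climbing and mirrors the randomization that Section 6 credits for RGES escaping local domains.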

6. Computational results

6.1. Experimental design

Fischetti et al. [6] describe a branch-and-cut algorithm for solving the symmetric generalized TSP, applying a partitioning method to 46 standard TSP instances from the TSPLIB library [16], since these provide an adequate description of the partitioning method and optimal objective values for each problem. Here, we applied our heuristic to eleven selected problems in order to assess the robustness of the proposed algorithm and to compare the results with the GELS algorithms proposed by Barry Webster. The results for problems of sizes 5, 6, 10, 11, 15, 20, 21, 24, 51 and 76 are shown in Table 1; solutions of problems of other sizes are omitted for brevity. The randomization procedure introduced into the GELS algorithm ensures the efficiency of the algorithm in handling STSP instances of sizes up to 76. For each problem we ran RGES 5 times to examine the algorithm's performance and its variation from trial to trial. The algorithm was implemented in C++ and tested on a Pentium IV, 3.2 GHz processor with 256 MB RAM running Windows XP.

Fig. 3. Parameters and current solution updating.

Fig. 4. Complete RGES algorithm.



6.2. Solution quality

Table 1 summarizes the results for each solved problem. The columns in the table are as follows:

Problem: the name of the test problem; the digits at the beginning of the name give the number of clusters (m) and those at the end give the number of nodes (n).

Opt obj value: the optimal objective values for the problems reported here are taken from TSPLIB [16]; some have been obtained through direct search.

#Opt: the number of trials out of 5 in which RGES found the optimum solution.

#Best: the number of trials out of 5 in which RGES found its best solution (for problems in which the optimal solution was found, this equals the #opt column).

Mean, Min, Max: the mean, minimum and maximum objective values returned in the 5 trials (the value columns) and the respective percentages above the optimal value (the pct columns).

RGES found optimal solutions in at least one of the 5 trials for all 11 tested problems. For 9 (81%) of the problems, RGES found the optimal solution in every trial. Of the 11 problems solved, only in one case (Eil76) did it return a solution more than 4% above the optimal. The heuristic returns consistent solutions for smaller problems; for larger problems it returns solutions that vary slightly but remain close to each other in objective value.

6.3. Comparison with other algorithms

In order to bring out the efficiency of the proposed RGES algorithm, the same set of 11 benchmark problems was solved by several other algorithms reported in the literature (SA, HC, GA, GELS11, GELS12, GELS21, GELS22), coded in the C language. The results for these problems under the various categories are shown in Table 3. The parameter values used in each algorithm to solve these problems are presented in Table 2.

To compare the results of the RGES algorithm with the other algorithms and to test its efficiency, the number of iterations is fixed at 1000 and the number of trials at 5, and only the best solutions out of 5 are taken into account. Based on the data from the experiment and the solutions in Table 3, a number of observations can be made: For sample

Table 1
RGES results

Problem      Opt obj value  #opt  #best  Mean (pct %)   Min (pct %)    Max (pct %)
5sample5         21          5     5       21 (0.0)       21 (0.0)       21 (0.0)
6sample6         56          5     5       56 (0.0)       56 (0.0)       56 (0.0)
10sample10      323          5     5      323 (0.0)      323 (0.0)      323 (0.0)
10sample10      725          5     5      725 (0.0)      725 (0.0)      725 (0.0)
11eil51         174          5     5      174 (0.0)      174 (0.0)      174 (0.0)
16eil76         209          5     5      209 (0.0)      209 (0.0)      209 (0.0)
20kroa100      9711          5     5     9711 (0.0)     9711 (0.0)     9711 (0.0)
Gr21           2707          5     5     2707 (0.0)     2707 (0.0)     2707 (0.0)
Gr24           1272          5     5     1272 (0.0)     1272 (0.0)     1272 (0.0)
Eil51           426          4     4      426 (0.0)      426 (0.0)      434 (1.8)
Eil76           538          3     4      538 (0.0)      538 (0.0)      560 (4.1)

Objective values; no. of runs = 5; max no. of iterations = 1000.

Table 2
Parameters set for various methods

Methods  Parameters
GA       Population size = 10, crossover (1-point) = 0.9, mutation (1-point) = 0.05, iterations = 1000
SA       Temp = 2000, reconfig interval = 10, attempt interval = 10, annealing rate = 0.01, threshold = 0.01
GELS     Radius = 10, velocity = 4, iterations = 1000
RGES     Radius = 7, velocity = 7, iterations = 1000



problems of size 5 and 6, all algorithms perform alike; when the problem size is 10, the algorithms' efficiencies differ and the hill climbing algorithm gives the best and optimal solutions; when the problem size is greater than 11, the other algorithms provide only better or near-optimal solutions. The RGES algorithm, however, maintains both optimality and solution quality.

Hence the introduction of the randomization concept into the GELS algorithm is found to be advantageous in two ways.

1. Randomization enables the algorithm to escape from the local search and paves a way to global search with the help of the established GELS.

2. For all problems up to size 76, RGES provides the best and optimum solutions.

6.4. Contribution of algorithm features

Examining the relative contribution of the features of the RGES algorithm to its overall performance, we find that swapping in terms of groups using random numbers is effective in turning the local search into a global search, whereas GELS, with its two-by-two swapping, gets entangled in the neighbourhood of the current solution and is unable to escape from the local domain. We also find that RGES exhibits excellent performance over GELS in obtaining optimum solutions even for larger problems, at the same computational cost as GELS.

In our algorithm, if R is constant then there are O(n/R) groups, so the number of candidate solutions appears to be in O(R^{n/R}) and hence to grow exponentially. But since R is chosen as a·n for some finite a, O(R^{n/R}) becomes O(R^{1/a}) for finite n; hence RGES is a polynomial-time algorithm.

7. Conclusion

Webster used two stepping modes and two methods to enhance the gravitational emulation properties of a local search algorithm so that it could compete with other algorithms in the literature in providing good solutions, though without focusing on optimality. The proposed algorithm is capable of obtaining optimal solutions without affecting the quality of GELS, and also paves a way to global search through randomization. In our attempt we have solved 11 benchmark problems up to size 76. Attempts to design RGES-like algorithms for solving the asymmetric TSP are ongoing.

Acknowledgement

The authors express their gratitude to Barry Lynn Webster, Florida Institute of Technology, Melbourne, USA, for his prime directions and help.

Table 3
RGES versus other algorithms

Problem     Optimum   Simulated  Hill      Genetic    GELS11  GELS12  GELS21  GELS22  RGES
            obj value annealing  climbing  algorithm
5sample1       21        21        21        21          21      21      21      21     21
6sample2       56        56        56        56          56      56      56      56     56
10sample3     323       324       323       324         359     343     357     329    323
10sample4     725       742       725       761         805     754     786     725    725
11eil51       174       177       195       227         195     176     193     183    174
16eil76       209       255       247       255         265     265     255     255    209
20kroa100    9711     11792     11723     11792       11792   11792   11723   11723   9711
Gr21         2707      3333      3303      3333        3333    3333    3303    3303   2707
Gr24         1272      1553      1423      1528        1553    1553    1472    1472   1272
Eil51         426       497       495       497         497     497     496     496    426
Eil76         538       661       651       661         657     661     657     657    538

No. of iterations = 1000; no. of trials = 5; best solutions out of 5.



References

[1] B. L. Webster, Solving Combinatorial Optimization Problems Using a New Algorithm Based on Gravitational Attraction, Ph.D. thesis, Florida Institute of Technology, Melbourne, May 2004.
[2] J. Brest, J. Zerovnik, An approximation algorithm for the asymmetric traveling salesman problem, Ricerca Operativa 28 (1998) 59–67.
[3] T. Bui, B. Moon, A new genetic approach for the traveling salesman problem, Proceedings of the First IEEE Conference on Evolutionary Computation 2 (1994) 7–13.
[4] D. Chan, D. Mercier, IC insertion: an application of the traveling salesman problem, International Journal of Production Research 27 (1989) 1837–1841.
[5] S. Chatterjee, C. Carrera, L. Lynch, Genetic algorithms and traveling salesman problems, European Journal of Operational Research 93 (1996) 490–510.
[6] M. Fischetti, J. J. Salazar-González, P. Toth, A branch-and-cut algorithm for the symmetric generalized traveling salesman problem, Operations Research 45 (3) (1997) 378–394.
[7] R. Karp, A patching algorithm for the nonsymmetric traveling-salesman problem, SIAM Journal on Computing 8 (1979) 561–573.
[8] J. Knox, Tabu search performance on the symmetric traveling salesman problem, Computers and Operations Research 21 (1994) 867–876.
[9] G. Laporte, The vehicle routing problem: an overview of exact and approximate algorithms, European Journal of Operational Research 59 (1992) 345–358.
[10] E. Lawler, J. Lenstra, A. Rinnooy Kan, D. Shmoys, The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization, Wiley, New York, 1985.
[11] S. Lin, B. Kernighan, An effective heuristic algorithm for the traveling salesman problem, Operations Research 21 (1973) 498–516.
[12] J. Litke, An improved solution to the traveling salesman problem with thousands of nodes, Communications of the ACM 27 (1984) 1227–1236.
[13] I. Or, Traveling Salesman-type Combinatorial Problems and their Relation to the Logistics of Regional Blood Banking, Ph.D. thesis, Northwestern University, Evanston, IL, 1976.
[14] P. Poon, J. Carter, Genetic algorithm crossover operations for ordering applications, Computers and Operations Research 22 (1995) 135–147.
[15] J. Potvin, Genetic algorithms for the traveling salesman problem, Annals of Operations Research 63 (1996) 339–370.
[16] G. Reinelt, TSPLIB – a traveling salesman problem library, ORSA Journal on Computing 4 (1996) 134–143.
[17] L. Schmitt, M. Amini, Performance characteristics of alternative genetic algorithmic approaches to the traveling salesman problem using path representation: an empirical study, European Journal of Operational Research 108 (1998) 551–570.
[18] F. W. Sears, M. W. Zemansky, H. D. Young, University Physics, seventh ed., Addison-Wesley, Reading, MA, 1987.
[19] S. Tsubakitani, J. Evans, Optimizing tabu list size for the traveling salesman problem, Computers and Operations Research 25 (1998) 91–97.
[20] C. Voudouris, E. Tsang, Guided Local Search, Technical Report CSM-247, Department of Computer Science, University of Essex, UK, August 1995.
