Time-Constrained Project Scheduling


T.A. Guldemond, ORTEC bv, PO Box 490, 2800 AL Gouda, The Netherlands

J.L. Hurink, J.J. Paulus, and J.M.J. Schutten, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands, [email protected]

We study the Time-Constrained Project Scheduling Problem (TCPSP), in which the scheduling of activities is subject to strict deadlines. To be able to meet these deadlines, it is possible to work in overtime or hire additional capacity in regular time or overtime. For this problem, we develop a two stage heuristic. The key of our approach lies in the first stage, in which we construct partial schedules with a randomized sampling technique. In these partial schedules, jobs may be scheduled for a shorter duration than required. The second stage uses an ILP formulation of the problem to turn a partial schedule into a feasible schedule, and to perform a neighbourhood search. The developed heuristic is quite flexible and, therefore, suitable for practice. We present experimental results on modified RCPSP benchmark instances. The two stage heuristic solves many instances to optimality, and if we substantially decrease the deadline, the rise in cost is only small.

    1. Introduction

The problem addressed is a project scheduling problem with strict deadlines on the activities, which we call the Time-Constrained Project Scheduling Problem (TCPSP). In many project scheduling problems from practice, deadlines occur, and in order to meet these deadlines, different ways to speed up the project are given, e.g., by working in overtime, hiring extra resource capacity, or even outsourcing (parts of) the project. All these options are costly but often not avoidable. Thus, the question arises how much, when, and what kind of extra capacity should be used to meet the deadlines against minimum cost.

The TCPSP is a variant of the well studied RCPSP (Resource-Constrained Project Scheduling Problem). For an overview of the literature on the RCPSP see, e.g., Herroelen et al. (1998), Kolisch and Hartmann (1999), and Kolisch and Padman (2001). However, there are fundamental differences between the time-constrained and the resource-constrained variant. In the first, the deadlines cannot be exceeded and resource profiles may be changed, whereas in the second, the resource availability cannot be exceeded. Moreover, in the TCPSP, a non-regular objective function is considered. Therefore, most existing solution techniques for the RCPSP are not suitable for the TCPSP. Although the TCPSP occurs often in practice, it has been considered only rarely in the literature. Deckro and Herbert (1989) give an ILP formulation for the TCPSP and discuss the concept of project crashing. In project crashing, it is possible to decrease the processing time of an activity by using more resources, see also Kis (2005) and Li and Willis (1993). In practice, however, there are only limited possibilities to crash an activity, and these possibilities can therefore be captured in the concept of multi-mode scheduling. Kolisch (1995) discusses a heuristic procedure for the TCPSP with limited hiring. None of the literature on the TCPSP considers the concept of overtime.

The TCPSP has, besides the relation to the RCPSP, some relation to the time driven rough-cut capacity planning problem, see Gademann and Schutten (2005). The rough-cut capacity planning problem is about finding the appropriate capacity level on a tactical decision level, while the TCPSP is on an operational decision level.

In this paper, we fill a gap in the literature on the TCPSP.


To the best of our knowledge, this is the first time that the concept of overtime and multiple forms of irregular capacity are included in the modelling of the TCPSP.

The outline of the paper is as follows. Section 2 gives a detailed problem description and presents two ILPs to model the TCPSP. The first model takes into account only hiring in regular time. The second model also deals with working and hiring in overtime. To solve the TCPSP, we develop a two stage heuristic in Section 3. The first stage of the heuristic constructs a partial schedule. The key of the approach lies in this first stage, where jobs may be scheduled for a shorter duration than required. The second stage turns the partial schedule into a feasible schedule. The goal of this two stage approach is to spread the usage of irregular capacity evenly over the planning horizon. Section 4 concerns the computational results. The test instances used are RCPSP benchmark instances from the PSPlib, see Kolisch and Sprecher (1997b), that are modified into TCPSP instances. Section 5 concludes this paper.

    2. Problem description and ILP formulations

In this section, we describe the Time-Constrained Project Scheduling Problem (TCPSP) and formulate it as an integer linear program (ILP). First, we consider the TCPSP with hiring in regular time only, to get a thorough understanding of the problem. Second, we consider the TCPSP with working in overtime, and hiring in regular time and in overtime. Due to the complexity of the problem, we cannot expect to solve the TCPSP via this ILP model. However, we present the ILPs since the heuristic presented in Section 3 makes use of the ILP formulation to get a feasible solution and to perform a neighbourhood search.

    2.1. TCPSP with hiring in regular time

For the TCPSP, a set of $n$ jobs, $\{J_1,\dots,J_n\}$, each job $J_j$ with a release date $r_j$ and a deadline $d_j$, has to be scheduled without preemption on a time horizon $[0, T]$. This time horizon is divided into $T$ time buckets, $t = 0,\dots,T-1$, where time bucket $t$ represents the time interval $[t, t+1]$. (In the following, we will use $T$ to refer to both the set of time buckets and the horizon $[0, T]$.) The release date $r_j$ gives the first time bucket in which job $J_j$ is allowed to be processed, and its processing has to be finished before $d_j$, i.e., $d_j - 1$ is the last time bucket in which it is allowed to work on job $J_j$. For processing the jobs, a set of $K$ resources, $\{R_1,\dots,R_K\}$, is available, where resource $R_k$ has a capacity $Q_{kt}$ in time bucket $t$. To hire one extra unit of resource $R_k$ in time bucket $t$, an amount $c^H_{kt}$ has to be paid. The processing of the jobs is restricted by precedence relations, which are given by sets of jobs $P_j$, denoting all direct predecessors of job $J_j$. These precedence relations have non-negative time lags $l_{ij}$, indicating that the start of job $J_j$ is at least $l_{ij}$ time buckets later than the completion of job $J_i$. For each job $J_j$, a set $M_j$ of different execution modes is given. Each mode $m \in M_j$ has a specified processing time $p_{jm}$, and during the processing of job $J_j$ in mode $m$ it requires $q_{jmk}$ units of resource $R_k$.

To model this variant of the TCPSP, we employ one type of binary decision variables $x_{jmt}$ that are equal to 1 if job $J_j$ is being processed in time bucket $t$ in mode $m$. To formulate the problem as an ILP, we use three types of variables that can be deduced from the variables $x_{jmt}$. First, we use binary variables $\delta_{jm}$ that are equal to 1 if job $J_j$ is processed in mode $m$. Second, we use binary variables $s_{jmt}$ that are equal to 1 if time bucket $t$ is the first time bucket in which job $J_j$ is being processed and mode $m$ is used to perform it. Finally, the nonnegative variables $H_{kt}$ represent the amount of capacity hired of resource $R_k$ in time bucket $t$. Now the TCPSP with hiring in regular time can be modelled by the following ILP:

minimize:

    $\sum_{k=1}^{K} \sum_{t=0}^{T-1} c^H_{kt}\, H_{kt}$    (1)


subject to:

    $\sum_{m \in M_j} \delta_{jm} = 1$    $\forall j$    (2)

    $\sum_{t=r_j}^{d_j-1} x_{jmt} = p_{jm}\,\delta_{jm}$    $\forall j,\ m \in M_j$    (3)

    $\sum_{j=1}^{n} \sum_{m \in M_j} q_{jmk}\, x_{jmt} \le Q_{kt} + H_{kt}$    $\forall k,\ t$    (4)

    $\sum_{m \in M_j} \sum_{t=r_j}^{d_j - p_{jm}} s_{jmt} = 1$    $\forall j$    (5)

    $x_{j,m,0} = s_{j,m,0}$    $\forall j,\ m \in M_j$    (6)

    $x_{jmt} \le x_{j,m,t-1} + s_{jmt}$    $\forall j,\ m \in M_j,\ t > 0$    (7)

    $\sum_{m \in M_i} \sum_{t=r_i}^{d_i - p_{im}} t\, s_{imt} + \sum_{m \in M_i} p_{im}\,\delta_{im} + l_{ij} \le \sum_{m \in M_j} \sum_{t=r_j}^{d_j - p_{jm}} t\, s_{jmt}$    $\forall j,\ J_i \in P_j$    (8)

    $s_{jmt} = 0$    $\forall t \notin \{r_j,\dots,d_j - p_{jm}\}$    (9)

    $x_{jmt} = 0$    $\forall t \notin \{r_j,\dots,d_j - 1\}$    (10)

    $\delta_{jm} \in \{0,1\}$    $\forall j,\ m \in M_j$    (11)

    $x_{jmt} \in \{0,1\}$    $\forall j,\ m \in M_j,\ t$    (12)

    $s_{jmt} \in \{0,1\}$    $\forall j,\ m \in M_j,\ t$    (13)

    $H_{kt} \ge 0$    $\forall k,\ t$    (14)

The objective function (1) minimizes the total hiring costs. Due to constraint (2), exactly one mode is selected for each job. Constraint (3) ensures that each job is processed for the required duration between the release date and the deadline, and that the variables $\delta_{jm}$ and $x_{jmt}$ are consistent. Constraint (4) ensures that the amount of required resource does not exceed the amount of available resource, using regular capacity and hiring. Constraints (5), (6), and (7) guarantee the non-preemption requirement, and that the variables $s_{jmt}$ and $x_{jmt}$ are consistent. Each job $J_j$ starts exactly once in one of its modes, and the processing of job $J_j$ in time bucket $t$ is only allowed if job $J_j$ is being processed in time bucket $t-1$ or it starts in time bucket $t$. The precedence relations with time lags are managed in constraint (8); the start time of job $J_i$ plus its processing time and the time lag $l_{ij}$ is less than or equal to the start time of job $J_j$, whenever job $J_i$ is a predecessor of job $J_j$. Constraints (9) and (10) put all non-relevant $s_{jmt}$ and $x_{jmt}$ variables to zero. Finally, constraints (11) to (14) define the domain of the variables.
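To make the mapping from formulation (1)-(14) to a solver concrete, the following is a minimal sketch in Python using the PuLP modelling library (with its bundled CBC solver) on a tiny, purely hypothetical two-job instance. The data and names are ours and only illustrate how the variables $\delta_{jm}$, $x_{jmt}$, $s_{jmt}$, $H_{kt}$ and the constraints above can be written down; it is not the authors' implementation (Section 4.3 reports that Delphi and CPLEX were used).

```python
# Sketch of the TCPSP-with-hiring ILP (1)-(14) in PuLP on a hypothetical instance.
import pulp

T = 6                                        # time buckets 0..T-1
jobs = {                                     # job -> (release, deadline, modes)
    "J1": (0, 4, {"m1": {"p": 2, "q": {"R1": 2}}}),
    "J2": (0, 6, {"m1": {"p": 3, "q": {"R1": 1}}}),
}
pred = {"J1": [], "J2": ["J1"]}              # direct predecessors
lag = {("J1", "J2"): 0}                      # time lags l_ij
Q = {"R1": [2] * T}                          # regular capacity Q_kt
c_hire = {"R1": [1.0] * T}                   # hiring cost c^H_kt

prob = pulp.LpProblem("TCPSP_hiring", pulp.LpMinimize)
delta, x, s = {}, {}, {}
for j, (r, d, modes) in jobs.items():
    for m in modes:
        delta[j, m] = pulp.LpVariable(f"delta_{j}_{m}", cat="Binary")
        for t in range(T):
            x[j, m, t] = pulp.LpVariable(f"x_{j}_{m}_{t}", cat="Binary")
            s[j, m, t] = pulp.LpVariable(f"s_{j}_{m}_{t}", cat="Binary")
H = {(k, t): pulp.LpVariable(f"H_{k}_{t}", lowBound=0) for k in Q for t in range(T)}

# (1) minimise the total hiring cost.
prob += pulp.lpSum(c_hire[k][t] * H[k, t] for k in Q for t in range(T))

for j, (r, d, modes) in jobs.items():
    prob += pulp.lpSum(delta[j, m] for m in modes) == 1                    # (2)
    prob += pulp.lpSum(s[j, m, t] for m in modes for t in range(T)) == 1   # (5)
    for m, data in modes.items():
        p = data["p"]
        prob += pulp.lpSum(x[j, m, t] for t in range(r, d)) == p * delta[j, m]  # (3)
        prob += x[j, m, 0] == s[j, m, 0]                                   # (6)
        for t in range(1, T):
            prob += x[j, m, t] <= x[j, m, t - 1] + s[j, m, t]              # (7)
        for t in range(T):                                                 # (9), (10)
            if not (r <= t <= d - p):
                prob += s[j, m, t] == 0
            if not (r <= t <= d - 1):
                prob += x[j, m, t] == 0

# (4) resource usage is covered by regular capacity plus hiring.
for k in Q:
    for t in range(T):
        prob += pulp.lpSum(data["q"].get(k, 0) * x[j, m, t]
                           for j, (_, _, modes) in jobs.items()
                           for m, data in modes.items()) <= Q[k][t] + H[k, t]

# (8) precedence with time lags: start of J_j after completion of J_i plus lag.
def start_expr(a):
    return pulp.lpSum(t * s[a, m, t] for m in jobs[a][2] for t in range(T))

for j in jobs:
    for i in pred[j]:
        p_i = pulp.lpSum(jobs[i][2][m]["p"] * delta[i, m] for m in jobs[i][2])
        prob += start_expr(i) + p_i + lag[(i, j)] <= start_expr(j)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

On this toy instance the model is feasible without hiring, so the reported objective is 0; the point is only the one-to-one correspondence between the code and constraints (1) to (14).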

    2.2. TCPSP with working in overtime, and hiring in regular time and in overtime

In this section, we extend the model of the previous section such that it can handle the TCPSP with working in overtime, and hiring in regular time and in overtime. This extension requires extra effort, due to the different usage of time buckets. A time bucket in overtime has different properties than a regular time bucket during a work day. Furthermore, we have to handle the following two requirements. As before, preemption is not allowed during regular time. However, it is allowed to work (without gaps) in some of the time buckets in overtime, then stop and continue the work in the next regular time bucket or in a subsequent block of overtime, if no regular time is in between these two blocks of overtime. The second requirement is on working in overtime. If personnel work in one time bucket in overtime, they have to work from the beginning of that overtime block until that time bucket.

To cope with the modified non-preemption requirement, we take a different view on the time buckets. We define one chain of regular time buckets $T^0 = \{t^0_1,\dots,t^0_{N_0}\}$, and $L$ chains of time buckets that are available for working in overtime, $T^l = \{t^l_1,\dots,t^l_{N_l}\}$, $l = 1,\dots,L$. For each chain of overtime time buckets $T^l$, there is an index $\sigma_l \in \{1,\dots,N_0\}$ that indicates the last regular time bucket ($t^0_{\sigma_l} \in T^0$) before the start of the chain $T^l$. Furthermore, we assume that the chains do not overlap in time and that the overtime chain $T^{l+1}$ is later in time than the overtime chain $T^l$. If the first time bucket is not regular time but overtime, we introduce an artificial regular time bucket to start with. This way we have the time horizon $[0, \sum_{l=0}^{L} N_l]$, and the corresponding set of time buckets $T$ is the union of all chains, i.e., $T = \bigcup_{l=0}^{L} T^l$. As a consequence, each time bucket $t \in T$ belongs to one unique chain $T^l$, $l \in \{0,\dots,L\}$. Due to the above mentioned constraints, the set of time buckets itself is also a chain, so we can compare each pair of time buckets, no matter whether these time buckets are in regular time or overtime. The modified non-preemption requirement can now be stated as follows. If a job is processed in two time buckets of a chain $T^l$, it is also processed in all time buckets in between these two time buckets, and if a job is processed in two different overtime chains $T^k$ and $T^l$, $1 \le k < l \le L$, then this job has to be processed in all regular time buckets $\{t^0_{\sigma_k+1}, t^0_{\sigma_k+2},\dots,t^0_{\sigma_l}\} \subseteq T^0$. Note that it is possible that certain regular time buckets are followed by more than one chain of overtime time buckets.

Consider the following example. For processing, we have 8 regular hours available on Friday and Monday, and 4 overtime hours on Friday evening, 8 on Saturday, and again 4 on Monday evening. This means that $T^0 = \{Fr_1,\dots,Fr_8, Mo_1,\dots,Mo_8\}$, $T^1 = \{Fr_9,\dots,Fr_{12}\}$, $T^2 = \{Sa_1,\dots,Sa_8\}$, and $T^3 = \{Mo_9,\dots,Mo_{12}\}$. Furthermore, $t^0_{\sigma_1} = t^0_{\sigma_2} = Fr_8$ and $t^0_{\sigma_3} = Mo_8$. Figure 1 illustrates this example.

Figure 1. Chains of time buckets: the regular chain $T^0$ ($Fr_1,\dots,Fr_8, Mo_1,\dots,Mo_8$), the overtime chains $T^1$ ($Fr_9,\dots,Fr_{12}$), $T^2$ ($Sa_1,\dots,Sa_8$), $T^3$ ($Mo_9,\dots,Mo_{12}$), and the combined chain $T$.
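The chain structure of this example can be made concrete with a small sketch; the list and dictionary names below (the regular chain, the overtime chains, and the index $\sigma_l$ of the last regular bucket before each chain) are our own encoding, chosen only to mirror the definitions above.

```python
# The chains of Figure 1: regular buckets T0, overtime chains T^1..T^3, and
# sigma[l], the (1-based) position in T0 of the last regular bucket before chain l.
T0 = [f"Fr{h}" for h in range(1, 9)] + [f"Mo{h}" for h in range(1, 9)]
T_over = {
    1: [f"Fr{h}" for h in range(9, 13)],   # Friday evening
    2: [f"Sa{h}" for h in range(1, 9)],    # Saturday
    3: [f"Mo{h}" for h in range(9, 13)],   # Monday evening
}
sigma = {1: 8, 2: 8, 3: 16}                # Fr8, Fr8 and Mo8, respectively

# The combined, totally ordered set of buckets T: after each regular bucket
# t0_h we append every overtime chain l with sigma[l] == h.
T_all = []
for h, bucket in enumerate(T0, start=1):
    T_all.append(bucket)
    for l, chain in sorted(T_over.items()):
        if sigma[l] == h:
            T_all.extend(chain)

print(T_all)   # Fr1..Fr8, Fr9..Fr12, Sa1..Sa8, Mo1..Mo8, Mo9..Mo12
```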

Within this new concept of time chains, using general time lags for the precedence relations leads to problems. They are not properly defined anymore, since it is not clear if the time lags only refer to regular time buckets or also to overtime time buckets. Consider the following example. If the time lag is a consequence of a lab test that has to be done between two processes, the opening hours of the lab determine to which time buckets the time lag applies. To overcome this problem, it is possible to introduce a dummy job and a dummy resource for each time lag. With the proper resource requirements for each dummy job, time lags can have any desired property with respect to regular and overtime time buckets. Therefore, for the problem considered in this subsection, we restrict ourselves to standard precedence relations, indicating that a job $J_j$ can only start after the completion of job $J_i$, i.e., we assume that the corresponding $l_{ij}$ values are all 0.

We are now ready to model the problem with working in overtime, and hiring in regular time and in overtime. In addition to the parameters already used in the previous model, we have costs $c^O_{kt}$ and $c^{HO}_{kt}$ for one extra unit of resource $R_k$ in time bucket $t$ obtained by working in overtime and by hiring in overtime, respectively. To model the different types of capacities, we introduce variables $O_{kt}$ and $HO_{kt}$ representing the amount of capacity made available through working in overtime and hiring in overtime, respectively, for resource $R_k$ in time bucket $t$. For ease of notation, we define $p^{max}_j := \max_{m \in M_j} p_{jm}$.

The TCPSP with working in overtime, and hiring in regular time and overtime, can be modelled by the following ILP:

minimize:

    $\sum_{k=1}^{K} \left[ \sum_{t \in T^0} c^H_{kt}\, H_{kt} + \sum_{t \in T \setminus T^0} \left( c^O_{kt}\, O_{kt} + c^{HO}_{kt}\, HO_{kt} \right) \right]$    (15)

subject to:

    $\sum_{m \in M_j} \delta_{jm} = 1$    $\forall j$    (16)

    $\sum_{t=r_j}^{d_j-1} x_{jmt} = p_{jm}\,\delta_{jm}$    $\forall j,\ m \in M_j$    (17)

    $\sum_{j=1}^{n} \sum_{m \in M_j} q_{jmk}\, x_{jmt} \le Q_{kt} + H_{kt}$    $\forall k,\ t \in T^0$    (18)

    $\sum_{j=1}^{n} \sum_{m \in M_j} q_{jmk}\, x_{jmt} \le O_{kt} + HO_{kt}$    $\forall k,\ t \in T \setminus T^0$    (19)

    $O_{k,t^l_1} \le Q_{k,t^0_{\sigma_l}}$    $\forall l \ge 1,\ k$    (20)

    $O_{k,t^l_h} \le O_{k,t^l_{h-1}}$    $\forall l \ge 1,\ h > 1,\ k$    (21)

    $\sum_{m \in M_j} \sum_{t=r_j}^{d_j - p_{jm}} s_{jmt} = 1$    $\forall j$    (22)

    $x_{j,m,t^0_1} = s_{j,m,t^0_1}$    $\forall j,\ m \in M_j$    (23)

    $x_{j,m,t^0_h} \le x_{j,m,t^0_{h-1}} + s_{j,m,t^0_h} + \sum_{t \in \bigcup\{T^l \mid \sigma_l = h-1\}} s_{j,m,t}$    $\forall j,\ m \in M_j,\ h > 1$    (24)

    $x_{j,m,t^l_1} \le x_{j,m,t^0_{\sigma_l}} + s_{j,m,t^l_1} + \sum_{t \in \{t' \in T^{l'} \mid \sigma_{l'} = \sigma_l,\ t' < t^l_1\}} s_{j,m,t}$    $\forall l \ge 1,\ j,\ m \in M_j$    (25)

    $x_{j,m,t^l_h} \le x_{j,m,t^l_{h-1}} + s_{j,m,t^l_h}$    $\forall h > 1,\ l \ge 1,\ j,\ m \in M_j$    (26)

    $p^{max}_i \left( 1 - \sum_{m \in M_j} s_{jmt} \right) \ge \sum_{t' \ge t} \sum_{m \in M_i} x_{imt'}$    $\forall t \in T,\ J_j,\ J_i \in P_j$    (27)

    $s_{jmt} = 0$    $\forall t \notin \{r_j,\dots,d_j - p_{jm}\}$    (28)

    $x_{jmt} = 0$    $\forall t \notin \{r_j,\dots,d_j - 1\}$    (29)

    $\delta_{jm} \in \{0,1\}$    $\forall j,\ m \in M_j$    (30)

    $x_{jmt} \in \{0,1\}$    $\forall j,\ m \in M_j,\ t \in T$    (31)

    $s_{jmt} \in \{0,1\}$    $\forall j,\ m \in M_j,\ t \in T$    (32)

    $H_{kt} \ge 0$    $\forall k,\ t \in T^0$    (33)

    $O_{kt} \ge 0$    $\forall k,\ t \in T \setminus T^0$    (34)

    $HO_{kt} \ge 0$    $\forall k,\ t \in T \setminus T^0$    (35)

The objective function (15) minimizes the total weighted costs. Constraints (16), (17), and (18) are equivalent to (2), (3), and (4), describing the mode selection, the processing time, and the resource requirement, respectively. In addition to the resource requirement constraint (18), we have constraint (19) for the resource requirement in overtime time buckets. The resource used in an overtime time bucket may not exceed the amount that is available through working in overtime and hiring in overtime. Constraints (20) and (21) force the amount of work in overtime to be decreasing in each chain of overtime, ensuring that if personnel work in one bucket in overtime, they were also available in the previous time bucket of that overtime chain and, thus, work in overtime from the start of that overtime sequence. Constraint (22) ensures that each job starts exactly once in one of the available modes. The modified non-preemption requirement is specified in constraints (23) to (26). In constraint (23), we guarantee that job $J_j$ is processed in mode $m$ in the first time bucket of $T^0$ if it also starts in that time bucket and in that mode. For the other regular time buckets, constraint (24) ensures that job $J_j$ is processed in mode $m$ if it is processed in the previous regular time bucket, or starts in this regular time bucket, or starts in an overtime chain directly succeeding the previous regular time bucket. For processing in overtime time buckets, constraint (25) states that we are allowed to work on a job $J_j$ in mode $m$ in the first time bucket of an overtime chain if we work on it in the last regular time bucket, or we start the job in this time bucket, or the job starts in an overtime time bucket which succeeds the last regular time bucket but precedes this overtime time bucket. Constraint (26) states that it is only allowed to work on a job $J_j$ in mode $m$ in a time bucket that is not the start of an overtime chain, if that is also done in the previous overtime time bucket or the job starts in this overtime time bucket. The precedence relations are managed in constraint (27). If job $J_j$ starts in time bucket $t$, then the left hand side of the constraint becomes zero, implying that job $J_i$ cannot be processed in time bucket $t$ or later, i.e., job $J_i$ is already finished before $t$. On the other hand, if job $J_j$ does not start in time bucket $t$, constraint (27) gives no restriction. Constraints (28) and (29) put all non-relevant $s_{jmt}$ and $x_{jmt}$ variables to zero. Finally, constraints (30) to (35) define the domain of the variables.

The presented model can not only deal with working in overtime, and hiring in regular time and overtime, but also with the possibility to outsource. Outsourcing can be included by introducing a mode for each job which represents the outsourcing. We introduce for such an outsource mode a processing time and a resource requirement for an artificial resource which has to be hired. The processing time of this mode and the cost for hiring correspond to the outsourcing.

    3. Solution Approach

In the previous section, we presented two ILPs, one to model the TCPSP with hiring and one to model the TCPSP with working in overtime, and hiring in regular time and in overtime. Since both of these problems are NP-hard, see Neumann et al. (2002), we cannot expect to solve large instances with the ILP formulations. Therefore, we present a heuristic approach in this section. We focus on the second problem, which takes into account all possibilities for extra capacity. Before presenting our two stage heuristic, we indicate which difficulties one encounters when attacking the TCPSP and thereby motivate our approach.

    3.1. Motivation

When scheduling jobs, it is preferable to use as little work in overtime, and hiring in regular time and in overtime, as possible, since these add to the costs. A standard serial planning heuristic starts by scheduling jobs completely, using only regular time and available resources as long as possible. Only if a job would miss its deadline, working in overtime and hiring capacity come into the picture. In our experience, using such a strategy results in bad solutions, if we get a solution at all. The reason for this is that bottlenecks get shifted towards the end of the horizon. However, the best solution might be to hire a small amount at the beginning to avoid costly situations when fitting in the last jobs at the end of the horizon. Scheduling the jobs one by one in a greedy fashion shifts such problematic situations towards the end of the horizon, which can make them grow large. Therefore, we propose to schedule the jobs only partially in a first stage, using only regular time and hiring extra resource capacity only if necessary. The first stage consists of a serial planning, where only a certain fraction of each job is scheduled. Afterwards, we gradually increase the fraction for which the jobs are scheduled, within the available regular time. The resulting schedule does not use overtime and generally contains only parts of the jobs. In this way, we try to prevent the shifting of problematic situations: all jobs are partially admitted in the schedule before we address the bottlenecks. In the second stage, we extend the jobs to their required length by using capacity that is still free, working in overtime, and hiring in regular time and overtime. The goal is to create a feasible schedule in which the usage of irregular capacity is spread evenly over the planning horizon. The final part of the second stage consists of a neighbourhood search to improve the constructed schedule.

The first stage is realized with a randomized sampling technique. When constructing the partial schedule job by job, jobs and modes are selected with a certain probability. For a given instance, a number of different schedules are generated in the first stage. These will be made feasible in the second stage, and on the best of these schedules a neighbourhood search will be performed. The next section gives a detailed description of the two stage heuristic.

    3.2. Two Stage Heuristic

3.2.1. Initialization In the initialization, we calculate modified release dates and modified deadlines, which are sharp bounds on the start and completion times of the jobs. From these modified release dates and modified deadlines, we can determine in advance whether there exists a feasible schedule.

The modified release date $\bar{r}_j$ of a job $J_j$ is the first time bucket such that, if the job starts in this time bucket, all its predecessors can be scheduled if there is infinite resource capacity, both in regular time and overtime. The modified deadline $\bar{d}_j$ of a job $J_j$ is the last time bucket such that, if the job finishes the bucket before, all its successors can still be scheduled if there is infinite resource capacity, both in regular time and overtime. A feasible schedule exists if and only if for each job $J_j$ the interval $[\bar{r}_j, \bar{d}_j]$ is large enough to process the job $J_j$ in one of its modes.

The modified release dates $\bar{r}_j$ can be calculated by a forward recursion through the precedence network:

    $\bar{r}_j := \max \{ r_j,\ \max_{i \in P_j} \{ \bar{r}_i + \min_{m \in M_i} p_{im} \} \}$    $\forall j$.    (36)

With a backward recursion, we calculate the modified deadlines $\bar{d}_j$:

    $\bar{d}_j := \min \{ d_j,\ \min_{\{i \mid j \in P_i\}} \{ \bar{d}_i - \min_{m \in M_i} p_{im} \} \}$    $\forall j$.    (37)

All time windows, the time between the modified release date and modified deadline, should be large enough to accommodate the job in one of its modes:

    $\bar{d}_j - \bar{r}_j \ge \min_{m \in M_j} p_{jm}$    $\forall j$.    (38)

Note that the modified release dates and modified deadlines are calculated with respect to the time horizon $T$.
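As an illustration of recursions (36) and (37) and check (38), the following sketch computes the modified time windows on a small, hypothetical precedence network; the instance encoding and the helper `topological_order` are ours, not part of the paper.

```python
# Sketch of the forward recursion (36), backward recursion (37) and check (38).
# pred[j]: direct predecessors of job j; p[j]: list of mode durations of job j.
def topological_order(jobs, pred):
    """Order the jobs so that every job appears after all of its predecessors."""
    order, placed = [], set()
    while len(order) < len(jobs):
        for j in jobs:
            if j not in placed and all(i in placed for i in pred[j]):
                order.append(j)
                placed.add(j)
    return order

def modified_windows(jobs, pred, r, d, p):
    succ = {j: [i for i in jobs if j in pred[i]] for j in jobs}
    order = topological_order(jobs, pred)

    r_bar = dict(r)                          # forward recursion (36)
    for j in order:
        for i in pred[j]:
            r_bar[j] = max(r_bar[j], r_bar[i] + min(p[i]))

    d_bar = dict(d)                          # backward recursion (37)
    for j in reversed(order):
        for i in succ[j]:
            d_bar[j] = min(d_bar[j], d_bar[i] - min(p[i]))

    # (38): each window must accommodate the job in its shortest mode.
    feasible = all(d_bar[j] - r_bar[j] >= min(p[j]) for j in jobs)
    return r_bar, d_bar, feasible

jobs = ["J1", "J2", "J3"]
pred = {"J1": [], "J2": ["J1"], "J3": ["J2"]}
r = {"J1": 0, "J2": 0, "J3": 0}
d = {"J1": 10, "J2": 10, "J3": 10}
p = {"J1": [3], "J2": [2, 4], "J3": [3]}     # durations per mode
print(modified_windows(jobs, pred, r, d, p))
```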

3.2.2. Stage 1 In the first stage, we generate a schedule containing all jobs, but probably only for a shorter duration than necessary, using only regular time and hiring only if necessary. We generate the partial schedule with a randomized sampling procedure and then try to improve it by extending the partially scheduled jobs. For each job $J_j$, we are going to determine a start time $S_j \ge \bar{r}_j$, a completion time $C_j < \bar{d}_j$, and a mode $m \in M_j$ such that in $[S_j, C_j]$ there is enough time available to process job $J_j$ in mode $m$. However, at this first stage we assign only the regular time in $[S_j, C_j]$ to job $J_j$. We aim to create a partial schedule such that each job is scheduled for at least a fraction $a_0 \in [0,1]$ in regular time. However, there is no strict guarantee that each job reaches this fraction. We can only ensure that the assigned starting and completion times allow for a feasible solution using overtime and hiring extra capacity.

In a serial manner, we select jobs and corresponding modes and include them in the schedule. Next, we describe this selecting and scheduling in detail. Let $D_{jobs}$ denote the decision set from which we select the job to be scheduled next. The set $D_{jobs}$ contains all jobs for which all predecessors are already in the current schedule. Initially,

    $D_{jobs} := \{ J_j \mid P_j = \emptyset \}$.

In each iteration, we select a job from the decision set $D_{jobs}$. For each job $J_j \in D_{jobs}$, we determine a priority $\rho_j$. This priority depends on how early the job can start. Let $e_{jm}$ (earliest mode start) denote the earliest start time of job $J_j$ in mode $m$. The value $e_{jm}$ is the smallest time bucket in $T$, greater than or equal to the modified release date $\bar{r}_j$ and greater than the completion times of job $J_j$'s predecessors, such that for each of the $a_0 \cdot p_{jm}$ regular time buckets following $e_{jm}$ still enough resource capacity is available to schedule job $J_j$ in mode $m$. Recall that we only wish to schedule job $J_j$ for a fraction $a_0$, and only in the regular time $T^0$.

For each job $J_j$ and each mode $m \in M_j$, we define the slack $sl_{jm}$ with respect to $T^0$. Let the maximum of these slack values denote the slack of job $J_j$, $sl_j$ (see Figure 2):

    $sl_{jm} := \bar{d}_j - a_0 \cdot p_{jm} - e_{jm}$,    $\forall J_j \in D_{jobs},\ m \in M_j$,
    $sl_j := \max_{m \in M_j} \{ sl_{jm} \}$,    $\forall J_j \in D_{jobs}$.

It is possible that the slack of a job becomes negative, implying that we cannot schedule the job for a fraction $a_0$ without hiring and working in overtime. As a consequence, we either have to schedule this job for a smaller fraction at this stage, or hire a small amount of resources in regular time to enlarge the fraction for which this job is scheduled in regular time. We get back to this problem later.

If a job has a small slack value, there is not much room to manoeuvre this job and therefore we prefer to schedule this job next. Such a job gets a high priority. More precisely, the priority value of job $J_j$ becomes:

    $\rho_j := \max_{J_i \in D_{jobs}} \{ sl_i \} - sl_j$,    $\forall J_j \in D_{jobs}$.

Note that $\rho_j \ge 0$. To get strictly positive selection probabilities for each job in $D_{jobs}$, we add 1 to the priority, before we normalize these priority values to get the selection probability $\pi_j$ of selecting job $J_j$:

    $\pi_j := \dfrac{(\rho_j + 1)^{\alpha}}{\sum_{J_i \in D_{jobs}} (\rho_i + 1)^{\alpha}}$,    $\forall J_j \in D_{jobs}$,

where the value of $\alpha$ lies in $[0, \infty)$ and indicates the importance of the priority value when selecting a job. If $\alpha$ equals 0, jobs are selected uniformly at random from $D_{jobs}$. If $\alpha$ tends to infinity, the job with the highest priority gets selected deterministically.
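A small sketch of this biased random selection, using the priority $\rho_j$, exponent $\alpha$, and probability $\pi_j$ as written above (the Greek symbols are our reconstruction of notation lost in this copy); the slack values are assumed to be given.

```python
# Randomized job selection of Stage 1: slack -> priority -> selection probability.
import random

def select_job(slack, alpha, rng=random):
    """slack: dict mapping each eligible job in D_jobs to its slack value."""
    max_slack = max(slack.values())
    priority = {j: max_slack - sl for j, sl in slack.items()}       # rho_j >= 0
    weight = {j: (pr + 1) ** alpha for j, pr in priority.items()}   # (rho_j + 1)^alpha
    total = sum(weight.values())
    u, cum = rng.random(), 0.0
    for j, w in weight.items():
        cum += w / total                                            # pi_j
        if u <= cum:
            return j
    return j                                                        # numerical safety

slack = {"J1": 4, "J2": 0, "J3": 7}
print(select_job(slack, alpha=0))    # uniform choice among J1, J2, J3
print(select_job(slack, alpha=50))   # almost surely J2, the smallest-slack job
```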

Figure 2. Derivation of the slack of job $J_j$: for each mode $m$, a block of $a_0 \cdot p_{jm}$ regular time buckets starting at $e_{jm}$ is compared with $\bar{d}_j$; in the example, $sl_{j1} = sl_j$ and $sl_{j3} < 0$.

After selecting a job $J_j$ from $D_{jobs}$, we do not directly select the mode $m \in M_j$ which led to the maximal slack $sl_j$, but use again a randomized sampling to determine the mode. However, not each mode in $M_j$ is suitable for selection. The decision set $D_{modes} \subseteq M_j$ only contains those modes that have a duration $p_{jm}$ such that it fits between the maximum of the completion times of the predecessors of job $J_j$ and the modified release date, and the modified deadline (with respect to $T$). Selecting another mode would lead to infeasibility of job $J_j$, since scheduling this job in such a mode causes this job or one of its successors to exceed its deadline. Note that if we guarantee that the completion times of the jobs are bounded by the modified deadlines, we always have $D_{modes} \neq \emptyset$. We prefer a mode that enables the job to complete early, since a late completion time hinders the jobs that still have to be scheduled. Thus, we are looking for the modes with a large slack. This leads to priorities $\rho_m$ on the modes, which we normalize to get selection probabilities $\pi_m$ (comparisons done with respect to $T^0$):

    $\rho_m := sl_{jm} - \min_{m' \in D_{modes}} \{ sl_{jm'} \}$,    $\forall m \in D_{modes}$,

    $\pi_m := \dfrac{(\rho_m + 1)^{\beta}}{\sum_{m' \in D_{modes}} (\rho_{m'} + 1)^{\beta}}$,    $\forall m \in D_{modes}$,

where $\beta$ lies in $[0, \infty)$, indicating the importance of the priority. Again, we add 1 to the priorities to get strictly positive selection probabilities.

Now that we have selected a job and one of its modes, we determine its start and completion time, $S_j \in T$ and $C_j \in T$ respectively. Since the job $J_j$ gets scheduled onto the regular time buckets in $[S_j, C_j]$, the following must hold for $S_j$ and $C_j$ to ensure that we get a feasible interval for job $J_j$ and leave enough room for the remaining jobs to be scheduled feasibly:

    $\bar{r}_j \le S_j$    (39)
    $C_j < \bar{d}_j$    (40)
    $C_i < S_j$,    $\forall J_i \in P_j$    (41)
    $C_j - S_j + 1 \ge p_{jm}$,    with respect to $T$.    (42)

There can be many pairs $(S_j, C_j)$ satisfying these constraints. Therefore, we select one pair from these by applying the following criteria, in the presented order:

1. Select a pair that hires as little as possible.
2. Select a pair that minimizes $\max\{0,\ a_0 \cdot p_{jm} - C_j + S_j\}$, with respect to $T^0$.
3. Select a pair that minimizes $C_j$.
4. Select a pair that maximizes $S_j$.

After the first three criteria there may still be several pairs left, whereas after criterion 4 the values for $S_j$ and $C_j$ are uniquely determined. Due to constraints (39) to (42), it might be necessary to hire resources in regular time, but via criterion 1 we try to avoid this. The second criterion states that we select, from the remaining start and completion times, those that schedule the job with minimal shortage to the fraction $a_0$. In Figure 3 an example is given where no shortage occurs. This way we try to schedule each job for the desired fraction. The third criterion gives, from the remaining start and completion times, those that minimize the completion time, so as not to hinder the jobs that still have to be scheduled. Finally, we choose from the remaining pairs of start and completion times the pair that maximizes $S_j$.

Figure 3. Schedule jobs only within regular time: the regular time buckets in $[S_j, C_j]$ (of sizes $x$, $y$, and $z$) satisfy $x + y + z \ge a_0 \cdot p_{jm}$.
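The four selection criteria can be read as one lexicographic comparison; the sketch below assumes that, for every candidate pair $(S_j, C_j)$, the extra hiring it would require and the number of regular time buckets it covers have already been computed (these field names are ours).

```python
# Lexicographic choice among feasible (S_j, C_j) pairs, following criteria 1-4.
def choose_pair(candidates, a0, p_jm):
    """candidates: dicts with keys 'S', 'C', 'hiring', 'regular_buckets'."""
    def key(c):
        shortage = max(0, a0 * p_jm - c["regular_buckets"])   # criterion 2 (w.r.t. T0)
        return (c["hiring"],      # 1. hire as little as possible
                shortage,         # 2. minimal shortage to the fraction a0
                c["C"],           # 3. minimise the completion time
                -c["S"])          # 4. maximise the start time
    return min(candidates, key=key)

candidates = [
    {"S": 2, "C": 6, "hiring": 0, "regular_buckets": 4},
    {"S": 3, "C": 6, "hiring": 0, "regular_buckets": 3},
    {"S": 1, "C": 5, "hiring": 1, "regular_buckets": 4},
]
print(choose_pair(candidates, a0=0.8, p_jm=5))   # picks the pair with S=2, C=6
```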

If all jobs are partially scheduled, it might be possible to improve this partial schedule by extending the processing time of jobs within regular time, such that they are scheduled for a larger fraction. To prevent the shifting of problematic situations, we would like to spread this increase evenly over all jobs. Therefore, we use a procedure that repeatedly tries to extend the jobs to a higher fraction, going through the schedule alternating from the back to the front and from the front to the back. Let $(a_0, a_1, \dots, a_k)$ denote a non-decreasing sequence of fractions that we apply, where $a_0$ equals the fraction used in the randomized sampling. One extension step consists of a backward and a forward extension. In the $i$th backward extension, we scan the current schedule from the back to the front and search for each job $J_j$ new start and completion times satisfying (39) to (42), by the following four criteria in the presented order:

1. Select a pair that requires no more hiring than before.
2. Select a pair that minimizes $\max\{0,\ a_i \cdot p_{jm} - C_j + S_j\}$, with respect to $T^0$.
3. Select a pair that maximizes $S_j$.
4. Select a pair that minimizes $C_j$.

The $i$th backward extension is followed by the $i$th forward extension, which is a mirrored version of the $i$th backward extension.

3.2.3. Stage 2 The result of Stage 1 is a schedule containing all jobs in one of their modes, but only scheduled in regular time and not necessarily for the required length. In Stage 2, we use working in overtime and hiring in regular time and overtime to get a feasible solution of the TCPSP. The main idea to get a feasible solution is the following. Iteratively, for each job $J_j$ that is not scheduled for its required duration, we solve an ILP. This ILP is given by a restricted version of the ILP in Section 2.2, where all jobs but job $J_j$ are frozen as they are in the current schedule, job $J_j$ is fixed to its current mode, and only the timing of job $J_j$ is left to the solver. More precisely, we deduce a release date and deadline for job $J_j$, imposed by the current schedule and the modified release date and modified deadline, and we deduce the capacity that is still available. For the regular time buckets, this is the original capacity minus what is used by the other frozen jobs in the current schedule. For the overtime time buckets, this is the work in overtime that is imposed by other jobs, but not used as a consequence of constraint (21). We take constraints (17) to (35) from the ILP of Section 2.2. The solver returns the timing of job $J_j$ and the use of irregular capacity. This, together with the frozen jobs, gives a schedule that is the same as before except for the scheduling of job $J_j$, which is now scheduled for its required duration. Again, the requirements (39) to (42) on the start and completion times $S_j$ and $C_j$ of job $J_j$ ensure that there is always a feasible scheduling of job $J_j$. At the end of this iterative process, we have a feasible solution for the TCPSP.

The order in which the jobs are extended to their required duration can be chosen through numerous criteria. Possible orderings are: smallest scheduled fraction ($a_j$) first, largest unscheduled processing time ($(1 - a_j)\,p_{jm}$) first, and an ordering deduced from the precedence network.

Now that we have a feasible schedule for the TCPSP, we apply a neighbourhood search to improve upon this feasible schedule. We use a neighbourhood search based on a method proposed by Palpant et al. (2004). This method selects a number of jobs, freezes the remainder of the schedule, and calculates an optimal schedule for the resulting ILP. It is similar to the first part of Stage 2, but now the timing and mode selection of a small number of jobs is left to the ILP solver. Since we optimize over more than one job and we allow for mode changes, we need the complete ILP of Section 2.2. One iteration of the neighbourhood search goes as follows:

1. Select a subset of the jobs $J^{Neighbour} \subseteq \{J_1,\dots,J_n\}$.
2. Freeze all jobs $J_j \notin J^{Neighbour}$.
3. Determine release dates, deadlines, available capacities, and precedence relations for the jobs in $J^{Neighbour}$.
4. Solve the resulting ILP.

There are numerous ways to select a subset $J^{Neighbour}$ of jobs. For example, Palpant et al. propose 1) to select a job and then also select all its predecessors, or 2) to select a job and then also select all jobs scheduled parallel with, and contiguous to, it. One can think of many more selection criteria, but the main idea is not to select jobs arbitrarily, but to select jobs that occur close to each other in the schedule. Otherwise, there is not much to improve, i.e., the neighbourhood is too small.
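The four steps above can be summarised in a small skeleton; `solve_restricted_ilp` and `cost` stand in for a restricted version of the ILP of Section 2.2 and the cost evaluation, and the predecessor-based subset corresponds to the first selection rule of Palpant et al. All names are ours.

```python
# Skeleton of one neighbourhood-search iteration: select, freeze, re-optimise.
def neighbourhood_step(schedule, pred, seed_job, solve_restricted_ilp, cost):
    # Selection rule 1 of Palpant et al.: the seed job plus all its predecessors.
    neighbour, stack = {seed_job}, [seed_job]
    while stack:
        for i in pred[stack.pop()]:
            if i not in neighbour:
                neighbour.add(i)
                stack.append(i)
    frozen = {j: v for j, v in schedule.items() if j not in neighbour}
    # The restricted ILP reschedules only the jobs in `neighbour`; release dates,
    # deadlines and remaining capacities are deduced from the frozen part.
    candidate = solve_restricted_ilp(neighbour, frozen)
    return candidate if cost(candidate) < cost(schedule) else schedule
```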

    4. Computational results

This section describes the setup of the computational tests and the results for testing the solution approach presented in this paper. Since we are the first to attack the TCPSP with working in overtime, and hiring in regular time and overtime, we cannot compare the presented heuristic with any existing method. However, much is known about the RCPSP. Therefore, we take benchmark instances of the RCPSP and transform them into instances of the TCPSP. This is done in such a way that we can draw conclusions about the quality of the TCPSP solutions. This section starts with describing the transformation of the instances and the setting of the parameters in the heuristic, before presenting the computational results.

    4.1. Construction of TCPSP instances

For the RCPSP, a set of benchmark instances called PSPlib has been generated by Kolisch and Sprecher (1997b) and Kolisch et al. (1995), and can be found on the web, see Kolisch and Sprecher (1997a). These instances have been employed in many studies, for example by Demeulemeester and Herroelen (1997), Kolisch and Drexl (1996), and Kolisch et al. (1995). We transform these RCPSP instances into TCPSP instances. We restrict ourselves to single mode instances.

To transform the instances from the PSPlib into TCPSP instances, two additional aspects have to be introduced. First, we need to define the overtime and hiring possibilities with their associated costs, and second, we need to introduce deadlines.

The TCPSP distinguishes between regular and overtime time units, whereas the RCPSP has only one type of time unit. We let each time unit from the RCPSP correspond to a regular time unit in the TCPSP. In addition, we introduce overtime in a weekly pattern. Day 1 of the week, the Sunday, contains only 8 overtime time units. Days 2 to 6, the weekdays, start with 8 regular time units, followed by 4 overtime time units. Day 7, the Saturday, contains again only 8 overtime time units. This weekly pattern is repeated until the deadline of the instance is reached. All other aspects, like job durations, precedence relations, resource availability, and resource requirements, remain unchanged.

To get insight into the quality of the solutions generated by the two stage heuristic, we compare the TCPSP solution with the RCPSP solution. A comparison can be made if we set all costs equal to 1 and let the deadline of all jobs be the (best known upper bound on the) minimum makespan of the RCPSP instance. This means that the number of regular time buckets before the deadline in the TCPSP is exactly the (best known upper bound on the) minimum makespan of the RCPSP instance. The quality of the schedule is given by its costs. If a schedule has zero costs, the two stage heuristic gives the optimal (best known) schedule for the RCPSP instance. If not, the costs of the schedule give an indication of how far we are from the best known schedule, i.e., they give the amount of irregular capacity used to reach the best known makespan.

Setting all costs equal to 1 implies that working in overtime is equally costly as hiring in regular time or overtime. This does not fit the real world situation, but it allows us to measure the total amount of irregular capacity needed. Note that due to constraint (21) and all costs equal to 1, hiring in overtime is at least as good as working in overtime. Therefore, the possibility of working in overtime could be removed from the model. However, we choose not to do this, since it would reduce the computational time and thereby give a false indication of the computational time for real life instances.

Besides choosing the deadline equal to the (best known upper bound on the) minimum makespan, which we denote by $C_{max}$, it can be chosen as a fraction of $C_{max}$. By letting the deadline be equal to $b \cdot C_{max}$, where $b \in [0,1]$, the problem becomes tighter and more irregular capacity will be needed. The resulting objective value will indicate the costs to complete the project earlier. Since the problem becomes tighter with $b < 1$, we get a better insight into the influence of the different parameter settings of the two stage heuristic.
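The weekly regular/overtime pattern used in this transformation can be generated as in the sketch below; the stopping rule (cut off as soon as the required number of regular buckets, i.e., the deadline, is reached) and all names are our own simplification.

```python
# Weekly bucket pattern: Sunday and Saturday give 8 overtime buckets each,
# every weekday gives 8 regular buckets followed by 4 overtime buckets.
def weekly_pattern(num_regular):
    kinds, regular, day = [], 0, 0
    while regular < num_regular:
        weekday = day % 7                     # day 0 = Sunday, day 6 = Saturday
        if weekday in (0, 6):
            today = ["overtime"] * 8
        else:
            today = ["regular"] * 8 + ["overtime"] * 4
        for kind in today:
            kinds.append(kind)
            if kind == "regular":
                regular += 1
                if regular == num_regular:    # deadline reached
                    return kinds
        day += 1
    return kinds

# Deadline of b * Cmax regular buckets, e.g. Cmax = 43 and b = 0.9 -> 38 buckets.
pattern = weekly_pattern(int(0.9 * 43))
print(len(pattern), pattern.count("regular"))
```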

    4.2. Parameter setting

In each stage of the heuristic, there are a number of parameters that need to be set. In the first stage, these are the fraction $a_0$, the $\alpha$ and $\beta$ of the randomized sampling, and the extension sequence in the improvement. (However, since we have only single mode instances, $\beta$ can be ignored.) In the second stage, there is the order in which we extend the jobs to their required duration, and the choice of the neighbourhood for the improvement. This subsection concerns the setting of these parameters.

Since there are far too many different parameter settings to test them all, we make an assumption. We assume that optimizing one parameter setting is independent of the setting of the other parameters. The quality of a feasible schedule is judged by the amount of costs incurred by hiring and working in overtime. We are going to determine the parameter setting one parameter after the other and evaluate the achieved result after the feasibility step (and not after the neighbourhood search). For each instance, we generate a number of schedules, each by taking a random sample to get a partial schedule, extending it, and turning it into a feasible schedule. Out of these schedules, we select the one with minimum costs to be the solution.

For testing the parameter settings, we use a selection of 10 instances with 30 jobs and 10 instances with 120 jobs from the PSPlib. Initially, we use a deadline of 90% of the best known upper bound on the makespan, i.e., $b = 0.9$. This way we are quite sure that the objective value will not equal zero, allowing us a better measurement of the effect of the parameters. For these initial tests, we use largest unscheduled processing time first in the feasibility step.

Starting with the value of $\alpha$, we fix the fraction $a_0$, the extension sequence, and the order in which we make the jobs feasible. Testing different values for $\alpha$, it turns out that if a small number of schedules is generated for each instance, large values for $\alpha$ outperform small values. However, as the number of schedules generated per instance increases, the random search ($\alpha = 0$) outperforms


Table 4. Different extend sequences, 120 jobs, $b = 0.9$ and $a_0 = 0.8$.

Extension seq. (a1, a2, ...) | Avg. % planned after RS | Avg. % planned after extend | Avg. cost after feasibility
0.8, 0.9, 1                  | 89.4                    | 93.3                        | 386.9
0.9, 1                       | 89.4                    | 93.8                        | 357.9
1                            | 89.4                    | 93.6                        | 370.3
(none)                       | 89.4                    | 89.4                        | 539.8

Table 5. Quality-time trade-off, 120 jobs, $b = 0.9$.

Strategy                       | Average CT (s) | Max CT (s) | Objective
Single pass / no time limit    | 3,700.8        | 32,919.4   | 291.0
Single pass / 10 s time limit  | 84.7           | 192.6      | 289.1
Multi pass / no time limit     | 3,900.9        | 33,385.3   | 264.1
Multi pass / 10 s time limit   | 315.8          | 750.7      | 265.6

    slightly better. Therefore, we choose to use smallest scheduled fraction first.

The neighbourhood search is the most time consuming part of the heuristic, since it has to solve several ILPs. Therefore, we select out of the 100 constructed schedules the one with the lowest costs, and do the neighbourhood search only on that schedule. Moreover, for the neighbourhood search, it is important to keep the running time low, but still search a large part of the neighbourhood. In each step, a number of jobs are removed from the schedule, and the remainder is fixed before we reinsert the removed jobs optimally into the schedule. To do this, we choose a point in time and remove each job that is contiguous to it. Due to the concept of overtime, we define a job to be contiguous to time $t$ if its start time is at most the first regular time unit after $t$ and its completion time is at least the last regular time unit before $t$. By choosing a single point in time, and not multiple points or even an interval, we keep the number of removed jobs small, and all these jobs are close together. The set of considered points in time is chosen as the set of completion times of the jobs in the schedule. We process these points in increasing order. Whenever an improvement occurs, we replace the current schedule by the new schedule. We do not recalculate the set of time points to be considered. One option is to go through the schedule only once (a single pass); another option is to go through the schedule several times (a multi pass). If we do a multi pass, we use in each pass the new completion times, and stop if the last pass does not improve the schedule. From our tests, we have seen that it can take a long time to determine the optimal placement of the removed jobs, i.e., to solve the ILP. We can restrict the computational time spent on one reinsertion by using time limits. If we reach such a time limit, we use the best found reinsertion of the removed jobs (this can be the placement we had before, so we are guaranteed to have a solution that is at least as good as before). Table 5 displays the quality against time trade-off. (CT (s) stands for computational time in seconds.) Multi pass gives higher quality solutions at the price of larger computational times; single pass with a time limit (we used a 10 second time limit) gives a solution very fast, but it might be of low quality. From the second and third columns in Table 5, we see that there are a few instances with very large computational times. The test results also show that there are only a few instances where the time limit of 10 seconds is reached. Therefore, it pays to limit the computational time spent on one reinsertion. Comparing the values in Tables 4 and 5, we see that the neighbourhood search reduces the objective value by 20% to 25%.

From this subsection, we conclude that each step in the presented method contributes to finding a good schedule, and that it is a very flexible method. Depending on the purpose of use, not only the parameters can be chosen appropriately, but there is also the choice in the quality against time trade-off, and on which part of the method to spend the most computational effort.
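A sketch of the time-point neighbourhood described above; `first_regular_after`, `last_regular_before`, and `reinsert` (the time-limited restricted ILP) are assumed helpers, and the schedule is encoded as job -> (start bucket, completion bucket).

```python
# Jobs contiguous to a time point t, and a single pass over completion times.
def contiguous_jobs(schedule, t, first_regular_after, last_regular_before):
    hi, lo = first_regular_after(t), last_regular_before(t)
    return [j for j, (s, c) in schedule.items() if s <= hi and c >= lo]

def single_pass(schedule, reinsert, first_regular_after, last_regular_before):
    """Visit completion times in increasing order; keep every improving reinsertion."""
    for t in sorted({c for _, c in schedule.values()}):
        removed = contiguous_jobs(schedule, t, first_regular_after, last_regular_before)
        schedule = reinsert(schedule, removed)   # restricted ILP with a time limit
    return schedule
```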


Table 6. Summary of the results.

                                          b = 1.0                                             b = 0.9
Jobs  Instances  Work content   Max CT(s)  Avg CT(s)  Objective  # with cost=0   Max CT(s)  Avg CT(s)  Objective
30    480        2,310.7        26.1       2.9        3.2        294             55.1       4.5        58.6
60    480        4,562.9        241.7      17.2       13.4       276             303.4      24.7       124.4
90    480        6,812.2        932.7      53.0       22.6       282             845.3      73.9       180.8
120   600        9,073.5        2,163.4    308.3      72.4       75              2,798.2    362.2      326.6

    4.3. Computational Results

In the previous subsection, we have used only a small number of instances to determine the choices for the parameters of the two stage approach. In this subsection, we present a summary of the computational results for a large set of instances. We have used all single mode instances from the PSPlib, and have set up the algorithm as discussed in the previous subsection, with a multi pass, 10 second time limited, neighbourhood search. For each instance, we construct a schedule with a deadline at 100% and at 90% of $C_{max}$. Table 6 summarizes the test results. The computational experiments were performed on a computer with an Intel Centrino processor running at 2.0 GHz. We used Delphi 7 to code the algorithm and CPLEX 9.1 to solve the ILPs. The details of the computational tests can be found on our website, see Guldemond et al. (2006).

If we set the deadline equal to $C_{max}$, we see that our method solves about 60% of the instances with zero cost for the 30, 60, and 90 job instances, but only 13% of the 120 job instances. The 30, 60, and 90 job instances are generated with similar characteristics, whereas the 120 job instances have a relatively lower resource availability and are therefore tighter and more difficult. The average objective value, the amount of used irregular capacity, is only a small fraction of the total work content. In the solutions of the instances with 30 jobs, on average, 0.14% of the required capacity is satisfied with irregular capacity. This percentage goes up to 0.33% for the instances with 90 jobs. For the tighter 120 job instances, the percentage of work done with irregular capacity is 0.80%. These percentages are very low, so the two stage heuristic gives high quality schedules.

Setting the deadline to 90% of $C_{max}$ increases the objective value, and we cannot verify whether or not optimal schedules are found. The 10% reduction of the time horizon results in schedules that have only 2.5% of the total work content in irregular capacity for the 30 job instances, and up to 3.5% for the 120 job instances. So completing a project 10% earlier does not have to be too costly.

The computational time grows as the number of activities grows, but the relative resource availability seems more important. The instances that have a low resource availability require more computational time than instances with high availability. The instances that require a lot of computational time are exactly those for which the corresponding RCPSP instances are also difficult and where the best found makespan is often not proven to be optimal.

    5. Conclusions

In this paper, a first approach for solving the TCPSP with hiring and working in overtime is presented. The developed two stage heuristic first schedules fractions of jobs and then extends the jobs in the second stage to obtain a feasible schedule. By first scheduling only a fraction of each job, the shifting of problematic situations is prevented, keeping them small.

Although there are no benchmark instances for the TCPSP, we were able to convert benchmark instances from the RCPSP and make a comparison. It turned out that a large number of the instances are solved to optimality. Decreasing the deadline of a project by 10% results in schedules in which far less than 10% of the work is done with irregular capacity. The schedules generated by the two stage heuristic are of high quality. The computational tests also show that there is a lot of flexibility in our method. The flexibility is not only due to the parameter setting, but also due to the possibility to choose where to spend the computational effort. Therefore, we believe this method is very well suited for practical use.


    setting, but also due to the possibility to choose where to spend the computational effort. Therefore,we believe this method is very suited for practical use.

Acknowledgments
Part of this research has been funded by the Dutch BSIK/BRICKS project.

    References

Deckro, R.F., J.E. Herbert. 1989. Resource constrained project crashing. OMEGA International Journal of Management Science 17 69-79.

Demeulemeester, E., W. Herroelen. 1997. New benchmark results for the resource-constrained project scheduling problem. Management Science 43 1485-1492.

Gademann, N., M. Schutten. 2005. Linear-programming-based heuristics for project capacity planning. IIE Transactions 37 153-165.

Guldemond, T.A., J.L. Hurink, J.J. Paulus, J.M.J. Schutten. 2006. Time-constrained project scheduling; details of the computational tests. http://tcpsp.ewi.utwente.nl/.

Herroelen, W., B. De Reyck, E. Demeulemeester. 1998. Resource-constrained project scheduling: a survey of recent developments. Computers and Operations Research 25 279-302.

Kis, T. 2005. A branch-and-cut algorithm for scheduling of projects with variable intensity activities. Mathematical Programming 103 515-539.

Kolisch, R. 1995. Project Scheduling under Resource Constraints. Physica-Verlag.

Kolisch, R., A. Drexl. 1996. Adaptive search for solving hard project scheduling problems. Naval Research Logistics 43 23-40.

Kolisch, R., S. Hartmann. 1999. Heuristic algorithms for solving the resource-constrained project scheduling problem: Classification and computational analysis. J. Weglarz, ed., Handbook on Recent Advances in Project Scheduling. Kluwer, 197-212.

Kolisch, R., R. Padman. 2001. An integrated survey of deterministic project scheduling. Omega 29 249-272.

Kolisch, R., A. Sprecher. 1997a. Project scheduling library - PSPlib. http://129.187.106.231/psplib/.

Kolisch, R., A. Sprecher. 1997b. PSPLIB - a project scheduling problem library. European Journal of Operational Research 96 205-216.

Kolisch, R., A. Sprecher, A. Drexl. 1995. Characterization and generation of a general class of resource-constrained project scheduling problems. Management Science 41.

Li, R.K-Y, R.J. Willis. 1993. Resource constrained scheduling within fixed project durations. The Journal of the Operational Research Society 44 71-80.

Neumann, K., C. Schwindt, J. Zimmermann. 2002. Project Scheduling with Time Windows and Scarce Resources. Lecture Notes in Economics and Mathematical Systems (508), Springer.

Palpant, M., C. Artigues, P. Michelon. 2004. LSSPER: Solving the resource-constrained project scheduling problem with large neighbourhood search. Annals of Operations Research 131 237-257.