Assembly-Line Balancing-Dynamic Programming with Precedence Constraints
Author(s): Michael Held, Richard M. Karp, Richard Shareshian
Source: Operations Research, Vol. 11, No. 3 (May - Jun., 1963), pp. 442-459
Published by: INFORMS
Stable URL: http://www.jstor.org/stable/168031


ASSEMBLY-LINE BALANCING-DYNAMIC PROGRAMMING WITH PRECEDENCE CONSTRAINTS

Michael Held, Richard M. Karp, and Richard Shareshian
International Business Machines Corporation, New York, New York

    (Received October 1, 1962)

This paper approaches the assembly-line balancing problem as a sequencing problem involving precedence constraints that prohibit the occurrence of certain orderings. This approach permits the formulation of a dynamic programming algorithm for the exact solution of small assembly-line balancing problems. A generalization of this algorithm combined with a successive approximations technique is used for the solution of large problems. In addition, certain combinatorial problems associated with partially ordered sets are formulated and discussed in detail. These problems arise whenever sequencing problems with precedence constraints are treated by dynamic programming techniques.

ASSEMBLY-LINE balancing is an example of a sequencing problem in which precedence relations exist among the operations to be sequenced. In an earlier paper[4] the authors suggested a dynamic programming approach to problems of this type. The present paper specifies in detail a line-balancing problem and its solution, expanding the brief discussion of this problem given in reference 4. The method of solution is presented in two parts: a dynamic programming technique for the exact solution of small problems, followed by an iteration procedure for the approximate solution of large problems. Computational results are also given. The application of dynamic programming methods to sequencing problems with precedence constraints requires the formulation and solution of certain combinatorial problems concerning partially ordered sets. These problems are considered in the concluding section. In the appendix a dynamic programming formulation proposed by JACKSON[6] is compared with the methods of the present paper.

    AN ASSEMBLY-LINE BALANCING PROBLEM AND ITS SOLUTION

The Problem

An assembly line may be viewed as a series of locations called work stations, at each of which is performed a subset of the elementary jobs necessary for the production of a unit. The assembly-line balancing problem is that of assigning jobs to work stations so that the number of work stations is minimized subject to certain constraints. These constraints concern the ordering of elementary jobs, the organization of the line, and the required production rate.

This paper is principally concerned with a simplified form of this problem in which the assembly line is assumed to satisfy the following conditions:

    1. Serial Line. Each unit is processed at the work stations in a definite order and no two work stations operate on the same unit simultaneously. Thus, the total assembly line is considered to be serial with no 'feeder' or parallel subassembly lines.

2. Precedence Relations. All restrictions on the order of execution of jobs may be expressed by precedence relations of the form 'job i must precede job j,' for which we use the notation i ≺ j.


tions among the jobs. It is convenient to introduce certain concepts associated with the partial ordering. A subset S = {J_i1, J_i2, …, J_in(S)} of the jobs is said to be feasible if J_j ∈ S and J_i ≺ J_j imply J_i ∈ S.


can be shown from the above definitions that the following recurrence relations hold:†

(a) [n(S) = 1]: C({J_l}) = t_l,
(b) [n(S) > 1]: C(S) = min over J_l ∈ S with S − J_l feasible of {C(S − J_l) + Δ[C(S − J_l), t_l]},

where S − J_l is the set obtained by deleting J_l from S. By means of these recurrence relations, it is possible to determine C({J_1, J_2, …, J_n}) by a computation involving only feasible sets, which are far less numerous than feasible sequences.
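The recurrence above can be sketched in code. The definition of the incremental cost Δ falls on material not reproduced in this copy, so the station-packing rule used below (a job either fits into the partially filled last station or opens a new one) is an assumption, as are the bitmask encoding and the name `min_cost`.

```python
def min_cost(times, prec, T):
    """Sketch of C(S) = min { C(S - J_l) + Delta[C(S - J_l), t_l] } over
    jobs J_l whose removal leaves S feasible, on bitmask subsets.
    prec lists pairs (i, j) meaning job i must precede job j (0-based)."""
    n = len(times)
    preds = [0] * n                       # bitmask of predecessors of job j
    for i, j in prec:
        preds[j] |= 1 << i

    def delta(c, t):
        # Assumed reading of Delta: the appended job either fits in the
        # partially filled last station or opens a new one.
        used = c % T
        return t if used + t <= T else (T - used) + t

    C = {0: 0}                            # C(empty set) = 0
    for S in range(1, 1 << n):
        # S is feasible iff every member's predecessors are also in S
        if any((S >> j) & 1 and preds[j] & ~S for j in range(n)):
            continue
        best = None
        for j in range(n):
            if not (S >> j) & 1:
                continue
            R = S & ~(1 << j)
            # S - J_j is feasible only if j precedes nothing left in R
            if any((R >> k) & 1 and (preds[k] >> j) & 1 for k in range(n)):
                continue
            c = C[R] + delta(C[R], times[j])
            if best is None or c < best:
                best = c
        C[S] = best
    return C[(1 << n) - 1]
```

For example, times (3, 4, 2, 5) with 0 ≺ 1, 0 ≺ 2, 1 ≺ 3, 2 ≺ 3 and T = 6 give a minimum cost of 17, that is, three work stations.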

The induced assignment for σ = (J_i1, J_i2, …, J_in) is said to be optimal if c_σ = C({J_1, J_2, …, J_n}). Such an assignment minimizes the number of work stations used, and therefore constitutes a solution to the line-balancing problem. A necessary and sufficient condition for the optimality of the induced assignment for σ is the following:

C({J_i1, J_i2, …, J_ip}) = C({J_i1, J_i2, …, J_i(p−1)}) + Δ[C({J_i1, J_i2, …, J_i(p−1)}), t_ip],  (2 ≤ p ≤ n)  (1)

Thus, once C(S) has been tabulated for all feasible sets S, a feasible sequence σ = (J_i1, J_i2, …, J_in) having an optimal induced assignment is easily computed from (1); J_in is obtained first, and then, successively, J_i(n−1), J_i(n−2), …, J_i1.

SUCCESSIVE APPROXIMATIONS

THE dynamic programming algorithm in the preceding section represents a significant improvement over exhaustive methods and makes possible the exact solution of line-balancing problems of limited size on existing computers. An experimental program for the IBM 7090 solves a 36-job problem with 13,000 feasible sets in approximately 20 seconds. As the problem size increases, direct application of the algorithm requires excessive amounts of storage and computation time. Consequently, the solution of large problems requires that the algorithm be supplemented by some other procedure.

The procedure chosen was one of successive approximations, following to some extent techniques employed successfully by the authors in approximately solving other sequencing problems.[4] The set of all jobs is arranged initially in a feasible sequence σ(0). At the ith step of the calculation, the feasible sequence σ(i−1) is replaced by a sequence σ(i) such that c_σ(i) ≤ c_σ(i−1). This is achieved by solving a derived problem that has the structure of a

† This result may be viewed as an application of the principle of optimality,[1] which is the basis of dynamic programming.


generalized line-balancing problem and can be solved by application of a modified dynamic programming algorithm. This successive approximations procedure often produces a feasible sequence giving an optimal solution to the line-balancing problem. Experience shows that, in any case, a very good approximate solution is obtained.

In what follows, we develop this method of successive approximations in detail. A dynamic programming algorithm for the solution of a generalized line-balancing problem is developed in the following section; the structure of a successive approximations procedure based on the solution of a sequence of such generalized line-balancing problems is then discussed; finally, computational results are given.

Generalized Line-Balancing Problem

In this section we consider a generalized line-balancing problem in which the elements to be ordered are sequences of jobs. These sequences σ_1, σ_2, …, σ_u are obtained by decomposing a feasible sequence σ = (J_i1, J_i2, …, J_in) containing all jobs to be assigned in a particular line-balancing problem. This decomposition takes the form

σ = (J_i1, …, J_in) = (J_i1, …, J_in1)(J_i(n1+1), …, J_in2) … (J_i(n(u−1)+1), …, J_in) = σ_1 σ_2 … σ_u.  (2)

The partial ordering of the jobs induces a partial ordering of the σ_j, as follows: σ_f ≺ σ_g if there are jobs J_ik ∈ σ_f and J_il ∈ σ_g such that J_ik ≺ J_il. In addition, it proves convenient to impose the artificial constraint that σ_1 ≺ σ_j ≺ σ_u for j = 2, 3, …, u−1. Any ordering of the σ_j consistent with this induced partial ordering yields a new feasible sequence; within this class of feasible sequences we seek a σ′ = σ_j1 σ_j2 … σ_ju such that c_σ′ is a minimum (and hence, a fortiori, c_σ′ ≤ c_σ).

The desired ordering is found by a dynamic programming calculation similar to the one given earlier. This calculation depends on the concept of an incremental cost δ(c_θ, τ) = c_θτ − c_θ, incurred when a sequence τ = σ_jl is adjoined to a string θ = σ_j1 σ_j2 … σ_j(l−1) of l−1 sequences. Suppose τ = (J_iq, …, J_i(q+p−1), J_i(q+p)); then, in the assignment induced by the sequence θτ = σ_j1 σ_j2 … σ_jl, the first work station containing jobs from τ will contain the jobs J_iq, J_i(q+1), …, J_i(q+k1), where

k1 = max{k | k ≤ p and Σ_{j=0..k} t_i(q+j) ≤ T}.

Then δ(c_θ, τ) is given by the following:

(a) If k1 = p and J_iq falls in the last work station of θ, δ(c_θ, τ) = c_τ;
(b) If k1 = p and J_iq does not, δ(c_θ, τ) = T([c_θ/T] + γ) − c_θ + c_τ,

where γ = 1 if [c_θ/T]


It is now possible to specify a dynamic programming algorithm for the optimal reordering of the sequences σ_i. This algorithm bears a formal similarity to the algorithm given earlier for the exact solution of line-balancing problems. A subset S ⊆ {σ_1, σ_2, …, σ_u} is called feasible if σ_g ∈ S and σ_f ≺ σ_g imply σ_f ∈ S. If S is feasible, we define C(S) as the minimum of c_(σ_j1 σ_j2 … σ_jr) over all feasible sequences σ_j1 σ_j2 … σ_jr for which

S = {σ_j1, σ_j2, …, σ_jr}.

Then it may be shown that the following recurrence relations hold for all feasible sets S:

(a) C({σ_1}) = c_σ1,
(b) C(S) = min over σ_l ∈ S with S − σ_l feasible of {C(S − σ_l) + δ[C(S − σ_l), σ_l]},

where S − σ_l is the set obtained by deleting σ_l from S. A calculation similar to that described earlier may then be used to compute an optimum ordering of the σ_i.

Specification of the Successive Approximations Procedure

The method of successive approximations proceeds by solving a sequence of derived problems, each of which is a generalized line-balancing problem of the type introduced in the preceding section. In selecting derived problems, it is important to note that arbitrary decompositions of the form given in equation (2) may lead to very restrictive induced precedence relations among the sequences σ_i, thus allowing little opportunity for rearrangement and cost improvement. In particular, if σ_i and σ_j both contain several elements, the existence of a precedence relation between them is likely. Thus, decompositions are selected in which the majority of jobs are assigned to one or two sequences, with each of the other sequences containing only a small number of jobs. Specifically, the following two types of decompositions are used:

Type I:
  σ_1 = (J_i1, J_i2, …, J_is),
  σ_2 = (J_i(s+1)),
  σ_3 = (J_i(s+2)),
  …
  σ_(u−1) = (J_i(s+u−2)),
  σ_u = (J_i(s+u−1), …, J_in).

Type II:
  σ_1 = (J_i1, J_i2, …, J_is),
  σ_2 = (J_i(s+1), J_i(s+2)),
  σ_3 = (J_i(s+3), J_i(s+4)),
  …
  σ_u = (J_i(s+q), J_i(s+q+1), …, J_in).

The successive approximations procedure consists of four phases. The first and third phases employ Type I decompositions, while the second and fourth employ Type II decompositions. Since the treatment of all possible


derived problems of Types I and II is time consuming and unnecessary, the set of problems chosen is further limited in three ways. First, in each derived problem the value of u is chosen as large as possible consistent with storage limitations on the number of feasible sets S. With this restriction, the specification of σ_1 completely determines a derived problem, within a given phase. To further limit the number of problems, each phase treats only the problems determined by the following successive specifications of σ_1:

σ_1 = (φ)†, σ_1 = (J_i1, J_i2, …, J_iα), σ_1 = (J_i1, J_i2, …, J_i2α), …,

where α is a parameter of the calculation.‡ In this way, the derived problems are chosen so as to 'leapfrog' across the over-all ordering. In phases one and two, a further selection criterion is used to avoid the solution of derived problems that demonstrably cannot yield an immediate cost improvement in the over-all ordering. Given a sequence σ_1 σ_2 … σ_u, any reordering of the sequences σ_2, σ_3, …, σ_(u−1) will yield an over-all sequence of the form σ_1 ρ σ_u. Since c_(σ_1 ρ) ≥ c_σ1 + Σ_{J_i ∈ ρ} t_i, we have

c_(σ_1 ρ σ_u) ≥ c_σ1 + Σ_{J_i ∈ ρ} t_i + δ(c_σ1 + Σ_{J_i ∈ ρ} t_i, σ_u).  (3)

If the lower bound given in (3) is met by the cost c_(σ_1 σ_2 … σ_u) associated with the given ordering, then no over-all improvement is possible by considering the sequence σ_1 σ_2 … σ_u, and the corresponding decomposition is not considered further.§

In phases three and four, this selection criterion is weakened by applying it as if the ordering ended with the first work station (in the induced assignment for σ_1 σ_2 … σ_u) all of whose jobs belong to the sequence σ_u. Thus a sequence σ_u′ is obtained by deleting from σ_u the jobs occurring after this work station, and the criterion is applied with σ_u′ replacing σ_u throughout. This has the effect of allowing the treatment of some derived problems that, though incapable of producing an improvement in the over-all cost, may lead to perturbations of the ordering that prepare the way for eventual cost improvement.

    The details of the method of successive approximations outlined here involve a number of arbitrary decisions that were made mainly on the basis of experimentation. The next section shows that these decisions have led to highly satisfactory results.

† φ denotes the empty sequence.
‡ The value of α determines the number of derived problems considered within each phase. Too large a value may lead to poor numerical results, whereas too small a value may lead to unnecessarily long computations. Experimentation shows that a choice of α = 9 is a good compromise between these extremes for most problems.
§ Note that this lower bound could not have been derived without the artificial constraint σ_1 ≺ σ_j ≺ σ_u, j = 2, 3, …, u−1.


Computer Results

An experimental program for the IBM 7090 employing the successive approximations technique described in the preceding section was used to solve a number of line-balancing problems. This section gives a summary of the results obtained.

The program terminates either upon the completion of the four phases of successive approximations, or when it can be established that an assignment having a minimum number of work stations has been obtained. This second condition comes about when a feasible sequence σ = (J_i1, J_i2, …, J_in) is obtained such that [c_σ/T] = [(Σ_i t_i)/T], where [x] denotes the smallest integer greater than or equal to x.

Each problem was normalized so that the cycle time T = 1. Except where otherwise stated, a value of 9 was assigned to the parameter α used in the selection of derived problems. In all cases, the initial feasible sequence σ(0) was chosen at random.

(i) A 45-job problem due to Kilbridge and Wester.[8]

Run no.   Initial cost   Final cost   Σ t_i    Running time (minutes)
  1          9.072          8.000     8.000              1
  2          9.319          8.000                        1
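The termination test described above is easy to state in code; `provably_optimal` and its arguments are illustrative names, not the program's. For run 1 of problem (i), the final cost 8.000 equals Σ t_i = 8.000 with T = 1, so the test succeeds and the solution is provably optimal.

```python
import math

def provably_optimal(cost, total_time, T):
    """Sketch of the program's optimality test: an induced assignment is
    known to be optimal when ceil(cost/T) meets the trivial lower bound
    ceil(sum of t_i / T) on the number of work stations."""
    return math.ceil(cost / T) == math.ceil(total_time / T)
```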

(ii) A 111-job problem with random t_i. Precedence relations were taken from a real assembly line.

Run no.   Initial cost   Final cost   Σ t_i    Running time (minutes)
  1         32.359         26.558    26.253              2
  2         31.808         26.982                        2

    (iii) A 180-job problem with randomly generated execution times and precedence relations.

Run no.   Initial cost   Final cost   Σ t_i    Running time (minutes)
  1         68.439         55.198    54.486              5
  2         69.716         55.277                        7

A run with α = 6 produced a solution with cost 54.964 in 5 minutes.
(iv) A 400-job problem with arbitrarily chosen t_i and severe precedence constraints.

Run no.   Initial cost   Final cost   Σ t_i    Running time (minutes)
  1         95.905         84.952    84.295              3
  2         97.905         84.905                        3

    (v) A 555-job problem. The jobs are partitioned into five sets of 111 jobs each, with no precedence relations between jobs in different sets. The jobs in each set have the same precedence relations and execution times as in (ii).

Run no.   Initial cost   Final cost   Σ t_i    Running time (minutes)
  1        160.519        132.549   131.267             31
  2        162.824        133.269                       26


From one of the solutions to (ii), it was possible to construct by inspection a solution to (v) requiring only 132 work stations.
(vi) A 612-job problem with randomly generated execution times and precedence relations.

Run no.   Initial cost   Final cost   Σ t_i    Running time (minutes)
  1        174.541        149.701   148.291             24
  2        175.629        149.585                       24

A run with α = 6 produced a solution with cost 148.970 in 30 minutes.

COUNTING AND MAPPING PROBLEMS FOR FEASIBLE SETS

IN THIS section we discuss two problems concerning the feasible subsets of a partially ordered finite set:

(a) Counting. Determine the number of feasible sets.
(b) Mapping. Determine a simply computed one-to-one mapping A(S) of the feasible sets onto a set of consecutive integers.

These problems first arose in connection with computer implementation of the line-balancing algorithm given above. One storage location must be associated with each feasible set S to store the number C(S). Thus, the solution to the counting problem gives the total number of locations required and supplies a rough measure of the time required for the calculation. The solution to the mapping problem determines the assignment of consecutive storage addresses to feasible sets. In the experimental program these problems are solved by storing a list of the feasible sets in a lexicographic order. The quantities C(S) are stored in a second list, with S and C(S) occupying the same positions relative to the initial addresses of the two lists. The location of C(S) in the second list may be determined by performing a binary search to locate S in the first list. The calculation is arranged so that sharp initial bounds are set on the range of each binary search performed. Consequently, this naive approach yields a program that operates rapidly but is wasteful of storage.
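A minimal sketch of this two-list scheme, assuming a bitmask encoding of feasible sets (the paper does not describe the machine representation):

```python
from bisect import bisect_left

def build_address_table(n, prec):
    """Sketch of the experimental program's storage scheme: keep the
    feasible sets (encoded as bitmasks, job i -> bit i) in one sorted
    list and find the address A(S) of C(S) by binary search.
    prec lists pairs (i, j) meaning job i must precede job j."""
    feasible = [S for S in range(1 << n)
                if all(not (S >> j) & 1 or (S >> i) & 1 for i, j in prec)]

    def A(S):
        # binary search for S; its position is the storage offset
        k = bisect_left(feasible, S)
        if k == len(feasible) or feasible[k] != S:
            raise ValueError("S is not feasible")
        return k

    return feasible, A
```

For the chain 0 ≺ 1 ≺ 2, the feasible bitmasks in order are 0, 1, 3, 7, so A(3) = 2.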

An alternate method, which does not require the storage of two lists, determines A(S) by an algorithm based on the structure of partial orderings, rather than by a search procedure. Assuming the existence of such an algorithm, the first step is to list the feasible sets in increasing order of A(S). The dynamic programming calculation proceeds by going through the list, replacing each set S by C(S). For the calculation of C(S) the quantities C(S − J_l) are required, and their locations are determined by the calculation of A(S − J_l). Since C(S − J_l) must be computed before C(S) is computed, A must be defined so that A(S − J_l) < A(S).


storage locations are considered successively, and the set S associated with a storage location k is computed as S = A⁻¹(k).

    In what follows we present algorithms for counting and addressing. These algorithms are of intrinsic interest, and their use would be mandatory in case of severe storage limitations.

Counting

We begin by relativizing the concept of feasible set introduced earlier. Let P = {1, 2, …, n} be a set with a given partial ordering, and let B_i = {j | j = i or j


partially ordered set P by a directed graph in which there is an edge from i to j if i is a direct predecessor of j. The components of P are then precisely the connected components of its associated graph. The role of components in counting feasible sets depends on the following:

THEOREM 2. Let P = {1, 2, …, n} be a partially ordered set with components P_1, P_2, …, P_r. Then [P] = Π_{i=1..r} [P_i].

Proof. The theorem follows immediately from the observation that there is a one-to-one correspondence between sets S feasible with respect to P and r-tuples (S_1, S_2, …, S_r), where S_i is feasible with respect to P_i. This correspondence is such that S = S_1 ∪ S_2 ∪ … ∪ S_r.
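Theorem 2 is easy to verify by brute force on small examples; the counter below is illustrative, with the representation by predecessor pairs an assumption.

```python
def count_feasible(n, prec):
    """Brute-force count of the feasible subsets of an n-element
    partially ordered set; prec lists (i, j) with i a predecessor of j."""
    return sum(
        all(not (S >> j) & 1 or (S >> i) & 1 for i, j in prec)
        for S in range(1 << n)
    )
```

For two disjoint two-element chains, each component has 3 feasible sets and the whole has 3 × 3 = 9, as Theorem 2 predicts.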

Before specifying a general procedure for counting feasible sets, we give a counting technique applicable only to a special type of partially ordered set. This technique will later prove useful as part of the general procedure. Let T be a partially ordered set with the following properties:

(a) There is exactly one element (called the 'root' of T) with no direct predecessor;
(b) every other element has exactly one direct predecessor.

The directed graph associated with T is a tree with all directed arrows pointing away from the root.

THEOREM 3. [T] may be determined recursively as follows: (a) Label with the number 2 each element having no direct successor; (b) successively label each element with a number equal to one plus the product of the numerical labels associated with its direct successor(s) [see Fig. 1]; (c) [T] is equal to the label associated with the root.

This theorem may be proved inductively by noting that the label associated with each element gives the number of feasible sets for the partially ordered set consisting of that element and all its successors.

Example 1:

Figure 1
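The labelling rule of Theorem 3 can be sketched as a recursion; the `children` mapping is an assumed representation of the tree.

```python
def tree_count(children, root):
    """Theorem 3 as a recursion: a node with no direct successor gets
    label 2; otherwise its label is one plus the product of its
    children's labels.  [T] is the label of the root."""
    kids = children.get(root, [])
    if not kids:
        return 2
    label = 1
    for c in kids:
        label *= tree_count(children, c)
    return 1 + label
```

A three-element chain gives 1 + (1 + 2) = 4 feasible sets (φ, {1}, {1,2}, {1,2,3}); a root with two leaves gives 1 + 2·2 = 5.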

We note that if P is any partially ordered set and if P′ is the partially ordered set derived from P by reversing the partial ordering (i.e., reversing the arrows in the associated directed graph), then [P] = [P′]. We shall apply this observation mainly to the case of 'reversed trees.'

For a general partially ordered set P, the counting of feasible sets is accomplished by an algorithm based upon the repeated application of Theorems 1, 2, and 3. At a general stage of this counting procedure, a list of partially ordered sets is considered. Initially, this list contains only the set P. At each step that set S with the largest number of elements† is removed from the list. If S corresponds to a tree or a reversed tree, [S] is determined using Theorem 3. Failing this, it is determined whether S has two or more components S_i; if so, these components are added to the list and the information that [S] = Π [S_i] is noted. If S cannot be split into components, the basic complements B_i with respect to S are added to the list, and it is noted that [S] = Σ [B_i]. Eventually, the list must be exhausted, since the subsets added at each step (if any) contain fewer elements than the set removed, and, clearly, any set with one element can be treated by Theorem 3. Upon exhaustion of the list, Theorems 1 and 2 may be applied to retrace the process and determine [P]. This procedure may prove difficult to program efficiently for a computer because of its list processing structure and demands for pattern recognition. It is, however, extremely well suited for hand computation, as the following example demonstrates.

Example 2: Let P be represented by the following graph:

Figure 2

Initially, the list contains the single set {1,2,3,4,5,6,7,8,9,10}. Since Theorems 2 and 3 are not applicable, Theorem 1 is applied, giving

[{1,2,3,4,5,6,7,8,9,10}] = [φ] + [φ] + [φ] + [φ] + [φ] + [{2,3,4}] + [{2,3,4,5}] + [{2,3,4,5,6}] + [{2,3,4}] + [{2,3,4}] + [φ].

† Ties are broken arbitrarily. ‡ φ denotes the empty set.

  • 454 Michael Held, Richard M. Karp and Richard Shareshian

At this point, the list contains the following: {2,3,4,5,6}, {2,3,4,5}, {2,3,4}, φ. The set {2,3,4,5,6} is selected and deleted from the list. Theorem 3 is not applicable, but Theorem 2 yields the following: [{2,3,4,5,6}] = [{2,3,4}]·[{5}]·[{6}]. The list now becomes {2,3,4,5}, {2,3,4}, {5}, {6}, φ. This process continues as follows:

[{2,3,4,5}] = [{2,3,4}]·[{5}], (Theorem 2),
[{2,3,4}] = 4, (Theorem 3),
[{5}] = 2, (Theorem 3),
[{6}] = 2, (Theorem 3),
[φ] = 1.

Retracing the steps, we find that

[{2,3,4,5}] = 4×2 = 8, [{2,3,4,5,6}] = 4×2×2 = 16,

and, finally,

[{1,2,3,4,5,6,7,8,9,10}] = 1+1+1+1+1+4+8+16+4+4+1 = 42.

The amount of work which this procedure requires depends upon the

way in which the elements are numbered. A good rule is to number consecutively elements that occur in series, always maintaining the condition that if j ≺ i, then j < i.


The same type of decomposition is then applied repeatedly to the various subtrees of P until, eventually, only single-element trees remain to be considered. Reversed trees are treated analogously. This process can be implemented by a simple labelling procedure; we illustrate this with reference to the partially ordered set of Example 1.

Example 3: The graph in Fig. 3a simply reproduces the labelling process used in determining [P]. In Fig. 3b each node is labelled with the product of those labels in 3a that are to the right of the given node and have the same direct predecessor.† With the convention that subtrees are numbered from right to left, this process yields the weights [P_1], [P_1][P_2], … for the mixed number representations. In Fig. 3c the nodes connected by solid arrows represent a feasible set S. The nodes of S are labelled as follows: label all terminal nodes of S with a 1; the label for a node of S all of whose successors have been labelled is obtained by multiplying the labels of the successors with the corresponding labels in 3b, summing these products, and then adding one. Then, in this example, A(S|P) = 113.

Case 2 (Corresponding to Theorem 2). Suppose P has components P_1, P_2, …, P_r. Then, for a set S feasible with respect to P:

A(S|P) = A(S∩P_r|P_r)·[P_1][P_2]…[P_(r−1)] + … + A(S∩P_2|P_2)·[P_1] + A(S∩P_1|P_1).

Case 3 (Corresponding to Theorem 1). If P has only one component and does not correspond to a tree, then

A(S|P) = Σ_{j<i} [B_j] + A(S∩B_i|B_i),

where i is the highest index in S and the B_k are the basic complements with respect to P.

Repeated applications of these recursive definitions will always yield a numerical value for A(S|P).

Example 4: Using the partially ordered set P of Example 2, A({1,2,3,4,5,6}|P) is computed as follows:

A({1,2,3,4,5,6}|{1,2,3,4,5,6,7,8,9,10}) = 9 + A({2,3,4,5}|{2,3,4,5})  (Case 3),

A({2,3,4,5}|{2,3,4,5}) = A({5}|{5})·[{2,3,4}] + A({2,3,4}|{2,3,4})  (Case 2)
                       = 4·A({5}|{5}) + A({2,3,4}|{2,3,4}),

A({2,3,4}|{2,3,4}) = 3  (Case 1),

A({5}|{5}) = 1  (Case 1).

† This product is taken equal to 1 if there are no such nodes.


    Figure 3a

    Figure 3b

    Figure 3c

Thus, A({2,3,4,5}|{2,3,4,5}) = 4·1 + 3 = 7, and A({1,2,3,4,5,6}|{1,2,3,4,5,6,7,8,9,10}) = 9 + 7 = 16.

The given procedure always yields a function A(S|P) that is one-to-one onto the set of integers 0, 1, 2, …, [P]−1, and has the property that A(S_1|P) < A(S_2|P) whenever S_1 ⊂ S_2.


computing the function A⁻¹, and for enumerating the feasible sets with respect to P in increasing order of A(S|P). It should be noted that all these procedures depend upon arbitrary orderings of elements, subtrees, and components. In particular, the definitions of the addressing function A(S|P) and its inverse will vary according to which conventions are chosen; but any choice will yield functions having the required properties.

APPENDIX: AN ALGORITHM DUE TO JACKSON

IN THIS appendix we describe an alternate dynamic programming algorithm for the exact solution of the line-balancing problem. This procedure is essentially the one given by Jackson in reference 6, with certain computational details filled in by the authors. A comparison of Jackson's algorithm with our algorithm is also given.

Let us associate with each feasible† set S ⊆ {J_1, J_2, …, J_n} a quantity D(S), the smallest integer greater than or equal to C(S). Thus D(S) gives the minimum number of work stations for a line-balancing problem requiring only the assignment of the jobs in S. A feasible set S is called k-maximal if D(S) = k and there does not exist a feasible set T ⊃ S such that D(T) = k. In particular, if D({J_1, J_2, …, J_n}) = k*, then {J_1, J_2, …, J_n} is the only k*-maximal set, and the value of k* gives the minimum number of work stations required for the assembly line. The calculation proceeds by systematically enumerating the 1-maximal, 2-maximal, …, k*-maximal sets. Once this has been accomplished, an assignment that minimizes the number of work stations used may be obtained by a procedure somewhat analogous to that described earlier [cf. (1)].

To specify the calculation, it is only necessary to state how the (k+1)-maximal sets are obtained from the k-maximal sets. We call R 1-maximal|S (to be read '1-maximal given S') if S is k-maximal and: (1) S∪R is feasible, (2) Σ_{J_i ∈ R} t_i ≤ T, and (3) there does not exist a set R′ ⊃ R satisfying (1) and (2). For each k-maximal set S, the list of sets R that are 1-maximal|S may be obtained using the algorithm given in Table I. It is then easy to obtain a list, Q, of all sets of the form S∪R, where S is k-maximal and R is 1-maximal|S. Since every (k+1)-maximal set can be written as S∪R, where S is k-maximal and R is 1-maximal|S, Q includes all (k+1)-maximal sets, but not every set in Q is (k+1)-maximal. Thus Q must be 'trimmed' to delete sets included in other sets and repeated instances of a given set; the resulting list consists precisely of the (k+1)-maximal sets.‡

† All feasible sets considered in this appendix are taken with respect to {J_1, J_2, …, J_n}.
‡ Jackson (op. cit.) shows by a 'dominance' argument that, under special circumstances, some (k+1)-maximal sets can be dropped, but remarks that the resulting saving may be outweighed by the calculation required to identify these sets.
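Table I's growth of candidate sets V can be sketched as follows. This is a variant, not the paper's program: an explicit maximality check stands in for part of the later trimming, and the job-numbering convention stated in the table (J_m ≺ J_n implies m < n) is assumed.

```python
def one_maximal_given_s(S, times, prec, T):
    """Sketch of Table I: grow candidate sets V from jobs outside S in
    increasing index order, subject to property P, and keep those V that
    no extension at all can enlarge.  S must be a feasible (k-maximal)
    frozenset of job indices; prec lists (i, j) with i preceding j."""
    n = len(times)
    preds = [set() for _ in range(n)]
    for i, j in prec:
        preds[j].add(i)

    def prop_p(V, l):
        W = V | {l}
        # (i) S u V u {J_l} is feasible; (ii) the new jobs fit in one station
        return (all(preds[j] <= (S | W) for j in W)
                and sum(times[j] for j in W) <= T)

    results = []
    work = [frozenset()]                     # the list of candidates, initially {phi}
    while work:
        V = work.pop()
        k = max(V) if V else -1
        ext = [l for l in range(n)
               if l not in S and l not in V and prop_p(V, l)]
        grow = [l for l in ext if l > k]
        if grow:
            work.extend(V | {l} for l in grow)
        elif not ext:                        # V cannot be enlarged: 1-maximal|S
            results.append(V)
    return results
```

For instance, with S = φ, times (2, 3, 4), no precedence relations and T = 5, the 1-maximal sets are {0, 1} (total time 5) and {2} (neither remaining job fits alongside it).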


It does not seem possible to give a simple general procedure for comparing the amounts of calculation required by Jackson's procedure and our algorithm presented earlier. The length of the calculation required by the earlier algorithm is determined in essence by the number of times the quantity Q = C(S − Jj) + A[C(S − Jj), tj] is evaluated, and this is precisely the number of pairs of feasible sets of the form (S, S − Jj). For the special case in which there are no precedence constraints, the number

TABLE I
ALGORITHM FOR OBTAINING SETS WHICH ARE 1-MAXIMAL|S

1. Initialize Ω to {∅}. (Ω = list of partially formed 1-maximal sets given S; ∅ = the empty set. Note that {∅} is not the empty list.)

2. Choose any V ∈ Ω. Set k equal to max {i | Ji ∈ V}. (If V = ∅, k = 0.)

3. Delete V from Ω.

4. For every Jl ∉ S such that l > k and property P is satisfied, put V ∪ {Jl} in Ω. If no such Jl exists, go to step 5; otherwise go to step 2. (Property P: (i) S ∪ V ∪ {Jl} is feasible; (ii) Σ_{Ji ∈ V ∪ {Jl}} ti ≤ T.)

5. Put V in M. (M = list of sets which are 1-maximal|S.)

6. If Ω has been exhausted, stop; otherwise, go to step 2.

We assume the jobs are numbered so that if Jm < Jn, then m < n.

of such pairs is Σ_{k=1}^{n} k (n choose k) = n·2^{n−1}, where n is the number of jobs. It should be noted that, in evaluating Q, a considerable proportion of the time is spent not in calculation, but in determining the storage cell in which C(S − Jj) is located (cf. the concluding section).
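The procedure of Table I can be rendered as a worklist routine; the following sketch is one reading of it, with jobs indexed from 0 rather than 1 and `feasible` standing in for whatever precedence test the problem supplies. As the text notes, the output list may still contain sets included in other sets; these disappear in the later trimming step.

```python
def one_maximal_given(S, times, T, feasible):
    """Table I: list the candidate sets R that are 1-maximal given S.

    S        -- a set of already-assigned job indices (a k-maximal set)
    times    -- times[j] is the execution time of job Jj
    T        -- the cycle time
    feasible -- feasible(X) tests whether the job set X respects precedence
    """
    omega = [frozenset()]       # step 1: list of partial sets, seeded with the empty set
    M = []                      # output list
    while omega:                # step 6: stop when omega is exhausted
        V = omega.pop()         # steps 2-3: choose some V and delete it from omega
        k = max(V, default=-1)  # highest-numbered job in V (-1 when V is empty)
        extended = False
        for l in range(k + 1, len(times)):   # step 4: candidates Jl with l > k
            if l in S:
                continue
            W = V | {l}
            # property P: S ∪ V ∪ {Jl} is feasible and the total time of W is within T
            if feasible(S | W) and sum(times[j] for j in W) <= T:
                omega.append(W)
                extended = True
        if not extended:
            M.append(V)         # step 5: V admits no further extension
    return M

# Three unit-time jobs, T = 2, no precedence constraints, S = ∅:
M = one_maximal_given(frozenset(), [1, 1, 1], 2, lambda X: True)
trimmed = [R for R in M if not any(R < U for U in M)]
print(sorted(sorted(R) for R in trimmed))   # the three two-job sets survive trimming
```

Adding jobs only in increasing index order guarantees each candidate set is generated once, which is why the table assumes the job numbering is consistent with the precedence order.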

In estimating the amount of calculation required for Jackson's algorithm, it is necessary to consider both the execution of the algorithm of Table I and the process of trimming the list Q. The amount of time spent in executing the algorithm of Table I is essentially proportional to the number of times that property P [cf. Table I] is tested. For the special case where there are no precedence constraints and each job has


    an execution time of T/q, where q is a positive integer, the number of tests of property P is equal to

Σ_{k=0}^{(n/q)−1} (n choose kq) · Σ_{i=1}^{q+1} (n − kq choose i).

For most values of n and q, this greatly exceeds the n·2^{n−1} evaluations of Q required by our algorithm. The trimming of the list Q can also prove quite time consuming. Taking n = 12 and q = 3 in the above example, the number of 2-maximal sets is (12 choose 6) = 924; however, before reduction, the list Q contains 20 instances of each set. It may be estimated conservatively that the number of comparisons required in the reduction would exceed 10^6.
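The two counts in this example are easy to confirm (a quick check using Python's standard `math.comb`):

```python
from math import comb

n, q = 12, 3            # 12 jobs, each taking T/3, so 3 jobs fill one station
print(comb(n, 2 * q))   # 2-maximal sets = 6-job subsets: 924
# each 6-job set S ∪ R arises once per choice of its 3-job first half S
print(comb(2 * q, q))   # instances of each set before trimming: 20
```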

The comparison is not uniformly in favor of our method, and, particularly in small examples amenable to hand calculation, Jackson's method is sometimes more efficient. It is felt, however, that our algorithm is superior in most cases. Moreover, it is possible to predict in advance the storage required in applying the algorithm to a given problem, merely by counting the feasible sets. Such a simple prediction does not seem possible in the case of Jackson's algorithm.

    REFERENCES

    1. R. BELLMAN, Dynamic Programming, Princeton University Press, Princeton, New Jersey, 1957.

    2. E. H. BOWMAN, "Assembly-Line Balancing By Linear Programming," Opns. Res. 8, 385-389 (1960).

    3. J. W. BURGESON AND T. E. DAUM, "Production Line Balancing," 62 pp., File Number 10.3.002, I. B. M. Corp., Akron, Ohio, 1958.

4. M. HELD AND R. M. KARP, "A Dynamic Programming Approach to Sequencing Problems," J. Soc. Indust. Appl. Math. 10, 196-210 (1962).

    5. W. B. HELGESON AND D. P. BIRNIE, "Assembly Line Balancing Using the Ranked Positional Weight Technique," J. Indust. Eng. 12, 394-398 (1961).

    6. J. R. JACKSON, "A Computing Procedure for a Line Balancing Problem," Management Sci. 2, 261-271 (1956).

    7. M. D. KILBRIDGE AND L. WESTER, "The Balance Delay Problem," Manage- ment Sci. 8, 69-84 (1961).

    8. -, "A Heuristic Method of Assembly Line Balancing," J. Indust. Eng. 12, 292-298 (1961).

9. ——, "Heuristic Line Balancing: A Case," J. Indust. Eng. 13, 139-149 (1962).

10. M. E. SALVESON, "The Assembly Line Balancing Problem," J. Indust. Eng. 6, 18-25 (1955).

11. F. M. TONGE, "Summary of a Heuristic Line Balancing Procedure," Management Sci. 7, 21-39 (1960).
