

J Intell Manuf
DOI 10.1007/s10845-014-0956-x

Simulation-based optimization of sampling plans to reduce inspections while mastering the risk exposure in semiconductor manufacturing

M’hammed Sahnoun · Belgacem Bettayeb · Samuel-Jean Bassetto · Michel Tollenaere

Received: 18 February 2014 / Accepted: 5 August 2014
© Springer Science+Business Media New York 2014

Abstract Semiconductor manufacturing processes are very long and complex, requiring several hundred individual steps to produce the final product (chip). In this context, the early detection of process excursions or product defects is very important to avoid massive potential losses. Metrology is thus a key step in the fabrication line. Whereas a 100% inspection rate would be ideal in theory, the cost of the metrology devices and the cycle time losses due to these measurements would completely inhibit such an approach. On the other hand, skipping some measurements is risky for quality assurance and processing machine reliability. The purpose is to define an optimized quality control plan that reduces the required control capacity while maintaining enough trust in quality controls. The method adopted by this research is to employ a multi-objective genetic algorithm to define an optimized control plan able to reduce the metrology capacity used without increasing the risk level. Early results, based on one month of real historical data, reveal a possible reallocation of controls with a decrease of more than 15% in metrology capacity while also reducing the risk level on the processing machine (expressed by the wafer at risk (W@R)) by 30%.

M. Sahnoun
IRISE/CESI, 76130 Mont St Aignan, France
e-mail: [email protected]

B. Bettayeb (B) · S.-J. Bassetto
Department of Mathematics and Industrial Engineering, Polytechnique Montréal, Montreal, Canada
e-mail: [email protected]

S.-J. Bassetto
e-mail: [email protected]

M. Tollenaere
University of Grenoble Alpes, G-SCOP, 38000 Grenoble, France
e-mail: [email protected]

M. Tollenaere
CNRS, G-SCOP, 38000 Grenoble, France

Keywords Genetic algorithm · Sampling · Control plan · Wafer at risk · Simulation · Multi-objective optimization

Introduction

Semiconductor manufacturers seek to optimize their control plan for reliable production. In this framework, accurate sampling policies have to be elaborated in order to: (1) reduce cycle time losses, and consequently work in process (WIP) at inspection steps (Chien et al. 2012), (2) monitor and reduce process variability, (3) reduce the risk level of processing machines, measured in terms of potential faulty products (Shanoun et al. 2011), and (4) better manage the evolution of technologies in terms of maturity and global mix.

The fact remains that optimized sampling policies are difficult to implement in high-mix semiconductor facilities owing to: (1) permanent changes of the product mix, which sometimes lead to significant fluctuations in the WIP level, (2) a high level of interaction among a number of organizations with a stake in the fab, and (3) a huge number of process operations and technologies processed in the same step (Liu et al. 2005; Bassetto and Siadat 2008).

Due to the complexity of the semiconductor manufacturing system, it is important to continuously monitor the state of the processing machines and control the quantity of uncertain-quality products. Previous works by the authors were focused on defining an operational index for the evaluation of the risk exposure level (Sahnoun et al. 2010; Shanoun et al. 2011; Bettayeb et al. 2014). This index, called wafer at risk (W@R), is computed for each processing machine and is expressed by the number of products (wafers) processed since the last time that machine was validated or corrected by a quality control.
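To make the index concrete, the W@R bookkeeping described above can be sketched as follows (an illustrative sketch, not the authors' implementation; the event-log format is an assumption):

```python
# Illustrative sketch: W@R for one processing machine is the number of
# wafers processed since the machine was last validated or corrected
# by a quality control.
def wafer_at_risk(events):
    """events: chronological list of ("process", n_wafers) or ("control",) tuples."""
    w_at_r = 0
    for event in events:
        if event[0] == "process":
            w_at_r += event[1]   # wafers accumulate at risk
        elif event[0] == "control":
            w_at_r = 0           # a validating control resets the counter
    return w_at_r

# Example: 3 lots of 25 wafers, a validating control, then 2 more lots
log = [("process", 25)] * 3 + [("control",)] + [("process", 25)] * 2
print(wafer_at_risk(log))  # → 50
```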

This work focuses on judicious inspection allocation. Specifically, we are seeking to reduce the metrology capacity used, while at the same time respecting predefined W@R levels.

A sampling plan impacts all the organizations involved in the fab. It directly influences the inspection buffer size (Bulgak et al. 1995) and the metrology time delay (Garvin et al. 2002), the latter being an important parameter in computing the risk level (W@R) (Sahnoun et al. 2012) and the maintenance plan (Chan and Wu 2009).

Several studies have been carried out on the risk-based optimization of the sampling plan and the dynamic allocation of inspections in semiconductor fabs (Good and Purdy 2007; Lee 2002).

Recently, Dauzère-Pérès et al. (2010) proposed a sampling indicator to choose, in real time, which lot has to be measured. Their sampling heuristic is based on a sum of the ratios between the number of wafers processed and some associated target limits for processing machines. The lot chosen for the metrology is the one that will reduce this indicator the most. Its counterpart, at the planning level, defines target limits for operations and tools, and has been defined by Bettayeb et al. (2012a, b). A target control plan is computed using a two-step greedy method that allocates the available metrology capacity to ensure a level of W@R that does not exceed a predefined limit for all processing machines.

In more general terms, the sampling problem can be seen as an inspection allocation problem, as metrology capacities are limited. Some five decades ago, an economic algorithm was developed for inspection allocation (Lindsay and Bishop 1964), based on a cost function per unit produced and taking into account the inspection cost and its location in the process. Since their paper was published, the field of inspection allocation has become an area of intense research. We recommend the surveys in Raz (1986) and Tang and Tang (1994) for a complete picture of the field. Several of the more remarkable works on the subject have been of particular importance in the effort to improve production reliability. In Villalobos et al. (1993), for example, a flexible inspection system for serial and multi-stage production systems in the field of printed circuit boards is presented. The authors provide a dynamic programming algorithm to optimize global goals (like costs), while taking into account some local constraints, like inspection tool availability. An interesting case of information-based inspection allocation is presented in Verduzco et al. (2001), in which a cost function is modeled taking into account Type I and Type II errors, and the information gain achieved by each measurement. They also formulate the inspection allocation problem as a Knapsack Problem (KP), and propose a greedy algorithm to solve it. Their simulations reveal that the information-based solution performs better than static inspections, in terms of classification errors. In Rabinowitz and Emmons (1997) and Emmons and Rabinowitz (2002), an inspiring non-linear modeling of the inspection allocation problem is presented, in which the time elapsed since the last inspection is introduced, and the proportion of defect-free items is linked to that time period. This conceptualization is strongly related to the concept of W@R. An interesting model of inspection allocation effort is also presented in Kogan and Raz (2002), where the authors seek to minimize a cost function influenced by inspection costs, as well as multiple failure modes that can affect each production step. They also suggest multiple layers of detection that can be used for every product, at every production stage.

Fig. 1 Sampling plan based on operations
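A greedy knapsack heuristic of the kind described for information-based inspection allocation can be sketched along these lines (a hedged illustration, not Verduzco et al.'s actual algorithm; candidate names, gains and costs are invented):

```python
# Sketch of a greedy knapsack heuristic for inspection allocation:
# candidate inspections are ranked by information gain per unit of
# metrology capacity and selected until the capacity budget runs out.
def greedy_inspection_allocation(candidates, capacity):
    """candidates: list of (name, info_gain, cost). Returns selected names."""
    ranked = sorted(candidates, key=lambda c: c[1] / c[2], reverse=True)
    selected, used = [], 0.0
    for name, gain, cost in ranked:
        if used + cost <= capacity:
            selected.append(name)
            used += cost
    return selected

candidates = [("op_A", 4.0, 2.0), ("op_B", 3.0, 1.0), ("op_C", 5.0, 4.0)]
print(greedy_inspection_allocation(candidates, capacity=3.0))  # → ['op_B', 'op_A']
```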

An optimal sampling plan can be defined as a highly combinatorial search problem in which the solution space increases exponentially with the number of decision variables. To cope with the combinatorial explosion of these solutions, evolutionary algorithms, like genetic algorithms (GA), can be used. These are techniques capable of evaluating hundreds of thousands of possible solutions and making them converge toward the best alternatives. GAs are highly suitable for sampling plan optimization problems, because of the ease with which codification can be performed to define chromosomes (cf. “Codification and population generation” section; Fig. 1). In addition, several researchers have shown the effectiveness of GAs for the optimization of sampling plans in various types of industry (Gen and Lin 2013). Rau and Cho (2009) propose a GA modeling for the control plan in reentrant production systems. Zhang et al. (2013) propose a hybrid sampling strategy-based multiobjective evolutionary algorithm to deal with the process planning and scheduling problem. Vinod et al. (2004) use a GA approach to reduce the risk of radiation exposure in the nuclear industry by avoiding unnecessary inspections. Kancev et al. (2011) optimize test intervals for aging equipment by a multi-objective genetic algorithm (MOGA) approach. Lin et al. (1998) designed a random inspection rate for a flexible assembly system based on the GA approach.

The works cited above propose several GAs aiming to optimize some operational performances of industrial systems. However, they did not consider more than two objectives, and the risk criterion is never combined with other objectives. Moreover, the authors did not find any use of GAs to reduce risk exposure in semiconductor manufacturing, because of the difficulty of defining its corresponding objective function. That is why a simulation-based evaluation of the objective function is used in this paper.

Our goal in this paper is to present a MOGA designed to optimize the control plan at the operational level, i.e. guiding real-time decisions at the lowest organizational level in the fab. At this level of decision, it is necessary to decide, for each produced lot, whether or not it has to be inspected, based on the current sampling rate. The MOGA gives experts the possibility of redesigning the control plan according to some specific needs in terms of metrology capacity usage and risk exposure mastery. This GA will find a near-optimal sampling plan, while at the same time reducing the risk level (W@R) of processing machines and the metrology capacity used. A side-effect is that the metrology time delay is also reduced. In order to process this algorithm, a model of the sampling plan based on processing operations is proposed. In order to perform the MOGA, a sub-simulator has been developed to compute several production system indicators: the W@R for each tool, and the inspection capacity used for each sampling plan. This sub-simulator is used to estimate the impact of sampling plans on fab performance and as an evaluation function for the GA.

This paper is organized as follows. “Context and problem statement” section presents the context of this research and states the problem of interest. The model of the sampling plan is formalized in “Formalization of the sampling plan design problem” section. “GA-based sampling plan optimization” section presents the MOGA used to optimize the sampling plan, and the simulator used for the evaluation of each chromosome is briefly presented. The results obtained are presented and discussed in “Implementation, experiments, and discussion” section. “Conclusions and perspectives” section provides our conclusions and perspectives.

Context and problem statement

The authors were privileged to participate in the European IMPROVE project, led by STMicroelectronics. This 42-month project (2009–2012) brought together 36 academics and representatives of the semiconductor industry to Implement Manufacturing science solutions to increase equipment productivity and fab performance. This paper is based on a sub-workpackage of this project aiming at improving control plans. During this project, strong interactive relationships were formed and joint developments were achieved between the authors and researchers at STMicroelectronics from the 300 mm research and production front-end fab, located in Crolles, France. In this plant several ranges of integrated circuits (IC) are designed, prototyped and produced. They are employed in various applications: medical, defense, automotive and communications, among others. Each IC is made of hundreds of millions of transistors that belong to a technology, which, in the CMOS industry, is labeled according to the key characteristic of a transistor: the width of the oxide gate. During the project, that width ranged from 32 to 90 nm.

Process control operations focus on the set-up of the various tools that make up the production system, and it is these operations that are responsible for inspection allocation and the definition of the control plan. The practice in this fab is to rely on internal process experts, who flag operations that can be controlled and define their sampling rates. In order to justify a control plan improvement, a dedicated change management committee has to approve the change based on a factual before/after evaluation of some performance indicators related to metrology capacity usage and risk mitigation. Manufacturing data are extracted manually from manufacturing databases, sorted and pretreated. This is a task that often takes more than a working day to complete. The next step is to build a simulation for each sampling plan, in order to compute what-if scenarios. Each of these scenarios retrieves new performance indicator values. This operation takes at least half a day to complete. For the engineer, the main challenge is to choose the sampling plan that provides the most gain and satisfies some predefined constraints. This sampling plan will constitute a sequence of non-zero sampling rates, each of which corresponds to a (technology, operation) pair in the process flow model that can be sampled. The objective is to avoid the possibility of leaving any operation or technology without a control during the production time considered (Sahnoun et al. 2012).

This operation is by far the most complex from a combinatorial point of view. The sampling rate per (technology, operation) pair ranges from 1 to 100%. For instance, by considering the processing operations in the Lithography and Etching areas alone, more than 1,200 (technology, operation) pairs have been identified, which generates a huge number of possible sampling plan combinations: 1,200^100.

The methodology of meta-heuristic optimization is the most suitable for solving the problem formulated in this paper, where a knowledge system and optimization should be considered simultaneously (Takahara and Miyamoto 1999). This problem belongs to the multi-objective optimization class of problems (Fonseca and Fleming 1995), and GAs have proven their efficiency in solving this kind of problem (Cheshmehgaz et al. 2012; Simaria and Vilarinho 2004). Because of the complex dynamics and the random behavior of lots in the fab, it is not easy to define the W@R with a mathematical function. Simulation is considered the best way to compute the exact value of W@R, in spite of the considerable computation time involved. For all these reasons, and because the GA is easily applicable to the model of the control plan, we propose using MOGA-based simulation to define the optimized sampling plan. The operations that can be sampled are selected by an expert, and the GA is run using simulation as an evaluation function. Populations are generated until the best chromosome of the last population has passed the final test, as shown in Fig. 2.

Fig. 2 The approach: a tool to support decision making for sampling plan optimization

Formalization of the sampling plan design problem

This section explains how the sampling plan problem has been formalized, providing details of the objectives and constraints to be considered in constructing the best solution. As mentioned in the previous section, in semiconductor manufacturing, any quality control plan design or change has to be negotiated in order to avoid generating undesirable effects on fab systems. This is a major challenge for every quality control planner. As metrology is always considered a necessary evil in manufacturing fabs (Bunday et al. 2007), there is great pressure to reduce the number of wafers sampled for inspection. Moreover, everyone is conscious of the effects of sampling on the stability of the manufacturing system and the investigation capacity. Consequently, the controls are subject to a daily tradeoff, in order to maintain a given level of data. For some products, reducing sampling can release metrology capacity without undesirable consequences. Fortunately, data are sometimes redundant (Shanoun et al. 2011) and/or could be obtained in other ways, such as by Virtual Metrology (VM) (Su et al. 2008; Pan and Tai 2011). However, for others, skipping measurements could increase risk (an increase in W@R, which is the number of products processed since the last inspection, cf. “Introduction” section), because some operations are never controlled or because controls focus only on specific products. It is important to notice that VM is complementary to our approach, because it also gives the possibility of reducing the number of inspections. However, this method is not always applicable, and real measurements are needed to adjust its mathematical model.

Let x denote a given sampling plan that is composed of the sequence of sampling rates of the (technology, operation) pairs (Fig. 1). Each sampling plan is characterized by its own sampling plan saturation (SPS_x) value, which is the proportion of the metrology capacity used when applying sampling plan x relative to the metrology capacity needed for a 100% sampling plan, and is defined as follows:

SPS_x = CAPA_x / CAPA_100%    (1)

where CAPA_x is the metrology capacity used when sampling plan x is applied, and CAPA_100% is the metrology capacity needed for a 100% sampling plan, which is supposed to be equal to the total available capacity.

Note that it is possible to define several sampling plans for a given SPS. For example, consider a process flow composed of two process operations. If the available metrology capacity allows inspection of the first operation 6 times, and the second one 10 times, an SPS of 50% can be associated with the following list of feasible sampling plans:

S1 = (s^1_1 = 1/6, s^1_2 = 7/10);  S2 = (s^2_1 = 2/6, s^2_2 = 6/10);
S3 = (s^3_1 = 3/6, s^3_2 = 5/10);  S4 = (s^4_1 = 4/6, s^4_2 = 4/10);
S5 = (s^5_1 = 5/6, s^5_2 = 3/10);  S6 = (s^6_1 = 6/6, s^6_2 = 2/10).

Consequently, each SPS is representative of a class of feasible sampling plans.
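The two-operation example can be reproduced programmatically. The sketch below (an illustration, not the authors' code) enumerates the feasible splits of the 8 inspections allowed by a 50% SPS, imposing at least one inspection per operation as in the plans S1 to S6:

```python
# Sketch of the two-operation example: with full-sampling capacities of
# 6 and 10 inspections, an SPS of 50% corresponds to 8 inspections,
# which can be split between the operations in several ways -- each
# split is one feasible sampling plan of the same SPS class.
from fractions import Fraction

def plans_for_sps(capacities, sps):
    """Enumerate two-operation plans matching the given SPS,
    with at least one inspection per operation."""
    total = round(sps * sum(capacities))  # inspections allowed by this SPS
    c1, c2 = capacities
    plans = []
    for k1 in range(1, c1 + 1):           # at least one inspection of op 1
        k2 = total - k1
        if 1 <= k2 <= c2:                 # at least one inspection of op 2
            plans.append((Fraction(k1, c1), Fraction(k2, c2)))
    return plans

for s1, s2 in plans_for_sps((6, 10), 0.5):
    print(f"s1 = {s1}, s2 = {s2}")
```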

GA-based sampling plan optimization

As mentioned in “Context and problem statement” section, the quality control plan of a complex manufacturing system needs to be adapted to dynamic operational changes in the fab. The use of automatic methods to search for an optimized sampling rate for each (technology, operation) pair is essential, because of the very large number of combinations that make up the solution space. We maintain that a GA could be easily applied to our model to define near-optimal sampling plans.

The approach

The proposed approach, presented in Fig. 2, consists of providing a decision-making support tool that makes it possible to: (1) define different scenarios relative to metrology capacity usage and to risk exposure objectives and constraints; and (2) optimize sampling plans using a MOGA, where the evaluation function is based on simulations that use historical process and metrology data.

The approach begins with pretreatment of the historical data. Data are extracted from process and metrology databases covering a predefined period of time. In this step, the actual sampling plan is evaluated through performance indicators. This step also enables rebuilding of the process flow model, the resource capacities/qualifications and the production mix. Based on the performance indicators, experts analyze these results, and may ask the following questions: (1) Is the current sampling plan optimized? (2) Could the metrology capacity used be reduced with limited impact on risk exposure? and (3) What happens if some operations are sampled with lower sampling rate thresholds? These questions are incorporated into the support tool in the form of the constraints and objectives to be taken into account in the optimization process that will use the MOGA.

The MOGA is then run to find the sampling plan (SP) that best fits the predefined constraints and objectives. The evaluation of each solution generated by the GA operators is performed using simulation based on the historical data. The best SP obtained by the MOGA is displayed, and the expert decides whether or not the result is satisfactory in terms of meeting objectives. If it is not, the expert may choose to modify his initial objectives and constraints, and try again.

Objectives and constraints

The objectives of the optimization process cover two aspects: metrology capacity allocation and risk exposure mastery.

Metrology capacity allocation

Metrology or measurement operations are often perceived as having no “value-added”, which increases operational costs and slows down the rate of production. Moreover, they generally need expensive metrology resources, which is why metrology capacity needs to be carefully allocated, in order to balance the underlying measurement costs with the objectives of risk exposure reduction. As metrology machines are expensive, ranging from $1 million to $3 million, any effective savings realized are welcome. Therefore, the first criterion constituting the objective function is CAPA_x, which refers to the metrology capacity used over the entire production period during which sampling plan x is applied.

Risk exposure mastery

To reduce W@R, several indicators have been computed to add an insurance perspective to the control plan. In order to avoid a scrap crisis, a MaxW@R indicator has been created, which is the maximum number of wafers that can be lost at once in the production system, after a possible drift. Reducing this value has a direct impact on the likelihood of a major scrap event. The second criterion is then MaxW@R_x, which is the value of the highest W@R reached over the entire production period during which sampling plan x is applied. It is expressed as follows:

MaxW@R_x = max_z MaxW@R^x_z    (2)

where z ∈ {1, . . . , Z} denotes the processing machine index, with Z the number of processing machines, and MaxW@R^x_z denotes the maximum value of W@R reached by processing machine z during the period of time considered.

The MeanW@R value is an average value of the risks, in number of products, over the production system considered. It is used to seek W@R improvement, even if there is no relevant MaxW@R peak. The third criterion is then MeanW@R_x, which is the mean W@R over the entire production period. It is expressed as follows:

MeanW@R_x = (1/Z) Σ_{z=1}^{Z} MeanW@R^x_z    (3)

where MeanW@R^x_z denotes the average value of W@R of processing machine z during the period of time considered.

The MeanMaxW@R is the mean maximum loss over the tools. The fourth criterion is then MeanMaxW@R_x, which is the mean MaxW@R reached by all the processing machines (denoted z) over the entire production period. It is computed as follows:

MeanMaxW@R_x = (1/Z) Σ_{z=1}^{Z} MaxW@R^x_z    (4)

Each of the presented criteria derived from the W@R is related to a specific phenomenon, which cannot be observed easily through another criterion. For instance, it is possible to have at the same time a small value of MeanW@R and a large value of MaxW@R, and vice versa, which have different impacts in terms of potential losses.
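The aggregate criteria of Eqs. (2) to (4) can be sketched as follows (illustrative code, with made-up per-machine W@R trajectories):

```python
# Sketch of the W@R-based criteria: MaxW@R, MeanW@R and MeanMaxW@R
# aggregate the per-machine W@R trajectories over the simulated period.
def risk_criteria(w_at_r_series):
    """w_at_r_series: dict machine -> list of W@R values over the period."""
    max_per_machine = {z: max(s) for z, s in w_at_r_series.items()}
    mean_per_machine = {z: sum(s) / len(s) for z, s in w_at_r_series.items()}
    Z = len(w_at_r_series)
    return {
        "MaxW@R": max(max_per_machine.values()),          # Eq. (2)
        "MeanW@R": sum(mean_per_machine.values()) / Z,    # Eq. (3)
        "MeanMaxW@R": sum(max_per_machine.values()) / Z,  # Eq. (4)
    }

series = {"M1": [10, 40, 20], "M2": [5, 15, 10]}
print(risk_criteria(series))
```

Note how a machine with a brief W@R peak drives MaxW@R up even when MeanW@R stays small, which is why the criteria are kept separate.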


The sampling plan to be optimized can be subject to some constraints that define the solution space from which the optimized sampling plan should be chosen. These constraints concern the variation of the performance indicators relative to those of the initial sampling plan. The constraints are the following:

– Constraint on the metrology capacity used:
The user defines his needs with the parameter ΔCAPA_c(%), which represents the lower bound of the percentage of reduction of the capacity used. The corresponding constraint on that capacity is defined as follows:

ΔCAPA_x(%) ≥ ΔCAPA_c(%)
⇒ (CAPA_init − CAPA_x) / CAPA_init × 100 ≥ (CAPA_init − CAPA_c) / CAPA_init × 100
⇒ CAPA_x ≤ CAPA_c

where CAPA_c is the value of the constraint on the capacity used, CAPA_init is the capacity used by the initial sampling plan, and CAPA_x is the capacity used by sampling plan x.

– Constraint on MeanMaxW@R:
This constraint defines the tolerance related to the mean of MaxW@R_z over all the processing machines considered. An acceptable solution under this constraint respects the following relation:

ΔMeanMaxW@R_x(%) ≤ ΔMeanMaxW@R_c(%)
⇒ MeanMaxW@R_x ≤ MeanMaxW@R_c

where:

ΔMeanMaxW@R_x(%) = (MeanMaxW@R_x − MeanMaxW@R_init) / MeanMaxW@R_init × 100;
ΔMeanMaxW@R_c(%) = (MeanMaxW@R_c − MeanMaxW@R_init) / MeanMaxW@R_init × 100;

MeanMaxW@R_c is the constraint on MeanMaxW@R, MaxW@R^x_z is the maximum of W@R reached by processing machine z over the simulated period using sampling plan x, and MaxW@R^init_z is the maximum of W@R reached by processing machine z over the simulated period using the initial sampling plan.

– Constraint on MeanW@R:
This constraint defines the tolerance regarding the average value of W@R for all processing machines. An acceptable solution under this constraint respects the following relation:

ΔMeanW@R_x(%) ≤ ΔMeanW@R_c(%)
⇒ MeanW@R_x ≤ MeanW@R_c

where:

ΔMeanW@R_x(%) = (MeanW@R_x − MeanW@R_init) / MeanW@R_init × 100;
ΔMeanW@R_c(%) = (MeanW@R_c − MeanW@R_init) / MeanW@R_init × 100;

MeanW@R_c is the constraint on the mean of W@R, MeanW@R_x is the average value of W@R over the simulated period using sampling plan x, and MeanW@R_init is the average value of W@R over the simulated period using the initial sampling plan.

– Constraints on MaxW@R per machine:
This constraint defines the tolerance of MaxW@R for each processing machine. An acceptable solution under this constraint respects the following relation:

ΔMaxW@R^x_z(%) ≤ ΔMaxW@R_{c,z}(%)  ∀z ∈ {1, .., Z}
⇒ MaxW@R^x_z ≤ MaxW@R_{c,z}  ∀z ∈ {1, .., Z}

where:

ΔMaxW@R^x_z(%) = (MaxW@R^x_z − MaxW@R^init_z) / MaxW@R^init_z × 100;
ΔMaxW@R_{c,z}(%) = (MaxW@R_{c,z} − MaxW@R^init_z) / MaxW@R^init_z × 100;

MaxW@R_{c,z} is the constraint on MaxW@R for processing machine z, MaxW@R^x_z is the maximum of W@R for processing machine z over the simulated period using sampling plan x, and MaxW@R^init_z is the maximum of W@R for processing machine z over the simulated period using the current sampling plan.

Remark For the MOGA, each of the preceding constraints can be either positive or negative, depending on the user's expectations. For instance, a positive ΔMeanMaxW@R_c(%) implies the user's tolerance of a W@R increase. If the user requires a W@R reduction, ΔMeanMaxW@R_c(%) is set to a negative value.
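Each W@R-type constraint reduces to a bound on the relative variation of an indicator with respect to the initial sampling plan, which can be sketched as follows (illustrative, not the authors' tool; the numeric values are invented):

```python
# Sketch of the constraint test: the relative variation of an indicator
# (MeanW@R, MeanMaxW@R, or per-machine MaxW@R) must stay below the
# user-defined bound. A negative bound demands a reduction; a positive
# one tolerates an increase.
def satisfies(value_x, value_init, delta_c_percent):
    """True if the indicator's relative variation stays within the bound."""
    delta_x = (value_x - value_init) / value_init * 100
    return delta_x <= delta_c_percent

# Require at least a 10% reduction of MeanMaxW@R (bound set negative):
print(satisfies(85.0, 100.0, -10.0))  # → True  (a 15% reduction)
print(satisfies(95.0, 100.0, -10.0))  # → False (only a 5% reduction)
```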

Codification and population generation

The implementation of GAs requires codification of the solution space into a finite set of finite-length strings, which are the chromosomes (see Fig. 1). In the proposed GA, each chromosome is a feasible sampling plan where a gene corresponds to the sampling rate of an operation that can be sampled, i.e. a (technology, operation) pair. A gene is coded by a number varying from 1 to 100%, in increments of 1%. The size of the chromosomes depends on the selected pairs that can be sampled, which are defined by the expert. At each iteration of the GA, a new population is generated using classical genetic operators with different proportions (selection, mutation, crossover and random generation), except the initial one, which is generated completely randomly.
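A minimal sketch of this codification and of the random initial population, assuming Python's standard `random` module (the population size is an arbitrary illustration):

```python
# Sketch of the chromosome codification: one chromosome per sampling
# plan, one gene per (technology, operation) pair, each gene an integer
# sampling rate between 1 and 100 % (1 % increments). The initial
# population is drawn completely at random.
import random

def initial_population(pop_size, n_pairs, seed=None):
    rng = random.Random(seed)
    return [[rng.randint(1, 100) for _ in range(n_pairs)]
            for _ in range(pop_size)]

pop = initial_population(pop_size=50, n_pairs=1200, seed=42)
print(len(pop), len(pop[0]))  # → 50 1200
```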

Evaluation by simulation

A simulator has been developed to apply each new sampling plan and then evaluate the W@R indicators for each processing machine and the corresponding value of the metrology capacity used, CAPA_x (Fig. 3).

The input data of the simulation are based on two sets of historical data extracted for the same period of time. The first set contains information about the production system: the lot name, the equipment on which it has been processed, the number of wafers in the lot, the date of processing, the operation name, and the technology. The second set of historical data is related to metrology events, which are characterized by the metrology equipment, the lot name, the inspection date (entrance), the process operation measured, and the associated technology.

The initial sampling plan computed from the inputs is denoted by S^init = (s_1^init, ..., s_i^init, ..., s_I^init), and is characterized by its SPS (Sampling Plan Saturation) indicator:

SPS^init = CAPA^init / CAPA^100%    (5)

As illustrated in Fig. 3, the simulation is performed in four steps, which are detailed below.

Step 1 - Flag potential lots for inspection
This step consists of selecting, from the metrology historical data, the subset of measured lots when sampling plan x is applied. It should guarantee the desired sampling rates s_i^x for all the pairs (technology, operation) indexed by i ∈ {1, ..., I}.

For each lot l in the metrology historical data, a partial sampling rate is computed, as follows:

sp_i(l) = (N^insp_i(l) + 1) / (N^proc_i(l) + 1)    (6)

where i(l) is the index of the operation corresponding to the lot l ∈ {1, ..., L}, L is the number of lots in the metrology historical data, N^insp_i(l) is the number of inspected lots belonging to operation i(l) up to the current lot l, and N^proc_i(l) is the number of processed lots belonging to operation i(l) up to the current lot l.

This partial sampling rate (sp_i(l)) is then compared to s^x_i(l). If the partial sampling rate is lower than s^x_i(l), the lot l is flagged and the number of inspected lots relative to operation i(l) is increased by 1 (N^insp_i(l) = N^insp_i(l) + 1). Whatever the result of this test, the number of processed lots relative to operation i(l) is increased by 1 (N^proc_i(l) = N^proc_i(l) + 1).

At the end of this step, the new sampling plan x is represented by the set of flagged lots in the metrology historical data.
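A minimal sketch of this flagging rule (Python; the data layout — lots as a chronological list of operation indices — is a hypothetical simplification):

```python
def flag_lots(lots, target_rates):
    """Step 1 sketch: flag lots so that, for each operation i, the realized
    sampling rate tracks the target rate s_i^x via Eq. (6).
    `lots` is a chronological list of operation indices; `target_rates`
    maps operation index -> rate in [0, 1]."""
    n_insp, n_proc, flagged = {}, {}, []
    for l, i in enumerate(lots):
        sp = (n_insp.get(i, 0) + 1) / (n_proc.get(i, 0) + 1)  # Eq. (6)
        if sp < target_rates[i]:          # realized rate lags the target
            n_insp[i] = n_insp.get(i, 0) + 1
            flagged.append(l)
        n_proc[i] = n_proc.get(i, 0) + 1  # counted whatever the outcome
    return flagged

# e.g. a 50 % target on six lots of operation 0:
print(flag_lots([0] * 6, {0: 0.5}))  # → [2, 4]
```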

Step 2 - Flag the process data
After defining the lots to be inspected, they must be flagged in the historical processing data. In this step, a lot is flagged when it has already been flagged in the inspection data for the same operation. The processing dates of flagged lots are used in the next steps.

Fig. 3 Simulation of W@R and τ for a sampling plan x


Fig. 4 Inspection queue management

Step 3 - Manage the inspection queues
The measurement operations of interest in this study are performed by stand-alone metrology equipment. Each has a separate queue in which the lots are placed before being measured.

This step defines the metrology schedule for each sampled lot (flagged in the previous step), according to sampling plan x, without changing the initial dispatching of the sampled lots to the metrology equipment.

As the number of flagged lots is less than or equal to the number of measured lots in the initial sampling plan, some lots in the historical metrology data are skipped. These lots are removed and replaced by the new sampled lots in the metrology queues (see Fig. 4). The new measurement dates of these lots are advanced in order to fill the empty slots. These dates should not be earlier than the date of processing plus the transportation time (T_t). This condition is represented in the following equation:

D_nm(l) ≥ D_pro(l) + T_t(l)    (7)

where D_nm(l) is the new measurement date of lot l, and D_pro(l) is the processing date of lot l.

The resulting maximum wait time gain is the sum of the measurement times of all the skipped lots, as illustrated in Fig. 4. At the end of this step, the measurement dates of the lots selected according to sampling plan x are defined.
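A minimal sketch of this rescheduling rule, under hypothetical simplifications (a single metrology queue, a fixed transport time and measurement duration):

```python
def schedule_measurements(proc_dates, transport, meas_duration, queue_free=0.0):
    """Step 3 sketch: place sampled lots in one metrology queue. A lot is
    advanced to the earliest free slot, but never before its processing
    date plus the transportation time, i.e. Eq. (7)."""
    dates = []
    for d in proc_dates:
        start = max(queue_free, d + transport)  # D_nm(l) >= D_pro(l) + T_t(l)
        dates.append(start)
        queue_free = start + meas_duration      # tool busy until then
    return dates

# Three lots processed at t = 0, 1, 10; transport 2; measurement 3:
print(schedule_measurements([0, 1, 10], 2, 3))  # → [2, 5, 12]
```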

Step 4 - Computation of W@R
In this step of the simulation, the indicators based on W@R are computed separately for each processing machine. For each process event, i.e. a lot l processed by the processing machine z at time D_proc(l), W@R_z is increased incrementally by the number of wafers in lot l. If lot l is flagged in Step 2, it is saved in a metrology buffer. Then, the algorithm checks for a measurement event in the period of time between the processing date of the current lot (D_proc(l)) and the processing date of the next lot (D_proc(l + 1)). If a measurement event is found, W@R_z is decreased by the value W@R_reduction(l′) of the measured lot l′, and the W@R_reduction of all the lots contained in the measurement buffer is decreased by W@R_reduction(l′) (see Shanoun et al. 2011 for more details).

This simulation also gives us the opportunity to compute the time delay τ between processing and measurement, in order to study the behavior of lots between these two steps and the variation of τ for each sampling plan.
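Under strong simplifying assumptions (one machine z, processing and measurement events already interleaved in date order), the W@R bookkeeping of Step 4 can be sketched as follows; the full buffer management of Shanoun et al. (2011) is omitted:

```python
def simulate_war(events):
    """Step 4 sketch for one machine z. A ('proc', lot, wafers) event adds
    the lot's wafer count to W@R_z and records the lot's W@R reduction;
    a ('meas', lot, _) event removes that reduction from W@R_z and from
    every lot still waiting in the buffer."""
    war, history, reduction = 0, [], {}
    for kind, lot, wafers in events:
        if kind == "proc":
            war += wafers
            reduction[lot] = war           # wafers at risk when lot left z
        else:                              # measurement of `lot`
            r = reduction.pop(lot, 0)
            war -= r
            for k in reduction:            # shrink buffered reductions too
                reduction[k] = max(0, reduction[k] - r)
        history.append(war)
    return history

# Two 25-wafer lots processed, then measured in order:
print(simulate_war([("proc", 1, 25), ("proc", 2, 25),
                    ("meas", 1, 0), ("meas", 2, 0)]))  # → [25, 50, 25, 0]
```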

Implementation, experiments, and discussion

The implementation of the proposed approach and its subsequent algorithms has been developed with MS Excel VBA 2010. This choice was made in order to facilitate case study testing and data exchange with industrial partners. Before launching this tool, historical processing and metrology data should already be loaded into two different sheets. A graphical interface (Fig. 5) allows the user to set the optimization parameters and run the MOGA. This interface is also used to select the (technology, operation) pairs to be considered in the sampling plan, using one of three possible alternatives:

– Selection of samplable process operations: For each operation selected, all the pairs where this operation is present are added to the set of pairs that can be sampled.

– Selection of samplable technologies: For each technology selected, all the pairs where this technology is present are added to the set of pairs that can be sampled.

– Selection of samplable (technology, operation) pairs: In this case, only the selected pairs are added to the set of pairs that can be sampled.

Moreover, as the technology mix can be different for each production time period, some operations and/or technologies may not be well represented in the historical metrology data extracted. If sampling is reduced for these pairs, there is a risk that control breaches will be generated or that too few data will be used in the Statistical Process Control. That is why there is the option to set a threshold that restricts the selection of operations that can be sampled to those for which the historical data contain a number of measurements greater than or equal to this threshold. The value of the threshold is entered in an input box that appears once the checkbox "Use Threshold" is activated.

After selecting the operations that can be sampled, the user selects and sets the constraints, the criteria to be minimized (objectives), and the MOGA stopping tests. Constraints are not mandatory, but at least one criterion and one stopping test


Fig. 5 Graphical interface of the optimization tool developed

should be selected in order to start the optimization algorithm. Some additional options, which are outside the scope of this paper, are used to configure the algorithm, depending on the format of the input data or the desired results.

MOGA implementation

The MOGA we implemented is described in Fig. 6. The basic principle of a GA is to generate successive populations of candidate solutions and to ensure they evolve toward better solutions. The generation of new populations is performed using techniques inspired by natural evolution (such as selection, mutation, and crossover), which are repeated until a predefined "stopping condition" is satisfied.
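The overall loop of Fig. 6 can be sketched with hypothetical callback signatures; the concrete evaluate, classify, evolve, and stop functions correspond to the subsections that follow:

```python
def moga(population, evaluate, classify, evolve, stop):
    """GA loop sketch (Fig. 6): evaluate each generation by simulation,
    rank it, and breed the next one until a stopping condition holds."""
    generation = 0
    while True:
        scores = [evaluate(c) for c in population]  # simulation (Fig. 3)
        ranked = classify(population, scores)       # best chromosome first
        generation += 1
        if stop(ranked, generation):
            return ranked[0]
        population = evolve(ranked)                 # select/mutate/cross/random

# Trivial usage: "evaluation" is the value itself, stop after 3 generations.
best = moga([3, 1, 2],
            evaluate=lambda c: c,
            classify=lambda pop, s: sorted(pop),
            evolve=lambda ranked: ranked,
            stop=lambda ranked, g: g >= 3)
print(best)  # → 1
```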

Initial sampling characterization

Before starting the optimization, an initial simulation is needed to compute the initial W@R (W@R^init), to define the initial sampling plan S^init and the number of measurements for each operation, N^init. The latter represents an upper bound for the corresponding operation of the sampling plan to be optimized. The values of MeanMaxW@R^init, MeanW@R^init, CAPA^init, and MaxW@R^init for each processing machine are also deduced from this simulation. Based on the parameter values selected by the user, these

Fig. 6 Genetic algorithm


results are then used to compute the effective values of the constraints:

MeanMaxW@R_c = (ΔMeanMaxW@R_c + 1) × MeanMaxW@R^init;
MeanW@R_c = (ΔMeanW@R_c + 1) × MeanW@R^init;
CAPA_c = (ΔCAPA_c + 1) × CAPA^init; and
MaxW@R_c,z = (ΔMaxW@R_c + 1) × MaxW@R^init_z    ∀z ∈ {1, ..., Z}.
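These effective values are plain arithmetic on the user's tolerances; as a sketch (the numeric values reuse the case study figures reported below):

```python
def effective_constraint(delta_pct, init_value):
    """Effective constraint = (delta + 1) x initial value, where delta is
    the user's tolerance in %: positive = allowed increase, negative =
    required reduction."""
    return (delta_pct / 100.0 + 1.0) * init_value

capa_c = effective_constraint(-10, 16660)    # CAPA must drop below 14,994
mmwar_c = effective_constraint(10, 352.5)    # MeanMaxW@R may rise to 387.75
```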

Initial population generation

The initial population is composed of 50 chromosomes, generated randomly using the "random" function of Excel VBA.

Evaluation and validation

Each chromosome is evaluated by simulation, as explained in the "Evaluation by simulation" section. In the case where one or more constraints are active, each individual (chromosome) is validated with respect to the constraints. Any chromosome that does not fulfill a constraint is downgraded with respect to the evaluation criterion (denoted CR) by increasing its value.

CR^x = CR^x_0 + k^x    (8)

with

k^x = Σ_{i=1..n} | Const(i) − Const_c(i) | × tr(i)    (9)

where CR^x_0 is the initial evaluation of the criterion (before downgrading) of sampling plan x, CR^x is the value of the downgraded criterion of sampling plan x, k^x represents the distance between the evaluation of the chromosome and the constraints, n is the number of constraints considered, Const(i) is the value of constraint i for the chromosome concerned, Const_c(i) is the effective value of constraint i, | · | is the absolute value function, and tr(i) is a Boolean variable equal to 1 if constraint i is transgressed, and 0 otherwise.
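A sketch of this penalty, under the assumption (not stated explicitly in the text) that a constraint is transgressed when the chromosome's value exceeds its effective limit:

```python
def downgraded_criterion(cr0, values, limits):
    """Eqs. (8)-(9) sketch: add |Const(i) - Const_c(i)| to the raw
    criterion CR0 for every transgressed constraint i (tr(i) = 1 here
    assumed to mean value > effective limit)."""
    k = sum(abs(v - c) for v, c in zip(values, limits) if v > c)
    return cr0 + k

# First constraint transgressed by 10, second satisfied:
print(downgraded_criterion(100.0, [120.0, 50.0], [110.0, 60.0]))  # → 110.0
```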

Stopping test

The GA creates new populations until the condition of the stopping test is satisfied. Three kinds of stopping test are proposed.

1. A stopping test based on the value of the criteria: the user can select one or more conditions on the criteria. The optimization is stopped when the value of the criteria of the best chromosome of a population satisfies all the specified values.

2. A stopping test based on the optimization time: the algorithm stops when a specified time limit is reached.

3. A stopping test based on the number of generations: the algorithm stops when a specified number of generations is reached.

If the user defines several stopping conditions, the optimization stops once the first of them has been satisfied. If no stop condition is fulfilled, the algorithm goes back to validation, evaluation and evolution.

Multi-criteria classification

When mono-objective optimization is deployed, the chromosomes are ranked accordingly and the best individuals are selected. When multi-objective optimization is deployed, by selecting two or more criteria for the optimization, the algorithm classifies the chromosomes according to each criterion. The best individuals are those with the smallest sum of ranks over all criteria.

Figure 7 illustrates the method adopted for multi-criteria classification through a simple example, where 10 chromosomes are classified according to three criteria (C1, C2, and C3). First, the chromosomes are evaluated by simulation and classified for each criterion separately. Then, a global classification is performed, based on the ranks obtained over all the criteria.
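The sum-of-ranks classification can be sketched as follows (ties and the exact tie-breaking of Fig. 7 are ignored):

```python
def classify(population, criteria_values):
    """Rank the chromosomes separately for each criterion, then order
    them by the sum of their ranks (smallest sum = best), as in Fig. 7.
    `criteria_values` holds one list of values per criterion."""
    n = len(population)
    totals = [0] * n
    for values in criteria_values:
        order = sorted(range(n), key=lambda j: values[j])
        for rank, j in enumerate(order):
            totals[j] += rank
    return [population[j] for j in sorted(range(n), key=lambda j: totals[j])]

# "a" is best on C1, "b" best on C2; "b" wins on the sum of ranks:
print(classify(["a", "b", "c"], [[1, 2, 3], [3, 1, 2]]))  # → ['b', 'a', 'c']
```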

Formation of a new population

New individuals are generated using selection, mutation, crossover and random generation. Various trials conducted with the case study datasets led us to the following scheme for the evolution methods:

Selection 5 % of the new individuals are defined by selection. The best individuals are selected after the classification step.

Mutation 12 % of the new population is obtained by the mutation of chromosomes from the old population. In order to explore different regions of the solution space, 60 % of these chromosomes are chosen randomly from the whole old population. For a local exploration of the solution space, the remaining 40 % of the mutated chromosomes are chosen randomly from the group of selected (best) chromosomes. The number of genes mutated is defined randomly, between 1 and 25 % of the size of the chromosome.

Random 9 % of the new individuals are defined randomly (as in the initial population generation step).

Crossover 74 % of the new population is obtained by crossover. Chromosomes are "crossed" in order to create the same number of new chromosomes in the new generation. For each crossover, two chromosomes, called parents, are chosen randomly. The position of the crossover is chosen randomly, and each chromosome is divided into two segments. Two new chromosomes, called children, are made up of these parts. The first child is composed of the first part of the first parent and the second part of the second parent. The second child is composed of the first part of the second parent and the second part of the first parent.

Fig. 7 Multi-objective classification of a population of 10 chromosomes
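The crossover and mutation operators described above might be sketched as follows (Python, hypothetical helper names):

```python
import random

def one_point_crossover(p1, p2, rng):
    """Cut both parents at one random position and swap the tails,
    producing two children of the same length."""
    cut = rng.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, rng):
    """Redraw between 1 gene and 25 % of the genes of a chromosome,
    each to a fresh sampling rate in 1..100 %."""
    n = rng.randint(1, max(1, len(chrom) // 4))
    for pos in rng.sample(range(len(chrom)), n):
        chrom[pos] = rng.randint(1, 100)
    return chrom

rng = random.Random(0)
c1, c2 = one_point_crossover([1] * 10, [2] * 10, rng)
m = mutate([50] * 10, rng)
```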

Experiments and discussions

The proposed approach was tested through a case study using one month of real data provided by STMicroelectronics in Crolles, France. It focused on the optimization of the CD measurements used to control the Lithography and Etch workshops.

The simulation was executed considering two constraints: (1) 10 % of tolerance in the increase of MeanMaxW@R for all processing machines, and (2) a minimum reduction of 10 % of the capacity used. In other words, an acceptable solution x should satisfy the following relations:

MeanMaxW@R^x ≤ 1.1 × MeanMaxW@R^init

and

CAPA^x ≤ 0.9 × CAPA^init

The goal is then to minimize each of the following criteria: MeanMaxW@R, MeanW@R, CAPA, and the global MaxW@R. The classification of chromosomes is done according to the method explained in the "Multi-criteria classification" section. Each chromosome used for optimization corresponded to the combined sampling rates of all operations inspected more than 10 times during the time period considered, which means that the chromosomes were composed of 342 genes. Each population was composed of 50 chromosomes, which evolved with the proportions defined above, using the various techniques described. The optimization was stopped after 300 iterations.

Each chromosome was evaluated by simulation, which took about 1 second for a 1-month period. Although the simulation provided a good evaluation of the sampling plan, too much time was required to simulate 300 generations (about 4 h using a computer with an Intel® Core™2 Duo CPU P8600 @ 2.40 GHz).

Figure 8 presents the evolution of the optimization criteria with the iterations. The constraints were satisfied after the 50th iteration, and the criteria continued to improve until the 225th iteration, where they stabilized.

Due to the large size of the chromosomes (342 genes each), the variation in the criteria from generation to generation was slow. Improving the results would require many more generations and a great deal more computing power. The selected criteria did not vary in harmony; for instance, reducing the capacity used (CAPA) increased MeanMaxW@R, and vice versa. This made the optimization difficult, requiring several iterations.

The chromosome of the final result, presented in Fig. 9, represents the optimized sampling rate of each selected operation (gene). It is the best solution present in the last generation (the generation obtained when the stopping test is verified). The optimized sampling plan corresponds to an SPS equal to 82.9 %.

Table 1 summarizes the final results obtained for the four criteria considered. In the "init" line, the values of the criteria are taken from the initial sampling plan. The second line ("gen") presents the values obtained after optimization by the


Fig. 8 Convergence of the optimization criteria

MOGA. The third line presents the improvements as a percentage, and the last line presents the value of each selected constraint.

The capacity is expressed in terms of the number of measurements performed during the observation period. The initial value of 16,660 drops by 17.1 %, which represents a decrease of 2,792 measurements. As one tool is provided for around 1,300 measurements, the improvement saves 2.14 tools. Because a tool can cost between $1 and $3 million, the savings are considerable, ranging from $2.14 to $6.84 million.

At the same time, the risk of scrapping products was also reduced: on average (second column), a decrease of 30 %, from 133.9 products to 93.7 products. This means that 94 wafers were potentially at risk after the MOGA application, instead of 134. This represents a potential saving of 42.2 wafers. With a wafer costing between $2,000 and $10,000, this means that the insurance coverage could be decreased, on average, by between $84,400 and $422,000.

Table 1 Results abstract

        CAPA      MeanW@R   MaxW@R   MeanMaxW@R
init    16,660    133.9     720      352.5
gen     13,808    93.7      695      329.1
Δ       −17.1 %   −30 %     −3.5 %   −6.6 %
Δc      −10 %     –         –        10 %

The third criterion (MaxW@R) was decreased by 3.5 %, which represents 25 wafers. This corresponds to a potentially massive scrap reduction, valued at between $50,000 and $250,000.

The fourth criterion (MeanMaxW@R) was decreased by 6.6 %, even though the constraint had been relaxed by 10 % by the manufacturers. This represents a gain of 23 wafers, or $46,000–$230,000.

Operationally, depending on the scrap insurance policy in place to cover wafer scraps, one of the gains mentioned above can be selected to estimate the global gain. For instance, if the scrap insurance policy covers an average loss, then the gain associated with MeanW@R can be chosen. In this case, the MOGA generates savings of between $2.14 million + $84,400 = $2,224,400 and $6.84 million + $422,000 = $7,262,000.
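This range follows directly from the figures above (tool savings plus the MeanW@R-based insurance gain of 42.2 wafers at $2,000–$10,000 each) and can be checked:

```python
# Savings range quoted in the case study: tool savings plus the
# MeanW@R insurance gain (42.2 wafers at $2,000-$10,000 per wafer).
low = 2.14e6 + 42.2 * 2_000      # ≈ $2,224,400
high = 6.84e6 + 42.2 * 10_000    # ≈ $7,262,000
```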

Globally, the results presented in Table 1 represent very significant savings, which can easily be realized by modifying the sampling plan based on the MOGA results.

The results regarding risk exposure reduction are detailed in Figs. 10 and 11. Figure 10 shows the percentage reduction in MeanW@R and MaxW@R for each processing machine brought about by the optimized sampling plan computed by the MOGA. It shows the significant gap between the optimized solution and the current sampling plan regarding the MeanW@R and MaxW@R of some processing machines. In fact, these two indicators were reduced for almost all the processing machines, by on average 30 and 3.5 %, respectively. This figure also provides analysis from the

Fig. 9 Optimized sampling rates (best chromosome)


Fig. 10 MaxW@R and MeanW@R per machine using genetic optimization

Fig. 11 Reduction of W@R over time

workshop point of view. The first 23 processing machines belong to Workshop1 (Etching) and the others belong to Workshop2 (Photolithography). The results show that the average ΔMeanW@R is better for Workshop2. This can be explained by the fact that the Workshop2 processing machines were globally more loaded than those of Workshop1, and the reduction in sampling rates is more efficient, in terms of risk exposure reduction, as the workload of the processing machines increases.

Figure 11 represents the temporal evolution of the overall (all tools together) MaxW@R and MeanW@R with the initial sampling plan, compared to the plan computed by the MOGA (MaxW@R^gen and MeanW@R^gen). The results drawn in this figure confirm the potential improvements regarding risk exposure mastery. Moreover, these improvements are effective over the entire production period considered, especially for the MeanW@R. However, in some periods of time the MaxW@R is hardly reduced, because of the influence of the scheduling decisions, which were not considered in this study.

The time delay was not a decision criterion in our algorithm, because it is not the primary concern of the manufacturers in this case study. However, it has been demonstrated that variation of the metrology time delay impacts the variation in W@R (Sahnoun et al. 2012). That is why the metrology time delay τ was recorded in these experiments. Observing τ before and after applying the optimized sampling plan obtained by the MOGA confirms this relation. The metrology time delay has been improved, as shown in Fig. 12: the mean and standard deviation of τ are reduced by 40 and 70 %, respectively.

Fig. 12 Comparison of the distribution of τ in the initial sampling and in the genetic sampling

Conclusions and perspectives

In this paper, a decision support tool has been proposed to optimize sampling plans in complex manufacturing systems. The optimization method uses a MOGA, in which the evaluation function is obtained by simulation using historical


data. The objectives optimized are related to risk exposure control and metrology capacity allocation.

The results gained in this study clearly show the value of multi-objective optimization. The MOGA provided very interesting results: very significant savings could have been made by using an optimized sampling plan. The first evaluation shows a range of potential savings between $2.22 and $7.25 million, confirming that sampling-based optimization is a good avenue for improving operational performance.

This research also shows that there is an opportunity to integrate the dispatching of lots for inspection into the simulation process. The study was focused on the implementation of a multi-objective optimization algorithm to define an optimized sampling plan, rather than on identifying the best possible algorithm to use.

Future work will focus on the development of an algorithm for the automatic selection of the operations to be sampled, leading to an improvement of the chromosomes used in the genetic approach. Moreover, it will be an opportunity to position our MOGA against other multi-objective algorithms, such as NSGA-II (Non-dominated Sorting Genetic Algorithm II). The aim is to find other metaheuristic algorithms that can improve the results obtained by the MOGA proposed in this paper.

Acknowledgments This article describes the research conducted within the context of the IMPROVE and INTEGRATE projects. It has been finalized with the support of Discovery Grants funded by NSERC, under the number RGPIN418674-12 CRSNG NIP392730. The authors warmly thank STMicroelectronics for providing data and support for their research. Special thanks to the industrial and process control team for their daily involvement and support in helping us achieve sound results in the projects. The authors also thank the Rhône-Alpes Region.

References

Bassetto, S., & Siadat, A. (2008). Operational methods for improving manufacturing control plans: Case study in a semiconductor industry. Journal of Intelligent Manufacturing, 20(1), 55–65.

Bettayeb, B., Bassetto, S., & Sahnoun, M. (2014). Quality control planning to prevent excessive scrap production. Journal of Manufacturing Systems, 33(3), 400–411.

Bettayeb, B., Bassetto, S., Vialletelle, P., & Tollenaere, M. (2012a). Quality and exposure control in semiconductor manufacturing. Part I: Modelling. International Journal of Production Research, 50(22–24), 6835–6851.

Bettayeb, B., Bassetto, S., Vialletelle, P., & Tollenaere, M. (2012b). Quality and exposure control in semiconductor manufacturing. Part II: Evaluation. International Journal of Production Research, 50(22–24), 6852–6869.

Bulgak, A., Diwan, P., & Inozu, B. (1995). Buffer size optimization in asynchronous assembly systems using genetic algorithms. Computers & Industrial Engineering, 28(2), 309–322.

Bunday, B., Allgair, J., Caldwell, M., Solecky, E., Archie, C., Rice, B., et al. (2007). Value-added metrology. IEEE Transactions on Semiconductor Manufacturing, 20(3), 266–277.

Chan, L., & Wu, S. (2009). Optimal design for inspection and maintenance policy based on the CCC chart. Computers & Industrial Engineering, 57(3), 667–676.

Cheshmehgaz, H., Haron, H., Kazemipour, F., & Desa, M. (2012). Accumulated risk of body postures in assembly line balancing problem and modeling through a multi-criteria fuzzy-genetic algorithm. Computers & Industrial Engineering, 62(2), 503–512.

Chien, C.-F., Hsu, C.-Y., & Hsiao, C.-W. (2012). Manufacturing intelligence to forecast and reduce semiconductor cycle time. Journal of Intelligent Manufacturing, 23(6), 2281–2294.

Dauzère-Pérès, S., Rouveyrol, J., Yugma, C., & Vialletelle, P. (2010). A smart sampling algorithm to minimize risk dynamically. In Advanced Semiconductor Manufacturing Conference (ASMC), 2010 IEEE/SEMI (pp. 307–310).

Emmons, H., & Rabinowitz, G. (2002). Inspection allocation for multistage deteriorating production systems. IIE Transactions, 34(12), 1031–1041.

Fonseca, C., & Fleming, P. (1995). An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3(1), 1–16.

Garvin, C., Chen, X., & Hankinson, M. (2002). Advanced process control of overlay with optimal sampling. In Proceedings of SPIE (Vol. 4689, p. 817).

Gen, M., & Lin, L. (2013). Multiobjective evolutionary algorithm for manufacturing scheduling problems: State-of-the-art survey. Journal of Intelligent Manufacturing. doi:10.1007/s10845-013-0804-4.

Good, R., & Purdy, M. (2007). An MILP approach to wafer sampling and selection. IEEE Transactions on Semiconductor Manufacturing, 20(4), 400–407.

Kancev, D., Gjorgiev, B., & Cepin, M. (2011). Optimization of test interval for ageing equipment: A multi-objective genetic algorithm approach. Journal of Loss Prevention in the Process Industries, 24(4), 397–404.

Kogan, K., & Raz, T. (2002). Optimal allocation of inspection effort over a finite planning horizon. IIE Transactions, 34(6), 515–527.

Lee, J. (2002). Artificial intelligence-based sampling planning system for dynamic manufacturing process. Expert Systems with Applications, 22(2), 117–133.

Lin, C., Yeh, J., & Ding, J. (1998). Design of random inspection rate for a flexible assembly system: A heuristic genetic algorithm approach. Microelectronics Reliability, 38(4), 545–551.

Lindsay, G., & Bishop, A. (1964). Allocation of screening inspection effort: A dynamic-programming approach. Management Science, 10(2), 342–352.

Liu, W., Chua, T., Cai, T., Wang, F., & Yan, W. (2005). Practical lot release methodology for semiconductor back-end manufacturing. Production Planning & Control, 16(3), 297–308.

Pan, J., & Tai, D. (2011). A new strategy for defect inspection by the virtual inspection in semiconductor wafer fabrication. Computers & Industrial Engineering, 60(1), 16–24.

Rabinowitz, G., & Emmons, H. (1997). Optimal and heuristic inspection schedules for multistage production systems. IIE Transactions, 29(12), 1063–1071.

Rau, H., & Cho, K. (2009). Genetic algorithm modeling for the inspection allocation in reentrant production systems. Expert Systems with Applications, 36(8), 11287–11295.

Raz, T. (1986). A survey of models for allocating inspection effort in multistage production systems. Journal of Quality Technology, 18(4), 239–247.

Sahnoun, M., Bettayeb, B., Bassetto, S., & Tollenaere, M. (2012). Smart sampling for risk reduction and delay optimisation. In International Systems Conference (SysCon) (pp. 1–6).

Sahnoun, M., Vialletelle, P., Bassetto, S., Bastoini, S., & Tollenaere, M. (2010). Optimizing return on inspection through defectivity smart sampling. In 2010 International Symposium on Semiconductor Manufacturing (ISSM) (pp. 1–4).

Shanoun, M., Bassetto, S., Bastoini, S., & Vialletelle, P. (2011). Optimisation of the process control in a semiconductor company: Model and case study of defectivity sampling. International Journal of Production Research, 49(13), 3873–3890.

Simaria, A., & Vilarinho, P. (2004). A genetic algorithm based approach to the mixed-model assembly line balancing problem of type II. Computers & Industrial Engineering, 47(4), 391–407.

Su, A., Yu, C., & Ogunnaike, B. (2008). On the interaction between measurement strategy and control performance in semiconductor manufacturing. Journal of Process Control, 18(3), 266–276.

Takahara, S., & Miyamoto, S. (1999). Adaptive algorithms of metaheuristics with application to optimal allocation problems. In 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC'99) (Vol. 3, pp. 545–550).

Tang, K., & Tang, J. (1994). Design of screening procedures: A review. Journal of Quality Technology, 26(3), 209–226.

Verduzco, A., Villalobos, J., & Vega, B. (2001). Information-based inspection allocation for real-time inspection systems. Journal of Manufacturing Systems, 20(1), 13–22.

Villalobos, J., Joseph, W., & Ralph, L. (1993). Flexible inspection systems for serial multi-stage production systems. IIE Transactions, 25(3), 16–26.

Vinod, G., Kushwaha, H., Verma, A., & Srividya, A. (2004). Optimisation of ISI interval using genetic algorithms for risk informed in-service inspection. Reliability Engineering & System Safety, 86(3), 307–316.

Zhang, W., Gen, M., & Jo, J. (2013). Hybrid sampling strategy-based multiobjective evolutionary algorithm for process planning and scheduling problem. Journal of Intelligent Manufacturing. doi:10.1007/s10845-013-0814-2.