


Chiang Mai J. Sci. 2013; 40(4) : 701-712
http://it.science.cmu.ac.th/ejournal/
Contributed Paper

Classification Techniques for Control Chart Pattern Recognition: A Case of Metal Frame for Actuator Production

Wimalin Laosiritaworn* and Tunchanit Bunjongjit
Department of Industrial Engineering, Faculty of Engineering, Chiang Mai University, Chiang Mai 50200, Thailand.
*Author for correspondence; e-mail: [email protected]

Received: 5 June 2012
Accepted: 4 October 2012

ABSTRACT
Statistical process control (SPC) plays a significant role in hard-disk drive manufacturing, as there is a crucial need to constantly improve productivity. The control chart is one of the SPC tools that have been widely implemented to identify whether a nonrandom pattern caused by an assignable cause exists in the production process. Decision rules are usually used for detecting nonrandom patterns on control charts; however, recent research has shown that these rules have a tendency to produce false alarms. This is a problem that occurred in the case study company, a manufacturer of metal frames for actuators. The company has adopted technologically advanced equipment for its quality assurance system and computer software for data analysis and control charting. Currently, the company uses decision rules for detecting nonrandom patterns on control charts; for example, if 6 or more consecutive data inputs are found to be in increasing or decreasing order, these data contain a trend pattern. In an attempt to improve the accuracy of data analysis, this research investigated the application of 3 classification techniques, namely neural network, k-nearest neighbor and rule induction, in detecting nonrandom patterns. By considering the control charts of 3 different product lines, 3 types of nonrandom pattern, namely Trend, Cycle and Shift, were observed. Based on real data inputs, the percentage accuracy of error detection by each technique on each product line was compared. It was found that the accuracy of k-nearest neighbor is highest, with the percentage of correct predictions between 96.99% and 98.7%.

Keywords: control chart, pattern recognition, neural network, k-nearest neighbor, rule induction

1. INTRODUCTION
Hard disk drive producers have steadily relocated their major production hubs to Thailand. This began around 1983, when Seagate Technology moved its head-stack assembly out of Singapore due to Thailand's relatively low-wage labor and encouraging investment environment. Incumbents expanded from then on, including part suppliers and other hard disk drive producers [1]. The shift in production has resulted in the country's remarkable export volume. The company in this case study, whose name is kept confidential to protect trade secrets, manufactures


metal frames for actuators, an important component of a computer's hard disk, as its major product line. The company's Quality Control Department has stored a large amount of quality inspection data. Various techniques have been used to analyze these data, for instance data mining [2] and statistical process control (SPC) [3]. One of the SPC techniques currently adopted is the control chart, where x̄-R and x̄-s control charts along with decision rules for detecting nonrandom patterns are implemented. Rules such as zone tests or run rules have been widely used in industry to detect control chart patterns [4], [5]. However, these rules can yield an excess of false alarms, as the patterns are normally distorted by common-cause variation from unknown factors in the manufacturing process [6].

Much research has used the neural network (NN), a computer model that resembles the biological characteristics of the human neural system and simulates the function of the human brain, for pattern recognition in control charts (for example [7], [8]). This is because a NN is capable of learning, adapting, and recognizing various patterns. NNs can respond to difficult problems, discern relationships between complex sets of data, and form sophisticated mathematical models.

Apart from NN, other classification methods have also been applied to control chart pattern recognition, for example decision tree [9] and k-nearest neighbor [10], [11]. Yet limited work has been reported comparing these techniques in order to find a suitable one for control chart pattern recognition. A comparative study helps to identify an appropriate technique that not only applies to the case study company but also has the potential to be applied to other processes. As a result, this work compares NN with two other classification methods, namely k-nearest neighbor and rule induction, in terms of their classification accuracy. Actual inspection data from the case study company were used to compare these methods in detecting patterns including Trend, Cycle and Shift.

2. LITERATURE REVIEW
2.1 Neural Network

The artificial neuron, or processing element (PE), constitutes the basic component of a neural network. Input signals are received through dendrite-like weighted connections. Each connection is assigned a weight, which resembles the change in magnitude of synaptic signal transmission. Each PE attains its output by calculating the weighted sum Sj = Σi ai·wij, where ai is the activation level of unit i and wij is the weight from unit i to unit j. An activation function, for example the sigmoid, is then applied to Sj to calculate the output of that PE [12].

PEs need to be connected into a network similar to the network of the human brain. Although there are many ways to connect PEs, the multi-layered neural network is the type implemented in this research. Figure 1 shows a multi-layered neural network, which represents a fully connected network. At the beginning of the learning process, connection weights are randomly assigned, starting from small values. After a data pair is received from the training data set, the NN adjusts the weights to minimize an error function. The learning algorithm used in this research is back-propagation. The numbers of hidden layers and hidden nodes can have a significant impact on output accuracy and are usually set by trial and error. Alternatively, optimal settings of these parameters can be identified using design-of-experiment techniques [13], [14].

f(x) = 1 / (1 + e^(-x))
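The mechanics described above (weighted sums passed through the sigmoid, with weights adjusted by back-propagation) can be sketched in a few lines. This is a minimal illustrative NumPy implementation, not the Alyuda NeuroIntelligence software used in the study; the layer sizes, learning rate and squared-error loss are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Minimal multi-layer perceptron trained with back-propagation.

    Layer sizes follow the paper's notation, e.g. (6, 17, 20, 4) for a
    window of 6 inputs, two hidden layers and 4 pattern-class outputs.
    """
    def __init__(self, sizes):
        # Small random initial weights, as described for the learning process.
        self.W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        acts = [x]
        for W, b in zip(self.W, self.b):
            acts.append(sigmoid(acts[-1] @ W + b))  # S_j = sum_i a_i w_ij, then sigmoid
        return acts

    def train_step(self, x, target, lr=0.5):
        acts = self.forward(x)
        # Output-layer delta for squared error with sigmoid units.
        delta = (acts[-1] - target) * acts[-1] * (1 - acts[-1])
        for i in reversed(range(len(self.W))):
            grad_W = np.outer(acts[i], delta)
            delta_prev = (delta @ self.W[i].T) * acts[i] * (1 - acts[i])
            self.W[i] -= lr * grad_W
            self.b[i] -= lr * delta
            delta = delta_prev
```

Repeated calls to `train_step` over the training pairs drive the weights toward lower error, which is the behavior the exhaustive architecture search in Section 3.2.2 evaluates for each candidate layout.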



Table 1. Comparing the use of neural networks for recognition of control chart patterns.

Author | Type of network | Patterns detected | Data used
Pham and Wani [7] | back propagation | Normal, Cyclic, Increasing or decreasing trends, Upward or downward shifts | Experiment
Cheng [8] | back propagation | Trend, Cycle, Systematic variable, Mixture, Sudden Shift | Model
Guh and Tannock [15] | back propagation | Natural, Shift, Trend, Cycle | Model
Guh and Hsieh [16] | back propagation | Normal, Upward Shift, Downward Shift, Upward Trend, Downward Trend, Cycle | Model and Experiment
Perry et al. [17] | back propagation | Trend, Cycle, Systematic variable, Mixture | Model
Al-Assaf [18] | back propagation | Natural, Shift, Trend, Cyclic, Shift & Cyclic, Shift & Trend, Trend & Cyclic | Experiment
Pham and Sahran [19] | Spiking | Normal, Upward Shift, Downward Shift, Upward Trend, Downward Trend, Cycle | Model and Experiment
Guh [20] | back propagation | Natural, Cycle, Increasing Trend, Decreasing Trend, Upward Shift, Downward Shift, Systematic, Mixture | Actual Data

From Table 1, NN is now regarded as an accepted approach to control chart pattern recognition. Mostly, the back-propagation neural network is used, except in the research by Pham and Sahran [19], which adopted the spiking method for error detection in control charts. Their work showed that the spiking neural network gives high pattern-recognition accuracy. Various error patterns were investigated in the literature, for example Normal, Trend, Cycle, Shift, Systematic

Figure 1. Multi-layered neural network.


Variable, and Mixture, yet the most common ones are Normal, Trend, Cycle and Shift. Data used for training NNs include experimental data, simulation data and actual data. Pham and Wani [7] and Al-Assaf [18] used experimental data to train the NNs in their research, while Guh and Hsieh [16] and Pham and Sahran [19] applied both experimental and simulation data in NN training. In contrast, Cheng [8], Guh and Tannock [15] and Perry et al. [17] used only simulation data, whereas Guh [20] used only real data for NN training. Most reported results demonstrated more than 90% accuracy; for example, Pham and Wani [7]'s pattern recognition technique yielded as much as 99% accuracy, whereas Guh and Tannock [15]'s NN-based measurement of control chart patterns found 95% accuracy for the Normal pattern and 90% accuracy for the Cycle pattern.

2.2 k-Nearest Neighbor Classification
The k-nearest neighbor (kNN) is one of the tools for pattern classification. It uses 'instance-based learning', in which the training data are memorized and a new record is classified by comparing it to the most similar records. The training samples are described by n attributes and represent points in an n-dimensional space. A kNN classifier classifies a new record by searching the pattern space for the k training samples that are closest to the unknown record. Euclidean distance is the most common index of the similarity of patterns. The Euclidean distance between two inputs represented in n-dimensional space, X1 = (x11, x12, ..., x1n) and X2 = (x21, x22, ..., x2n), is denoted by

d(X1, X2) = √( Σ (x1i − x2i)² ), i = 1, ..., n

As kNN relies on distance measurement, all input data should be normalized to prevent inputs with large values from outweighing inputs with smaller values. To normalize a numeric attribute A to a value v' in the range [0, 1], min-max normalization is calculated from

v' = (v − minA) / (maxA − minA),

where minA and maxA are the minimum and maximum values of attribute A.

kNN is capable of producing highly accurate models for complex decisions. It is one of the most widely used classifiers due to its good generalization and easy implementation [21]. However, the method is computationally expensive [22] and calculation time is usually long. Several works have reported using kNN for control chart pattern recognition. For instance, He and Wang [10] proposed an adaptive kNN rule, the fault detection method using the kNN rule (FD-kNN), for fault detection in statistical process monitoring of semiconductor manufacturing processes. Because FD-kNN is constructed only from the learning sample, without assumptions on its distribution, Verdier and Ferreira [11] later proposed a new adaptive distance for the kNN rule for fault detection in semiconductor manufacturing. Their results suggested that the new rule is more reliable than the Euclidean distance, as it takes into account the local structure of dependence of the variables.
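The kNN procedure described above, min-max normalization followed by a nearest-neighbor lookup with Euclidean distance, can be sketched in plain Python. The helper names are illustrative; the study itself used RapidMiner rather than hand-written code:

```python
import math
from collections import Counter

def min_max_normalize(rows):
    """Scale each attribute to [0, 1]: v' = (v - minA) / (maxA - minA)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        tuple((v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(row, lo, hi))
        for row in rows
    ]

def euclidean(a, b):
    """Euclidean distance between two n-dimensional points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_classify(train_rows, train_labels, query, k=1):
    """Classify `query` by majority vote among its k nearest training rows."""
    nearest = sorted(
        zip(train_rows, train_labels),
        key=lambda pair: euclidean(pair[0], query),
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

With k = 1, as in Section 3.2.3, classification reduces to copying the label of the single closest training window.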

2.3 Rule Induction
Rule induction is one of the machine learning, or artificial intelligence, techniques. Rule learners have been adapted from decision tree learning by growing a complex tree, which overfits the data, and then applying a pruning algorithm to simplify the tree. The rule induction algorithm implemented in this work is similar to Repeated Incremental Pruning to Produce Error Reduction (RIPPER) by Cohen [23], which is an improvement of IREP


(Incremental Reduced Error Pruning) by Fürnkranz and Widmer [24].

RIPPER consists of two phases, a growing phase and a pruning phase. In the growing phase, RIPPER starts with an empty ruleset and iteratively adds rules until all positive examples are covered. Conditions are greedily added to a rule until 100% accuracy is achieved; every possible value of each attribute is tried, and the condition with the highest information gain is selected. In the pruning phase, the pruning metric p/(p + n) is used. The algorithm grows and prunes rules until there are no positive examples left or the error rate is greater than 50%.
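The grow-and-cover idea behind RIPPER can be illustrated with a much-simplified sketch. This is not Cohen's full algorithm: it omits the pruning phase entirely and scores candidate conditions with the purity p/(p + n) rather than information gain, purely for compactness:

```python
def covers(rule, row):
    """A rule is a list of (attribute, value) conditions; all must match."""
    return all(row[a] == v for a, v in rule)

def grow_rule(pos, neg):
    """Greedily add conditions until the rule covers no negative examples."""
    rule = []
    while neg and pos:
        best = None  # (score, positives covered, candidate rule)
        for a in pos[0]:
            if any(a == ra for ra, _ in rule):
                continue  # attribute already constrained by this rule
            for v in {row[a] for row in pos}:
                cand = rule + [(a, v)]
                p = sum(covers(cand, r) for r in pos)
                n = sum(covers(cand, r) for r in neg)
                if p == 0:
                    continue
                score = p / (p + n)  # purity, echoing the p/(p + n) metric
                if best is None or (score, p) > (best[0], best[1]):
                    best = (score, p, cand)
        if best is None:
            break
        rule = best[2]
        pos = [r for r in pos if covers(rule, r)]
        neg = [r for r in neg if covers(rule, r)]
    return rule

def learn_ruleset(pos, neg):
    """Sequential covering: add rules until all positives are covered."""
    ruleset, remaining = [], list(pos)
    while remaining:
        rule = grow_rule(remaining, neg)
        if not rule:
            break
        ruleset.append(rule)
        remaining = [r for r in remaining if not covers(rule, r)]
    return ruleset
```

Each learned rule removes the positive examples it covers, so the ruleset as a whole partitions the positives, which is the covering behavior the growing phase described above relies on.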

Industrial applications of rule induction for classification have been reported. For example, Markham et al. [25] used a rule induction approach, specifically the CART algorithm, to determine the number of kanbans in a just-in-time production system. The results suggested that CART was able to accurately predict the number of kanbans. Arinze et al. [26] used rule induction to improve forecasting accuracy; in their work, induced rules were created from time series training data sets to predict the most appropriate forecasting method. Although applications of rule induction in industry have been reported, there was no report on the application of this technique to control chart pattern recognition.

3. RESULTS AND DISCUSSION
The research methodology is summarized in Figure 2. Starting with data collection, actual Bore Hole Diameter measurement data were collected from the quality control department of the case study company. Three types of product were involved in the data collection process, but due to commercial confidentiality the three products are hereafter called product X, product Y and product Z. Patterns were then generated for model training and testing: training data were used for model construction, and testing data were used to test model accuracy. Data for training were simulated separately for the Upward-Downward Trend, Upward-Downward Shift, and Cycle patterns, while mixtures of patterns were generated for testing purposes. The experimental results are explained as follows:


Figure 2. Research Methodology


3.1 Training and Testing Data Preparation
The collected data were prepared by first calculating their average and standard deviation. The mean and standard deviation of the actual data were then used to simulate training data from the equation below:

x(t) = A(t) + d(t)     (1)

where A(t) represents the real data obtained from the case-study factory, t is a time variable, x(t) is the simulated data at time t, and d(t) represents an error pattern at time t (which equals 0 when no error is present).

Calculation of d(t) for the different error patterns is described as:

- Shift pattern: d(t) = u·s, where u is a variable for the position of the shift (u = 0 before the shift, u = 1 after the shift) and s represents the magnitude of the resulting shift (−0.2 ≤ s ≤ +2.5).
- Trend pattern: d(t) = d·t, where d is the slope of the trend (−0.22 ≤ d ≤ +0.22).
- Cycle pattern: d(t) = a·sin(2πt/T), where a represents the amplitude of the cycle (1.0 ≤ a ≤ 2.5) and T is the period of the cycle.
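The simulation scheme above can be sketched as follows. This is a hedged illustration: A(t) is approximated here by Gaussian noise with the process mean and standard deviation (the paper uses real factory data), and the specific shift magnitude, trend slope and cycle period are assumed values within the stated ranges:

```python
import math
import random

def simulate(pattern, mean, sd, t_max=36, seed=0):
    """x(t) = A(t) + d(t): baseline data plus a superimposed nonrandom pattern.

    `pattern` is one of "normal", "shift", "trend", "cycle".
    """
    rng = random.Random(seed)
    series = []
    for t in range(t_max):
        a_t = rng.gauss(mean, sd)  # stand-in for the real data A(t)
        if pattern == "shift":
            u = 1 if t >= t_max // 2 else 0   # u = 0 before the shift, 1 after
            d_t = u * 2.0 * sd                # shift magnitude s (assumed 2 sd)
        elif pattern == "trend":
            d_t = 0.1 * sd * t                # slope d (assumed 0.1 sd per step)
        elif pattern == "cycle":
            d_t = 2.0 * sd * math.sin(2 * math.pi * t / 8)  # amplitude a, period T = 8
        else:  # "normal"
            d_t = 0.0
        series.append(a_t + d_t)
    return series
```

Running this for each pattern type and plotting the 36-point series reproduces the kind of charts shown in Figure 3.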

After data with nonrandom patterns were simulated, a corresponding graph was plotted (Figure 3) and its pattern was inspected for errors. Figure 3 shows the different patterns in the control chart used for training the three techniques: Upward-Downward Trend, Upward-Downward Shift, Cycle and Normal. The X-axis represents the index of each of the 36 data entries; the Y-axis represents the Bore Hole Diameter.

Figure 3. Examples of graphs plotted for different data types of product X used in training of the neural network.


Another set of data was simulated for testing model performance. A corresponding graph (Figure 4) was plotted, and its configuration was analyzed as to whether or not a nonrandom pattern was detected. Figure 4 shows nonrandom data of product X used for testing accuracy. The X-axis represents the index of each of the 144 data entries; the Y-axis represents the Bore Hole Diameter. The nonrandom data simulated for testing the accuracy of the three techniques follow the sequence Normal, Increasing Trend, Shift, Normal, Cycle, Normal, Shift, Normal and then Decreasing Trend.

Figure 4. Plotted graphs showing error patterns of product X for testing of accuracy.

3.2 Models Construction

Simulated data were used to construct models using the three techniques: NN, kNN and Rule Induction.

3.2.1 Model Inputs and Outputs
Inputs to the artificial intelligence techniques were Bore Hole Diameter data from the control charts. Sequences of 6, 9 and 12 input data, called the "Window Size," were examined in this experiment. For example, a Window Size of 12 refers to a sequence of 12 data entries (X1, X2, X3, ..., X12). After the first 12 inputs were investigated, the next data entry was appended after X12 and the first entry (X1) of the existing series was discarded. The new 12-entry series was then checked for errors, as shown in Figure 5, and this cycle of error measurement repeated until all the data were investigated.
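The sliding-window procedure described above amounts to a one-step stride over the series (a small illustrative helper, not part of the original tooling):

```python
def sliding_windows(data, size=12):
    """Yield successive windows of `size` entries: after each window the
    oldest point (X1) is discarded and the newest point is appended."""
    for start in range(len(data) - size + 1):
        yield data[start:start + size]
```

Each yielded window becomes one input vector for the classifiers, so a 144-point test series with a window size of 12 produces 133 classification inputs.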

Figure 5. Window size of 12 data entries


Figure 5 illustrates a Window Size of 12 data entries for product X. Each series of 12 historical data entries was processed at a time. These entries became inputs to the Alyuda NeuroIntelligence program, which served as the pattern recognition program for the NN technique, and to the RapidMiner program, which served as the pattern recognition program for the k-nearest neighbor and rule induction techniques.

Outputs were 4 types of pattern: Normal, Trend, Cycle and Shift. Outputs of the NN pattern recognition were coded: 1000 denotes a Normal pattern, 0100 Trend, 0010 Cycle, and 0001 Shift.

3.2.2 Neural Network Model
An exhaustive search was conducted to identify the numbers of hidden layers and hidden nodes. The number of hidden layers was searched between 1 and 3, given that the lowest possible number of PEs per layer is 1 and the highest is 25. Part of the search results is shown in Table 2.
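The exhaustive search over 1 to 3 hidden layers with 1 to 25 PEs per layer can be sketched as a plain enumeration. The helper names are illustrative, and the fitness function is a placeholder for whatever score (here, Alyuda's fitness value) comes back from training each candidate:

```python
from itertools import product

def candidate_architectures(n_inputs=6, n_outputs=4, max_layers=3, max_pe=25):
    """Enumerate every architecture searched exhaustively: 1 to `max_layers`
    hidden layers, each with 1 to `max_pe` processing elements."""
    for depth in range(1, max_layers + 1):
        for hidden in product(range(1, max_pe + 1), repeat=depth):
            yield (n_inputs, *hidden, n_outputs)

def best_architecture(fitness, **kwargs):
    """Pick the architecture whose trained network scores the highest fitness."""
    return max(candidate_architectures(**kwargs), key=fitness)
```

For a 6-input, 4-output network this enumerates 25 + 25² + 25³ = 16,275 candidates, which shows why the fitness-guided search reported in Table 2 matters in practice.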

Table 2. Example of appropriate architectures for the neural network from the Alyuda NeuroIntelligence program, in the case of the 6-data series for product X.

Architecture | Number of Weights | Fitness | Train Error | Validation Error | Test Error
6-1-4 | 15 | 2.270833 | 0.548004 | 0.555046 | 0.559633
6-2-4 | 26 | 2.626506 | 0.610572 | 0.610092 | 0.619266
6-3-4 | 37 | 3.353846 | 0.727077 | 0.720183 | 0.701835
6-4-4 | 48 | 3.963636 | 0.765912 | 0.761468 | 0.747706
6-5-4 | 59 | 3.253731 | 0.734628 | 0.715596 | 0.692661
6-6-4 | 70 | 3.758621 | 0.77562 | 0.747706 | 0.733945
6-7-4 | 81 | 3.57377 | 0.79288 | 0.788991 | 0.720183
6-14-20-4 | 482 | 6.290323 | 0.960432 | 0.841026 | 0.841026
6-15-20-4 | 509 | 6.964286 | 0.972422 | 0.871795 | 0.85641
6-16-20-4 | 536 | 8.125 | 0.978417 | 0.866667 | 0.876923
6-17-20-4 | 563 | 10.263158 | 0.96283 | 0.85641 | 0.902564
6-18-20-4 | 590 | 5.416667 | 0.972422 | 0.871795 | 0.815385
6-19-20-4 | 617 | 8.125 | 0.977218 | 0.871795 | 0.876923
6-20-20-4 | 644 | 6.964286 | 0.991607 | 0.876923 | 0.85641
6-7-10-9-4 | 268 | 5.069767 | 0.87918 | 0.848624 | 0.802752
6-8-10-9-4 | 285 | 3.353846 | 0.824164 | 0.770642 | 0.701835
6-9-10-9-4 | 302 | 4.541667 | 0.924488 | 0.844037 | 0.779817
6-10-10-9-4 | 319 | 4.36 | 0.867314 | 0.825688 | 0.770642
6-11-10-9-4 | 336 | 4.541667 | 0.891046 | 0.844037 | 0.779817
6-12-10-9-4 | 353 | 4.844444 | 0.901834 | 0.834862 | 0.793578
6-13-10-9-4 | 370 | 4.954545 | 0.898598 | 0.816514 | 0.798165

In Table 2, the number of weights is the total number of weight connections. If the numbers of hidden layers and nodes increase, the number of weights consequently increases due to the growing number of connections between neurons. As for the Fitness value, the Alyuda NeuroIntelligence program provides the


Table 3. Optimal architecture design of the neural network for each product type.

Type of Product | Window Size = 6 | Window Size = 9 | Window Size = 12
X | 6-17-20-4 | 9-13-13-4 | 12-18-8-4
Y | 6-20-9-4 | 9-20-19-4 | 12-18-19-4
Z | 6-18-16-4 | 9-19-9-4 | 12-14-14-4

following explanation: "Fitness is calculated using network error on the test set and unit penalty. Unit penalty is a parameter that specifies a percentage of the dataset to be used by the feature selection algorithm." Train Error refers to the error resulting from the data used for training the NN. Validation Error represents the error from the data used for validation, which prevents overfitting during training. Test Error is the error measured after the NN training process terminates. The Fitness value is used to determine the optimal architecture; a higher Fitness value signifies a better architecture. Therefore, for product X with the 6-data series, the Alyuda NeuroIntelligence software finds the optimal architecture to be the 6-17-20-4 formation, with a Fitness value of 10.263158, the highest among all candidates. The same rule applies to products Y and Z, and their results are presented in Table 3. The four numbers in Table 3 represent the numbers of nodes in the input, first hidden, second hidden, and output layers respectively.

3.2.3 k-Nearest Neighbor Model
The same data set used for NN training was used to construct the kNN model. Several parameters had to be set before constructing the model: the number of nearest neighbors, k, was set to 1, and the measure type was set to mixed measure using mixed Euclidean distance.

3.2.4 Rule Induction Model
The rule induction model was developed using the RapidMiner software with the same data set used to train the NN and kNN models. Several parameters need to be specified in the learning process. Firstly, the criterion used for selecting numerical splits was information gain. Secondly, the 'sample ratio' of training data used for growing and pruning was set to 0.9. Next, 'pureness', the necessary proportion of the major class in a covered subset for it to be considered pure, was set to 0.9. Finally, the 'minimal prune benefit', the minimum amount by which the benefit must exceed the unpruned benefit in order to prune, was set to 0.25.

3.3 Model Accuracy Comparison
Another set of data was simulated for the Normal, Trend, Cycle and Shift patterns in order to assess the accuracy of the three techniques. Model accuracy was measured using the following equation:

Percentage accuracy = (Number of correctly classified data / Total number of data) × 100     (2)

The historical data used in training, the 6-data, 9-data, and 12-data series, were then compared for each product type in order to determine how much historical data is needed to achieve the highest possible accuracy. Model accuracies are summarized in Table 4.
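Equation (2) is straightforward to express in code (the function name is illustrative):

```python
def percentage_accuracy(predicted, actual):
    """Equation (2): correctly classified / total, as a percentage."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)
```

Applying it per technique and per window size yields the entries reported in Table 4.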


Table 4. Model accuracy of the three classification techniques at three window sizes.

Product Name | Neural network (6 / 9 / 12 inputs) | k-nearest neighbor (6 / 9 / 12 inputs) | Rule Induction (6 / 9 / 12 inputs)
X | 79.86 / 88.24 / 91.73 | 96.26 / 96.03 / 97.44 | 84.32 / 85.59 / 85.49
Y | 82.73 / 88.24 / 84.21 | 96.47 / 96.99 / 96.84 | 85.54 / 83.01 / 86.77
Z | 87.77 / 94.85 / 90.23 | 98.06 / 98.38 / 98.57 | 84.10 / 90.66 / 87.52

From Table 4, model accuracy depends largely on window size. This is especially true for the NN model, where accuracy alters significantly with window size. For product X, the most accurate NN model is at a window size of 12 (91.73%), while kNN achieved its highest accuracy (97.44%) at a window size of 12 and rule induction (85.59%) at a window size of 9. The kNN model therefore has the highest accuracy. Similar results are observed for products Y and Z, where kNN has the highest accuracy at 96.99% and 98.57% respectively.

4. CONCLUSIONS
The main purpose of this research was to explore the application of the classification techniques Neural Network, k-Nearest Neighbor and Rule Induction to pattern recognition for the detection of nonrandom patterns such as Trend, Cycle and Shift. The Window Size variables set in this experiment were sequences of 6, 9 and 12 data entries, and the 4 types of observable pattern were Normal, Trend, Cycle and Shift. Actual data for 3 products from a case study company were combined with simulated nonrandom patterns. It was found that model accuracy depends largely on window size, and the model with the highest accuracy is the one trained with the kNN technique.

The three techniques used in this research have certain advantages and limitations. NN has the ability to detect complex nonlinear relationships between input and output parameters with fewer statistical assumptions. However, it suffers from its 'black box' properties, in which the relationship between input and output parameters cannot easily be explained. NN is also prone to overfitting, and the training process is slow, as the many parameters affecting network performance need to be set by trial and error. kNN is simple to understand and easy to implement. However, kNN can be slow for large training sets and is sensitive to irrelevant parameters. Rule induction, unlike NN and kNN, can describe complex data with relatively simple rules that are easy to understand. A limitation of rule induction is that it does not perform well with noisy data.

The case study has demonstrated that the proposed method can be used with actual data. However, the current implementation is a stand-alone system that is not linked to the company database. To further enhance the performance of the system, it should be linked to the company database so that online detection is possible.

ACKNOWLEDGEMENTS
This project was financially supported by the Industry/University Cooperative Research Center (I/UCRC) in HDD Component, the Faculty of Engineering, Khon Kaen University and the National Electronics and Computer Technology Center, National Science and Technology Development Agency.

REFERENCES

[1] Archanun K., Hard Disk Drive Industry in Thailand: International Production Networks Versus Industrial Clusters, Discussion Paper No. 23 presented at the Singapore Economic Review Conference (SERC'09), 2009.

[2] Laosiritaworn W., and Holimchayachotikul P., Metal frame for actuator manufacturing process improvement using data mining techniques, Chiang Mai J. Sci., 2010; 37(3): 421-428.

[3] Laosiritaworn W., and Banjongjit T., Visual Basic application for statistical process control: A case of metal frame for actuator production process, Proceedings of the International MultiConference of Engineers and Computer Scientists, Hong Kong, 2010: 1878-1883.

[4] Western Electric Company, StatisticalQuality Control Handbook, Indianapolis:Western Electric, 1958.

[5] Montgomery D.C., Introduction toStatistical Quality Control, 4th Edn., Wiley,New York, 2001.

[6] Guh R-S., A hybrid learning-based modelfor on-line detection and analysis ofcontrol chart patterns, Comput. Ind. Eng.,2005; 49: 35-62.

[7] Pham D.T., and Wani M.A., Feature-based control chart pattern recognition,Int. J. Prod. Res., 1997; 35(7): 1875-1890.

[8] Cheng C.-S., A neural network approach for the analysis of control chart patterns, Int. J. Prod. Res., 1997; 35(3): 667-697.

[9] Wang C.H., Guo R.S., Chiang M.H., and Wong J.Y., Decision tree based control chart pattern recognition, Int. J. Prod. Res., 2008; 46(17): 4889-4901.

[10] He Q.P., and Wang J., Fault detectionusing the k-nearest neighbor rule forsemiconductor manufacturing, IEEE T.Semiconduct. M., 2007; 20(4): 345-354.

[11] Verdier G., and Ferreira A., Adaptive Mahalanobis distance and k-nearest neighbor rule for fault detection in semiconductor manufacturing, IEEE T. Semiconduct. M., 2011; 1: 59-68.

[12] Dayhoff J.E., Neural Network Architectures, Van Nostrand Reinhold, New York, 1990.

[13] Laosiritaworn W., and Chotchaithanakorn N., Artificial neural networks parameters optimization with design of experiments: An application in ferromagnetic materials modeling, Chiang Mai J. Sci., 2009; 36(1): 83-91.

[14] Sukthomya W., and Tannock J.D.T., The optimisation of neural network parameters using Taguchi design of experiments approach: An application in manufacturing process modelling, Neural Comput. Appl., 2005; 14(4): 337-344.

[15] Guh R-S., and Tannock J. D. T.,Recognition of control chartconcurrent patterns using a neuralnetwork approach, Int. J. Prod. Res., 1999;37(8): 1743-1765.

[16] Guh R-S., and Hsieh Y-C, A neuralnetwork based model for abnormalpattern recognition of control charts,Int. J. Comput. Ind. Eng., 1999; 36:97-108.

[17] Perry M. B., Spoerre J. K., and VelascoT., Control chart pattern recognitionusing back propagation neural networks,Int. J. Prod. Res., 2001; 39(15): 3399-3418.

[18] Al-Assaf Y., Multi-resolution waveletsanalysis approach for the recognition ofconcurrent control chart patterns, Int. J.Qual. Eng., 2004; 17: 11-21.

[19] Pham D.T., and Sahran S., Control chart pattern recognition using spiking neural networks, Proceedings of the Second Virtual International Conference on Intelligent Production Machines and Systems, Elsevier Science Ltd., Oxford, 2006: 319-325.

[20] Guh R-S, Real-time recognition of controlchart patterns in autocorrelated processesusing a learning vector quantizationnetwork-based approach, Int. J. Prod. Res.,2008; 46(14): 3959-3991.

[21] Garcia-Pedrajas N., and Ortiz-Boyer D., Boosting k-nearest neighbor classifier by means of input space projection, Expert Syst. Appl., 2009; 36(7): 10570-10582.

[22] Wu Y., Ianakiev K., and Govindaraju, V.,Improved k-nearest neighborclassification, Pattern Recogn., 2002;35: 2311-2318.

[23] Cohen W.W., Fast effective rule induction, Proceedings of the International Conference on Machine Learning, Lake Tahoe, CA, 1995: 115-123.

[24] Fürnkranz J., and Widmer G., Incremental reduced error pruning, Proceedings of the Eleventh Annual Conference on Machine Learning, New Brunswick, NJ, 1994: 70-77.

[25] Markham I.S., Mathieu R.G., and Wray B.A., A rule induction approach for determining the number of kanbans in a just-in-time production system, Comput. Ind. Eng., 1998; 34(4): 717-727.

[26] Arinze B., Kim S-L., and Anandarajan M., Combining and selecting forecasting models using rule based induction, Comput. Oper. Res., 1997; 24(5): 423-433.