

Expert Systems with Applications 37 (2010) 3730–3736

Contents lists available at ScienceDirect

Expert Systems with Applications

journal homepage: www.elsevier.com/locate/eswa

Dynamic voltage collapse prediction in power systems using support vector regression

Muhammad Nizam, Azah Mohamed *, Aini Hussain
Department of Electrical, Electronic and Systems Engineering, University Kebangsaan Malaysia, Bangi Selangor 43600, Malaysia

Article info

Keywords: Dynamic voltage collapse; Prediction; Artificial neural network; Support vector machines

0957-4174/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2009.11.052

* Corresponding author. Tel.: +60 3 89216590; fax: +60 3 89256579. E-mail addresses: [email protected] (M. Nizam), [email protected] (A. Mohamed).

Abstract

This paper presents dynamic voltage collapse prediction on an actual power system using support vector regression. Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in the dynamic simulation output. Simulations were carried out on a practical 87 bus test system by considering load increase as the contingency. The data collected from the time domain simulation is then used as input to the SVR, in which support vector regression is used as a predictor to determine the dynamic voltage collapse indices of the power system. To reduce the training time and improve the accuracy of the SVR, the Kernel function type and Kernel parameter are considered. To verify the effectiveness of the proposed SVR method, its performance is compared with the multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction compared with the MLPNN.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

In recent years, voltage instability, which is responsible for several major network collapses, has been reported in many countries (Hasani & Parniani, 2005). The phenomenon was in response to an unexpected increase in the load level, sometimes in combination with an inadequate reactive power support at critical network buses. The voltage instability phenomenon is known to be caused by heavily loaded systems, where large amounts of real and reactive power are transported over long transmission lines or lines are overloaded. It may also occur at the operating loading condition when a system is subjected to a contingency (Balamourougan, Sidhu, & Sachdev, 2004; Nizam, Mohamed, & Hussain, 2006). In this situation, it is important to assess the voltage stability of power systems by developing tools that can predict the distance to the point of collapse in a given power system. Much effort is currently being put into research on the phenomenon of voltage collapse and many approaches have been explored. However, there is still a need to reduce the computational time in dynamic voltage stability assessment (Kundur, 1994). Presently, the use of artificial neural networks (ANN) in dynamic voltage collapse prediction has gained a lot of interest amongst researchers due to their ability to do parallel data processing with high accuracy and fast response. Several voltage stability prediction studies have been carried out using the multilayer perceptron neural network (NN) model (Bettiol, Souza, Todesco, & Tesch, 2003; Izzri & Yahya, 2007; Pothisarn & Jiriwibhakorn, 2003). Sharkawi and Neibur (1996) and Musirin and Rahman (2004) proposed the use of radial basis function (RBF) and recurrent NN (Celli, Loddo, & Pilo, 2002) for voltage stability assessment. Another method to assess power system stability using ANN is by means of classifying the system into either stable or unstable states for several contingencies applied to the system (Krishna & Padiyar, 2000). The Support Vector Machine (SVM) is another method used for solving classification problems (Moulin, daSilva, El-Sharkawi, & Marks, 2004; Ravikumar, Thukaram, & Khincha, 2008; Wang, Wu, Li, & Wang, 2005), and it has several advantages such as automatic determination of the number of hidden neurons, a fast convergence rate and good generalization capability. Besides classification, SVM can be applied to solving prediction problems (Pelckmans et al., 2003), in which case it is named Support Vector Regression (SVR).

In this paper, a new method for dynamic voltage collapse prediction is proposed by using SVR for fast and accurate prediction of voltage collapse. The procedures of dynamic voltage collapse prediction using SVR are explained, and the performance of the SVR is compared with the multilayer perceptron neural network (MLPNN) so as to verify the effectiveness of the proposed method. The MLPNN was developed using the MATLAB Neural Network Toolbox, whereas the SVR was developed using the LSSVM Matlab Toolbox (Pelckmans et al., 2003).

Initially, the work focused on the development of a new dynamic voltage collapse indicator named the Power Transfer Stability Index (PTSI). The index is calculated by using information on the total apparent power of the load, the Thevenin voltage and impedance


at a bus, and the phase angle between the Thevenin and load impedances. The value of PTSI falls between 0 and 1; when the PTSI value reaches 1, it indicates that a voltage collapse has occurred. Dynamic simulations were carried out for determining the relation between the voltage, reactive power and real power at a load bus and the PTSI. Load increases at all the load buses were considered for generating the training and testing data sets. The performance of the proposed SVR technique developed for dynamic voltage stability prediction was evaluated by implementing it on the 87 bus practical power system shown in Fig. 2. The performance of the SVR was compared with the MLPNN in order to determine the effectiveness of the SVR in terms of accuracy and computation time in dynamic voltage collapse prediction.

Fig. 1. MLP neural network.

Fig. 2. The 87 bus test system.

2. Dynamic voltage collapse indicator

An indicator used for predicting dynamic voltage collapse at a bus is the power transfer stability index (PTSI). The PTSI is calculated by knowing the total load power, the Thevenin voltage and impedance at a bus, and the phase angle between the Thevenin and load impedances (Nizam et al., 2006). The formula for the PTSI can be described as

$$\mathrm{PTSI} = \frac{2 S_L Z_{Thev} \left(1 + \cos(\beta - \alpha)\right)}{E_{Thev}^2}, \qquad (1)$$

where $S_L$ is the load power at a bus, $\beta$ the phase angle of the Thevenin impedance, $Z_{Thev}$ the Thevenin impedance, $\alpha$ the phase angle of the load bus impedance and $E_{Thev}$ the Thevenin voltage.

3. Support vector machine

The support vector machine (SVM) (Pelckmans et al., 2003) is gaining popularity due to its many attractive features and promising empirical performance. It adopts the structural risk minimization (SRM) principle, which has been shown to be superior to the traditional empirical risk minimization (ERM) principle employed by conventional neural networks. SRM minimizes an upper bound of the generalization error based on the Vapnik–Chervonenkis dimension, as opposed to ERM, which minimizes the training error. This difference equips SVM with good generalization performance, which is the goal of learning problems. A strong theoretical background provides SVM with a global optimal solution and avoids local minima. It can solve high-dimensional problems via Reproducing Kernel Hilbert Space theory and avoid the "curse of dimensionality". With the introduction of the ε-insensitive loss function, SVM also has an advantage in solving nonlinear regression estimation.

3.1. Support Vector Regression (SVR)

In SVR, the basic idea is to map the data $\bar{x}$ of the input space into a high dimensional feature space $F$ via a nonlinear mapping $\Phi$ and to do linear regression in this space (Gunn, 1997):

$$f(\bar{x}) = \langle w, \Phi(\bar{x}) \rangle + b \quad \text{with} \quad \Phi : R^n \to F,\; w \in F, \qquad (2)$$

where $f(\bar{x})$ is the output function, $w$ the weight vector, $\bar{x}$ the input, $b$ the bias threshold and $\langle \cdot, \cdot \rangle$ the dot product in the feature space.

Thus, linear regression in a high dimensional feature space $F$ corresponds to nonlinear regression in the low dimensional input space $R^n$. Since $\Phi$ is fixed, $w$ is determined from the finite samples $\{x_i, y_i\}$ $(i = 1, 2, 3, \ldots, N)$ by minimizing the sum of the empirical risk $R_{emp}[f]$ and a complexity term $\|w\|^2$, which enforces flatness in feature space:

$$R_{reg}[f] = R_{emp}[f] + \lambda \|w\|^2 = \sum_{i=1}^{l} L_\varepsilon\big(y_i, f(\bar{x}_i, w)\big) + \lambda \|w\|^2, \qquad (3)$$

where $l$ denotes the sample size, $\lambda$ is the regularization constant and $L_\varepsilon$ is the $\varepsilon$-insensitive loss function, given by

$$L_\varepsilon\big(y, f(\bar{x}, w)\big) = \begin{cases} 0 & \text{for } |f(x) - y| < \varepsilon, \\ |f(x) - y| - \varepsilon & \text{otherwise.} \end{cases} \qquad (4)$$
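The ε-insensitive loss of Eq. (4) is simple to state in code; this fragment is illustrative only, with arbitrary sample values:

```python
def eps_insensitive_loss(y, f_x, eps=0.1):
    """L_eps of Eq. (4): zero inside the eps-tube around the target, linear outside."""
    residual = abs(f_x - y)
    return 0.0 if residual < eps else residual - eps

print(eps_insensitive_loss(1.0, 1.05))  # inside the tube -> 0.0
print(eps_insensitive_loss(1.0, 1.30))  # 0.3 off target, 0.1 tolerated -> ~0.2
```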

The target function (3) can be minimized by solving a quadratic programming problem, which is uniquely solvable. It can be normalized as follows:

$$\min \; \Phi(w, \xi) = \frac{1}{2}\|w\|^2 + C \sum_i \left(\xi_i^- + \xi_i^+\right)$$

$$\text{subject to} \quad \begin{cases} y_i - \langle w, \Phi(\bar{x}_i) \rangle - b \le \varepsilon + \xi_i^-, \\ \langle w, \Phi(\bar{x}_i) \rangle + b - y_i \le \varepsilon + \xi_i^+, \\ \xi_i^-, \xi_i^+ \ge 0, \end{cases} \qquad (5)$$

where $C$ is a pre-specified value and $\xi^-$, $\xi^+$ are slack variables representing upper and lower constraints on the outputs of the system.

The first part of this cost function is a weight decay term which regulates the weight size and penalizes large weights. Due to this regulation, the weights converge to smaller values; large weights deteriorate the generalization ability of the SVM because they usually cause excessive variance. The second part is a penalty function which penalizes errors larger than ±ε using the so-called ε-insensitive loss function $L_\varepsilon$ for each of the training points. The positive constant $C$ determines the amount up to which deviations beyond ε are tolerated. Errors larger than ±ε are captured by the slack variables representing values above ε ($\xi^+$) and below ε ($\xi^-$), respectively. The constraints tie these slacks to the errors between the regression prediction $f(\bar{x})$ and the true values $y_i$.
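The objective of Eq. (5) can be written out term by term; a minimal sketch (not the paper's code, with placeholder weight and slack values):

```python
def primal_objective(w, xi_minus, xi_plus, C):
    """Eq. (5): weight-decay term 0.5*||w||^2 plus C-weighted eps-tube violations."""
    weight_decay = 0.5 * sum(wi * wi for wi in w)
    slack_penalty = C * (sum(xi_minus) + sum(xi_plus))
    return weight_decay + slack_penalty

# Placeholder values: two weights, one slack on each side of the tube.
print(primal_objective([1.0, 1.0], [0.5], [0.25], C=10.0))  # 1.0 + 7.5 = 8.5
```

Increasing C makes tube violations more expensive relative to the weight-decay term, which is the trade-off described above.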

The solution is given by,

maxa;a�

Wða;a�Þ ¼maxa;a��1

2

Xl

i¼1

Xl

j¼1

ðai � a�i Þðaj � a�j Þ UðxiÞ;UðxjÞ� �

þXl

i¼1

aiðyi � eÞ � a�i ðyi þ eÞ ð6Þ

with constraints

$$0 \le \alpha_i, \alpha_i^* \le C, \quad i = 1, \ldots, l, \qquad \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0. \qquad (7)$$

By solving Eq. (6) with the constraints of Eq. (7), we can determine the Lagrange multipliers $\alpha$, $\alpha^*$ and the weight in the regression function of Eq. (2), which is given by

$$\bar{w} = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) x_i \quad \text{and} \quad \bar{b} = -\frac{1}{2} \langle \bar{w}, (x_r + x_s) \rangle. \qquad (8)$$

The Karush–Kuhn–Tucker conditions satisfied by the solution are

$$\alpha_i \alpha_i^* = 0, \quad i = 1, \ldots, l. \qquad (9)$$

Therefore, the support vectors are points where exactly one of the Lagrange multipliers is greater than zero (on the boundary), which means that they fulfil the Karush–Kuhn–Tucker condition (Smola & Scholkopf, 1998). Training points with non-zero Lagrange multipliers are called support vectors and give shape to the SVR. When $\varepsilon = 0$, we get the $L_\varepsilon$ loss function and the optimization problem is simplified as

$$\min_\beta \; \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} \beta_i \beta_j \langle x_i, x_j \rangle - \sum_{i=1}^{l} \beta_i y_i \qquad (10)$$

with constraints

$$-C \le \beta_i \le C, \quad i = 1, \ldots, l, \qquad \sum_{i=1}^{l} \beta_i = 0, \qquad (11)$$

and the regression function is given by Eq. (2), where

$$\bar{w} = \sum_{i=1}^{l} \beta_i x_i \quad \text{and} \quad \bar{b} = -\frac{1}{2} \langle \bar{w}, (x_r + x_s) \rangle. \qquad (12)$$
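Given solved multipliers, Eq. (9) implies that the support vectors are exactly the points carrying a non-zero multiplier; a small helper, illustrative only, with made-up multiplier values:

```python
def support_vector_indices(alpha, alpha_star, tol=1e-8):
    """Indices of training points whose Lagrange multiplier is non-zero (Eq. (9))."""
    return [i for i, (a, a_s) in enumerate(zip(alpha, alpha_star))
            if max(a, a_s) > tol]

# Hypothetical multipliers for five training points:
print(support_vector_indices([0.0, 0.7, 0.0, 0.0, 0.2],
                             [0.0, 0.0, 0.0, 0.4, 0.0]))  # [1, 3, 4]
```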

Page 4: Dynamic voltage collapse prediction in power systems using support vector regression

M. Nizam et al. / Expert Systems with Applications 37 (2010) 3730–3736 3733

A non-linear mapping can be used to map the data into a high dimensional feature space where linear regression is performed. The Kernel approach is again employed to address the curse of dimensionality. The non-linear SVR solution, using an ε-insensitive loss function, is

$$\max_{\alpha, \alpha^*} W(\alpha, \alpha^*) = \max_{\alpha, \alpha^*} \sum_{i=1}^{l} \big[ \alpha_i (y_i - \varepsilon) - \alpha_i^* (y_i + \varepsilon) \big] - \frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} (\alpha_i - \alpha_i^*)(\alpha_j - \alpha_j^*) K(x_i, x_j) \qquad (13)$$

with constraints

$$0 \le \alpha_i, \alpha_i^* \le C, \quad i = 1, \ldots, l, \qquad \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) = 0. \qquad (14)$$

Solving Eq. (13) with the constraints in Eq. (14) determines the Lagrange multipliers $\alpha$, $\alpha^*$ and the regression function, which is given by

$$f(x) = \sum_{SVs} \left(\bar{\alpha}_i - \bar{\alpha}_i^*\right) K(x_i, x) + \bar{b}, \qquad (15)$$

where

$$\langle \bar{w}, x \rangle = \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) K(x_i, x), \qquad \bar{b} = -\frac{1}{2} \sum_{i=1}^{l} (\alpha_i - \alpha_i^*) \big( K(x_i, x_r) + K(x_i, x_s) \big). \qquad (16)$$

The support vector equality constraint may be dropped if the Kernel contains a bias term, $b$ being accommodated within the Kernel function. The regression function then becomes

$$f(x) = \sum_{i=1}^{l} \left(\bar{\alpha}_i - \bar{\alpha}_i^*\right) K(x_i, x). \qquad (17)$$

In (15) the Kernel function is $K(x_i, x_j) = \langle \Phi(\bar{x}_i), \Phi(\bar{x}_j) \rangle$. It can be shown that any symmetric Kernel function $K$ satisfying Mercer's condition corresponds to a dot product in some feature space (Pelckmans et al., 2003). Common Kernel functions include the Gaussian radial basis function (RBF) Kernel, the linear Kernel and the multilayer perceptron Kernel. The most commonly used is the Gaussian RBF Kernel, written as

$$K(x, y) = e^{-\|x - y\|^2 / (2\sigma^2)}. \qquad (18)$$

Note that $\sigma^2$ is a parameter associated with the RBF function which has to be tuned.
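Eq. (18) maps directly to a few lines of numpy; this is a generic illustration, with σ as the tuning parameter discussed above:

```python
import numpy as np

def rbf_kernel(x, y, sigma=0.2):
    """Gaussian RBF Kernel of Eq. (18)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # identical inputs -> 1.0
```

Smaller σ makes the kernel decay faster with distance, which is why σ strongly affects the fit.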

For prediction cases, any data can be regarded as coming from an input–output system with a nonlinear mechanism. The support vector machine will therefore essentially build a network capable of approximating the underlying function with acceptable accuracy according to the learning sample data.

3.1.1. MLP neural network architecture

For comparison purposes, the MLPNN (Gang, Lin, & Guo, 2007) is used. Fig. 1 shows the architecture of the MLPNN, which consists of an input layer, a hidden layer and an output layer. The input vectors are fed to the input layer, and the weights and biases are adjusted using the 'dotprod' function in the Matlab toolbox. Adaptation of weights is done by using the 'adaptwb' function, which changes both weights and biases.

From Fig. 1, IW and OW are the input and output weights, respectively, whereas b1 and b2 are the biases in the input and output layers, respectively. In this study, the selection of the number of neurons, training algorithm and activation functions is considered accordingly. In the training of the neural network, the input data is initially normalized and filtered for redundancy. The training parameters considered are the learning rate (0.6), learning increment (1.2), number of epochs (30,000) and accuracy goal (0.0001). The selected training algorithm is the resilient back propagation algorithm because it has been proven to give better convergence compared with the back propagation algorithm (Repo, 1999). The sigmoid activation function is used at the input and hidden layers, whereas the linear activation function is used at the output layer. After training the network on a set of data, the network performance is evaluated with a new set of test data. The MLPNN outputs are then compared with the outputs obtained from the SVM.
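The forward pass of the network in Fig. 1 (sigmoid hidden layer, linear output) can be sketched as follows; the weight values here are placeholders, not the trained ones:

```python
import numpy as np

def mlp_forward(x, IW, b1, OW, b2):
    """Single-hidden-layer MLP of Fig. 1: h = sigmoid(IW x + b1), y = OW h + b2."""
    h = 1.0 / (1.0 + np.exp(-(IW @ x + b1)))  # sigmoid hidden layer
    return OW @ h + b2                        # linear output layer

# Placeholder weights: 3 inputs, 2 hidden neurons, 1 output.
IW = np.zeros((2, 3)); b1 = np.zeros(2)
OW = np.array([[1.0, 1.0]]); b2 = np.array([0.0])
print(mlp_forward(np.ones(3), IW, b1, OW, b2))  # [1.]
```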

4. Methodology

Before the SVM implementation, time domain simulations considering several contingencies were carried out for the purpose of gathering the training data sets. The simulations were performed using the PSS/E commercial software.

4.1. Data preprocessing

The training data was obtained by carrying out time domain simulations in which the load was increased at every load bus each second at a certain rate, from the base case until the occurrence of a voltage collapse. The training data for each contingency was then recorded. About three hundred training and testing data sets were generated for use in the SVR and MLPNN.

The selection of input features is an important factor which needs to be considered in the SVR and MLPNN implementation. The input features selected for this work are the real and reactive power load (PLoad, QLoad) and the load voltage and phase angle (VLoad, θLoad). Altogether, there are 224 input features considered for both the SVR and MLPNN.

4.2. Procedure

The SVR implementation procedure is described as follows:

(i) Input the load, generator and line data of the test system. Run the load flow for the base case.

(ii) Generate training and testing data for the SVR by carrying out simulations considering (a) load increases at all the load buses at a rate of 2% MVA/s until the system collapses, and (b) load increases at individual load buses at a rate of 5% MVA/s with the loads at the other load buses remaining constant. Measure the voltage, phase angle, real and reactive powers and calculate the PTSI at all the load buses.

(iii) Create a database for the input vector in the form [PL, QL, VL, θ], where PL and QL are the load real and reactive powers, VL is the voltage magnitude at a load bus and θ is the voltage phase angle. The target or output vector is the PTSI index for the corresponding input vector.

(iv) For the training data sets, select data sets that give low and high values of PTSI.

(v) Select the Kernel type and Kernel parameter values used for training the SVR.

(vi) Train the SVR using the selected training data sets.

(vii) Repeat steps (v) and (vi) by changing the parameter values of the number of epochs, learning rate and performance goal C.


(viii) Compare the results of the SVR and MLPNN in terms of computational time and accuracy, where accuracy is measured by the mean square error (MSE), given as:


Fig. 3. Comparison of training results of SVR and MLPNN PTSI for the sampled data.


Fig. 4. Comparison of absolute errors for the training results using SVR and MLPNN.


$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (X_i - Y_i)^2, \qquad (19)$$

where $X_i$ is the output value and $Y_i$ is the target value.
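Eq. (19) in code form, for completeness (illustrative, with arbitrary sample values):

```python
def mse(outputs, targets):
    """Mean square error of Eq. (19)."""
    n = len(outputs)
    return sum((x - y) ** 2 for x, y in zip(outputs, targets)) / n

print(mse([1.0, 2.0], [1.0, 4.0]))  # (0 + 4) / 2 = 2.0
```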

5. Results and discussion

To evaluate the performance of the SVR in predicting dynamic voltage collapse, a practical 87 bus test system is used for verification of the method. The test system, which consists of 23 generators and 56 load buses, is shown in Fig. 2.

In this study, time domain simulations were carried out using the PSS/E simulation software so as to generate the training data sets. From the simulation results, the PTSI was calculated at every load bus using the power and voltage information. In the SVR training, the Kernel function type and Kernel parameter σ were initially determined by trial and error. The Kernel types considered for the SVR were the RBF, linear and MLP Kernels (Gunn, 1997). For achieving the required SVR accuracy, the MSE value was chosen to be less than or equal to 0.0003. To investigate the effect of the Kernel parameter on the SVR accuracy, σ was varied over 0.1, 0.2, 0.5 and 1.0, as shown in Table 1. It can be seen that increasing σ decreases the MSE for the RBF and MLP Kernels; for the linear Kernel, the effect of σ is not significant because the MSE value remains constant. In terms of computational time, the MLP Kernel is the fastest but the least accurate, because its MSE values are well above the required threshold. The linear Kernel is the most accurate, giving the smallest MSE, but it has the drawback of a long computation time. Comparing the RBF, linear and MLP Kernels in terms of accuracy and computation time, it can be concluded that the RBF Kernel is the best choice for this SVR because it is accurate (MSE < 0.0003) and relatively fast. In this case, the RBF Kernel with σ = 0.2 is chosen for the SVR.

The performance of the SVR in dynamic voltage collapse prediction is then compared with that of the MLPNN. Both the SVR and MLPNN results are tabulated in terms of the training and testing accuracies.

5.1. Comparison of SVR and MLPNN results in dynamic voltage collapse prediction

A comparison of the SVR and MLPNN for dynamic voltage collapse prediction, in terms of the training results and absolute errors for some sampled data, is given in Figs. 3 and 4, respectively. Numerical values for some training samples are given in Appendix A. The results show a comparison of the SVR and MLPNN outputs with the actual values of PTSI obtained from simulations.

Table 1
Performance of Kernel parameter in SVR.

Kernel function type   Computational time (s)   MSE
RBF, σ = 0.1           13.4164                  0.0003
RBF, σ = 0.2           10.41                    0.0001
RBF, σ = 0.5           21.04                    0.00009
RBF, σ = 1.0           22.24                    0.00009
Linear, σ = 0.1        83.92                    1 × 10⁻⁵
Linear, σ = 0.2        41.059                   1 × 10⁻⁵
Linear, σ = 0.5        69.78                    1 × 10⁻⁵
Linear, σ = 1.0        81.34                    1 × 10⁻⁵
MLP, σ = 0.1           5.2335                   0.879
MLP, σ = 0.2           8.33                     0.167
MLP, σ = 0.5           7.10                     0.093
MLP, σ = 1.0           6.80                     0.057

A comparison of the SVR and MLPNN methods in terms of testing results and absolute errors is shown in Figs. 5 and 6, respectively. Detailed numerical testing data for some samples are given in Appendix B. Fig. 5 shows that the SVR is better than the MLPNN because most of its results are very close to the actual values. The absolute errors for the sampled data set show that the absolute error of the SVR is less than that of the MLPNN.


Fig. 5. Comparison of testing results using SVR and MLPNN for the sampled data.


Fig. 6. Comparison of absolute errors for the testing results using SVR and MLPNN.


The accuracy of the SVR and MLPNN is evaluated based on the absolute errors. From Appendices A and B, it is clear that the average absolute errors for training and testing of the SVR are 0.011 and 0.012, respectively, whilst the average absolute errors for training and testing of the MLPNN are 0.01 and 0.021, respectively. The results prove that both the SVR and MLPNN give accurate results in dynamic voltage collapse prediction because the absolute errors are considered small (<2%). For the purpose of comparing the actual PTSI values obtained from simulations, the PTSI from the SVR and the PTSI from the MLPNN, the PTSI values are plotted as shown in Fig. 7. The results show that, in terms of accuracy in predicting dynamic voltage collapse using the PTSI, there is no significant difference between the SVR and MLPNN PTSI values when they are compared with the actual PTSI values.

In general, the performance of the SVR and MLPNN in predicting dynamic voltage collapse can be evaluated from the results shown in Table 2. It can be seen that the SVR takes less computational time than the MLPNN. In terms of testing accuracy, the MSE for the SVR (1.22 × 10⁻⁴) is less than that of the MLPNN (2.09 × 10⁻⁴). Hence, in general, it can be said that for dynamic voltage collapse prediction, the SVR performs better than the MLPNN from the speed and accuracy

Fig. 7. Comparison of actual PTSI, SVR PTSI and MLPNN PTSI.

Table 2
Performance comparison of SVR and MLPNN in dynamic voltage collapse prediction.

                       SVR            ANN
Training data          200            200
Testing data           100            100
Training error (MSE)   5.84 × 10⁻⁴    1 × 10⁻⁴
Testing error (MSE)    1.22 × 10⁻⁴    2.09 × 10⁻⁴
Computational time     10.38 s        92 min 30 s

points of view. This feature is particularly important when used in real-time mode.

6. Conclusion

Dynamic voltage collapse prediction in power systems using the conventional analytical method requires long computational time, and therefore, to accelerate the prediction process, an SVR approach is proposed. In this study, the SVR is tested for dynamic voltage collapse prediction on a practical 87 bus system. The performance of the SVR method in predicting dynamic voltage collapse based on the PTSI values is evaluated by comparing it with the MLPNN. In terms of training time, the SVR takes 10.38 s whereas the MLPNN takes 92 min and 30 s. In terms of accuracy, the SVR using the RBF Kernel function is more accurate than the MLPNN in predicting dynamic voltage collapse for the investigated 87 bus actual power system.

Appendix A

Sampled training results of SVR and MLPNN.

No.   Actual PTSI   SVM PTSI   MLPNN PTSI   SVM abs. error   MLPNN abs. error
1     0.2497        0.25251    0.25743      0.00286          0.00778
2     0.3167        0.31342    0.32146      0.0033           0.00474
3     0.0004        0.01232    0.00943      0.01196          0.00907
4     0.0016        0.01593    0.00714      0.01428          0.00549
5     0.0129        0.02406    0.02946      0.01119          0.01659
6     0.0108        0.01497    0.03254      0.00418          0.02175
7     0.0111        0.01803    0.02945      0.00692          0.01834
8     0.0222        0.03127    0.02406      0.0091           0.0019
9     0.0167        0.02131    0.03068      0.00458          0.01395
10    0.0171        0.0247     0.03175      0.00757          0.01462
11    0.0270        0.03618    0.0268       0.00919          0.00018
12    0.0215        0.02963    0.02722      0.00818          0.00577
13    0.0219        0.03076    0.01827      0.00888          0.00362
14    0.0320        0.04295    0.00856      0.01098          0.02342
15    0.0266        0.03561    0.02933      0.00905          0.00276
16    0.0271        0.03721    0.03584      0.01013          0.00876
17    0.0375        0.05023    0.02677      0.01268          0.01078
18    0.0322        0.04267    0.01912      0.01045          0.0131
19    0.0328        0.04316    0.00989      0.01035          0.02292
20    0.0435        0.05837    0.03316      0.01491          0.01031
21    0.0381        0.04909    0.02564      0.01096          0.01249
22    0.0388        0.0496     0.03341      0.01076          0.00543
23    0.0497        0.0658     0.0297       0.01609          0.02001
24    0.0443        0.05548    0.04688      0.01116          0.00256
25    0.0450        0.05626    0.04562      0.01123          0.00059
26    0.0559        0.07339    0.04921      0.01745          0.00674
27    0.0507        0.06235    0.05284      0.01167          0.00216
28    0.0515        0.06298    0.0343       0.01152          0.01717
29    0.0625        0.08041    0.04683      0.01796          0.01562
30    0.0574        0.06914    0.05807      0.01174          0.00067
31    0.0583        0.0699     0.04428      0.01164          0.01399
32    0.0697        0.08763    0.0614       0.01795          0.00829
33    0.0650        0.07656    0.06166      0.01151          0.00338
34    0.0660        0.07745    0.06997      0.01147          0.00399
35    0.0776        0.09532    0.07499      0.01767          0.00266
36    0.0734        0.08471    0.09328      0.01129          0.01987
37    0.0744        0.08557    0.07623      0.01116          0.00181
38    0.0863        0.10363    0.10927      0.01738          0.02302

Average absolute error                      0.011085         0.009996

Appendix B

Sample testing result of SVR and MLPNN.

No. data   PTSI                            Absolute error
           Actual   SVM      MLPNN        SVM       MLPNN
1          0.0201   0.0433   -0.0056      0.02321   0.02565
2          0.0098   0.0216   0.04232      0.01185   0.03256
3          0.0217   0.0369   0.01269      0.01521   0.00897
4          0.0224   0.0407   0.01144      0.0183    0.01098
5          0.0127   0.0247   0.03295      0.01198   0.02024
6          0.0230   0.0407   0.0608       0.01772   0.03779
7          0.0257   0.0443   0.00256      0.01859   0.02314
8          0.0260   0.0466   0.00218      0.02054   0.02387
9          0.0232   0.0422   -0.0047      0.01897   0.02786
10         0.0300   0.0507   0.01323      0.02077   0.01673
11         0.0323   0.0533   0.01346      0.021     0.01883
12         0.0437   0.0627   0.08348      0.01898   0.03981
13         0.1809   0.1900   0.18553      0.0091    0.00467
14         0.0483   0.0640   0.02155      0.01569   0.02673
15         0.0545   0.0716   0.02603      0.01706   0.02848
16         0.0569   0.0697   0.09208      0.01276   0.03516
17         0.0587   0.0708   0.07373      0.01213   0.01501
18         0.0683   0.0861   0.05873      0.01779   0.00959
19         0.0767   0.0944   0.04347      0.01772   0.0332
20         0.0732   0.0849   0.06304      0.0117    0.01021
21         0.0856   0.1029   0.10037      0.0173    0.01474
22         0.0825   0.0943   0.09325      0.01179   0.01076
23         0.1526   0.1642   0.17337      0.01161   0.02075
24         0.2550   0.2573   0.21583      0.00231   0.03912
25         0.2416   0.2445   0.25347      0.00293   0.0119
26         0.2486   0.2505   0.25164      0.00191   0.00307
27         0.2458   0.2487   0.24838      0.00293   0.00258
28         0.2339   0.2380   0.22824      0.00413   0.00563
29         0.2464   0.2493   0.25942      0.00294   0.01305
30         0.2698   0.2708   0.22935      0.00093   0.04049
31         0.2953   0.2940   0.32181      0.00138   0.02647
32         0.3813   0.3721   0.36388      0.00914   0.0174
33         0.4394   0.4250   0.47674      0.01439   0.03739
34         0.5757   0.5489   0.55118      0.02687   0.02456
35         0.1430   0.1555   0.10678      0.01251   0.03622
36         0.2311   0.2355   0.26211      0.00442   0.03105
37         0.2183   0.2225   0.24164      0.00423   0.02338
38         0.2232   0.2265   0.24559      0.00328   0.02238

Average absolute error                    0.012103  0.021389
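The error columns in the tables above follow directly from the definition of absolute error between the actual PTSI and each model's prediction. A minimal sketch of that arithmetic (an illustration only, not the authors' code; table entries are rounded, so recomputed values may differ in the last digit):

```python
def absolute_error(actual, predicted):
    # Absolute error between the actual PTSI and a model prediction.
    return abs(predicted - actual)

def average_absolute_error(actuals, predictions):
    # Mean of the per-sample absolute errors over a test set,
    # as reported in the "Average absolute error" rows.
    errors = [absolute_error(a, p) for a, p in zip(actuals, predictions)]
    return sum(errors) / len(errors)

# First row of Appendix B: actual PTSI = 0.0201, SVM prediction = 0.0433
print(round(absolute_error(0.0201, 0.0433), 4))  # 0.0232
```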

References

Balamourougan, V., Sidhu, T. S., & Sachdev, M. S. (2004). Technique for online prediction of voltage collapse. IEE Proceedings – Generation, Transmission and Distribution, 151(4), 453–460.

Bettiol, A. L., Souza, A., Todesco, J. L., & Tesch, J. R., Jr. (2003). Estimation of critical clearing times using neural networks. In Proceedings of the IEEE Bologna Power Tech Conference, 3(6).

Celli, G., Loddo, M., & Pilo, F. (2002). Voltage collapse prediction with locally recurrent neural network. In Proceedings of IEEE power engineering society summer meeting (pp. 1130–1135).

Gang, D. Z., Lin, N., & Guo, Z. J. (2007). Application of support vector regression model based on phase space reconstruction to power system wide-area stability prediction. In International power and energy conference (IPEC) (pp. 1880–1885).

Gunn, S. R. (1997). Support vector machines for classification and regression. Technical report, Image Speech and Intelligent Systems Research Group, University of Southampton, UK. <http://www.ecs.soton.ac.uk/srg/publications/pdf/SVM.pdf>.

Hasani, M., & Parniani, M. (2005). Method of combined static and dynamic analysis of voltage collapse in voltage stability assessment. In IEEE/PES transmission and distribution conference and exhibition, China.

Izzri, A. W. N., Mohamed, A., & Yahya, I. (2007). A new method of transient stability assessment in power system using LS-SVM. In Proceedings of IEEE student conference on research and development, Malaysia.

Krishna, S., & Padiyar, K. R. (2000). Transient stability assessment using artificial neural networks. In Proceedings of IEEE international conference on industrial technology (Vol. 1, pp. 627–632).

Kundur, P. (1994). Power system stability and control (EPRI). McGraw-Hill.

Moulin, L. S., daSilva, A. P. A., El-Sharkawi, M. A., & Marks, R. J. (2004). Support vector machines for transient stability analysis of large-scale power systems. IEEE Transactions on Power Systems, 19(2), 818–825.

Musirin, I., & Rahman, T. K. A. (2004). Voltage stability based weak area clustering technique in power system. In Proceedings of national power and energy conference (PECon 2004) (pp. 235–240). Kuala Lumpur.

Nizam, M., Mohamed, A., & Hussain, A. (2006). Performance evaluation of voltage stability indices for dynamic voltage collapse prediction. Journal of Applied Science, 6(5), 1104–1113.

Pelckmans, K., Suykens, J. A. K., Van Gestel, T., De Brabanter, J., Lukas, L., Hamers, B., et al. (2003). LS-SVMlab toolbox user’s guide. ESAT-SCD-SISTA technical report 02-145, Katholieke Universiteit Leuven.

Pothisarn, C., & Jiriwibhakorn, S. (2003). Critical clearing time determination of EGAT system using artificial neural networks. In Proceedings of the IEEE power engineering society general meeting (Vol. 2, pp. 731–736).

Ravikumar, B., Thukaram, D., & Khincha, H. P. (2008). Application of support vector machines for fault diagnosis in power transmission system. IET Generation, Transmission and Distribution (pp. 119–130).

Repo, S. (1999). General framework for neural network based real-time voltage stability assessment of electric power system. In IEEE midnight-sun workshop on soft computing methods in industrial applications (pp. 91–96). Kuusamo, Finland.

Sharkawi, M. E., & Niebur, D. (1996). Artificial neural network with application to power system. IEEE Power and Energy, 8, 7.

Smola, A. J., & Scholkopf, B. (1998). On a kernel-based method for pattern recognition, regression, approximation and operator inversion. Algorithmica, 22, 211–231.

Wang, X., Wu, S., Li, Q., & Wang, X. (2005). SVM for transient stability assessment in power systems. In Proceedings of autonomous decentralized systems (pp. 356–363).