Research Article

Smoothing Strategies Combined with ARIMA and Neural Networks to Improve the Forecasting of Traffic Accidents

Lida Barba,1,2 Nibaldo Rodríguez,1 and Cecilia Montt1

1 Pontificia Universidad Católica de Valparaíso, 2362807 Valparaíso, Chile
2 Universidad Nacional de Chimborazo, 33730880 Riobamba, Ecuador

Correspondence should be addressed to Lida Barba; lida [email protected]

Received 26 April 2014; Revised 29 July 2014; Accepted 14 August 2014; Published 28 August 2014

Academic Editor: Cagdas Hakan Aladag

Copyright © 2014 Lida Barba et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Two smoothing strategies combined with autoregressive integrated moving average (ARIMA) and autoregressive neural network (ANN) models to improve the forecasting of time series are presented. The strategy of forecasting is implemented in two stages. In the first stage the time series is smoothed using either 3-point moving average smoothing or singular value decomposition of the Hankel matrix (HSVD). In the second stage, an ARIMA model and two ANNs for one-step-ahead time series forecasting are used. The coefficients of the first ANN are estimated through the particle swarm optimization (PSO) learning algorithm, while the coefficients of the second ANN are estimated with the resilient backpropagation (RPROP) learning algorithm. The proposed models are evaluated using a weekly time series of traffic accidents of Valparaíso, Chilean region, from 2003 to 2012. The best result is given by the combination HSVD-ARIMA, with a MAPE of 0.26%, followed by MA-ARIMA with a MAPE of 1.12%; the worst result is given by the MA-ANN based on PSO with a MAPE of 15.51%.
1. Introduction

The occurrence of traffic accidents is a matter of impact in society and therefore a problem of priority public attention; the Chilean National Traffic Safety Commission (CONASET) periodically reports a high rate of accidents on the roads; in Valparaíso, from 2003 to 2012, 28,595 injured people were registered. Accuracy in the projections enables intervention by the government agencies in terms of prevention; another demander of this information is the insurance companies, which require it to determine new market policies.

In order to capture the dynamics of traffic accidents, during the last years some techniques have been applied. For classification, decision rules and trees [1, 2], latent class clustering and Bayesian networks [3], and the genetic algorithm [4] have been implemented. For traffic accident forecasting, autoregressive moving average (ARMA) and ARIMA models [5], state-space models [6, 7], extrapolation [8], dynamic harmonic regression combined with ARIMA, and dynamic transfer functions [9] have been implemented.

The smoothing strategies moving average (MA) and singular value decomposition (SVD) have been used to identify the components in a time series. MA is used to extract the trend [10], while SVD extracts more components [11]; the SVD application is multivariate, and in some works it is applied for parameter calibration in dynamical systems [12, 13], in time series classification [14], or to switched linear systems [15]; typically SVD has been applied over an input data set to reduce the data dimensionality [16] or for noise reduction [17]. ARIMA is a conventional linear model for nonstationary time series; by differentiation the nonstationary time series is transformed into a stationary one; it is based on past values of the series and on the previous error terms for forecasting.
Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 152375, 12 pages, http://dx.doi.org/10.1155/2014/152375

ARIMA has been applied widely to model nonstationary data; some applications are traffic noise [18], daily global solar radiation [19], premonsoon rainfall data for western India [20], and aerosols over the Gangetic Himalayan region [21]. The autoregressive neural network (ANN) is a nonlinear method for forecasting that has been shown to be efficient in



Figure 1: Smoothing strategies: (a) moving average and (b) Hankel singular value decomposition.

solving problems in different fields; the learning capability of the ANN is determined by the algorithm. Particle swarm optimization (PSO) is a population-based algorithm that has been found to be effective; it is based on the behaviour of a swarm and is applied to update the connection weights of the ANN. Some modifications of PSO have been evaluated based on variants of the acceleration coefficients [22]; others apply the adaptation of the inertia weight [23–26]; also, adaptive mechanisms for both the inertia weight and the acceleration coefficients, based on the behaviour of the particle at each iteration, have been used [27, 28]. The combination ANN-PSO has improved the forecasting over some classical algorithms like backpropagation (BP) [29–31] and least mean square (LMS) [32]. Another learning algorithm that has been shown to be better than backpropagation is RPROP, which is also valued for its robustness, easy implementation, and fast convergence compared with conventional BP [33, 34].

The linear and nonlinear models may be inadequate in some forecasting problems; consequently they are not considered universal models, and the combination of linear and nonlinear models could capture different forms of relationships in the time series data. The Zhang hybrid methodology, which combines both ARIMA and ANN models, is an effective way to improve forecasting accuracy: the ARIMA model is used to analyze the linear part of the problem, and the ANN models the residuals of the ARIMA model [35]. This methodology has been applied for demand forecasting [36]; however, some researchers believe that some of Zhang's assumptions can degenerate the hybrid methodology when the opposite situation occurs. Khashei proposes a methodology that combines the linear and nonlinear models without the assumptions of the traditional Zhang hybrid methodology, in order to yield a more general and more accurate forecasting model [37].

Based on these arguments, in this work two smoothing strategies to potentiate the preprocessing stage of time series forecasting are proposed: 3-point MA and HSVD are used to smooth the time series, and the smoothed values are forecasted with three models: the first is based on the ARIMA model, the second on an ANN trained with PSO, and the third on an ANN trained with RPROP. The models are evaluated using the time series of people injured in traffic accidents occurring in Valparaíso, Chilean region, from 2003 to 2012, with 531 weekly registers. The smoothing strategies and the forecasting models are combined, and six models are obtained and compared to determine the model that gives the major accuracy. The paper is structured as follows: Section 2 describes the smoothing strategies, Section 3 explains the proposed forecasting models, Section 4 presents the forecasting accuracy metrics, and Section 5 presents the results and discussions. The conclusions are shown in Section 6.

2. Smoothing Strategies

2.1. Moving Average. Moving average is a smoothing strategy used in linear filtering to identify or extract the trend from a time series. MA is a mean of a constant number of observations that can be used to describe a series that does not exhibit a trend [38]. When 3-point MA is applied over a time series of length n, the n − 2 interior elements of the smoothed series are computed with

s_k = \frac{1}{3} \sum_{i=k-1}^{k+1} x_i,    (1)

where s_k is the kth smoothed signal element, for k = 2, ..., n − 1; x_i is each observed element of the original time series; and the terms s_1 and s_n take the same values as x_1 and x_n, respectively. The smoothed values given by 3-point MA will be used by the estimation process through the selected technique (ARIMA or ANN); this strategy is illustrated in Figure 1(a).
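As an illustrative sketch (not part of the paper), the smoothing of Eq. (1), with the endpoint convention s_1 = x_1 and s_n = x_n, can be written in a few lines of NumPy; the function name is ours:

```python
import numpy as np

def ma3_smooth(x):
    """3-point moving average per Eq. (1); endpoints are copied from x."""
    x = np.asarray(x, dtype=float)
    s = x.copy()                               # s_1 = x_1 and s_n = x_n
    # s_k = (x_{k-1} + x_k + x_{k+1}) / 3 for the interior points
    s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return s
```

The vectorized slice arithmetic avoids an explicit loop over k.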

2.2. Hankel Singular Value Decomposition. The proposed strategy HSVD is implemented during the preprocessing stage in two steps: embedding and decomposition. The time series is embedded in a trajectory matrix with the structure of the Hankel matrix; the decomposition process extracts the components of low and high frequency of the mentioned matrix by means of SVD. The smoothed values given by HSVD are used by the estimation process; this strategy is illustrated in Figure 1(b).

The original time series is represented by x; H is the Hankel matrix; U, S, V are the elements obtained with SVD and will be detailed further ahead; C_L is the component of low frequency; C_H is the component of high frequency; \hat{x} is the forecasted time series; and e_r is the error computed between x and \hat{x} with

e_r = x - \hat{x}.    (2)

2.2.1. Embedding the Time Series. The embedding process is illustrated as follows:

H_{M \times L} =
\begin{bmatrix}
x_1 & x_2 & \cdots & x_L \\
x_2 & x_3 & \cdots & x_{L+1} \\
\vdots & & & \vdots \\
x_M & x_{M+1} & \cdots & x_n
\end{bmatrix},    (3)

where H is a real matrix with Hankel structure; x_1, ..., x_n are the original values of the time series; M is the number of rows of H and also the number of components that will be obtained with SVD; L is the number of columns of H; and n is the length of the time series. The value of L is computed with

L = n - M + 1.    (4)
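A minimal sketch of the embedding step, assuming L = n − M + 1 so that the last entry of the last row is x_n, as in Eq. (3); the function name and array layout are illustrative:

```python
import numpy as np

def hankel_embed(x, M):
    """Embed a series of length n into an M x L Hankel matrix, L = n - M + 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    L = n - M + 1
    H = np.empty((M, L))
    for i in range(M):
        H[i, :] = x[i:i + L]   # row i holds x_{i+1}, ..., x_{i+L} (1-based)
    return H
```

For the paper's setting M = 2, the matrix has two rows shifted by one sample.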

2.2.2. Singular Value Decomposition. The SVD process is applied to the matrix H obtained in the last subsection. Let H be an M × N real matrix; then there exist an M × M orthogonal matrix U, an N × N orthogonal matrix V, and an M × N diagonal matrix S with diagonal entries s_1 ≥ s_2 ≥ ... ≥ s_p, with p = min(M, N), such that U^T H V = S. Moreover, the numbers s_1, s_2, ..., s_p are uniquely determined by H [39]:

H = U \times S \times V^T.    (5)

The extraction of the components is developed through the singular values s_i, the orthogonal matrix U, and the orthogonal matrix V; for each singular value one matrix A_i is obtained, with i = 1, ..., M:

A_i = s(i) \times U(:, i) \times V(:, i)^T.    (6)

Therefore, the matrix A_i contains the ith component; the extraction process is

C_i = [A_i(1, :), A_i(2{:}M, N)^T],    (7)

where C_i is the ith component, whose elements are located in the first row and last column of A_i.
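The decomposition and extraction of Eqs. (5)–(7) can be sketched as follows (an illustrative implementation assuming M ≤ L; the function name is ours). Each rank-one matrix A_i is diagonally averaged back to a series of length n by taking its first row followed by the last column:

```python
import numpy as np

def svd_components(H):
    """One series-length component per singular value, per Eqs. (5)-(7)."""
    M, L = H.shape
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    comps = []
    for i in range(M):
        A = s[i] * np.outer(U[:, i], Vt[i, :])   # rank-one matrix A_i, Eq. (6)
        # Eq. (7): first row of A_i, then the last column (rows 2..M)
        c = np.concatenate([A[0, :], A[1:, -1]])
        comps.append(c)
    return np.array(comps), s
```

Because the A_i sum to H, the extracted components sum back to the original series.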

The energy of the obtained components is computed with

E_i = \frac{s_i^2}{\sum_{i=1}^{M} s_i^2},    (8)

where E_i is the energy of the ith component and s_i is the ith singular value. When M > 2, the component C_H is computed as the sum of the components from 2 to M as follows:

C_H = \sum_{i=2}^{M} C_i.    (9)
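The energy ratio of Eq. (8) and the low/high split of Eq. (9) amount to a few array operations; a sketch (the function name is ours):

```python
import numpy as np

def split_low_high(comps, s):
    """Energy per component (Eq. (8)); C_L is the first component and
    C_H the sum of components 2..M (Eq. (9))."""
    E = s**2 / np.sum(s**2)        # relative energy of each component
    CL = comps[0]                  # low-frequency (trend) component
    CH = comps[1:].sum(axis=0)     # high-frequency component
    return E, CL, CH
```

In the paper the energies justify keeping only M = 2 components, so C_H reduces to the second component.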

3. Proposed Forecasting Models

3.1. Autoregressive Integrated Moving Average Model. The ARIMA model is the generalization of the ARMA model; ARIMA processes are applied to nonstationary time series to convert them into stationary ones. In an ARIMA(P, D, Q) process, D is a nonnegative integer that determines the order of differencing, and P and Q are the degrees of the autoregressive and moving average polynomials, respectively [40].

The transformation process to obtain a stationary time series from a nonstationary one is developed by means of differentiation: the time series x_t is nonstationary of order d if \Delta^d x_t is stationary. The transformation process is

\Delta x_t = x_t - x_{t-1},    (10a)

\Delta^{j+1} x_t = \Delta^j x_t - \Delta^j x_{t-1},    (10b)

where x is the time series, t is the time instant, and j is the number of differentiations; the process is iterative. Once the stationary time series is obtained, the estimation is computed with

\hat{x}_t = \sum_{i=1}^{P} \alpha_i z_{t-i} + \sum_{i=1}^{Q} \beta_i e_{t-i} + e_t,    (11)

where \alpha_i represents the coefficients of the AR terms of order P, \beta_i denotes the coefficients of the MA terms of order Q, z is the input regressor vector defined in Section 3.2, and e is a source of randomness called white noise. The coefficients \alpha_i and \beta_i are estimated using the maximum likelihood estimation (MLE) algorithm [40].
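Once the coefficients have been fitted, the one-step-ahead estimate of Eq. (11) is a pair of dot products. A sketch with hypothetical coefficients (the paper fits them by MLE; the values below are illustrative only), where the unknown innovation e_t is set to zero for forecasting:

```python
import numpy as np

# Hypothetical coefficients for illustration; in the paper they come from MLE.
alpha = np.array([0.5, 0.3])   # AR coefficients, P = 2
beta = np.array([0.2])         # MA coefficients, Q = 1

def arma_one_step(z_lags, e_lags):
    """One-step-ahead estimate per Eq. (11), with the future e_t set to 0.
    z_lags = [z_{t-1}, ..., z_{t-P}], e_lags = [e_{t-1}, ..., e_{t-Q}]."""
    return float(np.dot(alpha, z_lags) + np.dot(beta, e_lags))
```

In practice a library such as statsmodels would estimate alpha and beta; the point here is only the structure of Eq. (11).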

3.2. Neural Network Forecasting Model. The ANN has a common structure of three layers [41]: the inputs are the lagged terms contained in the regressor vector z; at the hidden layer the sigmoid transfer function is applied; and at the output layer the forecasted value is obtained. The ANN output is

\hat{x}(n) = \sum_{j=1}^{Q} v_j h_j,    (12a)

h_j = f\left[\sum_{i=1}^{K} w_{ji} z_i(n)\right],    (12b)

where \hat{x} is the estimated value, n is the time instant, Q is the number of hidden nodes, v_j and w_{ji} are the linear and nonlinear weights of the ANN connections, respectively, z_i represents the ith lagged term, and f(\cdot) is the sigmoid transfer function denoted by

f(x) = \frac{1}{1 + e^{-x}}.    (13)
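The forward pass of Eqs. (12a)–(13) can be sketched directly in NumPy (an illustrative implementation; names are ours):

```python
import numpy as np

def sigmoid(x):
    """Eq. (13)."""
    return 1.0 / (1.0 + np.exp(-x))

def ann_forward(z, W, v):
    """ANN(K, Q, 1) output per Eqs. (12a)-(12b).
    z: K-vector of lagged inputs; W: Q x K hidden weights w_ji;
    v: Q-vector of linear output weights v_j."""
    h = sigmoid(W @ z)     # hidden layer activations h_j
    return float(v @ h)    # linear output layer, x_hat
```

Training then reduces to choosing W and v to minimize the forecast error, which is what the PSO and RPROP algorithms below do.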


The lagged terms are the input of the ANN; they are contained in the regressor vector z, whose representation for MA smoothing is

z(t) = [s(t-1), s(t-2), \ldots, s(t-K)],    (14)

where K = P lagged terms, and P and Q were defined in Section 3.1.

The representation of z for HSVD smoothing is

z(t) = [C_L(t-1), \ldots, C_L(t-K), C_H(t-1), \ldots, C_H(t-K)],    (15)

where K = 2P lagged terms. The ANN is denoted by ANN(K, Q, 1), with K inputs, Q hidden nodes, and 1 output. The parameters v and w are updated with the application of two learning algorithms: one based on PSO and the other on RPROP.
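Building the regressor vectors of Eqs. (14)–(15) from 0-based arrays can be sketched as follows (illustrative helper names; lags are ordered most recent first, as in the equations):

```python
import numpy as np

def regressor_ma(s, t, K):
    """Eq. (14): K lagged smoothed values [s(t-1), ..., s(t-K)]."""
    return s[t - K:t][::-1]

def regressor_hsvd(CL, CH, t, K):
    """Eq. (15): K lags of each of the low- and high-frequency
    components, giving 2K input terms."""
    return np.concatenate([CL[t - K:t][::-1], CH[t - K:t][::-1]])
```

Each training pair is then (z(t), target value at t).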

3.2.1. Learning Algorithm Based on PSO. The weights of the ANN connections, w and v, are adjusted with the PSO learning algorithm. In the swarm, the N_p particles have a position vector X_i = (X_{i1}, X_{i2}, ..., X_{iD}) and a velocity vector V_i = (V_{i1}, V_{i2}, ..., V_{iD}); each particle is considered a potential solution in a D-dimensional search space. During each iteration the particles are accelerated toward the previous best position, denoted by p_{id}, and toward the global best position, denoted by p_{gd}. The swarm has N_p rows and D columns and is initialized randomly; D is computed with P × N_h + N_h, where N_h is the number of hidden nodes. The process finishes when the lowest error is obtained based on the fitness function evaluation, or when the maximum number of iterations is reached [42], as follows:

V_{id}^{l+1} = I^l \times V_{id}^l + c_1 \times rd_1 \times (p_{id}^l - X_{id}^l) + c_2 \times rd_2 \times (p_{gd}^l - X_{id}^l),    (16a)

X_{id}^{l+1} = X_{id}^l + V_{id}^{l+1},    (16b)

I^l = I_{max} - \frac{I_{max} - I_{min}}{iter_{max}} \times l,    (16c)

where i = 1, ..., N_p; d = 1, ..., D; I denotes the inertia weight; c_1 and c_2 are learning factors; rd_1 and rd_2 are positive random numbers in the range [0, 1] under normal distribution; and l is the lth iteration. The inertia weight decreases linearly; I_{max} is the maximum value of inertia, I_{min} is the lowest, and iter_{max} is the total number of iterations.

The particle X_{id} represents the optimal solution, in this case the set of weights w and v for the ANN.
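The update rules of Eqs. (16a)–(16c) can be sketched as a small optimizer. This is an illustrative implementation, not the paper's code; swarm size, iteration count, c_1 = c_2 = 2, I_max = 0.9, I_min = 0.4, and uniform random numbers are assumptions, and a simple quadratic stands in for the ANN training error:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(fitness, D, n_particles=20, iters=100,
                 c1=2.0, c2=2.0, i_max=0.9, i_min=0.4):
    """Minimal PSO per Eqs. (16a)-(16c) with linearly decreasing inertia.
    fitness maps a D-vector (e.g. the ANN weights) to a scalar error."""
    X = rng.uniform(-1, 1, (n_particles, D))
    V = np.zeros((n_particles, D))
    P = X.copy()                                   # personal best positions
    pf = np.array([fitness(x) for x in X])         # personal best fitness
    g = P[pf.argmin()].copy()                      # global best position
    for l in range(iters):
        I = i_max - (i_max - i_min) / iters * l    # Eq. (16c)
        r1, r2 = rng.random((2, n_particles, D))
        V = I * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Eq. (16a)
        X = X + V                                  # Eq. (16b)
        f = np.array([fitness(x) for x in X])
        better = f < pf
        P[better], pf[better] = X[better], f[better]
        g = P[pf.argmin()].copy()
    return g, pf.min()

# A quadratic with minimum at 0.5 stands in for the ANN error surface.
best, err = pso_minimize(lambda w: np.sum((w - 0.5) ** 2), D=4)
```

For the ANN, the fitness function would run the forward pass over the training set and return the RMSE.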

3.2.2. Learning Algorithm Based on Resilient Backpropagation. RPROP is an efficient learning algorithm that performs a direct adaptation of the weight step based on local gradient information; it is considered a first-order method. The update rule depends only on the sign of the partial derivative of the error with respect to each weight of the ANN. The individual step size \Delta_{ij} is computed for each weight using this rule [33]:

\Delta_{ij}^{(t)} =
\begin{cases}
\eta^{+} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \dfrac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \dfrac{\partial E}{\partial w_{ij}}^{(t)} > 0 \\
\eta^{-} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \dfrac{\partial E}{\partial w_{ij}}^{(t-1)} \cdot \dfrac{\partial E}{\partial w_{ij}}^{(t)} < 0 \\
\Delta_{ij}^{(t-1)}, & \text{else,}
\end{cases}    (17)

where 0 < \eta^- < 1 < \eta^+. If the partial derivative \partial E/\partial w_{ij} has the same sign for consecutive steps, the step size is slightly increased by the factor \eta^+ in order to accelerate the convergence, whereas if it changes sign the step size is decreased by the factor \eta^-. Additionally, in the case of a change in sign there should be no adaptation in the succeeding step; in practice this can be done by setting \partial E/\partial w_{ij} = 0 in the adaptation rule for \Delta_{ij}. Finally, the weight update and the adaptation are performed after the gradient information of all the weights is computed.
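The per-weight step-size rule of Eq. (17) can be sketched as follows. The factors \eta^+ = 1.2 and \eta^- = 0.5 and the bounds on \Delta are the usual defaults from the RPROP literature, not values given in this paper:

```python
def rprop_step_size(delta_prev, grad_prev, grad,
                    eta_plus=1.2, eta_minus=0.5,
                    delta_max=50.0, delta_min=1e-6):
    """Step-size adaptation of Eq. (17) for a single weight.
    The bounds delta_max/delta_min are standard practice, not in Eq. (17)."""
    prod = grad_prev * grad
    if prod > 0:                                     # same sign: accelerate
        return min(delta_prev * eta_plus, delta_max)
    if prod < 0:                                     # sign change: back off
        return max(delta_prev * eta_minus, delta_min)
    return delta_prev                                # else: keep the step
```

The weight itself then moves by \Delta_{ij} opposite to the sign of the current gradient.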

4. Forecasting Accuracy Metrics

The forecasting accuracy is evaluated with the metrics root mean squared error (RMSE), generalized cross-validation (GCV), mean absolute percentage error (MAPE), and relative error (RE):

RMSE = \sqrt{\frac{1}{N_v} \sum_{i=1}^{N_v} (x_i - \hat{x}_i)^2},

GCV = \frac{RMSE}{(1 - K/N_v)^2},

MAPE = \left[\frac{1}{N_v} \sum_{i=1}^{N_v} \left|\frac{x_i - \hat{x}_i}{x_i}\right|\right] \times 100,

RE = \sum_{i=1}^{N_v} \frac{x_i - \hat{x}_i}{x_i},    (18)

where N_v is the validation (testing) sample size, x_i is the ith observed value, \hat{x}_i is the ith estimated value, and K is the length of the input regressor vector.
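A sketch of the metrics in Eq. (18) (illustrative; the relative error is returned pointwise here, since the paper's relative-error figures plot it per sample):

```python
import numpy as np

def metrics(x, x_hat, K):
    """RMSE, GCV, and MAPE per Eq. (18), plus the pointwise relative error."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    Nv = len(x)
    rmse = np.sqrt(np.mean((x - x_hat) ** 2))
    gcv = rmse / (1 - K / Nv) ** 2
    mape = 100.0 * np.mean(np.abs((x - x_hat) / x))
    re = (x - x_hat) / x          # relative error of each validation point
    return rmse, gcv, mape, re
```

GCV penalizes the RMSE by the ratio of regressor length K to sample size.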

5. Results and Discussions

The data used for forecasting is the time series of people injured in traffic accidents occurring in Valparaíso from 2003 to 2012, obtained from CONASET, Chile [43]. The data sampling period is weekly, with 531 registers, as shown in Figure 2(a). The series was separated for training and testing; by trial and error, 85% for training and 15% for testing were determined.
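The chronological split described above can be sketched as follows (the series here is a placeholder; only the 85/15 proportion comes from the paper):

```python
import numpy as np

x = np.arange(531.0)              # placeholder for the 531 weekly registers
n_train = round(0.85 * len(x))    # 85% of the series, in chronological order
train, test = x[:n_train], x[n_train:]
```

Keeping the split chronological (no shuffling) preserves the forecasting setting.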

5.1. ARIMA Forecasting

5.1.1. Moving Average Smoothing. The raw time series is smoothed using 3-point moving average, whose obtained values are used as input of the forecasting model ARIMA(P, D, Q); this is presented in Figure 1(a). The effective order of the polynomial for the AR terms is found to be P = 9, and the differentiation parameter is found to be D = 0; those values were obtained from the autocorrelation function (ACF) shown in Figure 2(b). To set the order Q of the MA terms, the GCV metric is evaluated against the number of lagged values. The results of the GCV are presented in Figure 3(a); the lowest GCV is achieved with 10 lagged values. Therefore, the configuration of the model is denoted by MA-ARIMA(9,0,10).

Figure 2: Accidents time series: (a) raw data and (b) autocorrelation function.
Figure 3: GCV versus lagged values: (a) MA smoothing and (b) HSVD smoothing.
Figure 4: MA-ARIMA(9,0,10): (a) observed versus estimated and (b) relative error.

The evaluation executed in the testing stage is presented in Figures 4 and 5(a) and Table 1. The observed values versus the estimated values are illustrated in Figure 4(a), reaching a good accuracy, while the relative error is presented in Figure 4(b), which shows that 87% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 5(a); it shows that the ACF for a lag of 16 is slightly beyond the 95% confidence limit; however, the rest of the coefficients are inside the confidence limits. Therefore, in the errors of the model MA-ARIMA(9,0,10) there is no serial correlation, and we can conclude that the proposed model explains efficiently the variability of the process.
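The residual ACF check used above can be sketched as follows (illustrative; the 95% band for white noise is the usual ±1.96/√N approximation):

```python
import numpy as np

def residual_acf(e, max_lag=20):
    """Sample autocorrelation of the residuals and the 95% confidence limit."""
    e = np.asarray(e, float) - np.mean(e)
    denom = np.dot(e, e)
    acf = np.array([np.dot(e[:-k], e[k:]) / denom
                    for k in range(1, max_lag + 1)])
    limit = 1.96 / np.sqrt(len(e))   # approximate 95% band for white noise
    return acf, limit
```

A model passes the check when every coefficient stays inside ±limit.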

5.1.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated. To implement this strategy, first the time series is mapped using the Hankel matrix; then the SVD process is executed to obtain the M components. The value of M is found through the computation of the singular values of the decomposition, presented in Figure 6(a); as shown there, the major quantity of energy is captured by the first two components; therefore in this work only two components have been selected, with M = 2. The first component extracted represents the long-term trend (C_L) of the time series, while the second represents the short-term component of high-frequency fluctuation (C_H). The components C_L and C_H are shown in Figures 6(b) and 6(c), respectively.

Figure 5: Residual ACF: (a) MA-ARIMA(9,0,10) and (b) HSVD-ARIMA(9,0,11).
Figure 6: Accidents time series: (a) components energy, (b) low frequency component, and (c) high frequency component.

Table 1: Forecasting with ARIMA.

            MA-ARIMA    HSVD-ARIMA
RMSE        0.0034      0.00073
MAPE        1.12%       0.26%
GCV         0.006       0.0013
RE ± 1.5%   87%         —
RE ± 0.5%   —           95%

To evaluate the model in this section, P = 9 and D = 0 are used, and Q is evaluated using the GCV metric for 1 ≤ Q ≤ 18; the effective value Q = 11 is found, as shown in Figure 3(b); therefore the forecasting model is denoted by HSVD-ARIMA(9,0,11).

Once P and Q are found, the forecasting is executed with the testing data set, and the results of HSVD-ARIMA(9,0,11) are shown in Figures 7(a), 7(b), and 5(b) and Table 1. Figure 7(a) shows the observed values versus the estimated values, and a good adjustment between them is found. The relative errors are presented in Figure 7(b); 95% of the points present an error lower than ±0.5%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 5(b); it shows that all the coefficients are inside the confidence limit; therefore in the model errors there is no serial correlation, and we can conclude that the proposed model HSVD-ARIMA(9,0,11) explains efficiently the variability of the process.

The results presented in Table 1 show that the major accuracy is achieved with the model HSVD-ARIMA(9,0,11), with an RMSE of 0.00073 and a MAPE of 0.26%; 95% of the points have a relative error lower than ±0.5%.

5.2. ANN Forecasting Model Based on PSO

5.2.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 8 and 9(a) and Table 2. The observed values versus the estimated values are illustrated in Figure 8(a), reaching a good accuracy, while the relative error is presented in Figure 8(b), which shows that 85% of the points present an error lower than ±15%.

Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated and (b) relative error.
Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated and (b) relative error.
Figure 9: Residual ACF: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

            MA-ANN-PSO    HSVD-ANN-PSO
RMSE        0.04145       0.0123
MAPE        15.51%        5.45%
GCV         0.053         0.022
RE ± 15%    85%           —
RE ± 4%     —             95%

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 9(a); it shows that there are values with significant differences from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore, in the residuals there is serial correlation; this implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a). Figure 10(b) presents the RMSE metric for the best run.

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration



Figure 10: MA-ANN-PSO(9,10,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.


Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated values and (b) relative error.


Figure 12: HSVD-ANN-PSO(9,11,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.

explained in Section 5.1.2; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 9(b); it shows that all the coefficients are inside the 95% confidence limit and statistically are equal to zero; therefore there is no serial correlation in the model errors. We can conclude that the proposed model HSVD-ANN-PSO(9,11,1) efficiently explains the variability of the process.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a). Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the highest accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as the input of the forecasting



Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values and (b) relative error.


Figure 14: Residual ACF: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

             MA-ANN-RPROP    HSVD-ANN-RPROP
RMSE         0.0384          0.024
MAPE (%)     12.25           8.08
GCV          0.0695          0.045
RE ±15% (%)  81              —
RE ±4% (%)   —               96

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(a); it shows that there are values significantly different from zero at the 95% confidence limit; for example, the three largest values are obtained at lags of 3, 4, and 7 weeks. Therefore there is serial correlation in the residuals; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a). Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(b); it shows that all the coefficients are inside the confidence limit and statistically are equal to zero; therefore there is no serial correlation in the model errors. We can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) efficiently explains the variability of the process. The process was run 30 times, and the first best result was reached in run 21, as shown in Figure 17(a). Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the highest accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.



Figure 15: MA-ANN-RPROP(9,10,1): (a) run versus fitness for 85 iterations and (b) iteration number for the best run.


Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values and (b) relative error.


Figure 17: HSVD-ANN-RPROP(9,11,1): (a) run versus fitness for 70 iterations and (b) iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

$$\Upsilon = e_1(n) + e_2(n), \quad n = 1, 2, \ldots, N_v, \qquad (19a)$$
$$\Psi = e_1(n) - e_2(n), \quad n = 1, 2, \ldots, N_v, \qquad (19b)$$

where $e_1$ and $e_2$ represent the one-step-ahead forecast errors of model 1 and model 2, respectively. The null hypothesis of equal accuracy is rejected at the 5% significance level if $|\mathrm{Corr}| > 1.96/\sqrt{N_v}$.
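As a minimal sketch, the comparison of Eqs. (19a)-(19b) can be implemented as follows; the function name is ours and the error vectors passed to it are hypothetical, not values from the paper.

```python
import numpy as np

def pitman_test(e1, e2, z_crit=1.96):
    """Pitman's correlation test for pairwise model comparison.

    e1, e2: one-step-ahead forecast errors of model 1 and model 2.
    Returns the correlation between Upsilon = e1 + e2 (Eq. (19a)) and
    Psi = e1 - e2 (Eq. (19b)), and whether it differs significantly
    from zero at the 5% level, i.e. |Corr| > 1.96 / sqrt(Nv).
    """
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    upsilon = e1 + e2                      # Eq. (19a)
    psi = e1 - e2                          # Eq. (19b)
    corr = np.corrcoef(upsilon, psi)[0, 1]
    return corr, abs(corr) > z_crit / np.sqrt(len(e1))
```

When one model's errors are a scaled version of the other's, the correlation approaches ±1 and the difference in accuracy is declared significant.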

The evaluated correlations between Υ and Ψ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model with respect to the rest of the models. The models are presented from left to right, where the first is the best model and the last is the worst.

6. Conclusions

In this paper, two time series smoothing strategies were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition. The strategies were evaluated with the time


Table 4: Pitman's correlation (Corr) for pairwise comparison of the six models at 5% significance; the critical value is 0.2219.

Models                  M1   M2        M3        M4        M5        M6
M1 = HSVD-ARIMA         —    −0.9146   −0.9931   −0.9983   −0.9994   −0.9993
M2 = MA-ARIMA           —    —         −0.8676   −0.9648   −0.9895   −0.9887
M3 = HSVD-ANN-PSO       —    —         —         −0.6645   −0.8521   −0.8216
M4 = HSVD-ANN-RPROP     —    —         —         —         −0.5129   −0.4458
M5 = MA-ANN-RPROP       —    —         —         —         —         0.1623
M6 = MA-ANN-PSO         —    —         —         —         —         —

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six implemented models shows that the best model is HSVD-ARIMA, as it obtained the highest accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the difference in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.
[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.
[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.
[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.
[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.
[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.
[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.
[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research Part A: Policy and Practice, vol. 52, pp. 64–72, 2013.
[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.
[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.
[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.
[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.
[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.
[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.
[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.
[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.
[17] W. X. Yang and P. W. Tse, "Development of an advanced noise reduction method for vibration analysis based on singular value decomposition," NDT & E International, vol. 36, pp. 419–432, 2003.
[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.
[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.
[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.
[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.
[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.
[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.
[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.
[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.
[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.
[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.
[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.
[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.
[31] L. F. de Mingo López, N. Gómez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.
[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.
[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.
[34] C. Igel and M. Hüsken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.
[35] G. P. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.
[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.
[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.
[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.
[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.
[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.
[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.
[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.
[43] CONASET, 2014, http://www.conaset.cl.
[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.





Figure 1: Smoothing strategies: (a) moving average and (b) Hankel singular value decomposition.

solving problems of different fields; the capability of learning of the ANN is determined by the algorithm. Particle swarm optimization (PSO) is a population-based algorithm that has been found to be optimal; it is based on the behaviour of a swarm and is applied to update the connection weights of the ANN. Some modifications of PSO have been evaluated based on variants of the acceleration coefficients [22]; others apply the adaptation of the inertia weight [23–26]; also, adaptive mechanisms for both the inertia weight and the acceleration coefficients, based on the behaviour of the particle at each iteration, have been used [27, 28]. The combination ANN-PSO has improved the forecasting over some classical algorithms like backpropagation (BP) [29–31] and least mean square (LMS) [32]. Another learning algorithm that has been shown to be better than backpropagation is RPROP, which is also noted for its robustness, easy implementation, and fast convergence with respect to the conventional BP [33, 34].

The linear and nonlinear models may be inadequate in some forecasting problems; consequently they are not considered universal models, and the combination of linear and nonlinear models could capture different forms of relationships in the time series data. The Zhang hybrid methodology, which combines both ARIMA and ANN models, is an effective way to improve forecasting accuracy: the ARIMA model is used to analyze the linear part of the problem, and the ANN models the residuals from the ARIMA model [35]; this model has been applied for demand forecasting [36]. However, some researchers believe that some of Zhang's assumptions can degenerate the hybrid methodology when the opposite situation occurs. Khashei proposes a methodology that combines the linear and nonlinear models without the assumptions of the traditional Zhang hybrid methodology, in order to yield a more general and more accurate forecasting model [37].

Based on the arguments presented, in this work two smoothing strategies to potentiate the preprocessing stage of time series forecasting are proposed. 3-point MA and HSVD are used to smooth the time series; the smoothed values are forecasted with three models: the first is based on the ARIMA model, the second is an ANN based on PSO, and the third is an ANN based on RPROP. The models are evaluated using the time series of injured people in traffic accidents occurring in Valparaíso, Chilean region, from 2003 to 2012, with 531 weekly registers. The smoothing strategies and the forecasting models are combined, and six models are obtained and compared to determine the model that gives the highest accuracy. The paper is structured as follows: Section 2 describes the smoothing strategies; Section 3 explains the proposed forecasting models; Section 4 presents the forecasting accuracy metrics; Section 5 presents the results and discussions; the conclusions are shown in Section 6.

2. Smoothing Strategies

2.1. Moving Average. Moving average is a smoothing strategy used in linear filtering to identify or extract the trend from a time series. MA is a mean of a constant number of observations that can be used to describe a series that does not exhibit a trend [38]. When 3-point MA is applied over a time series of length n, the n − 2 inner elements of the smoothed series are computed with

$$s_k = \frac{1}{3}\sum_{i=k-1}^{k+1} x_i, \qquad (1)$$

where s_k is the kth smoothed element for k = 2, …, n − 1, x_i is each observed element of the original time series, and the terms s_1 and s_n take the same values as x_1 and x_n, respectively. The smoothed values given by 3-point MA will be used by the estimation process through the selected technique (ARIMA or ANN); this strategy is illustrated in Figure 1(a).
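As a minimal sketch, the smoothing of Eq. (1) can be implemented as follows; the function name is ours, and the endpoint convention (s_1 = x_1, s_n = x_n) follows the text above.

```python
import numpy as np

def ma3_smooth(x):
    """3-point moving average smoothing, Eq. (1):
    s_k = (x_{k-1} + x_k + x_{k+1}) / 3 for k = 2, ..., n-1,
    while the endpoints keep the original values x_1 and x_n."""
    x = np.asarray(x, dtype=float)
    s = x.copy()
    s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return s
```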

2.2. Hankel Singular Value Decomposition. The proposed strategy HSVD is implemented during the preprocessing stage in two steps: embedding and decomposition. The time series is embedded in a trajectory matrix, to which the structure of the Hankel matrix is applied; the decomposition process


extracts the low- and high-frequency components of the mentioned matrix by means of SVD; the smoothed values given by HSVD are used by the estimation process, and this strategy is illustrated in Figure 1(b).

The original time series is represented by x; H is the Hankel matrix; U, S, V are the elements obtained with SVD and will be detailed further ahead; C_L is the low-frequency component; C_H is the high-frequency component; x̂ is the forecasted time series; and e_r is the error computed between x and x̂ with

$$e_r = x - \hat{x}. \qquad (2)$$

2.2.1. Embedding the Time Series. The embedding process is illustrated as follows:

$$H_{M\times L} = \begin{bmatrix} x_1 & x_2 & \cdots & x_L \\ x_2 & x_3 & \cdots & x_{L+1} \\ \vdots & \vdots & & \vdots \\ x_M & x_{M+1} & \cdots & x_n \end{bmatrix}, \qquad (3)$$

where H is a real matrix whose structure is the Hankel matrix; x_1, …, x_n are the original values of the time series; M is the number of rows of H and also the number of components that will be obtained with SVD; L is the number of columns of H; and n is the length of the time series. The value of L is computed with

$$L = n - M + 1. \qquad (4)$$
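The embedding of Eqs. (3)-(4) can be sketched as below; the function name is ours.

```python
import numpy as np

def hankel_embed(x, M):
    """Embed a series of length n into an M x L Hankel matrix, Eq. (3),
    where row i holds x_{i+1}, ..., x_{i+L} and L = n - M + 1, Eq. (4)."""
    x = np.asarray(x, dtype=float)
    L = len(x) - M + 1
    return np.array([x[i:i + L] for i in range(M)])
```

For M = 2 and n = 6 the matrix has L = 5 columns, and every anti-diagonal is constant, as required for a Hankel matrix.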

2.2.2. Singular Value Decomposition. The SVD process is applied to the matrix H obtained in the last subsection. Let H be an M × N real matrix; then there exist an M × M orthogonal matrix U, an N × N orthogonal matrix V, and an M × N diagonal matrix S with diagonal entries s_1 ≥ s_2 ≥ ⋯ ≥ s_p, with p = min(M, N), such that U^T H V = S. Moreover, the numbers s_1, s_2, …, s_p are uniquely determined by H [39]:

$$H = U \times S \times V^T. \qquad (5)$$

The extraction of the components is developed through the singular values s_i, the orthogonal matrix U, and the orthogonal matrix V; for each singular value one matrix A_i is obtained, with i = 1, …, M:

$$A_i = s_i \times U(:, i) \times V(:, i)^T. \qquad (6)$$

Therefore the matrix A_i contains the ith component; the extraction process is

$$C_i = \left[A_i(1, :),\; A_i(2\!:\!M, N)^T\right], \qquad (7)$$

where C_i is the ith component and the elements of C_i are located in the first row and last column of A_i.

The energy of the obtained components is computed with

$$E_i = \frac{s_i^2}{\sum_{j=1}^{M} s_j^2}, \qquad (8)$$

where E_i is the energy of the ith component and s_i is the ith singular value. When M > 2, the component C_H is computed as the sum of the components from 2 to M as follows:

$$C_H = \sum_{i=2}^{M} C_i. \qquad (9)$$

3. Proposed Forecasting Models

3.1. Autoregressive Integrated Moving Average Model. The ARIMA model is the generalization of the ARMA model; ARIMA processes are applied on nonstationary time series to convert them into stationary ones. In an ARIMA(P, D, Q) process, D is a nonnegative integer that determines the order of differencing, and P and Q are the degrees of the autoregressive and moving average polynomials [40].

The transformation to obtain a stationary time series from a nonstationary one is developed by means of differentiation: the time series x_t will be nonstationary of order d if x̃_t = Δ^d x_t is stationary; the transformation process is

$$\Delta x_t = x_t - x_{t-1}, \qquad (10a)$$
$$\Delta^{j+1} x_t = \Delta^j x_t - \Delta^j x_{t-1}, \qquad (10b)$$

where x is the time series, t is the time instant, and j is the number of differentiations; the process is iterative. Once the stationary time series is obtained, the estimation is computed with

$$\hat{x}_t = \sum_{i=1}^{P} \alpha_i z_{t-i} + \sum_{i=1}^{Q} \beta_i e_{t-i} + e_t, \qquad (11)$$

where α_i represents the coefficients of the AR terms of order P, β_i denotes the coefficients of the MA terms of order Q, z is the input regressor vector, which is defined in Section 3.2, and e is a source of randomness called white noise. The coefficients α_i and β_i are estimated using the maximum likelihood estimation (MLE) algorithm [40].
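The iterative differencing of Eqs. (10a)-(10b) amounts to repeated first differences; a sketch follows (function name ours).

```python
import numpy as np

def difference(x, d=1):
    """Apply the differencing of Eqs. (10a)-(10b) d times,
    shortening the series by one element per pass."""
    x = np.asarray(x, dtype=float)
    for _ in range(d):
        x = x[1:] - x[:-1]   # Eq. (10a)
    return x
```

A quadratic trend, for example, becomes constant (stationary) after two passes.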

3.2. Neural Network Forecasting Model. The ANN has a common structure of three layers [41]: the inputs are the lagged terms contained in the regressor vector z; at the hidden layer the sigmoid transfer function is applied; and at the output layer the forecasted value is obtained. The ANN output is

$$\hat{x}(n) = \sum_{j=1}^{Q} v_j h_j, \qquad (12a)$$
$$h_j = f\!\left[\sum_{i=1}^{K} w_{ji} z_i(n)\right], \qquad (12b)$$

where x̂ is the estimated value, n is the time instant, Q is the number of hidden nodes, v_j and w_{ji} are the linear and nonlinear weights of the ANN connections, respectively, z_i represents the ith lagged term, and f(·) is the sigmoid transfer function, denoted by

$$f(x) = \frac{1}{1 + e^{-x}}. \qquad (13)$$


The lagged terms are the input of the ANN; they are contained in the regressor vector z, whose representation for MA smoothing is

$$z(t) = [s(t-1), s(t-2), \ldots, s(t-K)], \qquad (14)$$

where K = P lagged terms, and P and Q were defined in Section 3.1.

The representation of z for HSVD smoothing is

$$z(t) = [C_L(t-1), \ldots, C_L(t-K),\; C_H(t-1), \ldots, C_H(t-K)], \qquad (15)$$

where K = 2P lagged terms.

The ANN is denoted by ANN(K, Q, 1), with K inputs, Q hidden nodes, and 1 output. The parameters v and w are updated with the application of two learning algorithms: one based on PSO and the other on RPROP.
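The forward pass of Eqs. (12a)-(13) for an ANN(K, Q, 1) can be sketched as follows; the function name and the weight-matrix layout (Q × K) are our assumptions.

```python
import numpy as np

def ann_forward(z, W, v):
    """One-step-ahead ANN output.

    z: regressor vector of K lagged values, Eq. (14) or (15)
    W: Q x K matrix of nonlinear (input-to-hidden) weights w_ji
    v: Q-vector of linear (hidden-to-output) weights v_j
    """
    h = 1.0 / (1.0 + np.exp(-(W @ z)))   # Eqs. (12b) and (13)
    return float(v @ h)                  # Eq. (12a)
```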

321 Learning Algorithm Based on PSO The weight of theANN connections 119908 and V are adjusted with PSO learningalgorithm In the swarm the 119873119901 particles have a positionvector 119883119894 = (1198831198941 1198831198942 119883119894119863) and a velocity vector 119881119894 =(1198811198941 1198811198942 119881119894119863) each particle is considered a potential solu-tion in a 119863-dimensional search space During each iterationthe particles are accelerated toward the previous best positiondenoted by 119901119894119889 and toward the global best position denotedby 119901119892119889 The swarm has 119873119901 rows and 119863 columns and it isinitialized randomly 119863 is computed with 119875 times 119873ℎ + 119873ℎthe process finishes when the lowest error is obtained basedon the fitness function evaluation or when the maximumnumber of iterations is reached [42] as follows

V_id^(l+1) = I^l · V_id^l + c1 · rd1 · (p_id^l − X_id^l) + c2 · rd2 · (p_gd^l − X_id^l),  (16a)

X_id^(l+1) = X_id^l + V_id^(l+1),  (16b)

I^l = I_max − ((I_max − I_min) / iter_max) · l,  (16c)

where i = 1, …, N_p; d = 1, …, D; I denotes the inertia weight; c1 and c2 are learning factors; rd1 and rd2 are positive random numbers in the range [0, 1] under a normal distribution; and l is the lth iteration. The inertia weight decreases linearly: I_max is the maximum value of the inertia, I_min is the lowest, and iter_max is the total number of iterations.

The particle X_id represents the optimal solution, in this case the set of weights w and v for the ANN.
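The update rules (16a)–(16c) can be sketched as follows; the swarm size, iteration count, learning factors, and initialization bounds below are illustrative assumptions rather than the paper's calibrated settings, and `fitness` would in practice decode a particle into the weights w and v and return the training RMSE:

```python
import numpy as np

def pso_minimize(fitness, D, Np=30, iters=200, c1=2.0, c2=2.0,
                 i_max=0.9, i_min=0.4, seed=0):
    """Minimal PSO sketch following (16a)-(16c); constants are illustrative."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, (Np, D))      # swarm: Np rows, D columns
    V = np.zeros((Np, D))                    # velocities
    p_best = X.copy()                        # previous best positions p_id
    p_cost = np.array([fitness(x) for x in X])
    g_best = p_best[p_cost.argmin()].copy()  # global best position p_gd
    for l in range(iters):
        I = i_max - (i_max - i_min) / iters * l          # (16c): linear inertia
        rd1 = rng.random((Np, D))
        rd2 = rng.random((Np, D))
        V = I * V + c1 * rd1 * (p_best - X) + c2 * rd2 * (g_best - X)  # (16a)
        X = X + V                                        # (16b)
        cost = np.array([fitness(x) for x in X])
        improved = cost < p_cost
        p_best[improved] = X[improved]
        p_cost[improved] = cost[improved]
        g_best = p_best[p_cost.argmin()].copy()
    return g_best, float(p_cost.min())
```

A run on a simple convex fitness (e.g. the sphere function) shows the swarm collapsing onto the minimum within a few hundred iterations.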

3.2.2. Learning Algorithm Based on Resilient Backpropagation. RPROP is an efficient learning algorithm that performs a direct adaptation of the weight step based on local gradient information; it is considered a first-order method. The update rule depends only on the sign of the partial derivative of the arbitrary error with respect to each weight of the ANN. The individual step size Δ_ij is computed for each weight using this rule [33], as follows:

Δ_ij^(t) = { η+ · Δ_ij^(t−1),  if (∂E/∂w_ij)^(t−1) · (∂E/∂w_ij)^(t) > 0,
           { η− · Δ_ij^(t−1),  if (∂E/∂w_ij)^(t−1) · (∂E/∂w_ij)^(t) < 0,
           { Δ_ij^(t−1),       otherwise,  (17)

where 0 < η− < 1 < η+. If the partial derivative ∂E/∂w_ij has the same sign for consecutive steps, the step size is slightly increased by the factor η+ in order to accelerate the convergence, whereas if it changes sign, the step size is decreased by the factor η−. Additionally, in the case of a change in sign, there should be no adaptation in the succeeding step; in practice this can be done by setting ∂E/∂w_ij = 0 in the adaptation rule for Δ_ij. Finally, the weight update and the adaptation are performed after the gradient information of all the weights is computed.
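Rule (17) can be sketched as one vectorized iteration over an array of weights; this is the variant without weight backtracking, and the step bounds and the factors η+ = 1.2, η− = 0.5 are the common defaults from [33], not values stated in this excerpt:

```python
import numpy as np

def rprop_update(w, grad, grad_prev, step, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RPROP iteration following rule (17) for an array of weights."""
    sign_change = grad * grad_prev
    # (17): grow the step while the gradient keeps its sign, shrink on a change
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # suppress adaptation in the step after a sign change (grad set to 0)
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step   # weight moves against the gradient sign
    return w, grad, step
```

Iterating this update on a simple quadratic error drives the weight toward the minimum, with the step size expanding on long descents and contracting after each overshoot.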

4. Forecasting Accuracy Metrics

The forecasting accuracy is evaluated with the metrics root mean squared error (RMSE), generalized cross validation (GCV), mean absolute percentage error (MAPE), and relative error (RE):

RMSE = sqrt( (1/N_v) · Σ_{i=1..N_v} (x_i − x̂_i)² ),

GCV = RMSE / (1 − K/N_v)²,

MAPE = [ (1/N_v) · Σ_{i=1..N_v} | (x_i − x̂_i) / x_i | ] × 100,

RE = Σ_{i=1..N_v} (x_i − x̂_i) / x_i,  (18)

where N_v is the validation (testing) sample size, x_i is the ith observed value, x̂_i is the ith estimated value, and K is the length of the input regressor vector.
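The four metrics of (18) translate directly to code; the sketch below is a plain transcription assuming x_i ≠ 0 (required by MAPE and RE):

```python
import numpy as np

def forecast_metrics(x, x_hat, K):
    """Accuracy metrics of equation (18).

    x     : observed values (length N_v)
    x_hat : estimated values
    K     : length of the input regressor vector
    """
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    Nv = len(x)
    rmse = np.sqrt(np.mean((x - x_hat) ** 2))
    gcv = rmse / (1.0 - K / Nv) ** 2           # generalized cross validation
    mape = 100.0 * np.mean(np.abs((x - x_hat) / x))
    re = np.sum((x - x_hat) / x)               # signed sum of relative errors
    return rmse, gcv, mape, re
```

Note that RE, as written in (18), is a signed sum, so positive and negative errors can cancel; RMSE and MAPE do not have this property.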

5. Results and Discussions

The data used for forecasting are the time series of people injured in traffic accidents occurring in Valparaíso from 2003 to 2012; they were obtained from CONASET, Chile [43]. The data sampling period is weekly, with 531 registers, as shown in Figure 2(a). The series was separated for training and testing; by trial and error, 85% for training and 15% for testing were determined.

5.1. ARIMA Forecasting

5.1.1. Moving Average Smoothing. The raw time series is smoothed using 3-point moving average, whose obtained

Figure 2: Accidents time series: (a) raw data and (b) autocorrelation function.

Figure 3: Log(GCV) versus lagged values: (a) MA smoothing and (b) HSVD smoothing.

Figure 4: MA-ARIMA(9,0,10): (a) observed versus estimated values and (b) relative error.

values are used as the input of the forecasting model ARIMA(P, D, Q); this is presented in Figure 1(a). The effective order of the polynomial for the AR terms is found to be P = 9, and the differentiation parameter is found to be D = 0; those values were obtained from the autocorrelation function (ACF) shown in Figure 2(b). To set the order Q of the MA terms, the GCV metric is evaluated versus the number of lagged values. The results of the GCV are presented in Figure 3(a); it shows that the lowest GCV is achieved with 10 lagged values. Therefore, the configuration of the model is denoted by MA-ARIMA(9,0,10).
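The first stage, the 3-point moving average smoothing, can be sketched as below; keeping the two end points unsmoothed is an assumption, since the excerpt does not specify the edge handling:

```python
import numpy as np

def moving_average_3(x):
    """Centered 3-point moving average; the end points are kept from the raw
    series (an assumption -- the excerpt does not state the edge handling)."""
    x = np.asarray(x, float)
    s = x.copy()
    s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return s
```

The smoothed series s would then feed the second stage, here the ARIMA(9,0,10) estimation.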

The evaluation executed in the testing stage is presented in Figures 4 and 5(a) and Table 1. The observed values versus the estimated values are illustrated in Figure 4(a), reaching a good accuracy, while the relative error is presented in Figure 4(b), which shows that 87% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 5(a); it shows that the ACF for a lag of 16 is slightly lower than the 95% confidence limit; however, the rest of the coefficients are inside the confidence limit. Therefore, in the errors of the model MA-ARIMA(9,0,10) there is no serial correlation; we can conclude that the proposed model explains efficiently the variability of the process.

5.1.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated. To implement this strategy, in the first instance the time series is mapped using the Hankel matrix; afterwards, the SVD process is executed to obtain

Figure 5: Residual ACF: (a) MA-ARIMA(9,0,10) and (b) HSVD-ARIMA(9,0,11).

Figure 6: Accidents time series: (a) components energy, (b) low frequency component, and (c) high frequency component.

Table 1: Forecasting with ARIMA.

            MA-ARIMA   HSVD-ARIMA
RMSE        0.0034     0.00073
MAPE (%)    1.12       0.26
GCV         0.006      0.0013
RE ± 1.5%   87%        —
RE ± 0.5%   —          95%

the M components. The value of M is found through the computation of the singular values of the decomposition; this is presented in Figure 6(a). As shown in Figure 6(a), the major quantity of energy is captured by the first two components; therefore, in this work only two components have been selected, with M = 2. The first component extracted represents the long-term trend (C_L) of the time series, while the second represents the short-term component of high frequency fluctuation (C_H). The components C_L and C_H are shown in Figures 6(b) and 6(c), respectively.

To evaluate the model in this section, P = 9 and D = 0 are used, and Q is evaluated using the GCV metric for 1 ≤ Q ≤ 18; the effective value Q = 11 is found, as shown in Figure 3(b); therefore, the forecasting model is denoted by HSVD-ARIMA(9,0,11).

Once P and Q are found, the forecasting is executed with the testing data set, and the results of HSVD-ARIMA(9,0,11) are shown in Figures 7(a), 7(b), and 5(b) and Table 1. Figure 7(a) shows the observed values versus the estimated values, and a good adjustment between them is found. The relative errors are presented in Figure 7(b); it shows that 95% of the points present an error lower than ±0.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 5(b); it shows that all the coefficients are inside the confidence limit; therefore, in the model errors there is no serial correlation; we can conclude that the proposed model HSVD-ARIMA(9,0,11) explains efficiently the variability of the process.

The results presented in Table 1 show that the major accuracy is achieved with the model HSVD-ARIMA(9,0,11), with an RMSE of 0.00073 and a MAPE of 0.26%; 95% of the points have a relative error lower than ±0.5%.

5.2. ANN Forecasting Model Based on PSO

5.2.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as the input of the forecasting model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 8 and 9(a) and Table 2. The observed values versus the estimated values are illustrated in Figure 8(a), reaching a good accuracy, while the relative error is presented in Figure 8(b),

Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated values and (b) relative error.

Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated values and (b) relative error.

Figure 9: Residual ACF: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

            MA-ANN-PSO   HSVD-ANN-PSO
RMSE        0.04145      0.0123
MAPE (%)    15.51        5.45
GCV         0.053        0.022
RE ± 15%    85%          —
RE ± 4%     —            95%

which shows that 85% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 9(a); it shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore, in the residuals there is serial correlation; this implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a). Figure 10(b) presents the RMSE metric for the best run.

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration

Figure 10: MA-ANN-PSO(9,10,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.

Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated values and (b) relative error.

Figure 12: HSVD-ANN-PSO(9,11,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.

explained in Section 5.1.2; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 9(b); it shows that all the coefficients are inside the 95% confidence limit and statistically are equal to zero; therefore, in the model errors there is no serial correlation; we can conclude that the proposed model HSVD-ANN-PSO(9,11,1) explains efficiently the variability of the process.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a). Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the major accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as the input of the forecasting

Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values and (b) relative error.

Figure 14: Residual ACF: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

            MA-ANN-RPROP   HSVD-ANN-RPROP
RMSE        0.0384         0.024
MAPE (%)    12.25          8.08
GCV         0.0695         0.045
RE ± 15%    81%            —
RE ± 4%     —              96%

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(a); it shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore, in the residuals there is serial correlation; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a). Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(b); it shows that all the coefficients are inside the confidence limit and statistically are equal to zero; therefore, in the model errors there is no serial correlation; we can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) explains efficiently the variability of the process. The process was run 30 times, and the first best result was reached in run 21, as shown in Figure 17(a). Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the major accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.

Figure 15: MA-ANN-RPROP(9,10,1): (a) run versus fitness for 85 iterations and (b) iteration number for the best run.

Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values and (b) relative error.

Figure 17: HSVD-ANN-RPROP(9,11,1): (a) run versus fitness for 70 iterations and (b) iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

Υ = e_1(n) + e_2(n), n = 1, 2, …, N_v,  (19a)

Ψ = e_1(n) − e_2(n), n = 1, 2, …, N_v,  (19b)

where e_1 and e_2 represent the one-step-ahead forecast errors for model 1 and model 2, respectively. The null hypothesis of equal accuracy is rejected at the 5% significance level if |Corr| > 1.96/√N_v.
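The test of (19a)–(19b) can be sketched as follows; the function name is illustrative, and the error vectors e1 and e2 would come from two competing models evaluated on the same testing set:

```python
import numpy as np

def pitman_test(e1, e2):
    """Pitman's correlation test sketch: correlates U = e1 + e2 against
    P = e1 - e2 and flags significance at the 5% level when
    |Corr| > 1.96 / sqrt(Nv)."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    U, P = e1 + e2, e1 - e2
    corr = float(np.corrcoef(U, P)[0, 1])
    critical = 1.96 / np.sqrt(len(e1))
    return corr, abs(corr) > critical
```

Intuitively, Cov(U, P) = Var(e1) − Var(e2), so a strongly negative correlation indicates that model 2 has the larger error variance, and vice versa.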

The evaluated correlations between Υ and Ψ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model over the rest of the models. The models are listed from left to right, where the first is the best model and the last is the worst.

6. Conclusions

In this paper, two strategies of time series smoothing were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition. The strategies were evaluated with the time

Table 4: Pitman's correlation (Corr) for pairwise comparison of the six models, at 5% significance with critical value 0.2219.

Models                 M1    M2        M3        M4        M5        M6
M1 = HSVD-ARIMA        —   −0.9146   −0.9931   −0.9983   −0.9994   −0.9993
M2 = MA-ARIMA          —     —       −0.8676   −0.9648   −0.9895   −0.9887
M3 = HSVD-ANN-PSO      —     —         —       −0.6645   −0.8521   −0.8216
M4 = HSVD-ANN-RPROP    —     —         —         —       −0.5129   −0.4458
M5 = MA-ANN-RPROP      —     —         —         —         —        0.1623
M6 = MA-ANN-PSO        —     —         —         —         —         —

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six models implemented shows that the best model is HSVD-ARIMA, as it obtained the major accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the difference in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT/Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.

[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research Part A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, "Medium-term electric load forecasting using singular value decomposition," NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.


[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. De Mingo López, N. Gómez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.

[34] C. Igel and M. Hüsken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.

[35] P. G. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.

[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.

[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.

[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.

[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.

[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.

[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.

[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.

[43] CONASET, 2014, http://www.conaset.cl.

[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.


extracts the components of low and high frequency of the mentioned matrix by means of SVD; the smoothed values given by HSVD are used by the estimation process, and this strategy is illustrated in Figure 1(b).

The original time series is represented with $x$; $H$ is the Hankel matrix; $U$, $S$, and $V$ are the elements obtained with SVD, detailed further ahead; $C_L$ is the component of low frequency; $C_H$ is the component of high frequency; $\hat{x}$ is the forecasted time series; and $e_r$ is the error computed between $x$ and $\hat{x}$ with

$e_r = x - \hat{x}$.  (2)

2.2.1. Embedding the Time Series. The embedding process is illustrated as follows:

$H_{M \times L} = \begin{bmatrix} x_1 & x_2 & \cdots & x_L \\ x_2 & x_3 & \cdots & x_{L+1} \\ \vdots & \vdots & & \vdots \\ x_M & x_{M+1} & \cdots & x_n \end{bmatrix}$,  (3)

where $H$ is a real matrix whose structure is the Hankel matrix; $x_1, \ldots, x_n$ are the original values of the time series; $M$ is the number of rows of $H$ and also the number of components that will be obtained with SVD; $L$ is the number of columns of $H$; and $n$ is the length of the time series. Consistently with the structure of (3), the value of $L$ is computed with

$L = n - M + 1$.  (4)
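The embedding step of Eq. (3) can be sketched in a few lines; this is a minimal illustration (the function name and the NumPy-based layout are ours, not the paper's), assuming $L = n - M + 1$ as implied by the matrix structure.

```python
import numpy as np

def embed_hankel(x, M):
    """Map a time series x of length n into an M x L Hankel matrix (Eq. (3))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    L = n - M + 1  # number of columns, Eq. (4)
    H = np.empty((M, L))
    for i in range(M):
        H[i, :] = x[i:i + L]  # row i holds x_{i+1} ... x_{i+L}
    return H

# Example: a short series embedded with M = 2 rows
H = embed_hankel([1, 2, 3, 4, 5], M=2)
# H == [[1, 2, 3, 4],
#       [2, 3, 4, 5]]
```

Each antidiagonal of the resulting matrix holds a single repeated sample, which is the defining property of a Hankel matrix.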

2.2.2. Singular Value Decomposition. The SVD process is implemented over the matrix $H$ obtained in the previous subsection. Let $H$ be an $M \times N$ real matrix; then there exist an $M \times M$ orthogonal matrix $U$, an $N \times N$ orthogonal matrix $V$, and an $M \times N$ diagonal matrix $S$ with diagonal entries $s_1 \ge s_2 \ge \cdots \ge s_p$, with $p = \min(M, N)$, such that $U^T H V = S$. Moreover, the numbers $s_1, s_2, \ldots, s_p$ are uniquely determined by $H$ [39]:

$H = U \times S \times V^T$.  (5)

The extraction of the components is developed through the singular values $s_i$, the orthogonal matrix $U$, and the orthogonal matrix $V$; for each singular value one matrix $A_i$ is obtained, with $i = 1, \ldots, M$:

$A_i = s_i \times U(:, i) \times V(:, i)^T$.  (6)

Therefore the matrix $A_i$ contains the $i$th component; the extraction process is

$C_i = [A_i(1, :),\ A_i(2{:}M, N)^T]$,  (7)

where $C_i$ is the $i$th component and the elements of $C_i$ are located in the first row and last column of $A_i$.

The energy of the obtained components is computed with

$E_i = \dfrac{s_i^2}{\sum_{i=1}^{M} s_i^2}$,  (8)

where $E_i$ is the energy of the $i$th component and $s_i$ is the $i$th singular value. When $M > 2$, the component $C_H$ is computed as the sum of the components from 2 to $M$ as follows:

$C_H = \sum_{i=2}^{M} C_i$.  (9)
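Putting Eqs. (3) and (5)-(8) together, the whole HSVD smoothing step can be sketched as below. This is our own minimal reading of the procedure (function names are illustrative), assuming $M = 2$ as in the paper's experiments, so that the two extracted components sum back to the original series.

```python
import numpy as np

def hsvd_components(x, M=2):
    """Split a series into M additive components via SVD of its Hankel matrix."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    L = n - M + 1
    H = np.array([x[i:i + L] for i in range(M)])     # Hankel embedding, Eq. (3)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)  # Eq. (5)
    comps = []
    for i in range(M):
        A = s[i] * np.outer(U[:, i], Vt[i, :])       # rank-one matrix A_i, Eq. (6)
        # first row plus the last column below it recovers a length-n series, Eq. (7)
        comps.append(np.concatenate([A[0, :], A[1:, -1]]))
    energy = s**2 / np.sum(s**2)                     # Eq. (8)
    return comps, energy

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 10, 50)) + 0.1 * rng.standard_normal(50)
(C_L, C_H), E = hsvd_components(x, M=2)
# With M = 2 the Hankel matrix has rank at most 2, so C_L + C_H reconstructs x
```

The first (highest-energy) component plays the role of the low-frequency trend $C_L$, the second that of the high-frequency fluctuation $C_H$.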

3. Proposed Forecasting Models

3.1. Autoregressive Integrated Moving Average Model. The ARIMA model is the generalization of the ARMA model; ARIMA processes are applied to nonstationary time series to convert them into stationary ones. In an ARIMA($P$, $D$, $Q$) process, $D$ is a nonnegative integer that determines the order of differencing, and $P$ and $Q$ are the degrees of the AR and MA polynomials [40].

The transformation to obtain a stationary time series from a nonstationary one is developed by means of differentiation: the time series $x_t$ is nonstationary of order $d$ if $\Delta^d x_t$ is stationary. The transformation process is

$\Delta x_t = x_t - x_{t-1}$,  (10a)

$\Delta^{j+1} x_t = \Delta^j x_t - \Delta^j x_{t-1}$,  (10b)

where $x$ is the time series, $t$ is the time instant, and $j$ is the number of differentiations applied, because the process is iterative. Once the stationary time series is obtained, the estimation is computed with

$\hat{x}_t = \sum_{i=1}^{P} \alpha_i z_{t-i} + \sum_{i=1}^{Q} \beta_i e_{t-i} + e_t$,  (11)

where $\alpha_i$ represents the coefficients of the AR terms of order $P$ and $\beta_i$ denotes the coefficients of the MA terms of order $Q$; $z$ is the input regressor vector, which is defined in Section 3.2, and $e$ is a source of randomness called white noise. The coefficients $\alpha_i$ and $\beta_i$ are estimated using the maximum likelihood estimation (MLE) algorithm [40].
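The iterative differencing of Eqs. (10a)-(10b) is easy to make concrete; a short sketch (our own helper, not the paper's code) follows.

```python
import numpy as np

def difference(x, d=1):
    """Apply the differencing operator Delta of Eqs. (10a)-(10b) d times."""
    x = np.asarray(x, dtype=float)
    for _ in range(d):
        x = x[1:] - x[:-1]  # Eq. (10a), iterated as in Eq. (10b)
    return x

# A series with a linear trend becomes constant after one differencing
y = difference([2.0, 4.0, 6.0, 8.0, 10.0], d=1)  # -> [2., 2., 2., 2.]
```

Each pass shortens the series by one sample, which is why $D$ is kept as small as the stationarity of $\Delta^D x_t$ allows.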

3.2. Neural Network Forecasting Model. The ANN has a common structure of three layers [41]: the inputs are the lagged terms contained in the regressor vector $z$; at the hidden layer the sigmoid transfer function is applied; and at the output layer the forecasted value is obtained. The ANN output is

$\hat{x}(n) = \sum_{j=1}^{Q} v_j h_j$,  (12a)

$h_j = f\left[\sum_{i=1}^{K} w_{ji} z_i(n)\right]$,  (12b)

where $\hat{x}$ is the estimated value, $n$ is the time instant, $Q$ is the number of hidden nodes, $v_j$ and $w_{ji}$ are the linear and nonlinear weights of the ANN connections, respectively, $z_i$ represents the $i$th lagged term, and $f(\cdot)$ is the sigmoid transfer function denoted by

$f(x) = \dfrac{1}{1 + e^{-x}}$.  (13)
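The forward pass of Eqs. (12a)-(13) can be sketched directly in NumPy; the weight shapes below are an assumption consistent with the $K$-input, $Q$-hidden-node architecture described above.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid transfer function, Eq. (13)."""
    return 1.0 / (1.0 + np.exp(-x))

def ann_forward(z, w, v):
    """One-step-ahead output of an ANN(K, Q, 1), Eqs. (12a)-(12b).

    z : (K,) regressor vector of lagged terms
    w : (Q, K) nonlinear hidden-layer weights w_ji
    v : (Q,) linear output weights v_j
    """
    h = sigmoid(w @ z)  # hidden activations h_j, Eq. (12b)
    return v @ h        # scalar forecast, Eq. (12a)

rng = np.random.default_rng(0)
K, Q = 9, 10  # the configuration used later in the experiments
z = rng.normal(size=K)
w = rng.normal(size=(Q, K))
v = rng.normal(size=Q)
x_hat = ann_forward(z, w, v)  # a single scalar forecast
```

Since each $h_j$ lies in $(0, 1)$, the output magnitude is bounded by the sum of the absolute linear weights.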


The lagged terms are the input of the ANN, and they are contained in the regressor vector $z$, whose representation for MA smoothing is

$z(t) = [s(t-1), s(t-2), \ldots, s(t-K)]$,  (14)

where $K = P$ lagged terms, and $P$ and $Q$ were defined in Section 3.1.

The representation of $z$ for HSVD smoothing is

$z(t) = [C_L(t-1), \ldots, C_L(t-K), C_H(t-1), \ldots, C_H(t-K)]$,  (15)

where $K = 2P$ lagged terms. The ANN is denoted by ANN($K$, $Q$, 1), with $K$ inputs, $Q$ hidden nodes, and 1 output. The parameters $v$ and $w$ are updated with the application of two learning algorithms, one based on PSO and the other on RPROP.

3.2.1. Learning Algorithm Based on PSO. The weights of the ANN connections, $w$ and $v$, are adjusted with the PSO learning algorithm. In the swarm, the $N_p$ particles have a position vector $X_i = (X_{i1}, X_{i2}, \ldots, X_{iD})$ and a velocity vector $V_i = (V_{i1}, V_{i2}, \ldots, V_{iD})$; each particle is considered a potential solution in a $D$-dimensional search space. During each iteration the particles are accelerated toward the previous best position, denoted by $p_{id}$, and toward the global best position, denoted by $p_{gd}$. The swarm has $N_p$ rows and $D$ columns, and it is initialized randomly; $D$ is computed with $P \times N_h + N_h$. The process finishes when the lowest error is obtained based on the fitness function evaluation or when the maximum number of iterations is reached [42], as follows:

$V_{id}^{l+1} = I^l \times V_{id}^l + c_1 \times rd_1 \times (p_{id}^l - X_{id}^l) + c_2 \times rd_2 \times (p_{gd}^l - X_{id}^l)$,  (16a)

$X_{id}^{l+1} = X_{id}^l + V_{id}^{l+1}$,  (16b)

$I^l = I_{\max} - \dfrac{I_{\max} - I_{\min}}{iter_{\max}} \times l$,  (16c)

where $i = 1, \ldots, N_p$; $d = 1, \ldots, D$; $I$ denotes the inertia weight; $c_1$ and $c_2$ are learning factors; $rd_1$ and $rd_2$ are positive random numbers in the range $[0, 1]$; and $l$ is the $l$th iteration. The inertia weight decreases linearly: $I_{\max}$ is the maximum value of inertia, $I_{\min}$ is the lowest, and $iter_{\max}$ is the total number of iterations.

The particle $X_{id}$ represents the optimal solution, in this case the set of weights $w$ and $v$ for the ANN.
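One PSO iteration per Eqs. (16a)-(16c) can be sketched as below. The constants $c_1 = c_2 = 2.0$, $I_{\max} = 0.9$, and $I_{\min} = 0.4$ are illustrative assumptions (the paper does not report its values), and $rd_1$, $rd_2$ are drawn uniformly on $[0, 1]$.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, l, iter_max,
             I_max=0.9, I_min=0.4, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration over the whole swarm, Eqs. (16a)-(16c).

    X, V  : (Np, D) position and velocity matrices
    pbest : (Np, D) per-particle best positions p_id
    gbest : (D,) global best position p_gd
    """
    if rng is None:
        rng = np.random.default_rng()
    I = I_max - (I_max - I_min) / iter_max * l  # linearly decreasing inertia, Eq. (16c)
    rd1 = rng.random(X.shape)
    rd2 = rng.random(X.shape)
    V_new = I * V + c1 * rd1 * (pbest - X) + c2 * rd2 * (gbest - X)  # Eq. (16a)
    X_new = X + V_new                                                # Eq. (16b)
    return X_new, V_new

# Toy swarm: 5 particles in a 3-dimensional search space
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
V = np.zeros((5, 3))
X2, V2 = pso_step(X, V, pbest=X.copy(), gbest=X[0], l=1, iter_max=100, rng=rng)
```

In the paper's setting each particle flattens the full set of ANN weights $w$ and $v$ into one $D$-dimensional vector, and the fitness function is the forecasting RMSE.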

3.2.2. Learning Algorithm Based on Resilient Backpropagation. RPROP is an efficient learning algorithm that performs a direct adaptation of the weight step based on local gradient information; it is considered a first-order method. The update rule depends only on the sign of the partial derivative of the error with respect to each weight of the ANN. The individual step size $\Delta_{ij}$ is computed for each weight using this rule [33], as follows:

$\Delta_{ij}^{(t)} = \begin{cases} \eta^{+} \cdot \Delta_{ij}^{(t-1)} & \text{if } \left(\frac{\partial E}{\partial w_{ij}}\right)^{(t-1)} \cdot \left(\frac{\partial E}{\partial w_{ij}}\right)^{(t)} > 0, \\ \eta^{-} \cdot \Delta_{ij}^{(t-1)} & \text{if } \left(\frac{\partial E}{\partial w_{ij}}\right)^{(t-1)} \cdot \left(\frac{\partial E}{\partial w_{ij}}\right)^{(t)} < 0, \\ \Delta_{ij}^{(t-1)} & \text{otherwise}, \end{cases}$  (17)

where $0 < \eta^{-} < 1 < \eta^{+}$. If the partial derivative $\partial E / \partial w_{ij}$ has the same sign for consecutive steps, the step size is slightly increased by the factor $\eta^{+}$ in order to accelerate the convergence, whereas if it changes sign the step size is decreased by the factor $\eta^{-}$. Additionally, in the case of a change of sign there should be no adaptation in the succeeding step; in practice this can be done by setting $\partial E / \partial w_{ij} = 0$ in the adaptation rule for $\Delta_{ij}$. Finally, the weight update and the adaptation are performed after the gradient information of all the weights is computed.

4. Forecasting Accuracy Metrics

The forecasting accuracy is evaluated with the metrics root mean squared error (RMSE), generalized cross validation (GCV), mean absolute percentage error (MAPE), and relative error (RE):

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N_v} \sum_{i=1}^{N_v} (x_i - \hat{x}_i)^2}$,

$\mathrm{GCV} = \dfrac{\mathrm{RMSE}}{(1 - K/N_v)^2}$,

$\mathrm{MAPE} = \left[\dfrac{1}{N_v} \sum_{i=1}^{N_v} \left|\dfrac{x_i - \hat{x}_i}{x_i}\right|\right] \times 100$,

$\mathrm{RE} = \sum_{i=1}^{N_v} \dfrac{x_i - \hat{x}_i}{x_i}$,  (18)

where $N_v$ is the validation (testing) sample size, $x_i$ is the $i$th observed value, $\hat{x}_i$ is the $i$th estimated value, and $K$ is the length of the input regressor vector.
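The four metrics of Eq. (18) can be computed in one small helper; the function below follows the formulas exactly as written (in particular, GCV divides RMSE, not its square, by the squared factor).

```python
import numpy as np

def forecast_metrics(x, x_hat, K):
    """RMSE, GCV, MAPE, and RE as written in Eq. (18)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    Nv = len(x)
    rmse = np.sqrt(np.mean((x - x_hat) ** 2))
    gcv = rmse / (1.0 - K / Nv) ** 2        # penalizes large regressor length K
    mape = np.mean(np.abs((x - x_hat) / x)) * 100.0
    re = np.sum((x - x_hat) / x)            # signed, so errors can cancel
    return rmse, gcv, mape, re

rmse, gcv, mape, re = forecast_metrics([1.0, 2.0, 4.0], [1.0, 2.0, 4.0], K=1)
# a perfect forecast drives all four metrics to zero
```

Note that MAPE and RE are undefined when any observed $x_i$ is zero, which is why the series is scaled away from zero before evaluation.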

5. Results and Discussions

The data used for forecasting are the time series of people injured in traffic accidents occurring in Valparaíso from 2003 to 2012; they were obtained from CONASET, Chile [43]. The data sampling period is weekly, with 531 registers, as shown in Figure 2(a). The series was separated into training and testing sets; by trial and error, 85% for training and 15% for testing were determined.

5.1. ARIMA Forecasting

5.1.1. Moving Average Smoothing. The raw time series is smoothed using a 3-point moving average, whose obtained


Figure 2: Accidents time series: (a) raw data (weekly accidents versus time in weeks) and (b) autocorrelation function versus lagged values.

Figure 3: Log(GCV) versus lagged values: (a) MA smoothing and (b) HSVD smoothing.

Figure 4: MA-ARIMA(9,0,10): (a) observed versus estimated values and (b) relative error (%).

values are used as input of the forecasting model ARIMA($P$, $D$, $Q$); this is presented in Figure 1(a). The effective order of the polynomial for the AR terms is found to be $P = 9$, and the differentiation parameter is found to be $D = 0$; those values were obtained from the autocorrelation function (ACF) shown in Figure 2(b). To set the order $Q$ of the MA terms, the GCV metric is evaluated versus the number of lagged values. The results are presented in Figure 3(a); it shows that the lowest GCV is achieved with 10 lagged values. Therefore the configuration of the model is denoted by MA-ARIMA(9,0,10).

The evaluation executed in the testing stage is presented in Figures 4 and 5(a) and Table 1. The observed values versus the estimated values are illustrated in Figure 4(a), reaching a good accuracy, while the relative error is presented in Figure 4(b), which shows that 87% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 5(a); it shows that the ACF for a lag of 16 is slightly below the 95% confidence limit; however, the rest of the coefficients are inside the confidence limit. Therefore, in the errors of the model MA-ARIMA(9,0,10) there is no serial correlation; we can conclude that the proposed model explains efficiently the variability of the process.
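The 3-point moving average used by this strategy can be sketched as follows; the window alignment (a centered window realized via a 'valid' convolution) is an assumption, since the paper does not specify it.

```python
import numpy as np

def moving_average_3(x):
    """3-point moving average smoothing of a time series."""
    return np.convolve(np.asarray(x, dtype=float), np.ones(3) / 3.0, mode='valid')

s = moving_average_3([3.0, 6.0, 9.0, 6.0, 3.0])  # -> [6., 7., 6.]
```

The smoothed values s(t) are exactly the lagged terms that enter the regressor vector of Eq. (14).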

5.1.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated. To implement this strategy, first the time series is mapped using the Hankel matrix; afterwards the SVD process is executed to obtain


Figure 5: Residual ACF versus lagged values: (a) MA-ARIMA(9,0,10) and (b) HSVD-ARIMA(9,0,11).

Figure 6: Accidents time series: (a) components energy, (b) low frequency component ($C_L$), and (c) high frequency component ($C_H$).

Table 1: Forecasting with ARIMA.

                     MA-ARIMA    HSVD-ARIMA
RMSE                 0.0034      0.00073
MAPE                 1.12%       0.26%
GCV                  0.006       0.0013
RE within ±1.5%      87%         —
RE within ±0.5%      —           95%

the $M$ components. The value of $M$ is found through the computation of the singular values of the decomposition; this is presented in Figure 6(a). As Figure 6(a) shows, the major quantity of energy is captured by the first two components; therefore in this work only two components have been selected, with $M = 2$. The first component extracted represents the long-term trend ($C_L$) of the time series, while the second represents the short-term component of high-frequency fluctuation ($C_H$). The components $C_L$ and $C_H$ are shown in Figures 6(b) and 6(c), respectively.

To evaluate the model in this section, $P = 9$ and $D = 0$ are used, and $Q$ is evaluated using the GCV metric for $1 \le Q \le 18$; the effective value $Q = 11$ is found, as shown in Figure 3(b). Therefore the forecasting model is denoted by HSVD-ARIMA(9,0,11).

Once $P$ and $Q$ are found, the forecasting is executed with the testing data set, and the results of HSVD-ARIMA(9,0,11) are shown in Figures 7(a), 7(b), and 5(b) and Table 1. Figure 7(a) shows the observed values versus the estimated values, and a good adjustment between them is found. The relative errors are presented in Figure 7(b); it shows that 95% of the points present an error lower than ±0.5%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 5(b); it shows that all the coefficients are inside the confidence limit; therefore in the model errors there is no serial correlation. We can conclude that the proposed model HSVD-ARIMA(9,0,11) explains efficiently the variability of the process.

The results presented in Table 1 show that the major accuracy is achieved with the model HSVD-ARIMA(9,0,11), with an RMSE of 0.00073 and a MAPE of 0.26%; 95% of the points have a relative error lower than ±0.5%.

5.2. ANN Forecasting Model Based on PSO

5.2.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network, and then an ANN($K$, $Q$, 1) is used with $K = 9$ inputs (lagged values), $Q = 10$ hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 8 and 9(a) and Table 2. The observed values versus the estimated values are illustrated in Figure 8(a), reaching a good accuracy, while the relative error is presented in Figure 8(b),


Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated values and (b) relative error (%).

Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated values and (b) relative error (%).

Figure 9: Residual ACF versus lagged values: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

                    MA-ANN-PSO   HSVD-ANN-PSO
RMSE                0.04145      0.0123
MAPE                15.51%       5.45%
GCV                 0.053        0.022
RE within ±15%      85%          —
RE within ±4%       —            95%

which shows that 85% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 9(a); it shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore in the residuals there is serial correlation; this implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a). Figure 10(b) presents the RMSE metric for the best run.

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration


Figure 10: MA-ANN-PSO(9,10,1): (a) run number versus best fitness (RMSE) for 2500 iterations and (b) Log(RMSE) versus iteration number for the best run.

Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated values and (b) relative error (%).

Figure 12: HSVD-ANN-PSO(9,11,1): (a) run number versus best fitness (RMSE) for 2500 iterations and (b) Log(RMSE) versus iteration number for the best run.

explained in Section 5.1.2; then an ANN($K$, $Q$, 1) is used with $K = 9$ inputs (lagged values), $Q = 11$ hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 9(b); it shows that all the coefficients are inside the 95% confidence limit and statistically are equal to zero; therefore in the model errors there is no serial correlation. We can conclude that the proposed model HSVD-ANN-PSO(9,11,1) explains efficiently the variability of the process.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a). Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the major accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting


Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values and (b) relative error (%).

Figure 14: Residual ACF versus lagged values: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

                    MA-ANN-RPROP   HSVD-ANN-RPROP
RMSE                0.0384         0.024
MAPE                12.25%         8.08%
GCV                 0.0695         0.045
RE within ±15%      81%            —
RE within ±4%       —              96%

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN($K$, $Q$, 1) is used with $K = 9$ inputs (lagged values), $Q = 10$ hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 14(a); it shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore in the residuals there is serial correlation; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a). Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN($K$, $Q$, 1) is used with $K = 9$ inputs (lagged values), $Q = 11$ hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors the ACF is applied, whose values are presented in Figure 14(b); it shows that all the coefficients are inside the confidence limit and statistically are equal to zero; therefore in the model errors there is no serial correlation. We can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) explains efficiently the variability of the process. The process was run 30 times, and the first best result was reached in run 21, as shown in Figure 17(a). Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the major accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.


Figure 15: MA-ANN-RPROP(9,10,1): (a) run number versus best fitness (RMSE) for 85 iterations and (b) Log(RMSE) versus iteration number for the best run.

Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values and (b) relative error (%).

Figure 17: HSVD-ANN-RPROP(9,11,1): (a) run number versus best fitness (RMSE) for 70 iterations and (b) Log(RMSE) versus iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between $\Upsilon$ and $\Psi$ is significantly different from zero, where $\Upsilon$ and $\Psi$ are defined by

$\Upsilon = e_1(n) + e_2(n)$, $n = 1, 2, \ldots, N_v$,  (19a)

$\Psi = e_1(n) - e_2(n)$, $n = 1, 2, \ldots, N_v$,  (19b)

where $e_1$ and $e_2$ represent the one-step-ahead forecast errors of model 1 and model 2, respectively. The difference in accuracy is significant at the 5% level if $|\mathrm{Corr}| > 1.96 / \sqrt{N_v}$.
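The test of Eqs. (19a)-(19b) reduces to one correlation coefficient; a minimal sketch (our own helper, using the 1.96 normal quantile stated above) follows.

```python
import numpy as np

def pitman_test(e1, e2, z=1.96):
    """Pitman's correlation test on two forecast-error series, Eqs. (19a)-(19b).

    Returns (corr, significant): the correlation between e1 + e2 and e1 - e2,
    and whether |corr| exceeds z / sqrt(Nv) at the 5% level.
    """
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    Nv = len(e1)
    upsilon = e1 + e2                              # Eq. (19a)
    psi = e1 - e2                                  # Eq. (19b)
    corr = np.corrcoef(upsilon, psi)[0, 1]
    return corr, abs(corr) > z / np.sqrt(Nv)

# Toy check: model 2 is twice as wrong as model 1 at every step,
# so the difference in accuracy should be flagged as significant
e1 = np.array([1.0, -1.0, 2.0, -2.0, 0.5])
corr, significant = pitman_test(e1, 2.0 * e1)
```

Intuitively, Corr is driven by the difference of the two error variances, so equally accurate models yield a correlation near zero.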

The evaluated correlations between $\Upsilon$ and $\Psi$ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model with respect to the rest of the models. The results are presented from left to right, where the first is the best model and the last is the worst.

6. Conclusions

In this paper two strategies of time series smoothing were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition (HSVD). The strategies were evaluated with the time


Table 4: Pitman's correlation (Corr) for pairwise comparison of the six models at 5% significance; the critical value is 0.2219.

Models                  M1   M2        M3        M4        M5        M6
M1 = HSVD-ARIMA         —    -0.9146   -0.9931   -0.9983   -0.9994   -0.9993
M2 = MA-ARIMA           —    —         -0.8676   -0.9648   -0.9895   -0.9887
M3 = HSVD-ANN-PSO       —    —         —         -0.6645   -0.8521   -0.8216
M4 = HSVD-ANN-RPROP     —    —         —         —         -0.5129   -0.4458
M5 = MA-ANN-RPROP       —    —         —         —         —         0.1623
M6 = MA-ANN-PSO         —    —         —         —         —         —

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six implemented models shows that the best model is HSVD-ARIMA, as it obtained the major accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the difference in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J Abellan G Lopez and J de Ona ldquoAnalysis of traffic accidentseverity using decision rules via decision treesrdquo Expert Systemswith Applications vol 40 no 15 pp 6047ndash6054 2013

[2] L Chang and J Chien ldquoAnalysis of driver injury severity intruck-involved accidents using a non-parametric classificationtree modelrdquo Safety Science vol 51 no 1 pp 17ndash22 2013

[3] J. de Ona, G. Lopez, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gencay, F. Selcuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, "Medium-term electric load forecasting using singular value decomposition," NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.

12 The Scientific World Journal

[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. De Mingo Lopez, N. Gomez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.

[34] C. Igel and M. Husken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.

[35] P. G. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.

[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.

[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.

[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.

[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.

[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.

[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.

[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.

[43] Conaset, 2014, http://www.conaset.cl.

[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.


The lagged terms are the input of the ANN, and they are contained in the regressor vector z, whose representation for MA smoothing is

\[ z(t) = [s(t-1), s(t-2), \ldots, s(t-K)], \tag{14} \]

where K = P lagged terms, and P and Q were defined in Section 3.1.

The representation of z for HSVD smoothing is

\[ z(t) = [C_L(t-1), \ldots, C_L(t-K), C_H(t-1), \ldots, C_H(t-K)], \tag{15} \]

where K = 2P lagged terms. The ANN is denoted by ANN(K, Q, 1), with K inputs, Q hidden nodes, and 1 output. The parameters v and w are updated with the application of two learning algorithms, one based on PSO and the other on RPROP.
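For concreteness, the regressor of (14) can be assembled from the smoothed series as follows (a minimal sketch; the function name and NumPy layout are our own, not the paper's):

```python
import numpy as np

def make_regressors(s, K):
    """Build the lagged regressor matrix of (14): row t holds
    [s(t-1), s(t-2), ..., s(t-K)] and the target is s(t)."""
    Z = np.column_stack([s[K - k : len(s) - k] for k in range(1, K + 1)])
    y = s[K:]  # one-step-ahead targets
    return Z, y
```

Each row of Z, together with its target, is one training pattern for the ANN(K, Q, 1).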

3.2.1. Learning Algorithm Based on PSO. The weights of the ANN connections, w and v, are adjusted with the PSO learning algorithm. In the swarm, the N_p particles have a position vector X_i = (X_i1, X_i2, ..., X_iD) and a velocity vector V_i = (V_i1, V_i2, ..., V_iD); each particle is considered a potential solution in a D-dimensional search space. During each iteration the particles are accelerated toward the previous best position, denoted by p_id, and toward the global best position, denoted by p_gd. The swarm has N_p rows and D columns, and it is initialized randomly; D is computed with P × N_h + N_h. The process finishes when the lowest error is obtained based on the fitness function evaluation, or when the maximum number of iterations is reached [42], as follows:

\[ V_{id}^{l+1} = I^{l} \times V_{id}^{l} + c_1 \times r_{d1} \left( p_{id}^{l} - X_{id}^{l} \right) + c_2 \times r_{d2} \left( p_{gd}^{l} - X_{id}^{l} \right), \tag{16a} \]

\[ X_{id}^{l+1} = X_{id}^{l} + V_{id}^{l+1}, \tag{16b} \]

\[ I^{l} = I_{\max} - \frac{I_{\max} - I_{\min}}{\mathrm{iter}_{\max}} \times l, \tag{16c} \]

where i = 1, ..., N_p; d = 1, ..., D; I denotes the inertia weight; c_1 and c_2 are learning factors; r_d1 and r_d2 are positive random numbers in the range [0, 1]; and l is the lth iteration. The inertia weight decreases linearly: I_max is the maximum value of the inertia, I_min is the lowest, and iter_max is the total number of iterations.

The particle X_id represents the optimal solution, in this case the set of weights w and v for the ANN.
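The update rules (16a)-(16c) can be sketched as follows. This is only an illustration: the swarm size, acceleration coefficients, initialization range, and the use of uniform random factors are assumptions of ours, not the paper's calibrated settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(fitness, D, Np=20, iter_max=100,
                 c1=2.0, c2=2.0, I_max=0.9, I_min=0.4):
    """Minimal PSO following (16a)-(16c): linearly decreasing inertia,
    personal bests p_id, and a global best p_gd."""
    X = rng.uniform(-1.0, 1.0, (Np, D))   # swarm positions (candidate weight sets)
    V = np.zeros((Np, D))                 # velocities
    P = X.copy()                          # personal best positions
    p_fit = np.array([fitness(x) for x in X])
    g = P[p_fit.argmin()].copy()          # global best position
    for l in range(iter_max):
        I = I_max - (I_max - I_min) / iter_max * l          # (16c)
        r1 = rng.random((Np, D))
        r2 = rng.random((Np, D))
        V = I * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # (16a)
        X = X + V                                           # (16b)
        fit = np.array([fitness(x) for x in X])
        better = fit < p_fit
        P[better] = X[better]
        p_fit[better] = fit[better]
        g = P[p_fit.argmin()].copy()
    return g, float(p_fit.min())
```

Passing a fitness function that evaluates the ANN's training error for a candidate weight vector would yield the set of weights (w, v) described above.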

3.2.2. Learning Algorithm Based on Resilient Backpropagation. RPROP is an efficient learning algorithm that performs a direct adaptation of the weight step based on local gradient information; it is considered a first-order method. The update rule depends only on the sign of the partial derivative of the error E with respect to each weight of the ANN. The individual step size Δ_ij is computed for each weight using the following rule [33]:

\[
\Delta_{ij}^{(t)} =
\begin{cases}
\eta^{+} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \left( \dfrac{\partial E}{\partial w_{ij}} \right)^{(t-1)} \cdot \left( \dfrac{\partial E}{\partial w_{ij}} \right)^{(t)} > 0, \\[2ex]
\eta^{-} \cdot \Delta_{ij}^{(t-1)}, & \text{if } \left( \dfrac{\partial E}{\partial w_{ij}} \right)^{(t-1)} \cdot \left( \dfrac{\partial E}{\partial w_{ij}} \right)^{(t)} < 0, \\[2ex]
\Delta_{ij}^{(t-1)}, & \text{otherwise},
\end{cases}
\tag{17}
\]

where 0 < η− < 1 < η+. If the partial derivative ∂E/∂w_ij has the same sign for consecutive steps, the step size is slightly increased by the factor η+ in order to accelerate the convergence, whereas if it changes sign, the step size is decreased by the factor η−. Additionally, in the case of a change in sign, there should be no adaptation in the succeeding step; in practice this can be done by setting ∂E/∂w_ij = 0 in the adaptation rule for Δ_ij. Finally, the weight update and the adaptation are performed after the gradient information of all the weights is computed.

4. Forecasting Accuracy Metrics

The forecasting accuracy is evaluated with the metrics root mean squared error (RMSE), generalized cross validation (GCV), mean absolute percentage error (MAPE), and relative error (RE):

\[
\begin{aligned}
\text{RMSE} &= \sqrt{\frac{1}{N_v} \sum_{i=1}^{N_v} \left( x_i - \hat{x}_i \right)^2}, \\
\text{GCV} &= \frac{\text{RMSE}}{\left( 1 - K/N_v \right)^2}, \\
\text{MAPE} &= \left[ \frac{1}{N_v} \sum_{i=1}^{N_v} \left| \frac{x_i - \hat{x}_i}{x_i} \right| \right] \times 100, \\
\text{RE} &= \sum_{i=1}^{N_v} \frac{x_i - \hat{x}_i}{x_i},
\end{aligned}
\tag{18}
\]

where N_v is the validation (testing) sample size, x_i is the ith observed value, x̂_i is the ith estimated value, and K is the length of the input regressor vector.
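The four metrics of (18) translate directly into code (a sketch; `xhat` stands for the model estimates x̂):

```python
import numpy as np

def rmse(x, xhat):
    return float(np.sqrt(np.mean((np.asarray(x, float) - xhat) ** 2)))

def gcv(x, xhat, K):
    # K: length of the input regressor vector; len(x) plays the role of N_v
    return rmse(x, xhat) / (1 - K / len(x)) ** 2

def mape(x, xhat):
    x = np.asarray(x, float)
    return float(np.mean(np.abs((x - xhat) / x)) * 100)

def relative_error(x, xhat):
    # (18) sums the per-point relative errors; the figures plot them per point
    x = np.asarray(x, float)
    return float(np.sum((x - xhat) / x))
```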

5. Results and Discussion

The data used for forecasting are the time series of people injured in traffic accidents occurring in Valparaíso from 2003 to 2012, obtained from CONASET, Chile [43]. The sampling period is weekly, with 531 registers, as shown in Figure 2(a). The series was separated into training and testing sets; by trial and error, 85% for training and 15% for testing were determined.
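The 85/15 chronological split can be sketched as follows (the random vector is only a stand-in for the 531 weekly registers, which are not reproduced here):

```python
import numpy as np

x = np.random.default_rng(1).normal(size=531)  # stand-in for the 531 weekly registers
n_train = int(round(0.85 * len(x)))            # 85% for training
train, test = x[:n_train], x[n_train:]         # chronological split, no shuffling
```

Keeping the split chronological matters for time series: the testing block must follow the training block in time.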

5.1. ARIMA Forecasting

5.1.1. Moving Average Smoothing. The raw time series is smoothed using a 3-point moving average, whose obtained


Figure 2: Accidents time series: (a) raw data and (b) autocorrelation function.


Figure 3: Log(GCV) versus the number of lagged values: (a) MA smoothing and (b) HSVD smoothing.


Figure 4: MA-ARIMA(9,0,10): (a) observed versus estimated values and (b) relative error.

values are used as input of the forecasting model ARIMA(P, D, Q); this is presented in Figure 1(a). The effective order of the polynomial for the AR terms is found to be P = 9, and the differentiation parameter is found to be D = 0; those values were obtained from the autocorrelation function (ACF) shown in Figure 2(b). To set the order Q of the MA terms, the GCV metric is evaluated against the number of lagged values. The results are presented in Figure 3(a), which shows that the lowest GCV is achieved with 10 lagged values. Therefore, the configuration of the model is denoted by MA-ARIMA(9,0,10).
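The 3-point moving average used in this stage can be sketched as follows (the treatment of the two endpoints is an assumption of ours; the paper does not state its boundary handling):

```python
import numpy as np

def moving_average_3(x):
    """Centered 3-point moving average; the two endpoints keep
    shorter 2-point windows (assumed boundary handling)."""
    x = np.asarray(x, float)
    core = np.convolve(x, np.ones(3) / 3, mode="valid")
    return np.concatenate(([x[:2].mean()], core, [x[-2:].mean()]))
```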

The evaluation executed in the testing stage is presented in Figures 4 and 5(a) and in Table 1. The observed values versus the estimated values are illustrated in Figure 4(a), reaching a good accuracy, while the relative error is presented in Figure 4(b), which shows that 87% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 5(a). It shows that the ACF for a lag of 16 is slightly lower than the 95% confidence limit; however, the rest of the coefficients are inside the confidence limit. Therefore, in the errors of the model MA-ARIMA(9,0,10) there is no serial correlation, and we can conclude that the proposed model explains efficiently the variability of the process.

5.1.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated. To implement this strategy, the time series is first mapped using the Hankel matrix; afterwards, the SVD process is executed to obtain


Figure 5: Residual ACF: (a) MA-ARIMA(9,0,10) and (b) HSVD-ARIMA(9,0,11).


Figure 6: Accidents time series: (a) components energy, (b) low frequency component, and (c) high frequency component.

Table 1: Forecasting with ARIMA.

            MA-ARIMA    HSVD-ARIMA
RMSE        0.0034      0.00073
MAPE        1.12%       0.26%
GCV         0.006       0.0013
RE ±1.5%    87%         -
RE ±0.5%    -           95%

the M components. The value of M is found through the computation of the singular values of the decomposition, presented in Figure 6(a); as shown there, the major quantity of energy is captured by the first two components; therefore, in this work only two components have been selected, with M = 2. The first component extracted represents the long-term trend (C_L) of the time series, while the second represents the short-term component of high-frequency fluctuation (C_H). The components C_L and C_H are shown in Figures 6(b) and 6(c), respectively.
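The extraction of C_L and C_H can be sketched as follows. An SSA-style reconstruction by anti-diagonal averaging of the rank-1 terms is assumed here; the paper's exact mapping from the rank-1 matrices back to series may differ.

```python
import numpy as np

def hsvd_components(x, L=2):
    """Sketch of HSVD: embed the series in an L x (N-L+1) Hankel matrix,
    take its SVD, and rebuild one series per singular value by averaging
    the anti-diagonals of each rank-1 term."""
    x = np.asarray(x, float)
    N = len(x)
    H = np.column_stack([x[j : j + L] for j in range(N - L + 1)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    comps = []
    for i in range(L):
        Hi = s[i] * np.outer(U[:, i], Vt[i])   # rank-1 term of the SVD
        Hf = Hi[::-1]                          # flip rows: anti-diagonals become diagonals
        comps.append(np.array([Hf.diagonal(t - (L - 1)).mean() for t in range(N)]))
    return comps  # with L = 2: [C_L (trend), C_H (high frequency)]
```

With L = 2 the two components sum back to the original series exactly, since the Hankel matrix has rank at most 2 rows.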

To evaluate the model in this section, P = 9 and D = 0 are used, and Q is evaluated using the GCV metric for 1 ≤ Q ≤ 18; the effective value Q = 11 is found, as shown in Figure 3(b). Therefore, the forecasting model is denoted by HSVD-ARIMA(9,0,11).

Once P and Q are found, the forecasting is executed with the testing data set, and the results of HSVD-ARIMA(9,0,11) are shown in Figures 7(a), 7(b), and 5(b) and in Table 1. Figure 7(a) shows the observed values versus the estimated values, and a good adjustment between them is found. The relative errors are presented in Figure 7(b), which shows that 95% of the points present an error lower than ±0.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 5(b). It shows that all the coefficients are inside the confidence limit; therefore, in the model errors there is no serial correlation, and we can conclude that the proposed model HSVD-ARIMA(9,0,11) explains efficiently the variability of the process.

The results presented in Table 1 show that the major accuracy is achieved with the model HSVD-ARIMA(9,0,11), with an RMSE of 0.00073 and a MAPE of 0.26%; 95% of the points have a relative error lower than ±0.5%.

5.2. ANN Forecasting Model Based on PSO

5.2.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network, and then an ANN(K, Q, 1) is used, with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 8 and 9(a) and in Table 2. The observed values versus the estimated values are illustrated in Figure 8(a), reaching a good accuracy, while the relative error is presented in Figure 8(b),


Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated values and (b) relative error.


Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated values and (b) relative error.


Figure 9: Residual ACF: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

            MA-ANN-PSO    HSVD-ANN-PSO
RMSE        0.04145       0.0123
MAPE        15.51%        5.45%
GCV         0.053         0.022
RE ±1.5%    85%           -
RE ±4%      -             95%

which shows that 85% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 9(a). It shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals; this implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a). Figure 10(b) presents the RMSE metric for the best run.

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration


Figure 10: MA-ANN-PSO(9,10,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.


Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated values and (b) relative error.


Figure 12: HSVD-ANN-PSO(9,11,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.

explained in Section 5.1.2; then an ANN(K, Q, 1) is used, with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and in Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 9(b). It shows that all the coefficients are inside the 95% confidence limit and statistically are equal to zero; therefore, in the model errors there is no serial correlation, and we can conclude that the proposed model HSVD-ANN-PSO(9,11,1) explains efficiently the variability of the process.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a). Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the major accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting


Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values and (b) relative error.


Figure 14: Residual ACF: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

            MA-ANN-RPROP    HSVD-ANN-RPROP
RMSE        0.0384          0.024
MAPE        12.25%          8.08%
GCV         0.0695          0.045
RE ±1.5%    81%             -
RE ±4%      -               96%

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN(K, Q, 1) is used, with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and in Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(a). It shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a). Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K, Q, 1) is used, with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and in Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(b). It shows that all the coefficients are inside the confidence limit and statistically are equal to zero; therefore, in the model errors there is no serial correlation, and we can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) explains efficiently the variability of the process. The process was run 30 times, and the first best result was reached in run 21, as shown in Figure 17(a); Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the major accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.


Figure 15: MA-ANN-RPROP(9,10,1): (a) run versus fitness for 85 iterations and (b) iteration number for the best run.


Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values and (b) relative error.


Figure 17: HSVD-ANN-RPROP(9,11,1): (a) run versus fitness for 70 iterations and (b) iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

\[ \Upsilon = e_1(n) + e_2(n), \quad n = 1, 2, \ldots, N_v, \tag{19a} \]

\[ \Psi = e_1(n) - e_2(n), \quad n = 1, 2, \ldots, N_v, \tag{19b} \]

where e_1 and e_2 represent the one-step-ahead forecast errors of model 1 and model 2, respectively. The null hypothesis of equal accuracy is rejected at the 5% significance level if |Corr| > 1.96/√N_v.
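The test of (19a)-(19b) is a few lines of code (a sketch; the function name is our own):

```python
import numpy as np

def pitman_test(e1, e2):
    """Sketch of Pitman's pairwise test (19a)-(19b): correlate the sums
    and differences of two models' one-step-ahead errors; the accuracy
    difference is significant at the 5% level when |Corr| > 1.96/sqrt(Nv)."""
    e1 = np.asarray(e1, float)
    e2 = np.asarray(e2, float)
    corr = np.corrcoef(e1 + e2, e1 - e2)[0, 1]   # Corr(Upsilon, Psi)
    critical = 1.96 / np.sqrt(len(e1))
    return corr, abs(corr) > critical
```

Intuitively, Corr(Υ, Ψ) differs from zero exactly when the two error sequences have different variances, that is, when one model is more accurate than the other.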

The evaluated correlations between Υ and Ψ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model with respect to the rest of the models. The models are presented from left to right, where the first is the best and the last is the worst.

6. Conclusions

In this paper, two time series smoothing strategies were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition. The strategies were evaluated with the time


Table 4: Pitman's correlation (Corr) for pairwise comparison of the six models at the 5% significance level; the critical value is 0.2219.

Models                 M1    M2        M3        M4        M5        M6
M1 = HSVD-ARIMA        -     -0.9146   -0.9931   -0.9983   -0.9994   -0.9993
M2 = MA-ARIMA          -     -         -0.8676   -0.9648   -0.9895   -0.9887
M3 = HSVD-ANN-PSO      -     -         -         -0.6645   -0.8521   -0.8216
M4 = HSVD-ANN-RPROP    -     -         -         -         -0.5129   -0.4458
M5 = MA-ANN-RPROP      -     -         -         -         -         0.1623
M6 = MA-ANN-PSO        -     -         -         -         -         -

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six models implemented shows that the best model is HSVD-ARIMA, as it obtained the major accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the differences in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT/Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellan, G. Lopez, and J. de Ona, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.


[15] N Hara H Kokame and K Konishi ldquoSingular value decompo-sition for a class of linear time-varying systems with applicationto switched linear systemsrdquo Systems and Control Letters vol 59no 12 pp 792ndash798 2010

[16] K Kavaklioglu ldquoRobust electricity consumption modelingof Turkey using singular value decompositionrdquo InternationalJournal of Electrical Power amp Energy Systems vol 54 pp 268ndash276 2014

[17] W X Yang and P W Tse ldquoMedium-term electric load fore-casting using singular value decompositionrdquoNDT amp E Interna-tional vol 37 pp 419ndash432 2003

[18] K Kumar and V K Jain ldquoAutoregressive integrated movingaverages (ARIMA) modelling of a traffic noise time seriesrdquoApplied Acoustics vol 58 no 3 pp 283ndash294 1999

[19] JHassan ldquoARIMAand regressionmodels for prediction of dailyand monthly clearness indexrdquo Renewable Energy vol 68 pp421ndash427 2014

12 The Scientific World Journal

[20] P Narayanan A Basistha S Sarkar and S Kamna ldquoTrendanalysis and ARIMA modelling of pre-monsoon rainfall datafor western Indiardquo Comptes Rendus Geoscience vol 345 no 1pp 22ndash27 2013

[21] K Soni S Kapoor K S Parmar and D G Kaskaoutis ldquoStatisti-cal analysis of aerosols over the gangetichimalayan region usingARIMA model based on long-term MODIS observationsrdquoAtmospheric Research vol 149 pp 174ndash192 2014

[22] A Ratnaweera S K Halgamuge and H C Watson ldquoSelf-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficientsrdquo IEEE Transactions on Evolu-tionary Computation vol 8 no 3 pp 240ndash255 2004

[23] X Yang J Yuan J Yuan and H Mao ldquoA modified particleswarm optimizer with dynamic adaptationrdquoAppliedMathemat-ics and Computation vol 189 no 2 pp 1205ndash1213 2007

[24] M S Arumugam andM Rao ldquoOn the improved performancesof the particle swarm optimization algorithms with adaptiveparameters cross-over operators and root mean square (RMS)variants for computing optimal control of a class of hybridsystemsrdquo Applied Soft Computing Journal vol 8 no 1 pp 324ndash336 2008

[25] B K Panigrahi V Ravikumar Pandi and S Das ldquoAdaptiveparticle swarm optimization approach for static and dynamiceconomic load dispatchrdquo Energy Conversion and Managementvol 49 no 6 pp 1407ndash1415 2008

[26] A Nickabadi M M Ebadzadeh and R Safabakhsh ldquoA novelparticle swarm optimization algorithm with adaptive inertiaweightrdquoApplied Soft Computing Journal vol 11 no 4 pp 3658ndash3670 2011

[27] X Jiang H Ling J Yan B Li and Z Li ldquoForecasting electricalenergy consumption of equipment maintenance using neuralnetwork and particle swarm optimizationrdquoMathematical Prob-lems in Engineering vol 2013 Article ID 194730 8 pages 2013

[28] J Chen Y Ding and K Hao ldquoThe bidirectional optimizationof carbon fiber production by neural network with a GA-IPSOhybrid algorithmrdquo Mathematical Problems in Engineering vol2013 Article ID 768756 16 pages 2013

[29] J Zhou Z Duan Y Li J Deng and D Yu ldquoPSO-based neuralnetwork optimization and its utilization in a boring machinerdquoJournal of Materials Processing Technology vol 178 no 1ndash3 pp19ndash23 2006

[30] M A Mohandes ldquoModeling global solar radiation using Parti-cle SwarmOptimization (PSO)rdquo Solar Energy vol 86 no 11 pp3137ndash3145 2012

[31] L F De Mingo Lopez N Gomez Blas and A Arteta ldquoTheoptimal combination grammatical swarm particle swarmoptimization and neural networksrdquo Journal of ComputationalScience vol 3 no 1-2 pp 46ndash55 2012

[32] A Yazgan and I H Cavdar ldquoA comparative study between LMSand PSO algorithms on the optical channel estimation for radioover fiber systemsrdquo Optik vol 125 no 11 pp 2582ndash2586 2014

[33] M Riedmiller and H Braun ldquoA direct adaptive me thodfor faster backpropagation learning the RPROP algorithmrdquoin Proceedings of the IEEE International Conference of NeuralNetworks E H Ruspini Ed pp 586ndash591 1993

[34] C Igel and M Husken ldquoEmpirical evaluation of the improvedRprop learning algorithmsrdquo Neurocomputing vol 50 pp 105ndash123 2003

[35] P G Zhang ldquoTime series forecasting using a hybrid ARIMAand neural network modelrdquo Neurocomputing vol 50 pp 159ndash175 2003

[36] L Aburto and R Weber ldquoImproved supply chain managementbased on hybrid demand forecastsrdquo Applied Soft ComputingJournal vol 7 no 1 pp 136ndash144 2007

[37] M Khashei and M Bijari ldquoA new hybrid methodology fornonlinear time series forecastingrdquoModelling and Simulation inEngineering vol 2011 Article ID 379121 5 pages 2011

[38] R A Yafee and M McGee An Introduction to Time SeriesAnalysis and Forecasting With Applications of SAS and SPSSAcademic Press New York NY USA 2000

[39] TS Shores Applied Linear Algebra and Matrix AnalysisSpringer 2007

[40] P J Brockwell and R A Davis Introduction to Time Series andForecasting Springer Berlin Germany 2nd edition 2002

[41] J A Freeman and D M Skapura Neural Networks AlgorithmsApplications and Programming Techniques Addison-Wesley1991

[42] R C Eberhart Y Shi and J Kennedy Swarm IntelligenceMorgan Kaufmann 2001

[43] Conaset 2014 httpwwwconasetcl[44] K Hipel and A McLeod Time Series Modelling of Water

Resources and Environmental Systems Elsevier 1994


The Scientific World Journal 5

Figure 2: Accidents time series: (a) raw data (weekly accidents versus time in weeks) and (b) autocorrelation function (residual autocorrelation versus lagged values).

Figure 3: Log(GCV) versus lagged values: (a) MA smoothing and (b) HSVD smoothing.

Figure 4: MA-ARIMA(9,0,10): (a) observed versus estimated values (weekly accidents) and (b) relative error (%).

values are used as the input of the forecasting model ARIMA(P, D, Q), as presented in Figure 1(a). The effective order of the polynomial for the AR terms is found to be P = 9, and the differentiation parameter is found to be D = 0; those values were obtained from the autocorrelation function (ACF) shown in Figure 2(b). To set the order Q of the MA terms, the GCV metric is evaluated against the number of lagged values. The GCV results are presented in Figure 3(a), which shows that the lowest GCV is achieved with 10 lagged values. Therefore, the configuration of the model is denoted by MA-ARIMA(9,0,10).
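The first-stage smoothing described above can be sketched as follows. This is a minimal illustration, assuming a centered order-3 window (the excerpt does not state whether the window is centered or trailing), and `moving_average_3` is an illustrative name, not the paper's code:

```python
import numpy as np

def moving_average_3(x):
    """Order-3 moving average: each interior point becomes the mean of
    itself and its two neighbours; the two endpoints keep the raw values."""
    x = np.asarray(x, dtype=float)
    s = x.copy()
    s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return s
```

The smoothed series would then be fed to the ARIMA(9,0,10) stage in place of the raw observations.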

The evaluation executed in the testing stage is presented in Figures 4 and 5(a) and in Table 1. The observed values versus the estimated values are illustrated in Figure 4(a), reaching a good accuracy, while the relative error is presented in Figure 4(b), which shows that 87% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 5(a). The coefficient at lag 16 is only slightly below the 95% confidence limit, and the rest of the coefficients are well inside it; therefore, there is no serial correlation in the errors of the model MA-ARIMA(9,0,10), and we can conclude that the proposed model explains the variability of the process efficiently.
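The residual whiteness check used throughout this section can be sketched directly; `residual_acf` is an illustrative helper that computes the sample ACF and the approximate 95% confidence band ±1.96/√N used in the figures:

```python
import numpy as np

def residual_acf(e, max_lag=20):
    """Sample autocorrelation of model residuals for lags 1..max_lag, plus
    the approximate 95% band +/-1.96/sqrt(N) used to judge whiteness."""
    e = np.asarray(e, dtype=float)
    e = e - e.mean()
    denom = np.sum(e ** 2)
    acf = np.array([np.sum(e[k:] * e[:-k]) / denom for k in range(1, max_lag + 1)])
    band = 1.96 / np.sqrt(len(e))
    return acf, band
```

If most coefficients fall inside the band, the residuals are compatible with white noise, which is the criterion applied to each model below.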

5.1.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated. To implement this strategy, the time series is first mapped into the Hankel matrix; afterwards the SVD process is executed to obtain


Figure 5: Residual ACF: (a) MA-ARIMA(9,0,10) and (b) HSVD-ARIMA(9,0,11).

Figure 6: Accidents time series: (a) components energy, (b) low-frequency component (CL values), and (c) high-frequency component (CH values).

Table 1: Forecasting with ARIMA.

            MA-ARIMA    HSVD-ARIMA
RMSE        0.0034      0.00073
MAPE        1.12%       0.26%
GCV         0.006       0.0013
RE ±1.5%    87%         —
RE ±0.5%    —           95%

the M components. The value of M is found through the computation of the singular values of the decomposition, presented in Figure 6(a). As Figure 6(a) shows, the major part of the energy is captured by the first two components; therefore, in this work only two components have been selected, with M = 2. The first extracted component represents the long-term trend (CL) of the time series, while the second represents the short-term component of high-frequency fluctuation (CH). The components CL and CH are shown in Figures 6(b) and 6(c), respectively.
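The HSVD extraction described above can be sketched as follows. The embedding dimension L (the number of Hankel rows) is an assumption here, since the excerpt does not give the value used in the paper; the diagonal-averaging step is the standard way, as in singular spectrum analysis, to map each rank-1 SVD term back to a series:

```python
import numpy as np

def hsvd_components(x, L, M=2):
    """Embed the series x into an L-row Hankel matrix, apply the SVD, and
    recover M additive component series by diagonal averaging of each
    rank-1 term (as in singular spectrum analysis)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    H = np.column_stack([x[i:i + L] for i in range(K)])  # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)     # singular values descending
    energy = s ** 2 / np.sum(s ** 2)                     # relative energy per component
    comps = []
    for i in range(M):
        Hi = s[i] * np.outer(U[:, i], Vt[i])             # i-th rank-1 term of H
        ci = np.array([Hi[::-1].diagonal(k).mean()       # average each anti-diagonal
                       for k in range(-L + 1, K)])
        comps.append(ci)
    return comps, energy
```

With M = 2, the first component plays the role of CL and the second of CH; keeping all min(L, K) components reconstructs the original series exactly, which is a useful sanity check.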

To evaluate the model in this section, P = 9 and D = 0 are used, and Q is evaluated using the GCV metric for 1 ≤ Q ≤ 18; the effective value Q = 11 is found, as shown in Figure 3(b). Therefore, the forecasting model is denoted by HSVD-ARIMA(9,0,11).

Once P and Q are found, the forecasting is executed with the testing data set, and the results of HSVD-ARIMA(9,0,11) are shown in Figures 7(a), 7(b), and 5(b) and in Table 1. Figure 7(a) shows the observed values versus the estimated values, and a good adjustment between them is found. The relative errors are presented in Figure 7(b), which shows that 95% of the points present an error lower than ±0.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 5(b). All the coefficients are inside the confidence limit; therefore, there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ARIMA(9,0,11) explains the variability of the process efficiently.

The results presented in Table 1 show that the major accuracy is achieved with the model HSVD-ARIMA(9,0,11), with an RMSE of 0.00073 and a MAPE of 0.26%; 95% of the points have a relative error lower than ±0.5%.

5.2. ANN Forecasting Model Based on PSO

5.2.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as the input of the forecasting model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.
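The PSO estimation of the ANN(K, Q, 1) coefficients can be sketched as below. This is a hedged illustration, not the paper's implementation: the swarm size, acceleration coefficients, inertia weight, and velocity clamp are illustrative assumptions (the excerpt does not list the PSO settings used), and `ann_forward`/`pso_train` are hypothetical helper names:

```python
import numpy as np

rng = np.random.default_rng(0)

def ann_forward(w, X, K, Q):
    """ANN(K, Q, 1): sigmoid hidden layer, linear output. The flat vector w
    packs the hidden weights/biases (Q x (K+1)) followed by Q+1 output weights."""
    W1 = w[:Q * (K + 1)].reshape(Q, K + 1)
    w2 = w[Q * (K + 1):]
    H = 1.0 / (1.0 + np.exp(-(X @ W1[:, :K].T + W1[:, K])))
    return H @ w2[:Q] + w2[Q]

def pso_train(X, y, K, Q, particles=30, iters=100, c1=2.0, c2=2.0, inertia=0.7):
    """Global-best PSO minimising the RMSE of the network over (X, y)."""
    def rmse(w):
        return np.sqrt(np.mean((ann_forward(w, X, K, Q) - y) ** 2))
    dim = Q * (K + 1) + Q + 1
    pos = rng.uniform(-1.0, 1.0, (particles, dim))
    vel = np.zeros((particles, dim))
    pbest = pos.copy()
    pfit = np.array([rmse(p) for p in pos])
    g = pbest[np.argmin(pfit)].copy()
    for _ in range(iters):
        r1 = rng.random((particles, dim))
        r2 = rng.random((particles, dim))
        # Standard velocity update, clamped to keep the search bounded.
        vel = np.clip(inertia * vel + c1 * r1 * (pbest - pos)
                      + c2 * r2 * (g - pos), -1.0, 1.0)
        pos = pos + vel
        fit = np.array([rmse(p) for p in pos])
        better = fit < pfit
        pbest[better], pfit[better] = pos[better], fit[better]
        g = pbest[np.argmin(pfit)].copy()
    return g, pfit.min()
```

Each particle encodes a full weight vector, so the fitness (RMSE) plays the role of the training loss and no gradients are required.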

The evaluation executed in the testing stage is presented in Figures 8 and 9(a) and in Table 2. The observed values versus the estimated values are illustrated in Figure 8(a), reaching a good accuracy, while the relative error is presented in Figure 8(b),


Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated values and (b) relative error (%).

Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated values and (b) relative error (%).

Figure 9: Residual ACF: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

            MA-ANN-PSO    HSVD-ANN-PSO
RMSE        0.04145       0.0123
MAPE        15.51%        5.45%
GCV         0.053         0.022
RE ±1.5%    85%           —
RE ±4%      —             95%

which shows that 85% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 9(a). It shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained at lags of 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals; this implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a); Figure 10(b) presents the RMSE metric for the best run.

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration


Figure 10: MA-ANN-PSO(9,10,1): (a) best fitness (RMSE) per run for 2500 iterations and (b) Log(RMSE) versus iteration number for the best run.

Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated values and (b) relative error (%).

Figure 12: HSVD-ANN-PSO(9,11,1): (a) best fitness (RMSE) per run for 2500 iterations and (b) Log(RMSE) versus iteration number for the best run.

explained in Section 5.1.2; an ANN(K, Q, 1) is then used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and in Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 9(b). All the coefficients are inside the 95% confidence limit and are statistically equal to zero; therefore, there is no serial correlation in the model errors, and we can conclude that the proposed model

HSVD-ANN-PSO(9,11,1) explains the variability of the process efficiently.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a); Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the major accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as the input of the forecasting


Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values and (b) relative error (%).

Figure 14: Residual ACF: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

            MA-ANN-RPROP    HSVD-ANN-RPROP
RMSE        0.0384          0.024
MAPE        12.25%          8.08%
GCV         0.0695          0.045
RE ±1.5%    81%             —
RE ±4%      —               96%

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; an ANN(K, Q, 1) is then used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.
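The RPROP learning rule adapts one step size per weight from the sign of successive partial derivatives, ignoring their magnitude. A simplified RPROP- sketch (without weight backtracking) is shown below; the increase/decrease factors 1.2 and 0.5 are the commonly used RPROP defaults, and the initial step and bounds are illustrative assumptions:

```python
import numpy as np

def rprop_minimize(grad, w0, iters=150, eta_plus=1.2, eta_minus=0.5,
                   step0=0.1, step_min=1e-6, step_max=50.0):
    """Simplified RPROP- update: each weight keeps its own step size, grown
    by eta_plus while its partial derivative keeps the same sign and shrunk
    by eta_minus when the sign flips; only the gradient sign drives the move."""
    w = np.asarray(w0, dtype=float).copy()
    step = np.full_like(w, step0)
    g_prev = np.zeros_like(w)
    for _ in range(iters):
        g = grad(w)
        same = g * g_prev > 0          # derivative kept its sign
        flip = g * g_prev < 0          # derivative changed sign (overshoot)
        step[same] = np.minimum(step[same] * eta_plus, step_max)
        step[flip] = np.maximum(step[flip] * eta_minus, step_min)
        w = w - np.sign(g) * step
        g_prev = g
    return w
```

In the forecasting models of this section, `grad` would be the backpropagated gradient of the ANN(K, Q, 1) training error with respect to the packed weight vector.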

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and in Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±1.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 14(a). It shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained at lags of 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a); Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and in Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 14(b). All the coefficients are inside the confidence limit and are statistically equal to zero; therefore, there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) explains the variability of the process efficiently. The process was run 30 times, and the first best result was reached in run 21, as shown in Figure 17(a); Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the major accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.


Figure 15: MA-ANN-RPROP(9,10,1): (a) best fitness (RMSE) per run for 85 iterations and (b) Log(RMSE) versus iteration number for the best run.

Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values and (b) relative error (%).

Figure 17: HSVD-ANN-RPROP(9,11,1): (a) best fitness (RMSE) per run for 70 iterations and (b) Log(RMSE) versus iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

Υ(n) = e1(n) + e2(n), n = 1, 2, ..., Nv, (19a)

Ψ(n) = e1(n) − e2(n), n = 1, 2, ..., Nv, (19b)

where e1 and e2 represent the one-step-ahead forecast errors of model 1 and model 2, respectively, and Nv is the number of validation observations. The difference in accuracy is significant at the 5% level if |Corr| > 1.96/√Nv.
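The test in (19a)-(19b) can be sketched directly; `pitman_test` is an illustrative helper name:

```python
import numpy as np

def pitman_test(e1, e2):
    """Pitman's test for equal accuracy of two forecasting models: correlate
    (e1 + e2) with (e1 - e2) and compare |Corr| against 1.96/sqrt(Nv)."""
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    corr = np.corrcoef(e1 + e2, e1 - e2)[0, 1]   # Corr(Upsilon, Psi)
    crit = 1.96 / np.sqrt(len(e1))               # 5% critical value
    return corr, crit, abs(corr) > crit
```

A strongly negative correlation indicates that model 1 has the smaller error variance, which is the pattern visible for HSVD-ARIMA in Table 4.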

The evaluated correlations between Υ and Ψ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model over the rest of the models. The results are presented from left to right, where the first is the best model and the last is the worst.

6. Conclusions

In this paper two time series smoothing strategies were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition (HSVD). The strategies were evaluated with the time


Table 4: Pitman's correlation (Corr) for the pairwise comparison of the six models at 5% significance; the critical value is 0.2219.

Models                 M1    M2        M3        M4        M5        M6
M1 = HSVD-ARIMA        —     −0.9146   −0.9931   −0.9983   −0.9994   −0.9993
M2 = MA-ARIMA          —     —         −0.8676   −0.9648   −0.9895   −0.9887
M3 = HSVD-ANN-PSO      —     —         —         −0.6645   −0.8521   −0.8216
M4 = HSVD-ANN-RPROP    —     —         —         —         −0.5129   −0.4458
M5 = MA-ANN-RPROP      —     —         —         —         —         0.1623
M6 = MA-ANN-PSO        —     —         —         —         —         —

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six implemented models shows that the best model is HSVD-ARIMA, as it obtained the major accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the difference in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT/Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.

[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.

[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. De Mingo López, N. Gómez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M Riedmiller and H Braun ldquoA direct adaptive me thodfor faster backpropagation learning the RPROP algorithmrdquoin Proceedings of the IEEE International Conference of NeuralNetworks E H Ruspini Ed pp 586ndash591 1993

[34] C Igel and M Husken ldquoEmpirical evaluation of the improvedRprop learning algorithmsrdquo Neurocomputing vol 50 pp 105ndash123 2003

[35] P G Zhang ldquoTime series forecasting using a hybrid ARIMAand neural network modelrdquo Neurocomputing vol 50 pp 159ndash175 2003

[36] L Aburto and R Weber ldquoImproved supply chain managementbased on hybrid demand forecastsrdquo Applied Soft ComputingJournal vol 7 no 1 pp 136ndash144 2007

[37] M Khashei and M Bijari ldquoA new hybrid methodology fornonlinear time series forecastingrdquoModelling and Simulation inEngineering vol 2011 Article ID 379121 5 pages 2011

[38] R A Yafee and M McGee An Introduction to Time SeriesAnalysis and Forecasting With Applications of SAS and SPSSAcademic Press New York NY USA 2000

[39] TS Shores Applied Linear Algebra and Matrix AnalysisSpringer 2007

[40] P J Brockwell and R A Davis Introduction to Time Series andForecasting Springer Berlin Germany 2nd edition 2002

[41] J A Freeman and D M Skapura Neural Networks AlgorithmsApplications and Programming Techniques Addison-Wesley1991

[42] R C Eberhart Y Shi and J Kennedy Swarm IntelligenceMorgan Kaufmann 2001

[43] Conaset 2014 httpwwwconasetcl[44] K Hipel and A McLeod Time Series Modelling of Water

Resources and Environmental Systems Elsevier 1994

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Research Article Smoothing Strategies Combined with ...downloads.hindawi.com/journals/tswj/2014/152375.pdfResearch Article Smoothing Strategies Combined with ARIMA and Neural Networks

6 The Scientific World Journal

Figure 5: Residual ACF versus lagged values: (a) MA-ARIMA(9,0,10) and (b) HSVD-ARIMA(9,0,11).

Figure 6: Accidents time series: (a) components energy, (b) low frequency component, and (c) high frequency component.

Table 1: Forecasting with ARIMA.

Metric      MA-ARIMA    HSVD-ARIMA
RMSE        0.0034      0.00073
MAPE        1.12%       0.26%
GCV         0.006       0.0013
RE ±1.5%    87%         —
RE ±0.5%    —           95%

the M components. The value of M is found through the computation of the singular values of the decomposition, which are presented in Figure 6(a). As shown in Figure 6(a), the major part of the energy is captured by the first two components; therefore, in this work only two components have been selected, with M = 2. The first component extracted represents the long-term trend (C_L) of the time series, while the second represents the short-term component of high-frequency fluctuation (C_H). The components C_L and C_H are shown in Figures 6(b) and 6(c), respectively.
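The extraction of C_L and C_H described above can be illustrated with a minimal NumPy routine (an illustrative sketch, not the authors' code; the diagonal-averaging step used to map each rank-1 matrix back to a series is the standard SSA-style choice and is assumed here):

```python
import numpy as np

def hsvd_components(x, L=2):
    """Split series x into L additive components via SVD of its Hankel matrix."""
    x = np.asarray(x, dtype=float)
    K = len(x) - L + 1
    H = np.array([x[i:i + K] for i in range(L)])   # L x K Hankel matrix, H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    energy = s**2 / np.sum(s**2)                   # relative energy of each singular value
    comps = []
    for k in range(L):
        Hk = s[k] * np.outer(U[:, k], Vt[k])       # rank-1 term of the decomposition
        # Diagonal averaging maps the rank-1 matrix back to a series of length len(x)
        out = np.zeros(len(x))
        cnt = np.zeros(len(x))
        for i in range(L):
            out[i:i + K] += Hk[i]
            cnt[i:i + K] += 1
        comps.append(out / cnt)
    return energy, comps
```

With L = 2 the first component typically captures the low-frequency trend (C_L) and the second the high-frequency residual (C_H); by construction the components sum back to the original series.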

To evaluate the model in this section, P = 9 and D = 0 are used, and Q is evaluated using the GCV metric for 1 ≤ Q ≤ 18; the effective value Q = 11 is then found, as shown in Figure 3(b). Therefore the forecasting model is denoted by HSVD-ARIMA(9,0,11).

Once P and Q are found, the forecasting is executed with the testing data set, and the results of HSVD-ARIMA(9,0,11) are shown in Figures 7(a), 7(b), and 5(b) and in Table 1. Figure 7(a) shows the observed values versus the estimated values, and a good fit between them is found. The relative errors are presented in Figure 7(b), which shows that 95% of the points present an error lower than ±0.5%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 5(b). It shows that all the coefficients are inside the confidence limits; therefore there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ARIMA(9,0,11) efficiently explains the variability of the process.
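This whiteness check can be reproduced with a short routine (a sketch, not the authors' code), using the usual ±1.96/√N confidence band for the sample ACF of white noise:

```python
import numpy as np

def residual_acf(e, nlags=20):
    """Sample autocorrelation of residuals e at lags 1..nlags."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    denom = np.dot(e, e)
    return np.array([np.dot(e[:len(e) - k], e[k:]) / denom
                     for k in range(1, nlags + 1)])

def passes_white_noise_check(e, nlags=20):
    """True if every ACF coefficient lies inside the 95% confidence band."""
    band = 1.96 / np.sqrt(len(e))
    return bool(np.all(np.abs(residual_acf(e, nlags)) < band))
```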

The results presented in Table 1 show that the major accuracy is achieved with the model HSVD-ARIMA(9,0,11), with an RMSE of 0.00073 and a MAPE of 0.26%; 95% of the points have a relative error lower than ±0.5%.

5.2. ANN Forecasting Model Based on PSO

5.2.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network, and then an ANN(K,Q,1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.
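The 3-point smoothing and the construction of the K lagged inputs can be sketched as follows (an illustration, not the authors' code; the paper does not specify how the two boundary points are treated, so keeping the raw values there is an assumption, and `lagged_inputs` is a hypothetical helper name):

```python
import numpy as np

def moving_average3(x):
    """Centered 3-point moving average; endpoints keep the raw values (assumption)."""
    x = np.asarray(x, dtype=float)
    s = x.copy()
    s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return s

def lagged_inputs(s, K=9):
    """Rows of K lagged values [s(t-K), ..., s(t-1)] with target s(t)."""
    s = np.asarray(s, dtype=float)
    X = np.array([s[t - K:t] for t in range(K, len(s))])
    y = s[K:]
    return X, y
```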

The evaluation executed in the testing stage is presented in Figures 8 and 9(a) and in Table 2. The observed values versus the estimated values are illustrated in Figure 8(a), reaching a good accuracy, while the relative error is presented in Figure 8(b),


Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated values and (b) relative error.

Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated values and (b) relative error.

Figure 9: Residual ACF: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

Metric     MA-ANN-PSO    HSVD-ANN-PSO
RMSE       0.04145       0.0123
MAPE       15.51%        5.45%
GCV        0.053         0.022
RE ±15%    85%           —
RE ±4%     —             95%

which shows that 85% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 9(a). It shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore there is serial correlation in the residuals; this implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a). Figure 10(b) presents the RMSE metric for the best run.
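The global-best PSO loop behind these runs can be sketched as follows (a minimal illustration, not the authors' implementation; the inertia and acceleration constants shown are the common Clerc-type values, an assumption since the paper's settings are given elsewhere):

```python
import numpy as np

def ann_out(w, X, K, Q):
    """Single-hidden-layer ANN(K, Q, 1); w packs W1, b1, W2, b2."""
    W1 = w[:Q * K].reshape(Q, K)
    b1 = w[Q * K:Q * K + Q]
    W2 = w[Q * K + Q:Q * K + 2 * Q]
    b2 = w[-1]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def pso_train(X, y, K, Q, n_particles=30, iters=300, seed=0):
    """Minimize the RMSE of the ANN over its weight vector with global-best PSO."""
    rng = np.random.default_rng(seed)
    dim = Q * K + 2 * Q + 1
    rmse = lambda w: np.sqrt(np.mean((ann_out(w, X, K, Q) - y) ** 2))
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([rmse(w) for w in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive + social terms (Clerc-type constants)
        vel = 0.729 * vel + 1.494 * r1 * (pbest - pos) + 1.494 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([rmse(w) for w in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

The personal-best fitness is monotone non-increasing, so the returned RMSE can only improve with more iterations.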

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration


Figure 10: MA-ANN-PSO(9,10,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.

Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated values and (b) relative error.

Figure 12: HSVD-ANN-PSO(9,11,1): (a) run versus fitness for 2500 iterations and (b) iteration number for the best run.

explained in Section 5.1.2, and then an ANN(K,Q,1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and in Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 9(b). It shows that all the coefficients are inside the 95% confidence limit and statistically are equal to zero; therefore there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ANN-PSO(9,11,1) efficiently explains the variability of the process.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a). Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the major accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting


Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values and (b) relative error.

Figure 14: Residual ACF: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

Metric     MA-ANN-RPROP    HSVD-ANN-RPROP
RMSE       0.0384          0.024
MAPE       12.25%          8.08%
GCV        0.0695          0.045
RE ±15%    81%             —
RE ±4%     —               96%

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network, and then an ANN(K,Q,1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.
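The resilient backpropagation rule adapts a per-weight step size from the sign of the gradient only. A minimal sketch of the Rprop⁻ variant follows (an illustration, not the authors' code, using the standard constants η⁺ = 1.2 and η⁻ = 0.5 from Riedmiller and Braun [33]):

```python
import numpy as np

def rprop_minimize(grad_fn, w0, iters=150, step0=0.1,
                   eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    """Rprop-: each weight's step grows on consistent gradient signs, shrinks on flips."""
    w = np.asarray(w0, dtype=float).copy()
    step = np.full_like(w, step0)
    g_prev = np.zeros_like(w)
    for _ in range(iters):
        g = grad_fn(w)
        same = g * g_prev                  # >0: same sign, <0: sign flip
        step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
        g = np.where(same < 0, 0.0, g)     # Rprop-: skip the update right after a flip
        w = w - np.sign(g) * step
        g_prev = g
    return w
```

Only the gradient sign enters the update, which is what makes the method robust to the gradient magnitude.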

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and in Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 14(a). It shows that there are values with a significant difference from zero at the 95% confidence limit; for example, the three major values are obtained when the lagged value is equal to 3, 4, and 7 weeks. Therefore there is serial correlation in the residuals; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a). Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K,Q,1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and in Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 14(b). It shows that all the coefficients are inside the confidence limit and statistically are equal to zero; therefore there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) efficiently explains the variability of the process. The process was run 30 times, and the first best result was reached in run 21, as shown in Figure 17(a); Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the major accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.


Figure 15: MA-ANN-RPROP(9,10,1): (a) run versus fitness for 85 iterations and (b) iteration number for the best run.

Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values and (b) relative error.

Figure 17: HSVD-ANN-RPROP(9,11,1): (a) run versus fitness for 70 iterations and (b) iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

Υ(n) = e_1(n) + e_2(n),   n = 1, 2, ..., N_v,   (19a)

Ψ(n) = e_1(n) − e_2(n),   n = 1, 2, ..., N_v,   (19b)

where e_1 and e_2 represent the one-step-ahead forecast errors of model 1 and model 2, respectively. The difference between the two models is statistically significant at the 5% level if |Corr| > 1.96/√N_v.
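The test can be carried out directly from the two error series, as in the following sketch consistent with (19a) and (19b):

```python
import numpy as np

def pitman_test(e1, e2, z=1.96):
    """Pitman's test: models differ significantly if |corr(e1+e2, e1-e2)| > z/sqrt(N)."""
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    corr = np.corrcoef(e1 + e2, e1 - e2)[0, 1]
    critical = z / np.sqrt(len(e1))
    return corr, bool(abs(corr) > critical)
```

Since Corr(Υ, Ψ) has the sign of Var(e_1) − Var(e_2), a significant negative correlation indicates that the first model has the smaller error variance, which matches the negative entries in the HSVD-ARIMA row of Table 4.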

The evaluated correlations between Υ and Ψ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model over the rest of the models. The models are ordered from left to right, where the first is the best model and the last is the worst model.

6. Conclusions

In this paper two strategies of time series smoothing were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition. The strategies were evaluated with the time


Table 4: Pitman's correlation (Corr) for pairwise comparison of the six models at 5% significance; the critical value is 0.2219.

Model                    M2        M3        M4        M5        M6
M1 = HSVD-ARIMA       −0.9146   −0.9931   −0.9983   −0.9994   −0.9993
M2 = MA-ARIMA             —     −0.8676   −0.9648   −0.9895   −0.9887
M3 = HSVD-ANN-PSO         —        —      −0.6645   −0.8521   −0.8216
M4 = HSVD-ANN-RPROP       —        —         —      −0.5129   −0.4458
M5 = MA-ANN-RPROP         —        —         —         —       0.1623
M6 = MA-ANN-PSO           —        —         —         —         —

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six implemented models shows that the best model is HSVD-ARIMA, as it obtained the major accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the difference in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT/Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.

[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research Part A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, "Medium-term electric load forecasting using singular value decomposition," NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.

[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. De Mingo Lopez, N. Gomez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.

[34] C. Igel and M. Hüsken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.

[35] P. G. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.

[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.

[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.

[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.

[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.

[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.

[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.

[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.

[43] CONASET, 2014, http://www.conaset.cl.

[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 7: Research Article Smoothing Strategies Combined with ...downloads.hindawi.com/journals/tswj/2014/152375.pdfResearch Article Smoothing Strategies Combined with ARIMA and Neural Networks

The Scientific World Journal 7

Figure 7: HSVD-ARIMA(9,0,11): (a) observed versus estimated weekly accidents over the test weeks; (b) relative error (%).

Figure 8: MA-ANN-PSO(9,10,1): (a) observed versus estimated weekly accidents; (b) relative error (%).

Figure 9: Residual ACF: (a) MA-ANN-PSO(9,10,1) and (b) HSVD-ANN-PSO(9,11,1).

Table 2: Forecasting with ANN-PSO.

Metric       MA-ANN-PSO   HSVD-ANN-PSO
RMSE         0.04145      0.0123
MAPE         15.51%       5.45%
GCV          0.053        0.022
RE < ±15%    85%          —
RE < ±4%     —            95%

which shows that 85% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 9(a). It shows that several coefficients differ significantly from zero at the 95% confidence limit; for example, the three largest values are obtained at lags of 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals, which implies that the model MA-ANN-PSO(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.
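The diagnostic described above can be sketched as follows; the white-noise residuals are illustrative stand-ins for the model errors, while ±1.96/√n is the 95% confidence limit used in the text:

```python
import numpy as np

def residual_acf(residuals, max_lag=20):
    """Sample autocorrelation of the residuals at lags 1..max_lag."""
    e = np.asarray(residuals, dtype=float)
    e = e - e.mean()
    n = len(e)
    denom = np.sum(e * e)
    return np.array([np.sum(e[:n - k] * e[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
e = rng.normal(size=300)              # illustrative white-noise residuals
acf = residual_acf(e)
limit = 1.96 / np.sqrt(len(e))        # 95% confidence limit
lags_outside = np.nonzero(np.abs(acf) > limit)[0] + 1
```

Lags appearing in `lags_outside` (such as the 3-, 4-, and 7-week lags reported for MA-ANN-PSO) indicate serial correlation left in the errors.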

The process was run 30 times, and the best result was reached in run 22, as shown in Figure 10(a); Figure 10(b) presents the RMSE metric for the best run.

5.2.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration


Figure 10: MA-ANN-PSO(9,10,1): (a) run number versus best fitness (RMSE) over 2500 iterations and (b) log(RMSE) versus iteration number for the best run.

Figure 11: HSVD-ANN-PSO(9,11,1): (a) observed versus estimated weekly accidents; (b) relative error (%).

Figure 12: HSVD-ANN-PSO(9,11,1): (a) run number versus best fitness (RMSE) over 2500 iterations and (b) log(RMSE) versus iteration number for the best run.

explained in Section 5.1.2, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.
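A minimal sketch of the HSVD idea under stated assumptions: the series is embedded in a Hankel matrix, the SVD splits it into components, and anti-diagonal averaging maps a rank-truncated matrix back to a series. The window length L = 2 and the rank-1 truncation are illustrative choices; the paper's exact embedding is not restated here.

```python
import numpy as np

def hsvd_split(x, L=2, rank=1):
    """Split x into a low-rank (smooth) component and a residual via
    SVD of its L x K Hankel trajectory matrix."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    H = np.column_stack([x[i:i + L] for i in range(K)])   # Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]             # rank-truncated
    smooth = np.zeros(N)
    count = np.zeros(N)
    for j in range(K):                                    # anti-diagonal
        smooth[j:j + L] += Hr[:, j]                       # averaging
        count[j:j + L] += 1
    smooth /= count
    return smooth, x - smooth

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.1 * rng.normal(size=120)
low, high = hsvd_split(x)       # low + high reconstructs x exactly
```

By construction the two components sum back to the original series, so nothing is lost before the forecasting stage.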

The evaluation executed in the testing stage is presented in Figures 11 and 9(b) and in Table 2. The observed values versus the estimated values are illustrated in Figure 11(a), reaching a good accuracy, while the relative error is presented in Figure 11(b), which shows that 95% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 9(b). It shows that all the coefficients are inside the 95% confidence limit and are statistically equal to zero; therefore, there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ANN-PSO(9,11,1) explains efficiently the variability of the process.

The process was run 30 times, and the best result was reached in run 11, as shown in Figure 12(a); Figure 12(b) presents the RMSE metric for the best run.

The results presented in Table 2 show that the major accuracy is achieved with the model HSVD-ANN-PSO(9,11,1), with an RMSE of 0.0123 and a MAPE of 5.45%; 95% of the points have a relative error lower than ±4%.
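The accuracy measures reported in Table 2 can be sketched as below; the RMSE, MAPE, and band-coverage formulas are standard, and the four data points are invented for illustration:

```python
import numpy as np

def forecast_metrics(actual, predicted, band=4.0):
    """RMSE, MAPE (%), and share of points (%) whose relative error
    stays inside +/- band percent."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((a - p) ** 2))
    re = 100.0 * (a - p) / a               # signed relative error, %
    mape = np.mean(np.abs(re))
    coverage = 100.0 * np.mean(np.abs(re) < band)
    return rmse, mape, coverage

a = np.array([0.30, 0.25, 0.40, 0.35])     # invented "actual" values
p = np.array([0.31, 0.25, 0.39, 0.36])     # invented forecasts
rmse, mape, cov = forecast_metrics(a, p)   # every |RE| here is below 4%
```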

5.3. ANN Forecasting Model Based on RPROP

5.3.1. Moving Average Smoothing. The raw time series is smoothed using the moving average of order 3, whose obtained values are used as input of the forecasting


Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated weekly accidents; (b) relative error (%).

Figure 14: Residual ACF: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

Metric       MA-ANN-RPROP   HSVD-ANN-RPROP
RMSE         0.0384         0.024
MAPE         12.25%         8.08%
GCV          0.0695         0.045
RE < ±15%    81%            —
RE < ±4%     —              96%

model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.
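A sketch of the order-3 moving average smoothing; the paper does not state how the endpoints are handled, so this version leaves them unsmoothed to preserve the series length:

```python
import numpy as np

def moving_average3(x):
    """Centered 3-point moving average with the two endpoints kept as-is."""
    x = np.asarray(x, dtype=float)
    s = x.copy()
    s[1:-1] = (x[:-2] + x[1:-1] + x[2:]) / 3.0
    return s

x = np.array([2.0, 5.0, 2.0, 8.0, 2.0])
print(moving_average3(x))              # [2. 3. 5. 4. 2.]
```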

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and in Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching a good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 14(a). It shows that several coefficients differ significantly from zero at the 95% confidence limit; for example, the three largest values are obtained at lags of 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals, which implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a); Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated; the HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.
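For concreteness, a forward pass of the ANN(K, Q, 1) architecture with K = 9 lagged inputs, Q = 11 hidden nodes, and one linear output; the sigmoid activation and the random weights are assumptions for illustration (in the paper the weights are fitted by PSO or RPROP):

```python
import numpy as np

def ann_one_step(x, W_in, w_out, K):
    """One-step-ahead forecast: K lagged values -> Q sigmoid hidden
    nodes -> one linear output node."""
    z = np.asarray(x, dtype=float)[-K:]          # last K observations
    h = 1.0 / (1.0 + np.exp(-(W_in @ z)))        # hidden activations
    return float(w_out @ h)                      # the forecast

K, Q = 9, 11
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(Q, K))        # placeholder weights
w_out = rng.normal(scale=0.1, size=Q)
series = rng.uniform(0.0, 0.6, size=100)         # e.g. scaled weekly data
yhat = ann_one_step(series, W_in, w_out, K)
```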

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and in Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching a good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied; its values are presented in Figure 14(b). It shows that all the coefficients are inside the confidence limit and are statistically equal to zero; therefore, there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) explains efficiently the variability of the process. The process was run 30 times, and the best result was first reached in run 21, as shown in Figure 17(a); Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the major accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with an RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.


Figure 15: MA-ANN-RPROP(9,10,1): (a) run number versus best fitness (RMSE) over 85 iterations and (b) log(RMSE) versus iteration number for the best run.

Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated weekly accidents; (b) relative error (%).

Figure 17: HSVD-ANN-RPROP(9,11,1): (a) run number versus best fitness (RMSE) over 70 iterations and (b) log(RMSE) versus iteration number for the best run.

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

Υ = e1(n) + e2(n), n = 1, 2, ..., NV, (19a)

Ψ = e1(n) − e2(n), n = 1, 2, ..., NV, (19b)

where e1 and e2 represent the one-step-ahead forecast errors of model 1 and model 2, respectively. The null hypothesis of equal accuracy is rejected at the 5% significance level if |Corr| > 1.96/√NV.
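The test in (19a)-(19b) can be sketched as follows; the two error series are synthetic stand-ins with deliberately different variances, so the test should flag a significant difference:

```python
import numpy as np

def pitman_test(e1, e2):
    """Pitman's test for equal forecast accuracy: correlate
    U = e1 + e2 with V = e1 - e2 and compare |Corr| with 1.96/sqrt(N)."""
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    corr = np.corrcoef(e1 + e2, e1 - e2)[0, 1]
    critical = 1.96 / np.sqrt(len(e1))
    return corr, critical, abs(corr) > critical

rng = np.random.default_rng(0)
n = 78                                 # e.g. number of test weeks
e1 = rng.normal(scale=0.1, size=n)     # more accurate model
e2 = rng.normal(scale=0.5, size=n)     # less accurate model
corr, crit, significant = pitman_test(e1, e2)
```

A negative correlation indicates that the first model has the smaller error variance, which matches the sign convention of Table 4.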

The evaluated correlations between Υ and Ψ are presented in Table 4.

The results presented in Table 4 show that statistically there is a significant superiority of the HSVD-ARIMA forecasting model over the rest of the models. The results are presented from left to right, where the first is the best model and the last is the worst.

6. Conclusions

In this paper, two strategies of time series smoothing were proposed to improve the forecasting accuracy. The first smoothing strategy is based on the moving average of order 3, while the second is based on the Hankel singular value decomposition. The strategies were evaluated with the time


Table 4: Pitman's correlation (Corr) for pairwise comparison of the six models at 5% significance; the critical value is 0.2219.

Models                  M1    M2        M3        M4        M5        M6
M1 = HSVD-ARIMA         —     −0.9146   −0.9931   −0.9983   −0.9994   −0.9993
M2 = MA-ARIMA           —     —         −0.8676   −0.9648   −0.9895   −0.9887
M3 = HSVD-ANN-PSO       —     —         —         −0.6645   −0.8521   −0.8216
M4 = HSVD-ANN-RPROP     —     —         —         —         −0.5129   −0.4458
M5 = MA-ANN-RPROP       —     —         —         —         —          0.1623
M6 = MA-ANN-PSO         —     —         —         —         —         —

series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six models implemented shows that the best model is HSVD-ARIMA, as it obtained the major accuracy, with a MAPE of 0.26% and an RMSE of 0.00073, while the second best is the model MA-ARIMA, with a MAPE of 1.12% and an RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and an RMSE of 0.041. Pitman's test was executed to evaluate the difference in accuracy between the six proposed models, and the results show that statistically there is a significant superiority of the forecasting model based on HSVD-ARIMA. Due to the high accuracy reached with the best model, in future works it will be applied to evaluate new time series of other regions and countries.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.

[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research Part A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, "Medium-term electric load forecasting using singular value decomposition," NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.

[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic–Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. de Mingo López, N. Gómez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.

[34] C. Igel and M. Hüsken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.

[35] P. G. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.

[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.

[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.

[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.

[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.

[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.

[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.

[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.

[43] CONASET, 2014, http://www.conaset.cl.

[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.



[10] R Gencay F Selcuk and B Whitcher An Introduction toWavelets andOther FilteringMethods in Finance and EconomicsAcademic Press 2002

[11] N Abu-Shikhah and F Elkarmi ldquoMedium-term electric loadforecasting using singular value decompositionrdquoEnergy vol 36no 7 pp 4259ndash4271 2011

[12] C Sun and J Hahn ldquoParameter reduction for stable dynamicalsystems based on Hankel singular values and sensitivity analy-sisrdquoChemical Engineering Science vol 61 no 16 pp 5393ndash54032006

[13] H Gu and H Wang ldquoFuzzy prediction of chaotic time seriesbased on singular value decompositionrdquo Applied Mathematicsand Computation vol 185 no 2 pp 1171ndash1185 2007

[14] X Weng and J Shen ldquoClassification of multivariate timeseries using two-dimensional singular value decompositionrdquoKnowledge-Based Systems vol 21 no 7 pp 535ndash539 2008

[15] N Hara H Kokame and K Konishi ldquoSingular value decompo-sition for a class of linear time-varying systems with applicationto switched linear systemsrdquo Systems and Control Letters vol 59no 12 pp 792ndash798 2010

[16] K Kavaklioglu ldquoRobust electricity consumption modelingof Turkey using singular value decompositionrdquo InternationalJournal of Electrical Power amp Energy Systems vol 54 pp 268ndash276 2014

[17] W X Yang and P W Tse ldquoMedium-term electric load fore-casting using singular value decompositionrdquoNDT amp E Interna-tional vol 37 pp 419ndash432 2003

[18] K Kumar and V K Jain ldquoAutoregressive integrated movingaverages (ARIMA) modelling of a traffic noise time seriesrdquoApplied Acoustics vol 58 no 3 pp 283ndash294 1999

[19] JHassan ldquoARIMAand regressionmodels for prediction of dailyand monthly clearness indexrdquo Renewable Energy vol 68 pp421ndash427 2014

12 The Scientific World Journal

[20] P Narayanan A Basistha S Sarkar and S Kamna ldquoTrendanalysis and ARIMA modelling of pre-monsoon rainfall datafor western Indiardquo Comptes Rendus Geoscience vol 345 no 1pp 22ndash27 2013

[21] K Soni S Kapoor K S Parmar and D G Kaskaoutis ldquoStatisti-cal analysis of aerosols over the gangetichimalayan region usingARIMA model based on long-term MODIS observationsrdquoAtmospheric Research vol 149 pp 174ndash192 2014

[22] A Ratnaweera S K Halgamuge and H C Watson ldquoSelf-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficientsrdquo IEEE Transactions on Evolu-tionary Computation vol 8 no 3 pp 240ndash255 2004

[23] X Yang J Yuan J Yuan and H Mao ldquoA modified particleswarm optimizer with dynamic adaptationrdquoAppliedMathemat-ics and Computation vol 189 no 2 pp 1205ndash1213 2007

[24] M S Arumugam andM Rao ldquoOn the improved performancesof the particle swarm optimization algorithms with adaptiveparameters cross-over operators and root mean square (RMS)variants for computing optimal control of a class of hybridsystemsrdquo Applied Soft Computing Journal vol 8 no 1 pp 324ndash336 2008

[25] B K Panigrahi V Ravikumar Pandi and S Das ldquoAdaptiveparticle swarm optimization approach for static and dynamiceconomic load dispatchrdquo Energy Conversion and Managementvol 49 no 6 pp 1407ndash1415 2008

[26] A Nickabadi M M Ebadzadeh and R Safabakhsh ldquoA novelparticle swarm optimization algorithm with adaptive inertiaweightrdquoApplied Soft Computing Journal vol 11 no 4 pp 3658ndash3670 2011

[27] X Jiang H Ling J Yan B Li and Z Li ldquoForecasting electricalenergy consumption of equipment maintenance using neuralnetwork and particle swarm optimizationrdquoMathematical Prob-lems in Engineering vol 2013 Article ID 194730 8 pages 2013

[28] J Chen Y Ding and K Hao ldquoThe bidirectional optimizationof carbon fiber production by neural network with a GA-IPSOhybrid algorithmrdquo Mathematical Problems in Engineering vol2013 Article ID 768756 16 pages 2013

[29] J Zhou Z Duan Y Li J Deng and D Yu ldquoPSO-based neuralnetwork optimization and its utilization in a boring machinerdquoJournal of Materials Processing Technology vol 178 no 1ndash3 pp19ndash23 2006

[30] M A Mohandes ldquoModeling global solar radiation using Parti-cle SwarmOptimization (PSO)rdquo Solar Energy vol 86 no 11 pp3137ndash3145 2012

[31] L F De Mingo Lopez N Gomez Blas and A Arteta ldquoTheoptimal combination grammatical swarm particle swarmoptimization and neural networksrdquo Journal of ComputationalScience vol 3 no 1-2 pp 46ndash55 2012

[32] A Yazgan and I H Cavdar ldquoA comparative study between LMSand PSO algorithms on the optical channel estimation for radioover fiber systemsrdquo Optik vol 125 no 11 pp 2582ndash2586 2014

[33] M Riedmiller and H Braun ldquoA direct adaptive me thodfor faster backpropagation learning the RPROP algorithmrdquoin Proceedings of the IEEE International Conference of NeuralNetworks E H Ruspini Ed pp 586ndash591 1993

[34] C Igel and M Husken ldquoEmpirical evaluation of the improvedRprop learning algorithmsrdquo Neurocomputing vol 50 pp 105ndash123 2003

[35] P G Zhang ldquoTime series forecasting using a hybrid ARIMAand neural network modelrdquo Neurocomputing vol 50 pp 159ndash175 2003

[36] L Aburto and R Weber ldquoImproved supply chain managementbased on hybrid demand forecastsrdquo Applied Soft ComputingJournal vol 7 no 1 pp 136ndash144 2007

[37] M Khashei and M Bijari ldquoA new hybrid methodology fornonlinear time series forecastingrdquoModelling and Simulation inEngineering vol 2011 Article ID 379121 5 pages 2011

[38] R A Yafee and M McGee An Introduction to Time SeriesAnalysis and Forecasting With Applications of SAS and SPSSAcademic Press New York NY USA 2000

[39] TS Shores Applied Linear Algebra and Matrix AnalysisSpringer 2007

[40] P J Brockwell and R A Davis Introduction to Time Series andForecasting Springer Berlin Germany 2nd edition 2002

[41] J A Freeman and D M Skapura Neural Networks AlgorithmsApplications and Programming Techniques Addison-Wesley1991

[42] R C Eberhart Y Shi and J Kennedy Swarm IntelligenceMorgan Kaufmann 2001

[43] Conaset 2014 httpwwwconasetcl[44] K Hipel and A McLeod Time Series Modelling of Water

Resources and Environmental Systems Elsevier 1994


The Scientific World Journal 9

Figure 13: MA-ANN-RPROP(9,10,1): (a) observed versus estimated values (weekly accidents over time in weeks) and (b) relative error (%).

Figure 14: Residual ACF over 20 lagged values: (a) MA-ANN-RPROP(9,10,1) and (b) HSVD-ANN-RPROP(9,11,1).

Table 3: Forecasting with ANN-RPROP.

            MA-ANN-RPROP    HSVD-ANN-RPROP
RMSE        0.0384          0.024
MAPE        12.25%          8.08%
GCV         0.0695          0.045
RE ±15%     81%             —
RE ±4%      —               96%

5.3.1. MA Smoothing. In this section the forecasting strategy is evaluated based on the model presented in Figure 1(a). The calibration executed in Section 5.1.1 is used for the neural network; then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 10 hidden nodes, and 1 output.
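As an illustration of how the lagged inputs and one-step-ahead targets of such an ANN(K, Q, 1) are formed, a minimal sketch follows (the function name is hypothetical, not from the paper):

```python
import numpy as np

def lagged_design(series, k):
    """Build a design matrix of K lagged values and one-step-ahead
    targets, as consumed by an autoregressive ANN(K, Q, 1)."""
    s = np.asarray(series, dtype=float)
    # each row holds the k values preceding the corresponding target
    X = np.array([s[i:i + k] for i in range(len(s) - k)])
    y = s[k:]  # the value one step ahead of each lag window
    return X, y
```

For a weekly series, `lagged_design(series, 9)` yields the nine-lag inputs used by the MA-ANN-RPROP(9,10,1) configuration.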

The evaluation executed in the testing stage is presented in Figures 13 and 14(a) and Table 3. The observed values versus the estimated values are illustrated in Figure 13(a), reaching good accuracy, while the relative error is presented in Figure 13(b), which shows that 81% of the points present an error lower than ±15%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(a). It shows that there are values significantly different from zero at the 95% confidence limit; for example, the three largest values are obtained at lags of 3, 4, and 7 weeks. Therefore, there is serial correlation in the residuals; this implies that the model MA-ANN-RPROP(9,10,1) is not recommended for future usage, and probably other explanatory variables should be added to the model.
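A residual-ACF check of this kind can be sketched as follows, using synthetic residuals in place of the model errors (the `acf` helper is illustrative, not the authors' code):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation function of a residual series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    # biased sample autocorrelation at lags 1..nlags
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(1, nlags + 1)])

rng = np.random.default_rng(0)
resid = rng.normal(size=200)        # stand-in for model residuals
r = acf(resid, nlags=20)
bound = 1.96 / np.sqrt(len(resid))  # 95% confidence limit for white noise
significant = np.abs(r) > bound     # lags showing serial correlation
```

If any lag falls outside the ±1.96/√N band, the residuals are not white noise and the model leaves structure unexplained, which is the criterion applied to Figure 14.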

The process was run 30 times, and the best result was reached in run 26, as shown in Figure 15(a); Figure 15(b) presents the RMSE metric for the best run.

5.3.2. HSVD Smoothing. In this section the forecasting strategy presented in Figure 1(b) is evaluated. The HSVD smoothing strategy is applied using the same calibration explained in Section 5.1.2, and then an ANN(K, Q, 1) is used with K = 9 inputs (lagged values), Q = 11 hidden nodes, and 1 output.

The evaluation executed in the testing stage is presented in Figures 16 and 14(b) and Table 3. The observed values versus the estimated values are illustrated in Figure 16(a), reaching good accuracy, while the relative error is presented in Figure 16(b), which shows that 96% of the points present an error lower than ±4%.

For the evaluation of the serial correlation of the model errors, the ACF is applied, whose values are presented in Figure 14(b). It shows that all the coefficients are inside the confidence limit and statistically equal to zero; therefore, there is no serial correlation in the model errors, and we can conclude that the proposed model HSVD-ANN-RPROP(9,11,1) efficiently explains the variability of the process. The process was run 30 times, and the best result was reached in run 21, as shown in Figure 17(a); Figure 17(b) presents the RMSE metric for the best run.

The results presented in Table 3 show that the higher accuracy is achieved with the model HSVD-ANN-RPROP(9,11,1), with a RMSE of 0.024 and a MAPE of 8.08%; 96% of the points have a relative error lower than ±4%.
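The RMSE and MAPE figures reported in Table 3 follow their standard definitions, which can be sketched as below (the paper's GCV metric is omitted; function names are ours):

```python
import numpy as np

def rmse(actual, pred):
    """Root mean squared error."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((actual - pred) ** 2)))

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    actual, pred = np.asarray(actual, float), np.asarray(pred, float)
    return float(np.mean(np.abs((actual - pred) / actual)) * 100.0)
```

Both metrics are computed over the testing stage; lower values indicate higher forecasting accuracy.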


Figure 15: MA-ANN-RPROP(9,10,1): (a) best fitness (RMSE) versus run number over 30 runs and (b) log(RMSE) versus iteration number for the best run (85 iterations).

Figure 16: HSVD-ANN-RPROP(9,11,1): (a) observed versus estimated values (weekly accidents over time in weeks) and (b) relative error (%).

Figure 17: HSVD-ANN-RPROP(9,11,1): (a) best fitness (RMSE) versus run number over 30 runs and (b) log(RMSE) versus iteration number for the best run (70 iterations).

Finally, Pitman's correlation test [44] is used to compare all forecasting models in a pairwise fashion. Pitman's test is equivalent to testing whether the correlation (Corr) between Υ and Ψ is significantly different from zero, where Υ and Ψ are defined by

Υ(n) = e1(n) + e2(n), n = 1, 2, ..., N_V, (19a)

Ψ(n) = e1(n) - e2(n), n = 1, 2, ..., N_V, (19b)

where e1 and e2 represent the one-step-ahead forecast errors of model 1 and model 2, respectively. The null hypothesis of equal accuracy is rejected at the 5% significance level if |Corr| > 1.96/√N_V.
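The test defined by (19a) and (19b) can be sketched as follows (the helper name is hypothetical; the sign convention means a negative Corr favors model 1, since Cov(Υ, Ψ) is proportional to Var(e1) - Var(e2)):

```python
import numpy as np

def pitman_corr(e1, e2):
    """Pitman's test: correlate the sum and difference of the
    one-step-ahead forecast errors of two competing models."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    upsilon = e1 + e2                         # (19a)
    psi = e1 - e2                             # (19b)
    corr = np.corrcoef(upsilon, psi)[0, 1]
    critical = 1.96 / np.sqrt(len(e1))        # 5% significance level
    return corr, abs(corr) > critical

# models with clearly different error variances: model 1 is better
rng = np.random.default_rng(1)
e1 = 0.1 * rng.normal(size=100)
e2 = rng.normal(size=100)
corr, significant = pitman_corr(e1, e2)
```

With 78 test points, 1.96/√78 gives the critical value 0.2219 used in Table 4.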

The evaluated correlations between Υ and Ψ are presented in Table 4.

Table 4: Pitman's correlation (Corr) for the pairwise comparison of the six models at the 5% significance level, with critical value 0.2219.

Models                   M2        M3        M4        M5        M6
M1 = HSVD-ARIMA          −0.9146   −0.9931   −0.9983   −0.9994   −0.9993
M2 = MA-ARIMA            —         −0.8676   −0.9648   −0.9895   −0.9887
M3 = HSVD-ANN-PSO        —         —         −0.6645   −0.8521   −0.8216
M4 = HSVD-ANN-RPROP      —         —         —         −0.5129   −0.4458
M5 = MA-ANN-RPROP        —         —         —         —         0.1623
M6 = MA-ANN-PSO          —         —         —         —         —

The results presented in Table 4 show that, statistically, the HSVD-ARIMA forecasting model is significantly superior to the rest of the models. The results are presented from left to right, where the first is the best model and the last is the worst.

6. Conclusions

In this paper two smoothing strategies for time series were proposed to improve forecasting accuracy. The first smoothing strategy is based on a moving average of order 3, while the second is based on the Hankel singular value decomposition. The strategies were evaluated with the time series of traffic accidents occurring in Valparaíso, Chile, from 2003 to 2012.

The estimation of the smoothed values was developed through three conventional models: ARIMA, an ANN based on PSO, and an ANN based on RPROP. The comparison of the six implemented models shows that the best model is HSVD-ARIMA, as it obtained the highest accuracy, with a MAPE of 0.26% and a RMSE of 0.00073, while the second best is MA-ARIMA, with a MAPE of 1.12% and a RMSE of 0.0034. On the other hand, the model with the lowest accuracy was MA-ANN-PSO, with a MAPE of 15.51% and a RMSE of 0.041. Pitman's test was executed to evaluate the differences in accuracy between the six proposed models, and the results show that statistically the forecasting model based on HSVD-ARIMA is significantly superior. Due to the high accuracy reached with the best model, in future work it will be applied to evaluate new time series of other regions and countries.
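The two smoothing strategies evaluated above can be sketched as follows. This is a minimal illustration under the assumption of a rank-1 reconstruction for HSVD; the paper's exact component extraction may differ:

```python
import numpy as np

def ma3(series):
    """Centered 3-point moving-average smoothing (order 3)."""
    s = np.asarray(series, dtype=float)
    return np.convolve(s, np.ones(3) / 3.0, mode="valid")

def hsvd_smooth(series, L):
    """Hankel-SVD smoothing sketch: embed the series in a Hankel
    matrix, keep the rank-1 SVD approximation, and average the
    anti-diagonals to recover a smoothed series."""
    s = np.asarray(series, dtype=float)
    K = len(s) - L + 1
    H = np.array([s[i:i + L] for i in range(K)])   # K x L Hankel matrix
    U, sv, Vt = np.linalg.svd(H, full_matrices=False)
    H1 = sv[0] * np.outer(U[:, 0], Vt[0])          # dominant component
    # anti-diagonal averaging back to a series of the original length
    out, cnt = np.zeros(len(s)), np.zeros(len(s))
    for i in range(K):
        out[i:i + L] += H1[i]
        cnt[i:i + L] += 1
    return out / cnt
```

Either smoothed series then feeds the second-stage ARIMA or ANN forecaster described in Section 2.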

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.

[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research Part A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, "Medium-term electric load forecasting using singular value decomposition," NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.

[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic-Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. De Mingo López, N. Gómez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.

[34] C. Igel and M. Hüsken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.

[35] P. G. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.

[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.

[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.

[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.

[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.

[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.

[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.

[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.

[43] CONASET, 2014, http://www.conaset.cl.

[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 10: Research Article Smoothing Strategies Combined with ...downloads.hindawi.com/journals/tswj/2014/152375.pdfResearch Article Smoothing Strategies Combined with ARIMA and Neural Networks

10 The Scientific World Journal

2 4 6 8 10 12 14 16 18 20 22 24 26 28 30003004005006007008009

Run number

Best

fitne

ss (R

MSE

)

(a)

5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85minus7

minus65minus6

minus55minus5

minus45minus4

minus35minus3

minus25

Iteration number

for b

est r

unLo

g(RM

SE)

(b)

Figure 15 MA-ANN-RPROP(9101) (a) run versus fitness for 85 iterations and (b) iterations number for the best run

4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 760

010203040506

Time (weeks)

Wee

kly

acci

dent

Actual valueEstimated value

(a)

4 8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76minus10

minus8minus6minus4minus2

02468

10

Time (week)Re

lativ

e err

or (

)

(b)

Figure 16 HSVD-ANN-RPROP(9111) (a) observed versus estimated and (b) relative error

2 4 6 8 10 12 14 16 18 20 22 24 26 28 300022002400260028

003003200340036

Run number

Best

fitne

ss (R

MSE

)

(a)

10 20 30 40 50 60 70minus11minus10

minus9minus8minus7minus6minus5minus4minus3minus2minus1

Iteration number

for b

est r

unLo

g(RM

SE)

(b)

Figure 17 HSVD-ANN(9111) (a) run versus fitness for 70 iterations and (b) iterations number for the best run

Finally Pitmanrsquos correlation test [44] is used to compareall forecasting models in a pairwise fashion Pitmanrsquos test isequivalent to testing if the correlation (Corr) between Υ andΨ is significantly different from zero where Υ and Ψ aredefined by

Υ = 1198901 (119899) + 1198902 (119899) 119899 = 1 2 119873V (19a)

Ψ = 1198901 (119899) minus 1198902 (119899) 119899 = 1 2 119873V (19b)

where 1198901 and 1198902 represent the one-step-ahead forecast errorfor model 1 and model 2 respectively The null hypothesis issignificant at the 5 significance level if |Corr| gt 196radic119873V

The evaluated correlations betweenΥ andΨ are presentedin Table 4

The results presented in Table 4 show that statisticallythere is a significant superiority of the HSVD-ARIMA fore-casting model regarding the rest of models The results arepresented from left to right where the first is the best modeland the last is the worst model

6 Conclusions

In this paper were proposed two strategies of time seriessmoothing to improve the forecasting accuracy The firstsmoothing strategy is based on moving average of order3 while the second is based on the Hankel singular valuedecomposition The strategies were evaluated with the time

The Scientific World Journal 11

Table 4 Pitmanrsquos correlation (Corr) for pairwise comparison six models at 5 of significance and the critical value 02219

Models M1 M2 M3 M4 M5 M6M1 =HSVD-ARIMA mdash minus09146 minus09931 minus09983 minus09994 minus09993M2 =MA-ARIMA mdash mdash minus08676 minus09648 minus09895 minus09887M3 =HSVD-ANN-PSO mdash mdash mdash minus06645 minus08521 minus08216M4 =HSVD-ANN-RPRO mdash mdash mdash mdash minus05129 minus04458M5 = ANN-RPROP mdash mdash mdash mdash mdash 01623M6 =MA-ANN-PSO mdash mdash mdash mdash mdash mdash

series of traffic accidents occurring in Valparaıso Chile from2003 to 2012

The estimation of the smoothed values was developedthrough three conventional models ARIMA an ANN basedon PSO and an ANN based on RPROP The comparisonof the six models implemented shows that the first bestmodel is HSVD-ARIMA as it obtained the major accuracywith a MAPE of 026 and a RMSE of 000073 while thesecond best is the model MA-ARIMA with a MAPE of112 and a RMSE of 00034 On the other hand the modelwith the lowest accuracy was MA-ANN-PSO with a MAPEof 1551 and a RMSE of 0041 Pitmanrsquos test was executedto evaluate the difference of the accuracy between the sixproposed models and the results show that statistically thereis a significant superiority of the forecasting model based onHSVD-ARIMA Due to the high accuracy reached with thebest model in future works it will be applied to evaluate newtime series of other regions and countries

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Grant CONICYT/FONDECYT/Regular 1131105 and by the DI-Regular project of the Pontificia Universidad Católica de Valparaíso.

References

[1] J. Abellán, G. López, and J. de Oña, "Analysis of traffic accident severity using decision rules via decision trees," Expert Systems with Applications, vol. 40, no. 15, pp. 6047–6054, 2013.

[2] L. Chang and J. Chien, "Analysis of driver injury severity in truck-involved accidents using a non-parametric classification tree model," Safety Science, vol. 51, no. 1, pp. 17–22, 2013.

[3] J. de Oña, G. López, R. Mujalli, and F. J. Calvo, "Analysis of traffic accidents on rural highways using Latent Class Clustering and Bayesian Networks," Accident Analysis and Prevention, vol. 51, pp. 1–10, 2013.

[4] M. Fogue, P. Garrido, F. J. Martinez, J. Cano, C. T. Calafate, and P. Manzoni, "A novel approach for traffic accidents sanitary resource allocation based on multi-objective genetic algorithms," Expert Systems with Applications, vol. 40, no. 1, pp. 323–336, 2013.

[5] M. A. Quddus, "Time series count data models: an empirical application to traffic accidents," Accident Analysis and Prevention, vol. 40, no. 5, pp. 1732–1741, 2008.

[6] J. J. F. Commandeur, F. D. Bijleveld, R. Bergel-Hayat, C. Antoniou, G. Yannis, and E. Papadimitriou, "On statistical inference in time series analysis of the evolution of road safety," Accident Analysis and Prevention, vol. 60, pp. 424–434, 2013.

[7] C. Antoniou and G. Yannis, "State-space based analysis and forecasting of macroscopic road safety trends in Greece," Accident Analysis and Prevention, vol. 60, pp. 268–276, 2013.

[8] W. Weijermars and P. Wesemann, "Road safety forecasting and ex-ante evaluation of policy in the Netherlands," Transportation Research Part A: Policy and Practice, vol. 52, pp. 64–72, 2013.

[9] A. García-Ferrer, A. de Juan, and P. Poncela, "Forecasting traffic accidents using disaggregated data," International Journal of Forecasting, vol. 22, no. 2, pp. 203–222, 2006.

[10] R. Gençay, F. Selçuk, and B. Whitcher, An Introduction to Wavelets and Other Filtering Methods in Finance and Economics, Academic Press, 2002.

[11] N. Abu-Shikhah and F. Elkarmi, "Medium-term electric load forecasting using singular value decomposition," Energy, vol. 36, no. 7, pp. 4259–4271, 2011.

[12] C. Sun and J. Hahn, "Parameter reduction for stable dynamical systems based on Hankel singular values and sensitivity analysis," Chemical Engineering Science, vol. 61, no. 16, pp. 5393–5403, 2006.

[13] H. Gu and H. Wang, "Fuzzy prediction of chaotic time series based on singular value decomposition," Applied Mathematics and Computation, vol. 185, no. 2, pp. 1171–1185, 2007.

[14] X. Weng and J. Shen, "Classification of multivariate time series using two-dimensional singular value decomposition," Knowledge-Based Systems, vol. 21, no. 7, pp. 535–539, 2008.

[15] N. Hara, H. Kokame, and K. Konishi, "Singular value decomposition for a class of linear time-varying systems with application to switched linear systems," Systems and Control Letters, vol. 59, no. 12, pp. 792–798, 2010.

[16] K. Kavaklioglu, "Robust electricity consumption modeling of Turkey using singular value decomposition," International Journal of Electrical Power & Energy Systems, vol. 54, pp. 268–276, 2014.

[17] W. X. Yang and P. W. Tse, "Medium-term electric load forecasting using singular value decomposition," NDT & E International, vol. 37, pp. 419–432, 2003.

[18] K. Kumar and V. K. Jain, "Autoregressive integrated moving averages (ARIMA) modelling of a traffic noise time series," Applied Acoustics, vol. 58, no. 3, pp. 283–294, 1999.

[19] J. Hassan, "ARIMA and regression models for prediction of daily and monthly clearness index," Renewable Energy, vol. 68, pp. 421–427, 2014.

[20] P. Narayanan, A. Basistha, S. Sarkar, and S. Kamna, "Trend analysis and ARIMA modelling of pre-monsoon rainfall data for western India," Comptes Rendus Geoscience, vol. 345, no. 1, pp. 22–27, 2013.

[21] K. Soni, S. Kapoor, K. S. Parmar, and D. G. Kaskaoutis, "Statistical analysis of aerosols over the Gangetic–Himalayan region using ARIMA model based on long-term MODIS observations," Atmospheric Research, vol. 149, pp. 174–192, 2014.

[22] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.

[23] X. Yang, J. Yuan, J. Yuan, and H. Mao, "A modified particle swarm optimizer with dynamic adaptation," Applied Mathematics and Computation, vol. 189, no. 2, pp. 1205–1213, 2007.

[24] M. S. Arumugam and M. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing Journal, vol. 8, no. 1, pp. 324–336, 2008.

[25] B. K. Panigrahi, V. Ravikumar Pandi, and S. Das, "Adaptive particle swarm optimization approach for static and dynamic economic load dispatch," Energy Conversion and Management, vol. 49, no. 6, pp. 1407–1415, 2008.

[26] A. Nickabadi, M. M. Ebadzadeh, and R. Safabakhsh, "A novel particle swarm optimization algorithm with adaptive inertia weight," Applied Soft Computing Journal, vol. 11, no. 4, pp. 3658–3670, 2011.

[27] X. Jiang, H. Ling, J. Yan, B. Li, and Z. Li, "Forecasting electrical energy consumption of equipment maintenance using neural network and particle swarm optimization," Mathematical Problems in Engineering, vol. 2013, Article ID 194730, 8 pages, 2013.

[28] J. Chen, Y. Ding, and K. Hao, "The bidirectional optimization of carbon fiber production by neural network with a GA-IPSO hybrid algorithm," Mathematical Problems in Engineering, vol. 2013, Article ID 768756, 16 pages, 2013.

[29] J. Zhou, Z. Duan, Y. Li, J. Deng, and D. Yu, "PSO-based neural network optimization and its utilization in a boring machine," Journal of Materials Processing Technology, vol. 178, no. 1–3, pp. 19–23, 2006.

[30] M. A. Mohandes, "Modeling global solar radiation using Particle Swarm Optimization (PSO)," Solar Energy, vol. 86, no. 11, pp. 3137–3145, 2012.

[31] L. F. De Mingo López, N. Gómez Blas, and A. Arteta, "The optimal combination: grammatical swarm, particle swarm optimization and neural networks," Journal of Computational Science, vol. 3, no. 1-2, pp. 46–55, 2012.

[32] A. Yazgan and I. H. Cavdar, "A comparative study between LMS and PSO algorithms on the optical channel estimation for radio over fiber systems," Optik, vol. 125, no. 11, pp. 2582–2586, 2014.

[33] M. Riedmiller and H. Braun, "A direct adaptive method for faster backpropagation learning: the RPROP algorithm," in Proceedings of the IEEE International Conference on Neural Networks, E. H. Ruspini, Ed., pp. 586–591, 1993.

[34] C. Igel and M. Hüsken, "Empirical evaluation of the improved Rprop learning algorithms," Neurocomputing, vol. 50, pp. 105–123, 2003.

[35] P. G. Zhang, "Time series forecasting using a hybrid ARIMA and neural network model," Neurocomputing, vol. 50, pp. 159–175, 2003.

[36] L. Aburto and R. Weber, "Improved supply chain management based on hybrid demand forecasts," Applied Soft Computing Journal, vol. 7, no. 1, pp. 136–144, 2007.

[37] M. Khashei and M. Bijari, "A new hybrid methodology for nonlinear time series forecasting," Modelling and Simulation in Engineering, vol. 2011, Article ID 379121, 5 pages, 2011.

[38] R. A. Yafee and M. McGee, An Introduction to Time Series Analysis and Forecasting: With Applications of SAS and SPSS, Academic Press, New York, NY, USA, 2000.

[39] T. S. Shores, Applied Linear Algebra and Matrix Analysis, Springer, 2007.

[40] P. J. Brockwell and R. A. Davis, Introduction to Time Series and Forecasting, Springer, Berlin, Germany, 2nd edition, 2002.

[41] J. A. Freeman and D. M. Skapura, Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, 1991.

[42] R. C. Eberhart, Y. Shi, and J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.

[43] Conaset, 2014, http://www.conaset.cl.

[44] K. Hipel and A. McLeod, Time Series Modelling of Water Resources and Environmental Systems, Elsevier, 1994.
