Hindawi Publishing Corporation
Computational and Mathematical Methods in Medicine
Volume 2013, Article ID 158056, 9 pages
http://dx.doi.org/10.1155/2013/158056
Research Article
Study on Parameter Optimization for Support Vector Regression in Solving the Inverse ECG Problem
Mingfeng Jiang,1 Shanshan Jiang,1 Lingyan Zhu,2 Yaming Wang,1 Wenqing Huang,1 and Heng Zhang1
1 School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China
2 The Dongfang College, Zhejiang University of Finance and Economics, Hangzhou 310018, China
Correspondence should be addressed to Mingfeng Jiang; mjiang@zstu.edu.cn
Received 9 May 2013; Revised 26 June 2013; Accepted 2 July 2013
Academic Editor: Kayvan Najarian
Copyright © 2013 Mingfeng Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The typical inverse ECG problem is to noninvasively reconstruct the transmembrane potentials (TMPs) from body surface potentials (BSPs). In this study, the inverse ECG problem is treated as a regression problem with multiple inputs (body surface potentials) and multiple outputs (transmembrane potentials), which can be solved by the support vector regression (SVR) method. In order to obtain an effective SVR model with optimal regression accuracy and generalization performance, the hyperparameters of SVR must be set carefully. Three different optimization methods, that is, the genetic algorithm (GA), the differential evolution (DE) algorithm, and particle swarm optimization (PSO), are proposed to determine the optimal hyperparameters of the SVR model. We investigate which one is the most effective in reconstructing the cardiac TMPs from BSPs, and a full comparison of their performances is provided. The experimental results show that all three optimization methods perform well in finding the proper parameters of SVR and can yield good generalization performance in solving the inverse ECG problem. Moreover, compared with DE and GA, the PSO algorithm is more efficient in parameter optimization and performs better in solving the inverse ECG problem, leading to a more accurate reconstruction of the TMPs.
1. Introduction
The inverse ECG problem is to obtain the myocardial transmembrane potential (TMP) distribution from body surface potentials (BSPs), thus noninvasively imaging the electrophysiological information on the cardiac surface [1, 2]. Generally, approaches to solving this inverse ECG problem rely on potential-based models, including epicardial, endocardial, or transmembrane potentials, which are used to evaluate the potential values on the cardiac surface [3] at certain time instants. Moreover, the cardiac electrophysiological information is closely associated with the transmembrane potentials (TMPs) of the myocardial cells. Compared with body surface potential (BSP) recordings, TMPs can provide more detailed and complicated electrophysiological information. In this study we focus on implementing the reconstruction of TMPs from BSPs.
To study inverse ECG problems, various numerical methods have been proposed. In the last decades, regularization methods have been employed to deal with the inherent ill-posedness. The regularization techniques include truncated total least squares (TTLS) [4], GMRes [5], and LSQR [6], which require an appropriate selection of regularization parameters so as to relax the ill-posedness of the inverse ECG problem and to produce a well-posed problem. However, the robustness and the quality of the inverse solution are not always guaranteed, despite the fact that these methods can more or less deal with the geometry and measurement noises of the inverse ECG problem. In addition, during the solution procedure, the inverse ECG problem can be treated as a regression problem with multiple inputs (BSPs) and multiple outputs (TMPs). Therefore, an alternative method, the support vector regression (SVR) method [7], was proposed to solve the inverse ECG problem. Compared with conventional
regularization methods, the SVR method can produce more accurate results in terms of reconstructing the transmembrane potential distributions on the epi- and endocardial surfaces [1, 8]. SVR is an extended version of the support vector machine (SVM) for solving nonlinear regression estimation problems; it aims at minimizing an upper bound of the generalization error instead of the training error [9] by adhering to the principle of structural risk minimization (SRM).
Although SVR is a powerful technique for solving nonlinear regression problems, it has received less attention in relation to inverse ECG problems, because the SVR algorithm is sensitive to its user-defined free parameters. The hyperparameters of the SVR model consist of the penalty parameter C, the insensitive loss function parameter ε, and the kernel function parameter σ. Inappropriate parameters in SVR can lead to overfitting or underfitting. Properly setting the hyperparameters is thus a major task, with a significant impact on the generalization performance and regression accuracy of SVR [10], especially when it comes to predicting the diagnosis of cardiac diseases, because even a slight improvement in prediction accuracy can have a significant impact on the patient's diagnosis [11, 12].
In the early development stage of the algorithm, a grid search method [13] and cross-validation [8] were employed to optimize the hyperparameters. However, these methods are computationally expensive and data intensive. Recently, a number of new algorithms have been proposed for the optimization of the SVR parameters [14, 15]. For example, Wang et al. [16] proposed a hybrid load forecasting model combining the differential evolution (DE) algorithm and support vector regression to deal with the problem of annual load forecasting. In the analysis of predicting tourism demand, the study [17] applies a genetic algorithm (GA) to seek the SVR's optimal parameters and then adopts the optimal parameters to construct the SVR models; the experimental results demonstrate that SVR outperforms the other two neural network techniques compared. Another study [18] tries a newer technique, particle swarm optimization (PSO), to automatically determine the parameters of SVR and then applies the hybrid model (PSO-SVR) to grid resource prediction; the experimental results indicate high predictive accuracy.
In this paper, the above-mentioned optimization algorithms (GA, DE, and PSO) are all adopted to dynamically optimize the hyperparameters of the SVR model in solving the inverse ECG problem. For convenience, the SVR model with GA parameter selection is referred to as the GA-SVR method, and the other two are termed DE-SVR and PSO-SVR, respectively. We investigate which one is the most effective in reconstructing the cardiac TMPs from BSPs, and a full comparison of their performance in solving the inverse ECG problem is provided.
2. Theory and Methodology
2.1. Brief Overview of the SVR. In this section the basic SVR concepts are concisely described; for a detailed description please see [19, 20]. Suppose a given training set of N elements {(x_i, y_i), i = 1, 2, ..., N}, where x_i denotes the ith element in n-dimensional space, that is, x_i = {x_{1i}, ..., x_{ni}} \in R^n, and y_i \in R is the output value corresponding to x_i. In mathematical notation, the SVR algorithm builds the linear regression function of the following form:

    f(x, \omega) = \omega \cdot \varphi(x) + b, \quad \varphi: R^n \to F, \; \omega \in F,    (1)

where \omega and b are the slope and offset coefficients and \varphi(x) denotes the high-dimensional feature space which is nonlinearly mapped from the input space x. The previous regression problem is equivalent to the following convex optimization problem [see (2)]:

    \min \; \frac{1}{2}\|\omega\|^2
    subject to \; y_i - \omega\varphi(x_i) - b \le \varepsilon,
               \omega\varphi(x_i) + b - y_i \le \varepsilon.    (2)
In this equation an implicit assumption is that a function f essentially approximates all pairs (x_i, y_i) with \varepsilon precision, but sometimes this may not be the case. Therefore, by introducing two additional positive slack variables \xi_i and \xi_i^*, the minimization is reformulated as the following constrained optimization problem [shown in (3)]:

    \min \; R(\omega, \xi, \xi^*) = \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{N} (\xi_i + \xi_i^*)
    s.t. \; y_i - (\omega \cdot \varphi(x_i)) - b \le \varepsilon + \xi_i^*,
         (\omega \cdot \varphi(x_i)) + b - y_i \le \varepsilon + \xi_i,
         \xi_i, \xi_i^* \ge 0, \quad i = 1, 2, ..., N, \quad \varepsilon \ge 0,    (3)
where the parameter C is the regulator, determined by the user; it controls a tradeoff between the approximation error and the weight vector norm \|\omega\|. The slack variables \xi_i and \xi_i^* represent the distance from the actual values to the corresponding boundary values of the \varepsilon-tube.
According to the strategy outlined by Schölkopf and Smola [10], by applying Lagrangian theory and the KKT condition, the constrained optimization problem can be further restated as the following equation:

    f(x, \omega) = \sum_{i=1}^{N} (\alpha_i - \alpha_i^*) K(x_i, x) + b,
    s.t. \; \sum_{i=1}^{N} (\alpha_i - \alpha_i^*) = 0.    (4)
Here \alpha_i and \alpha_i^* are the Lagrange multipliers, and the term K(x_i, x) is defined as the kernel function. Nonlinearly separable cases can easily be transformed to linear cases by mapping the original variables into a new high-dimensional feature space using K(x_i, x). The radial basis function (RBF) kernel, which has the ability to universally approximate any distribution in the feature space, was applied in this study; with an appropriate parameter, the RBF kernel usually provides better prediction performance, as shown in (5):
    K(x_i, x_j) = \exp\left( -\frac{\|x_i - x_j\|^2}{2\sigma^2} \right),    (5)

where x_i and x_j are input vectors and \sigma^2 is the bandwidth of the kernel function.
In the above equations there exist three hyperparameters to be determined in advance, that is, the penalty parameter C, the insensitive parameter \varepsilon, and the kernel parameter \sigma^2. They heavily affect the regression accuracy and computational complexity of SVR. The penalty parameter C controls the degree to which samples whose errors exceed the given value are punished. The insensitive parameter \varepsilon controls the width of the \varepsilon-insensitive zone used to fit the training data; a suitable value of \varepsilon can enhance the generalization capability, and as \varepsilon increases, the number of support vectors decreases and the computational complexity is reduced. The bandwidth \sigma of the kernel function has a great influence on the performance of the learning machine.
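As a quick illustration of (5), a minimal sketch of the RBF kernel and of how the bandwidth \sigma^2 controls similarity (the function and variable names are ours, not from the paper):

```python
import numpy as np

def rbf_kernel(x_i, x_j, sigma2):
    """RBF kernel of Eq. (5): exp(-||x_i - x_j||^2 / (2*sigma^2))."""
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    return np.exp(-np.sum((x_i - x_j) ** 2) / (2.0 * sigma2))
```

A larger bandwidth makes distant points look more similar, which smooths the regression surface; a very small bandwidth lets the model fit every training sample and risks overfitting.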
In this study, three optimization methods, that is, GA, DE, and PSO, are presented to determine the optimal hyperparameters of the SVR model. According to [15], general ranges of C, \sigma^2, and \varepsilon are given; in trial runs we narrowed them to avoid blindness in the optimization process. In this paper, the set of hyperparameters (C, \sigma^2, \varepsilon) is initialized in the ranges C \in [0, 10000], \sigma^2 \in [0, 2], and \varepsilon \in [0, 0.0001], within which the optimization methods (GA, DE, and PSO) seek the global optimal solutions.
In order to calculate the error uniformly, we adopt the same fitness function, which plays a critical role in measuring the performance of these algorithms. The fitness function is defined as follows:

    \min f = \frac{\sum_{i=1}^{n_t} (|a_i - p_i| / a_i)}{n_t} \times 100,    (6)

where n_t is the number of training data samples, a_i is the actual TMPs of the training data, and p_i is the predicted TMPs. A solution with a smaller fitness f on the training dataset has a better chance of surviving into the successive generations. The LIBSVM toolbox was used for training and validating the SVR model [21].
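The fitness of (6) is a mean absolute percentage error. A minimal sketch, with the hypothetical helper name `fitness` (we divide by |a_i| so the measure stays positive for negative TMP values, a detail (6) leaves implicit):

```python
import numpy as np

def fitness(actual, predicted):
    """Eq. (6): mean of |a_i - p_i| / |a_i| over the training set, times 100."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs(actual - predicted) / np.abs(actual)) * 100.0
```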
2.2. Genetic Algorithm (GA) Optimization Method. The GA [22] is a biologically motivated optimization technique guided by the concepts of biological evolution and Darwin's principle of survival of the fittest; it is a computer model of the evolution of a population of artificial individuals. In this study, for the specific problem of optimizing the hyperparameters (C, \sigma^2, \varepsilon), the process is defined as follows.
Step 1 (initialize population). The population size NP is 20 and the dimension of the parameter vectors D is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001. The probabilities of selection (stochastic universal sampling), crossover (multipoint crossover), and mutation (mutation operator of the Breeder Genetic Algorithm) used herein are 0.9, 0.8, and 0.05, respectively.
Step 2 (encode chromosomes). According to the possible ranges of the parameters C, \sigma^2, and \varepsilon given above, the GA uses binary encoding, and each parameter is encoded by 20 bits. The search space is therefore the solution space in which each potential solution is encoded as a distinct chromosome.
Step 3. The parameters C, \sigma^2, and \varepsilon of each individual are used to build the SVR model. With the BSPs of the training data, the cardiac TMPs are reconstructed. Then the performance of each individual in the generation is evaluated by the fitness function of (6). The individual with the minimum fitness value is selected, and the chromosome of the selected individual is preserved as the best result.

Step 4. The basic genetic search operators, comprising selection, mutation, and crossover, are applied in order to obtain a new generation, in which the new individual (C, \sigma^2, \varepsilon) with the best performance is retained.
Step 5. The new best fitness value is compared to that of the best result, and the better one is kept as the best result.
Step 6. The process repeats from Step 4 until the termination criterion is met, at which point the best chromosome is presented as the final solution.
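The six steps above can be sketched as a loop. This is a simplified, real-coded illustration under our own assumptions (tournament selection and one-point crossover in place of the paper's stochastic universal sampling, multipoint crossover, BGA mutation, and 20-bit binary encoding); it shows the loop structure only, not the exact operators:

```python
import random

# Hyperparameter bounds from the paper: C, sigma^2, epsilon.
BOUNDS = [(0.0, 10000.0), (0.0, 2.0), (0.0, 1e-4)]

def ga_optimize(fitness, pop_size=20, generations=30,
                p_cross=0.8, p_mut=0.05, tol=1e-3, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    best = min(pop, key=fitness)[:]          # Step 3: keep the fittest so far
    for _ in range(generations):
        def pick():                          # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b
        children = []
        while len(children) < pop_size:      # Step 4: selection/crossover/mutation
            p1, p2 = pick(), pick()
            child = p1[:]
            if rng.random() < p_cross:
                cut = rng.randrange(1, len(BOUNDS))
                child = p1[:cut] + p2[cut:]
            for j, (lo, hi) in enumerate(BOUNDS):
                if rng.random() < p_mut:
                    child[j] = rng.uniform(lo, hi)
            children.append(child)
        pop = children
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):    # Step 5: keep the better one
            best = cand[:]
        if fitness(best) <= tol:             # Step 6: tolerance termination
            break
    return best
```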
2.3. Differential Evolution (DE) Optimization Method. Differential evolution [23] is a population-based, parallel direct search method used to approximate the global optimal solution. As mentioned above, the optimization of the hyperparameters (C, \sigma^2, \varepsilon) can be transformed into minimizing the fitness function \min f(x), where x = \{x_{i,G}, i = 1, 2, ..., NP\} are three-dimensional parameter vectors (C, \sigma^2, \varepsilon). Each generation then evolves by employing the evolutionary operators of mutation, crossover, and selection to produce a trial vector for each individual vector [24]; the detailed evolutionary strategy can be described as follows.
Step 1 (initialize DE parameters). The population size NP is set to 20 [25] and the dimension of the parameter vectors D is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001. The mutation factor F is selected in [0.5, 1] and the crossover rate CR in [0, 1].
Step 2 (initialize population). Set G = 0. Generate an NP × D population consisting of individuals (C, \sigma^2, \varepsilon) with uniformly distributed random values within the control variable bounds of the parameter space.

Step 3. With these parameters, train the SVR model and forecast the training datasets. Then calculate the fitness of all individuals in the generation, record the minimum value of the fitness function, and set the corresponding (C, \sigma^2, \varepsilon) as x_best.
Step 4 (mutation operation). For each target vector x_{i,G}, an associated mutant vector is generated according to

    v_{i,G+1} = x_best + F[(x_{r1,G} - x_{r2,G}) + (x_{r3,G} - x_{r4,G})],    (7)

where the random indexes r_1, r_2, r_3, and r_4 are mutually distinct and also distinct from the current trial index i.
Step 5 (crossover operation). In order to increase the diversity of the perturbed parameter vectors, crossover is introduced by the following formula:

    u_{j,i,G+1} = v_{j,i,G+1},  if randb(j) \le CR or j = rnbr(i),
    u_{j,i,G+1} = x_{j,i,G},    if randb(j) > CR and j \ne rnbr(i),    (8)

where u_{i,G+1} = (u_{1,i,G+1}, u_{2,i,G+1}, u_{3,i,G+1}), randb(j) \in [0, 1] is the jth evaluation of a uniform random number generator, and rnbr(i) is a randomly chosen integer in [1, 3] that ensures u_{i,G+1} gets at least one element from v_{i,G+1}.
Step 6 (selection operation). After the fitness values of the new generation are calculated, the new best individual u_{best,G+1} is selected and the selection operation is performed. To decide whether it replaces x_best, the fitness value f(u_{best,G+1}) is compared to f(x_best):

    x_best = u_{best,G+1},  if f(u_{best,G+1}) \le f(x_best),
    x_best = x_best,        if f(u_{best,G+1}) > f(x_best).    (9)
Step 7 (termination check). If the maximum number of iterations G is reached or the fitness value f(x_best) falls within the fitness tolerance, return the recorded global optimal parameters (C, \sigma^2, \varepsilon); otherwise, go to Step 4.
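Steps 1–7 can be sketched compactly. This is an illustrative DE/best/2 variant following (7)–(9), under our own assumptions (greedy per-individual selection, a common DE variant, combined with the x_best tracking of Step 6; the function and parameter names are ours):

```python
import random

# Hyperparameter bounds from the paper: C, sigma^2, epsilon.
BOUNDS = [(0.0, 10000.0), (0.0, 2.0), (0.0, 1e-4)]

def de_optimize(fitness, np_size=20, iters=30, F=0.7, CR=0.9, tol=1e-3, seed=0):
    rng = random.Random(seed)
    D = len(BOUNDS)
    def clip(v, j):
        lo, hi = BOUNDS[j]
        return min(max(v, lo), hi)
    pop = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(np_size)]
    best = min(pop, key=fitness)[:]                  # Step 3: x_best
    for _ in range(iters):
        for i in range(np_size):
            idx = [k for k in range(np_size) if k != i]
            r1, r2, r3, r4 = rng.sample(idx, 4)      # distinct indexes (Eq. 7)
            jrand = rng.randrange(D)                 # rnbr(i): force one mutant gene
            trial = []
            for j in range(D):
                if rng.random() <= CR or j == jrand:  # crossover (Eq. 8)
                    v = best[j] + F * ((pop[r1][j] - pop[r2][j])
                                       + (pop[r3][j] - pop[r4][j]))
                    trial.append(clip(v, j))
                else:
                    trial.append(pop[i][j])
            if fitness(trial) <= fitness(pop[i]):    # greedy selection
                pop[i] = trial
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):            # Step 6 (Eq. 9)
            best = cand[:]
        if fitness(best) <= tol:                     # Step 7
            break
    return best
```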
2.4. Particle Swarm Optimization (PSO) Method. Particle swarm optimization (PSO) is a bioinspired stochastic optimization technique, originally developed by simulating the social behaviour of birds and insects that survive by maintaining swarm actions. A particle is considered a bird in a swarm consisting of a number of birds, and all particles fly through the search space by following the current optimum particle to find the final optimum solution of the optimization problem [26]. The detailed experimental procedure for selecting the best hyperparameters (C, \sigma^2, \varepsilon) is listed as follows.
Step 1 (particle initialization and PSO parameter setting [27]). Set the PSO parameters, including the ranges of C, \sigma^2, and \varepsilon; the number of particles is 20 and the particle dimension is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001.
Step 2. Randomly generate a primal population of random particles (C, \sigma^2, \varepsilon) and velocities inside the search space.
Step 3. Construct the SVR model with the parameters in each particle and perform the prediction with the training data. Then evaluate the regression accuracy based on the defined fitness function, and record the best performance, with the minimum error, as the global best solution.
Step 4 (particle manipulation). Suppose that X_i(k) and V_i(k) are the particle and velocity at step k; they are updated to X_i(k+1) and V_i(k+1) at step k+1 by applying the two equations below:

    V_i(k+1) = \omega V_i(k) + c_1 r_1 (p_i(k) - X_i(k)) + c_2 r_2 (p_g(k) - X_i(k)),
    X_i(k+1) = X_i(k) + V_i(k+1) \Delta t,    (10)

where V_i \in [V_{i,min}, V_{i,max}], \Delta t is the unit time step, and \omega is the inertia weight; p_i(k) and p_g(k) are the best coordinates of particle i and of the whole swarm up to step k, respectively. The positive constants c_1 and c_2 are learning rates, while r_1 and r_2 are random numbers between 0 and 1.
Step 5. Train the SVR model using the particles (C, \sigma^2, \varepsilon) in the new generation and forecast the training datasets. After evaluating the fitness function, pick the new best particle as the local best.
Step 6. Compare the local best and the global best, and choose the better one to update the global best.
Step 7 (stopping condition). If a termination criterion (the number of iterations) is met or the fitness value of the global best falls within the predefined tolerance, return the recorded global best as the best parameters (C, \sigma^2, \varepsilon); otherwise, go to Step 4.
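The PSO loop of Steps 1–7 and the update rule (10) can be sketched as follows (with \Delta t = 1; the inertia weight and learning rates here are illustrative choices, not values reported in the paper):

```python
import random

# Hyperparameter bounds from the paper: C, sigma^2, epsilon.
BOUNDS = [(0.0, 10000.0), (0.0, 2.0), (0.0, 1e-4)]

def pso_optimize(fitness, n_particles=20, iters=30,
                 w=0.7, c1=1.5, c2=1.5, tol=1e-3, seed=0):
    rng = random.Random(seed)
    D = len(BOUNDS)
    X = [[rng.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_particles)]
    V = [[0.0] * D for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # per-particle best p_i
    gbest = min(pbest, key=fitness)[:]        # swarm best p_g
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(D):
                r1, r2 = rng.random(), rng.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))   # Eq. (10), dt = 1
                lo, hi = BOUNDS[j]
                X[i][j] = min(max(X[i][j] + V[i][j], lo), hi)
            if fitness(X[i]) < fitness(pbest[i]):  # Step 5: local best
                pbest[i] = X[i][:]
        cand = min(pbest, key=fitness)
        if fitness(cand) < fitness(gbest):         # Step 6: global best
            gbest = cand[:]
        if fitness(gbest) <= tol:                  # Step 7
            break
    return gbest
```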
2.5. Simulation Protocol and Data Set. The SVR model is tested with our previously developed realistic heart-torso model [11, 12]. In this simulation protocol, a normal ventricular excitation is used as an example to generate the data set for the SVR model. The considered ventricular excitation period, from the first breakthrough to the end, is 357 ms with a time step of 1 ms, so 358 temporal data sets of BSPs \varphi_B and TMPs \varphi_m are calculated. Sixty data sets (\varphi_B and \varphi_m), chosen at 3 ms, 9 ms, 15 ms, ..., 357 ms after the first ventricular breakthrough, are used as testing samples to assess the prediction capability and robustness of the SVR models. The remaining 298 of the 358 data sets are employed as training samples for the training and optimal parameter selection procedures. Moreover, during each ventricular excitation period, 412 potentials on the body surface and 478 potentials on the heart surface are captured. That is to say, the matrix of BSPs is divided into training data (298 × 412) and testing data (60 × 412), and the matrix of TMPs is likewise divided into training data (298 × 478) and testing data (60 × 478). Each SVR regression model is built using the mapping between the BSPs training data (298 × 412) and one TMPs training point (298 × 1). Based on all the training data, 478 different regression models were built using GA-SVR, DE-SVR, and PSO-SVR, respectively. Using the corresponding SVR model, we can reconstruct the corresponding TMPs (60 × 1) from the BSPs testing data (60 × 412); for all testing samples, the TMPs are reconstructed from all 478 regression SVR models.
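The per-node modeling scheme above can be sketched as follows. scikit-learn's SVR stands in for LIBSVM here (sklearn wraps LIBSVM internally); the random arrays are placeholders for the real BSP/TMP matrices, the hyperparameter values are arbitrary, and only 5 of the 478 node models are fitted to keep the demo short:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
bsp_train = rng.random((298, 412))   # placeholder BSPs training data
tmp_train = rng.random((298, 478))   # placeholder TMPs training data
bsp_test = rng.random((60, 412))     # placeholder BSPs testing data

n_demo_nodes = 5                     # 478 in the full experiment
models = []
for node in range(n_demo_nodes):
    # (C, gamma = 1/(2*sigma^2), epsilon) would come from GA/DE/PSO
    m = SVR(kernel="rbf", C=100.0, gamma=0.5, epsilon=1e-5)
    m.fit(bsp_train, tmp_train[:, node])   # one model per TMP source point
    models.append(m)

# Each model reconstructs one TMP column (60 x 1) from the BSP test data.
tmp_pred = np.column_stack([m.predict(bsp_test) for m in models])
```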
3. The Proposed System Framework

3.1. Overview of the SVR Parameter Optimization Framework. As shown in Figure 1, in the first stage the original input data is preprocessed by scaling the training data and by feature extraction. In the second stage, the hyperparameters (C, \sigma^2, \varepsilon) of the SVR method are set carefully by using the GA, DE, and PSO optimization methods; GA-SVR, DE-SVR, and PSO-SVR are then applied to construct the respective hybrid SVR models. Finally, using the effective hybrid SVR models, we reconstruct the transmembrane potentials (TMPs) from the body surface potentials (BSPs) for the testing samples.
3.2. Preprocessing the Data Set
3.2.1. Scaling the Training Set. During the preprocessing stage, each input variable is scaled in order to avoid potential values spanning a great numerical range and to prevent numerical difficulties. All input values, including the 358 BSP and TMP temporal data sets, are linearly scaled to the range (0, 1) by the following formula [see (11)]:

    \varphi'_t = \frac{\varphi_t - \varphi_{t,min}}{\varphi_{t,max} - \varphi_{t,min}},    (11)

where \varphi_t is the original potential value at time t, \varphi'_t is the scaled potential value, and \varphi_{t,max} and \varphi_{t,min} are the maximum and minimum potential values at time t.
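A minimal sketch of (11) (the helper name is ours), scaling each time sample by its own minimum and maximum over the recording sites:

```python
import numpy as np

def minmax_scale(phi):
    """Eq. (11): linearly scale each time sample (row) to the range (0, 1)."""
    phi = np.asarray(phi, dtype=float)
    lo = phi.min(axis=1, keepdims=True)   # phi_{t,min} per time sample
    hi = phi.max(axis=1, keepdims=True)   # phi_{t,max} per time sample
    return (phi - lo) / (hi - lo)
```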
3.3. Feature Extraction by Using KPCA. Kernel principal component analysis (KPCA) is a nonlinear feature extraction method, a nonlinear variant of PCA developed by generalizing the kernel method to PCA. Specifically, KPCA first maps the original inputs into a high-dimensional feature space F using the kernel method and then performs PCA in F. The detailed theory of KPCA can be found in [28].

Feature extraction is a significant preprocessing task on the original input data when developing an effective SVR model. As investigated in our previous study [12], SVR with KPCA feature extraction performs better than SVR with PCA or the single SVR method in reconstructing the TMPs. In this paper, KPCA is therefore used to implement the feature extraction, and the dimension of the BSPs dataset is reduced from 412 to 200, which reduces the input dimensionality and improves the generalization performance of the SVR method.

[Figure 1: The framework of the proposed parameter optimization method for the SVR method in solving the inverse ECG problem. BSPs are scaled and passed through KPCA feature extraction; parameters are selected by GA, DE, and PSO to build the GA-SVR, DE-SVR, and PSO-SVR models, which reconstruct the TMPs.]

[Figure 2: The temporal course of the TMPs (mV) for one representative source point on the epicardium (50th) over the 60 testing times. The TMPs reconstructed with the GA-SVR, DE-SVR, and PSO-SVR methods are all compared with the original TMPs.]
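This preprocessing step can be sketched under our own parameter choices (scikit-learn's KernelPCA; the kernel width `gamma` is an illustrative value, and the random arrays are placeholders for the real BSP matrices), reducing the 412-dimensional BSP inputs to 200 components:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
bsp_train = rng.random((298, 412))   # placeholder BSPs training data
bsp_test = rng.random((60, 412))     # placeholder BSPs testing data

# Fit KPCA on the training BSPs, then project both training and testing data.
kpca = KernelPCA(n_components=200, kernel="rbf", gamma=1e-3)
train_feat = kpca.fit_transform(bsp_train)
test_feat = kpca.transform(bsp_test)
```

The reduced features, rather than the raw 412-lead BSPs, are then fed to the SVR models.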
4. Results and Discussion

4.1. The Result of Parameter Selection. In this study, the GA, DE, and PSO algorithms were used to seek the corresponding optimal hyperparameters (C, \sigma^2, \varepsilon) of the 478 SVR models. Table 1 displays the average values and standard deviations of the three parameters in the 478 SVR models obtained using the GA, DE, and PSO optimization algorithms. It shows that the gaps among the optimal parameters found by the three optimization methods are obvious. The standard deviations of \varepsilon are neglected because they are very small.

[Figure 3: The TMP distribution on the ventricular surface at two sequential testing time points: (a) the comparison of reconstructed TMP distributions at 27 ms, and (b) at 51 ms. The upper row shows the TMPs from an anterior view and the lower from a posterior view. In each row, the first figure is the original TMPs and the remaining three are the TMPs reconstructed by the GA-SVR, DE-SVR, and PSO-SVR methods. Color scale: −90 to 10 mV.]
4.2. Comparison of Reconstruction Accuracy. This study compares the reconstruction performance of the three intelligent optimization algorithms for selecting the hyperparameters of SVR. Using the GA-SVR, DE-SVR, and PSO-SVR hybrid models, two types of experiments for reconstructing the TMPs were conducted. The first experiment reconstructs the TMPs for one representative source point (the 50th point) on the epicardium over the 60 testing times, as depicted in Figure 2. Since the gaps among these three methods are slight, a statistical analysis was added to validate the results: the mean values and standard deviations of the 60 testing errors between the simulated and reconstructed TMPs obtained with the GA-SVR, DE-SVR, and PSO-SVR methods are shown in Table 2. From Figure 2 and Table 2, PSO-SVR outperforms the GA-SVR and DE-SVR methods in reconstructing the TMPs for one representative source point over all testing times; furthermore, DE-SVR is superior to GA-SVR.

Table 1: The averages and standard deviations of the optimal hyperparameters (C, \sigma^2, \varepsilon) found by the GA, DE, and PSO optimization algorithms.

    Parameter | GA               | DE               | PSO
    C         | 1447.61 ± 594.32 | 7624.14 ± 913.67 | 5.31 ± 3.02
    \sigma^2  | 0.64 ± 0.55      | 1.59 ± 1.38      | 1.22 ± 1.02
    \varepsilon | 1.48e−05       | 1.80e−05         | 1.98e−05

Table 2: The mean values and standard deviations of the 60 testing errors between the simulated and reconstructed TMPs obtained with the GA-SVR, DE-SVR, and PSO-SVR methods at one representative source point (the 50th point on the epicardium).

    Index              | GA-SVR | DE-SVR | PSO-SVR
    Mean value         | 2.96   | 2.69   | 1.55
    Standard deviation | 1.92   | 1.67   | 1.03
The second experiment reconstructs the TMPs on all the heart surface points at one time instant. These inverse ECG solutions are shown in Figure 3, in which two sequential testing time points (27 and 51 ms after the first ventricular breakthrough) are presented to illustrate the performances of the GA-SVR, DE-SVR, and PSO-SVR methods. Among the three parameter optimization methods, the hybrid PSO-SVR performs better than the GA-SVR and DE-SVR methods, because its reconstructed solutions are much closer to the simulated TMP distributions.
Based on the simulated TMPs, we evaluate the accuracy of the TMPs reconstructed by the three intelligent optimization algorithms at the testing times by the mean square error (MSE), the relative error (RE), and the correlation coefficient (CC):

    MSE = \frac{1}{n} \sum_{i=1}^{n} [(\varphi_c)_i - (\varphi_e)_i]^2,
    RE = \frac{\|\varphi_c - \varphi_e\|}{\|\varphi_e\|},
    CC = \frac{\sum_{i=1}^{n} [(\varphi_c)_i - \bar{\varphi}_c][(\varphi_e)_i - \bar{\varphi}_e]}{\|\varphi_c - \bar{\varphi}_c\| \, \|\varphi_e - \bar{\varphi}_e\|}.    (12)

Here \varphi_e is the simulated TMP distribution at time t and \varphi_c is the reconstructed TMPs; \bar{\varphi}_e and \bar{\varphi}_c are the mean values of \varphi_e and \varphi_c over the whole ventricular surface nodes at time t, and n is the number of nodes on the ventricular surface.

[Figure 4: The performances of the reconstructed TMPs over 60 sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively: (a) the mean square error (MSE), (b) the relative errors (RE), and (c) the correlation coefficient (CC) of the reconstructed TMPs.]
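The three metrics of (12) can be sketched directly (function names are ours; `phi_c` is the reconstructed and `phi_e` the simulated TMP distribution at one time instant):

```python
import numpy as np

def mse(phi_c, phi_e):
    """Mean square error over the n ventricular surface nodes."""
    phi_c, phi_e = np.asarray(phi_c, float), np.asarray(phi_e, float)
    return np.mean((phi_c - phi_e) ** 2)

def re(phi_c, phi_e):
    """Relative error: ||phi_c - phi_e|| / ||phi_e||."""
    phi_c, phi_e = np.asarray(phi_c, float), np.asarray(phi_e, float)
    return np.linalg.norm(phi_c - phi_e) / np.linalg.norm(phi_e)

def cc(phi_c, phi_e):
    """Correlation coefficient between reconstructed and simulated TMPs."""
    phi_c, phi_e = np.asarray(phi_c, float), np.asarray(phi_e, float)
    dc, de = phi_c - phi_c.mean(), phi_e - phi_e.mean()
    return np.dot(dc, de) / (np.linalg.norm(dc) * np.linalg.norm(de))
```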
The MSE, RE, and CC of the three sets of reconstructed TMPs over the 60 testing samples are shown in Figure 4. In contrast to GA-SVR and DE-SVR, the PSO-SVR method yields better results, with lower RE and MSE and a higher CC. To validate the robustness of the three methods in reconstructing the TMPs, the quantities m-MSE, m-RE, and m-CC, that is, the mean values of MSE, RE, and CC over the testing time instants, are presented. Moreover, std-MSE, std-RE, and std-CC, that is, the standard deviations of MSE, RE, and CC over the whole testing set, are also investigated. As shown in Figure 5, the mean values of MSE, RE, and CC over the 60 testing samples obtained by the GA-SVR, DE-SVR, and PSO-SVR methods are presented, together with the standard deviations of those 60 MSEs, REs, and CCs.
4.3. Comparison of Reconstruction Efficiency. In solving the inverse ECG problem, seeking the optimal parameters in the given range is time consuming. Therefore, devising a relatively time-saving optimization method matters significantly for practical applications, particularly the diagnosis of cardiac disease. Here we report the mean time taken by the three optimization methods to select the optimal hyperparameters of SVR. In order to forecast the 478 TMPs over 60 testing samples on the epi- and endocardial surfaces, 478 sets of parameters (C, σ², ε) have to be selected by each optimization method, and 478 models are built accordingly and applied to the testing data. In this study, the mean time for selecting the optimal parameters by GA, DE, and PSO is 20827 s, 34253 s, and 13016 s, respectively. It can be seen that PSO converges more quickly and consumes less time than GA and DE. Therefore, according to this comparison of efficiency, the PSO-SVR method significantly outperforms GA-SVR and DE-SVR in the
Figure 5: The m-MSE, m-RE, m-CC, std-MSE, std-RE, and std-CC over 60 testing sampling times by using GA-SVR, DE-SVR, and PSO-SVR, respectively, in terms of mean ± standard deviation: (a) m-MSE and std-MSE; (b) m-RE and std-RE; (c) m-CC and std-CC.
reconstruction of the TMPs and is the most efficient of the three optimization methods.
5. Conclusion

In the study of the inverse ECG problem, the SVR method is a powerful technique for solving nonlinear regression problems and can serve as a promising tool for performing the inverse reconstruction of the TMPs. This study introduces three optimization methods (GA, DE, and PSO) to determine the hyperparameters of the SVR model and utilizes these models to reconstruct the TMPs from the remote BSPs. Feature extraction using KPCA is also adopted to preprocess the original input data. The experimental results demonstrate that PSO-SVR is a relatively effective method in terms of accuracy and efficiency. By comparison, the PSO-SVR method is superior to the GA-SVR and DE-SVR methods, and its reconstructed TMPs are closest to the simulated TMPs. Besides, PSO-SVR is much more efficient in determining the optimal parameters and in building the predicting model. According to these results, the PSO-SVR method can serve as a promising tool for solving the nonlinear regression problem of the inverse ECG problem.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (30900322, 61070063, and 61272311). This work is also supported by the Research Foundation of the Education Bureau of Zhejiang Province under Grant no. Y201122145.
References
[1] M. Jiang, J. Lv, C. Wang, L. Xia, G. Shou, and W. Huang, "A hybrid model of maximum margin clustering method and support vector regression for solving the inverse ECG problem," Computing in Cardiology, vol. 38, pp. 457–460, 2011.
[2] C. Ramanathan, R. N. Ghanem, P. Jia, K. Ryu, and Y. Rudy, "Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia," Nature Medicine, vol. 10, no. 4, pp. 422–428, 2004.
[3] B. F. Nielsen, X. Cai, and M. Lysaker, "On the possibility for computing the transmembrane potential in the heart with a one shot method: an inverse problem," Mathematical Biosciences, vol. 210, no. 2, pp. 523–553, 2007.
[4] G. Shou, L. Xia, M. Jiang, Q. Wei, F. Liu, and S. Crozier, "Truncated total least squares: a new regularization method for the solution of ECG inverse problems," IEEE Transactions on Biomedical Engineering, vol. 55, no. 4, pp. 1327–1335, 2008.
[5] C. Ramanathan, P. Jia, R. Ghanem, D. Calvetti, and Y. Rudy, "Noninvasive electrocardiographic imaging (ECGI): application of the generalized minimal residual (GMRes) method," Annals of Biomedical Engineering, vol. 31, no. 8, pp. 981–994, 2003.
[6] M. Jiang, L. Xia, G. Shou, and M. Tang, "Combination of the LSQR method and a genetic algorithm for solving the electrocardiography inverse problem," Physics in Medicine and Biology, vol. 52, no. 5, pp. 1277–1294, 2007.
[7] M. Brown, S. R. Gunn, and H. G. Lewis, "Support vector machines for optimal classification and spectral unmixing," Ecological Modelling, vol. 120, no. 2-3, pp. 167–179, 1999.
[8] K. Ito and R. Nakano, "Optimizing support vector regression hyper-parameters based on cross-validation," in Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 871–876, 2005.
[9] V. N. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988–999, 1999.
[10] B. Schölkopf and A. J. Smola, Learning with Kernels [Ph.D. thesis], GMD, Birlinghoven, Germany, 1998.
[11] M. Jiang, L. Xia, G. Shou, Q. Wei, F. Liu, and S. Crozier, "Effect of cardiac motion on solution of the electrocardiography inverse problem," IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, pp. 923–931, 2009.
[12] M. Jiang, L. Zhu, Y. Wang, et al., "Application of kernel principal component analysis and support vector regression for reconstruction of cardiac transmembrane potentials," Physics in Medicine and Biology, vol. 56, no. 6, pp. 1727–1742, 2011.
[13] S. S. Keerthi, "Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1225–1229, 2002.
[14] J. S. Sartakhti, M. H. Zangooei, and K. Mozafari, "Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA)," Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 570–579, 2011.
[15] B. Ustun, W. J. Melssen, M. Oudenhuijzen, and L. M. C. Buydens, "Determination of optimal support vector regression parameters by genetic algorithms and simplex optimization," Analytica Chimica Acta, vol. 544, no. 1-2, pp. 292–305, 2005.
[16] J. Wang, L. Li, D. Niu, and Z. Tan, "An annual load forecasting model based on support vector regression with differential evolution algorithm," Applied Energy, vol. 94, pp. 65–70, 2012.
[17] K.-Y. Chen and C.-H. Wang, "Support vector regression with genetic algorithms in forecasting tourism demand," Tourism Management, vol. 28, no. 1, pp. 215–226, 2007.
[18] G. Hu, L. Hu, H. Li, K. Li, and W. Liu, "Grid resources prediction with support vector regression and particle swarm optimization," in Proceedings of the 3rd International Joint Conference on Computational Sciences and Optimization (CSO '10), vol. 1, pp. 417–422, May 2010.
[19] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
[20] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[21] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[22] G. Oliveri and A. Massa, "Genetic algorithm (GA)-enhanced almost difference set (ADS)-based approach for array thinning," IET Microwaves, Antennas and Propagation, vol. 5, no. 3, pp. 305–315, 2011.
[23] R. Storn and K. Price, "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[24] M. Varadarajan and K. S. Swarup, "Network loss minimization with voltage security using differential evolution," Electric Power Systems Research, vol. 78, no. 5, pp. 815–823, 2008.
[25] K. Zielinski, P. Weitkemper, R. Laur, and K. D. Kammeyer, "Parameter study for differential evolution using a power allocation problem including interference cancellation," Evolutionary Computation, vol. 4, pp. 16–21, 2006.
[26] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, pp. 35–57, 2007.
[27] L.-P. Zhang, H.-J. Yu, and S.-X. Hu, "Optimal choice of parameters for particle swarm optimization," Journal of Zhejiang University, vol. 6, no. 6, pp. 528–534, 2005.
[28] R. Rosipal, M. Girolami, L. J. Trejo, and A. Cichocki, "Kernel PCA for feature extraction and de-noising in nonlinear regression," Neural Computing and Applications, vol. 10, no. 3, pp. 231–243, 2001.
regularization methods, the SVR method can produce more accurate results in terms of reconstruction of the transmembrane potential distributions on the epi- and endocardial surfaces [1, 8]. SVR is an extended version of the support vector machine (SVM) for solving nonlinear regression estimation problems; it aims at minimizing an upper bound of the generalization error instead of the training error [9] by adhering to the principle of structural risk minimization (SRM).
Although SVR is a powerful technique for solving nonlinear regression problems, it has received less attention in relation to inverse ECG problems, because the SVR algorithm is sensitive to user-defined free parameters. The hyperparameters of the SVR model consist of the penalty parameter C, the insensitive loss function parameter ε, and the kernel function parameter σ. Inappropriate parameters in SVR can lead to overfitting or underfitting problems. How to properly set the hyperparameters is a major task that has a significant impact on the generalization performance and regression accuracy of SVR [10], especially when it comes to predicting the diagnosis of cardiac diseases, because even a slight improvement of prediction accuracy can have a significant impact on the patient's diagnosis [11, 12].
In the early development stage of the algorithm, a grid search optimization method [13] and cross-validation [8] were employed to optimize the hyperparameters. However, these methods are computationally expensive and data intensive. Recently, a number of new algorithms have been proposed for the optimization of the SVR parameters [14, 15]. For example, Wang et al. [16] proposed a hybrid load forecasting model combining the differential evolution (DE) algorithm and support vector regression to deal with the problem of annual load forecasting. In the analysis of predicting tourism demand, the study [17] applies a genetic algorithm (GA) to seek the SVR's optimal parameters and then adopts the optimal parameters to construct the SVR models; the experimental results demonstrate that SVR outperforms two neural network techniques. Another study [18] tries a newer technique, particle swarm optimization (PSO), to automatically determine the parameters of SVR and then applies the hybrid model (PSO-SVR) to grid resource prediction; the experimental results indicate high predictive accuracy.
In this paper, the above-mentioned optimization algorithms (GA, DE, and PSO) are all adopted to dynamically optimize the hyperparameters of the SVR model in solving the inverse ECG problem. For convenience, the SVR model with GA parameter selection is referred to as the GA-SVR method, and the other two are termed DE-SVR and PSO-SVR, respectively. We attempt to investigate which one is the most effective in reconstructing the cardiac TMPs from BSPs, and a full comparison of their performance in solving the inverse ECG problem is also provided.
2. Theory and Methodology
2.1. Brief Overview of the SVR. In this section the basic SVR concepts are concisely described; for a detailed description please see [19, 20]. Suppose a given training data set of N elements {(x_i, y_i), i = 1, 2, …, N}, where x_i denotes the ith element in n-dimensional space, that is, x_i = {x_{1i}, …, x_{ni}} ∈ Rⁿ, and y_i ∈ R is the output value corresponding to x_i. In mathematical notation, the SVR algorithm builds the linear regression function of the following form:

f(x, ω) = (ω · φ(x)) + b,  φ: Rⁿ → F, ω ∈ F,   (1)
where ω and b are the slope and offset coefficients and φ(x) denotes the high-dimensional feature space which is nonlinearly mapped from the input space x. The previous regression problem is equivalent to the following convex optimization problem:

min (1/2)‖ω‖²
subject to  y_i − ωφ(x_i) − b ≤ ε,  ωφ(x_i) + b − y_i ≤ ε.   (2)
In this equation an implicit assumption is that a function f essentially approximates all pairs (x_i, y_i) with ε precision, but sometimes this may not be the case. Therefore, by introducing two additional positive slack variables ξ_i and ξ_i*, the minimization is reformulated as the following constrained optimization problem:

min R(ω, ξ, ξ*) = (1/2)‖ω‖² + C Σ_{i=1}^{N} (ξ_i + ξ_i*)
s.t.  y_i − (ω, φ(x_i)) − b ≤ ε + ξ_i*,
      (ω, φ(x_i)) + b − y_i ≤ ε + ξ_i,
      ξ_i, ξ_i* ≥ 0, i = 1, 2, …, N, ε ≥ 0,   (3)
where the parameter C is the regulator, determined by the user, which governs the tradeoff between the approximation error and the weight vector norm ‖ω‖; ξ_i and ξ_i* are slack variables that represent the distance from actual values to the corresponding boundary values of the ε-tube.
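To make the primal objective of (3) concrete, the following sketch evaluates its cost for a toy linear model; the helper name `svr_objective` and the toy data are illustrative, not part of the paper:

```python
import numpy as np

def svr_objective(w, b, X, y, C=100.0, eps=0.1):
    # Primal objective of (3): 0.5*||w||^2 plus C times the total slack,
    # where each sample's slack is the part of its error outside the eps-tube.
    f = X @ w + b
    slack = np.maximum(0.0, np.abs(y - f) - eps)  # xi_i + xi_i* combined
    return 0.5 * np.dot(w, w) + C * np.sum(slack)

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
w = np.array([1.0])  # perfect fit: all residuals lie inside the tube
print(svr_objective(w, 0.0, X, y))  # → 0.5 (only the regularization term)
```

Residuals smaller than ε cost nothing, which is exactly why increasing ε reduces the number of support vectors, as discussed below.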
According to the strategy outlined by Scholkopf and Smola [10], by applying Lagrangian theory and the KKT conditions, the constrained optimization problem can be restated as

f(x, ω) = Σ_{i=1}^{N} (α_i − α_i*) K(x_i, x) + b,  s.t. Σ_{i=1}^{N} (α_i − α_i*) = 0.   (4)
Here α_i and α_i* are the Lagrange multipliers, and the term K(x_i, x) is defined as the kernel function. Nonlinearly separable cases can easily be transformed into linear cases by mapping the original variables into a new high-dimensional feature space using K(x_i, x). The radial basis function
(RBF) was applied in this study; it has the ability to universally approximate any distribution in the feature space. With an appropriate parameter, the RBF usually provides better prediction performance, so it is adopted here as shown in (5):

K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)),   (5)

where x_i and x_j are input vectors and σ² is the bandwidth of the kernel function.
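The prediction form of (4) with the RBF kernel of (5) can be sketched directly; the support vectors and coefficients below are made-up toy values rather than a trained model:

```python
import numpy as np

def rbf_kernel(xi, xj, sigma2=1.0):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2*sigma^2)), cf. (5)
    return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma2))

def svr_predict(x, support_vectors, coef, b, sigma2=1.0):
    # f(x) = sum_i (alpha_i - alpha_i^*) K(x_i, x) + b, cf. (4)
    return sum(c * rbf_kernel(sv, x, sigma2) for sv, c in zip(support_vectors, coef)) + b

svs = [np.array([0.0]), np.array([2.0])]   # toy support vectors
coef = [0.5, -0.5]                          # toy (alpha_i - alpha_i^*) values
print(svr_predict(np.array([0.0]), svs, coef, b=0.1))
```

The bandwidth σ² controls how quickly a support vector's influence decays with distance, which is why it is one of the three hyperparameters tuned below.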
In the above equations there are three hyperparameters to be determined in advance, that is, the penalty parameter C, the insensitive parameter ε, and the kernel function parameter σ². They heavily affect the regression accuracy and computational complexity of SVR. The penalty parameter C controls the degree to which samples whose errors exceed the given value are punished. The insensitive parameter ε controls the width of the ε-insensitive zone used to fit the training data; a proper value of ε can enhance the generalization capability, and as ε increases, the number of support vectors decreases and the computational complexity of the algorithm is also reduced. The bandwidth σ of the kernel function has a great influence on the performance of the learning machine.
In this study, three optimization methods, that is, GA, DE, and PSO, are presented to determine the optimal hyperparameters of the SVR model. The general ranges of C, σ², and ε are given in [15]; in trial runs we narrowed them to avoid blindness in the optimization process. In this paper the set of hyperparameters (C, σ², ε) is initialized in the ranges C ∈ [0, 10000], σ² ∈ [0, 2], and ε ∈ [0, 0.0001], in which the optimization methods (GA, DE, and PSO) seek the global optimal solutions.
In order to measure the error uniformly, we adopt the same fitness function for all three algorithms, which plays a critical role in comparing their performance. The fitness function is defined as

min f = [Σ_{i=1}^{n_t} (|a_i − p_i| / a_i) / n_t] × 100,   (6)

where n_t is the number of training data samples, a_i is the actual TMPs of the training data, and p_i is the predicted TMPs. A solution with a smaller fitness f on the training dataset has a better chance of surviving in the successive generations. The LIBSVM tool was used for training and validating the SVR model [21].
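A direct transcription of the fitness function (6) follows; note that the absolute value of the actual TMPs is used in the denominator here, an assumption made so that the percentage error stays positive when the potentials are negative:

```python
import numpy as np

def fitness(actual, predicted):
    # Mean absolute percentage error over the n_t training samples, cf. (6).
    # |a_i| is used in the denominator (assumption: keeps terms positive for
    # negative transmembrane potentials).
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs(actual - predicted) / np.abs(actual)) * 100.0

a = np.array([-80.0, -60.0, 20.0])   # toy "actual" TMPs
p = np.array([-76.0, -63.0, 20.0])   # toy predictions
print(fitness(a, p))  # mean of 5%, 5%, and 0% errors -> 3.33...
```

All three optimizers below minimize this same scalar, so their results are directly comparable.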
2.2. Genetic Algorithm (GA) Optimization Method. The GA [22] is a biologically motivated optimization technique guided by the concepts of biological evolution and Darwin's principle of survival of the fittest; it is a computer model of the evolution of a population of artificial individuals. For the specific problem of optimizing the hyperparameters (C, σ², ε), the process is defined as follows.

Step 1 (initialize population). The population size NP is 20 and the dimension of the parameter vectors D is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001. The probabilities of selection (stochastic universal sampling), crossover (multipoint crossover), and mutation (the mutation operator of the Breeder Genetic Algorithm) used herein are 0.9, 0.8, and 0.05, respectively.
Step 2 (encode chromosomes). According to the possible ranges of the parameters C, σ², and ε given above, the GA uses binary encoding, and each parameter is encoded by 20 bits. The search space is thus defined as the solution space in which each potential solution is encoded as a distinct chromosome.
Step 3. The parameters C, σ², and ε of each individual are used to build an SVR model, and the cardiac TMPs are reconstructed from the BSPs of the training data. The performance of each individual in the generation is then evaluated by the fitness function of (6); the individual with the minimum fitness value is selected, and its chromosome is preserved as the best result.
Step 4. The basic genetic operators, comprising selection, mutation, and crossover, are applied in order to obtain a new generation, in which the new individual (C, σ², ε) with the best performance is retained.
Step 5. The new best fitness value is compared with that of the recorded best result, and the better of the two is kept as the updated best result.
Step 6. This process does not end until the termination criterion is met, at which point the best chromosome is presented as the final solution; otherwise, go back to Step 4.
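The GA loop above can be sketched in plain numpy. Here `toy_fitness` is a hypothetical stand-in for the SVR training error of (6) (training hundreds of SVR models is outside the scope of a snippet), and real-coded blend crossover and random-reset mutation are used as simplified variants of the binary operators described in Step 2:

```python
import numpy as np

rng = np.random.default_rng(0)
LO = np.array([1e-3, 1e-3, 1e-6])    # lower bounds for (C, sigma^2, eps)
HI = np.array([10000.0, 2.0, 1e-4])  # upper bounds matching the paper's ranges

def toy_fitness(p):
    # Hypothetical stand-in for the SVR training error (6); minimum at `target`
    target = np.array([500.0, 1.0, 5e-5])
    return float(np.sum(((p - target) / (HI - LO)) ** 2))

def ga_search(pop_size=20, generations=30, pc=0.8, pm=0.05):
    pop = rng.uniform(LO, HI, size=(pop_size, 3))
    best = min(pop, key=toy_fitness)
    for _ in range(generations):
        fit = np.array([toy_fitness(p) for p in pop])
        # selection: fitness-proportional sampling on inverted scores (minimization)
        probs = fit.max() - fit + 1e-12
        probs = probs / probs.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)].copy()
        # crossover: arithmetic blend of consecutive parent pairs
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                p1, p2 = parents[i].copy(), parents[i + 1].copy()
                parents[i] = a * p1 + (1 - a) * p2
                parents[i + 1] = a * p2 + (1 - a) * p1
        # mutation: reset randomly chosen genes within their bounds
        mask = rng.random(parents.shape) < pm
        fresh = rng.uniform(np.broadcast_to(LO, parents.shape),
                            np.broadcast_to(HI, parents.shape))
        parents[mask] = fresh[mask]
        pop = parents
        gen_best = min(pop, key=toy_fitness)
        if toy_fitness(gen_best) < toy_fitness(best):
            best = gen_best
    return best

best_params = ga_search()
print(best_params, toy_fitness(best_params))
```

Swapping `toy_fitness` for a function that trains an SVR on the scaled BSP/TMP data and returns (6) recovers the GA-SVR scheme described above.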
2.3. Differential Evolution (DE) Optimization Method. Differential evolution [23] is a population-based, parallel direct search method used to approximate the global optimal solution. As mentioned above, the optimization of the hyperparameters (C, σ², ε) can be transformed into minimizing the fitness function, min f(x), where x = {x_{i,G}, i = 1, 2, …, NP} are three-dimensional parameter vectors (C, σ², ε). Each generation then evolves by employing the evolutionary operators of mutation, crossover, and selection to produce a trial vector for each individual vector [24]; the detailed evolutionary strategies are described as follows.
Step 1 (initialize DE parameters). The population size NP is set as 20 [25], and the dimension of the parameter vectors D is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001. The mutation factor F is selected in [0.5, 1], and the crossover rate CR is selected in [0, 1].
Step 2 (initialize population). Set G = 0. Generate an NP × D population consisting of individuals (C, σ², ε) with uniformly distributed random values within the control variable bounds of the parameter space.
Step 3. With these parameters, train the SVR models and forecast the training datasets. Then calculate the fitness of all individuals in the generation, record the minimum value of the fitness function, and set the corresponding (C, σ², ε) as x_best.
Step 4 (mutation operation). For each target vector x_{i,G}, an associated mutant vector is generated according to

v_{i,G+1} = x_best + F[(x_{r1,G} − x_{r2,G}) + (x_{r3,G} − x_{r4,G})].   (7)

The random indexes r1, r2, r3, and r4 have to be distinct from each other and from the current trial index i.
Step 5 (crossover operation). In order to increase the diversity of the perturbed parameter vectors, crossover is introduced by the following formula:

u_{j,i,G+1} = v_{j,i,G+1}, if randb(j) ≤ CR or j = rnbr(i);
u_{j,i,G+1} = x_{j,i,G}, if randb(j) > CR and j ≠ rnbr(i),   (8)

where u_{i,G+1} = (u_{1,i,G+1}, u_{2,i,G+1}, u_{3,i,G+1}), randb(j) ∈ [0, 1] is the jth evaluation of a uniform random number generator, and rnbr(i) is a randomly chosen integer in [1, 3] that ensures u_{i,G+1} gets at least one element from v_{i,G+1}.
Step 6 (selection operation). After the fitness values of the new generation are calculated, the new best individual u_{best,G+1} is selected and the selection operation is performed. To decide whether or not it replaces x_best, the fitness value f(u_{best,G+1}) is compared with f(x_best):

x_best = u_{best,G+1}, if f(u_{best,G+1}) ≤ f(x_best);
x_best = x_best, if f(u_{best,G+1}) > f(x_best).   (9)
Step 7 (termination condition checking). If the maximum number of iterations G is reached or the fitness value f(x_best) falls within the fitness tolerance, return the recorded global optimal parameters (C, σ², ε); otherwise, go to Step 4.
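A compact numpy sketch of these steps, with the best-based mutation of (7), binomial crossover in the spirit of (8), and greedy per-individual selection; as before, `toy_fitness` is a hypothetical stand-in for the SVR training error of (6):

```python
import numpy as np

rng = np.random.default_rng(1)
LO = np.array([1e-3, 1e-3, 1e-6])    # (C, sigma^2, eps) lower bounds
HI = np.array([10000.0, 2.0, 1e-4])  # upper bounds matching the paper's ranges

def toy_fitness(p):
    # Hypothetical stand-in for the SVR training error (6)
    target = np.array([500.0, 1.0, 5e-5])
    return float(np.sum(((p - target) / (HI - LO)) ** 2))

def de_search(NP=20, G=30, F=0.7, CR=0.9):
    pop = rng.uniform(LO, HI, size=(NP, 3))
    fit = np.array([toy_fitness(p) for p in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(G):
        for i in range(NP):
            r1, r2, r3, r4 = rng.choice([j for j in range(NP) if j != i],
                                        4, replace=False)
            # mutation, cf. (7): perturb the best vector with two scaled differences
            v = np.clip(best + F * ((pop[r1] - pop[r2]) + (pop[r3] - pop[r4])),
                        LO, HI)
            # binomial crossover, cf. (8): at least one gene comes from v
            jrand = rng.integers(3)
            u = np.where((rng.random(3) <= CR) | (np.arange(3) == jrand), v, pop[i])
            # greedy selection: trial replaces target only if it is no worse
            fu = toy_fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
                if fu <= toy_fitness(best):
                    best = u.copy()
    return best

best_params = de_search()
print(best_params, toy_fitness(best_params))
```

The greedy replacement guarantees that the recorded best never worsens, mirroring the comparison in (9).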
2.4. Particle Swarm Optimization (PSO) Method. Particle swarm optimization (PSO) is a bioinspired stochastic optimization technique, originally developed by simulating the social behaviour of birds and insects that survive by maintaining swarm actions. A particle is considered a bird in a swarm consisting of a number of birds, and all particles fly through the search space by following the current optimum particle to find the final optimum solution of the optimization problem [26]. The detailed procedure for selecting the best hyperparameters (C, σ², ε) is as follows.
Step 1 (particle initialization and PSO parameter setting [27]). Set the PSO parameters, including the ranges of C, σ², and ε; the number of particles is 20 and the particle dimension is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001.
Step 2. Randomly generate a primal population of particles (C, σ², ε) and velocities inside the search space.
Step 3. Construct an SVR model with the parameters in each particle and perform the prediction with the training data. Then evaluate the regression accuracy based on the defined fitness function, and record the best performance, with the minimum error, as the global best solution.
Step 4 (particle manipulations). Suppose that X_i(k) and V_i(k) are the particle and velocity at step k; they are updated into their new values X_i(k+1) and V_i(k+1) at step k+1 by applying the two equations below:

V_i(k+1) = ωV_i(k) + c1·r1·(p_i(k) − X_i(k)) + c2·r2·(p_g(k) − X_i(k)),
X_i(k+1) = X_i(k) + V_i(k+1)·Δt,   (10)
where V_i ∈ [V_{i,min}, V_{i,max}], Δt is the unit time step, and ω is the inertia weight; p_i(k) and p_g(k) are the best coordinates of particle i and of the whole swarm up to step k, respectively. The positive constants c1 and c2 are learning rates, while r1 and r2 are random numbers between 0 and 1.
Step 5. Train the SVR models using the particles (C, σ², ε) in the new generation and forecast the training datasets. After evaluating the fitness function, pick the new best particle as the local best.
Step 6. Compare the local best with the global best and choose the better one to update the global best.
Step 7 (stop condition checking). If the termination criterion (the number of iterations) is met or the fitness value of the global best falls within the predefined tolerance, return the recorded global best as the best parameters (C, σ², ε); otherwise, go to Step 4.
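The velocity and position updates of (10) can be sketched in a minimal numpy loop, again with a hypothetical stand-in fitness in place of an actual SVR training run:

```python
import numpy as np

rng = np.random.default_rng(2)
LO = np.array([1e-3, 1e-3, 1e-6])    # (C, sigma^2, eps) lower bounds
HI = np.array([10000.0, 2.0, 1e-4])  # upper bounds matching the paper's ranges

def toy_fitness(p):
    # Hypothetical stand-in for the SVR training error (6)
    target = np.array([500.0, 1.0, 5e-5])
    return float(np.sum(((p - target) / (HI - LO)) ** 2))

def pso_search(n_particles=20, iters=30, w=0.7, c1=1.5, c2=1.5):
    X = rng.uniform(LO, HI, size=(n_particles, 3))
    V = np.zeros_like(X)
    P = X.copy()                                 # personal bests p_i
    pfit = np.array([toy_fitness(x) for x in X])
    g = P[pfit.argmin()].copy()                  # global best p_g
    for _ in range(iters):
        r1 = rng.random((n_particles, 3))
        r2 = rng.random((n_particles, 3))
        # velocity and position updates, cf. (10), with unit time step
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, LO, HI)
        fit = np.array([toy_fitness(x) for x in X])
        better = fit < pfit
        P[better], pfit[better] = X[better], fit[better]
        g = P[pfit.argmin()].copy()
    return g

best_params = pso_search()
print(best_params, toy_fitness(best_params))
```

Because every particle is pulled toward both its own best and the swarm's best, PSO typically needs fewer fitness evaluations per improvement, consistent with the timing comparison reported in Section 4.3.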
2.5. Simulation Protocol and Data Set. The SVR model is tested with our previously developed realistic heart-torso model [11, 12]. In this simulation protocol, a normal ventricular excitation is used as an example to calculate the data set for the SVR model. The considered ventricular excitation period, from the first breakthrough to the end, is 357 ms with a time step of 1 ms; thus 358 temporal data sets of BSPs φ_B and TMPs φ_m are calculated. Sixty data sets (φ_B and φ_m) are chosen at times of 3 ms, 9 ms, 15 ms, …, 357 ms after the first ventricular breakthrough, and these are used as testing samples to assess the prediction capability and robustness of the SVR models. The remaining 298 of the 358 data sets are employed as training samples for the training and optimal parameter selection procedures. Moreover, during each ventricular excitation period, 412 potentials on the BSPs
and 478 potentials on the TMPs are captured. That is to say, the matrix of BSPs is divided into training data (298 × 412) and testing data (60 × 412), and the matrix of TMPs is likewise divided into training data (298 × 478) and testing data (60 × 478). Each SVR regression model is built from the mapping relations between the BSP training data (298 × 412) and one TMP training point (298 × 1). Based on all the training data, 478 different regression models were built using GA-SVR, DE-SVR, and PSO-SVR, respectively. Using the corresponding SVR model, we can reconstruct the corresponding TMPs (60 × 1) from the BSP testing data (60 × 412); over all testing samples, the TMPs are reconstructed from all 478 regression SVR models.
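The one-model-per-TMP-node scheme can be sketched with ordinary least squares standing in for the SVR fit, and with the lead/node counts shrunk so the snippet runs quickly; the matrix shapes mirror the 298/60 split described above:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy stand-ins for the paper's matrices: 298 training / 60 testing samples;
# the 412 BSP leads and 478 TMP nodes are shrunk here for speed.
n_train, n_test, n_bsp, n_tmp = 298, 60, 40, 10
B_train = rng.standard_normal((n_train, n_bsp))
W_true = rng.standard_normal((n_bsp, n_tmp))
T_train = B_train @ W_true            # synthetic, exactly linear "TMPs"
B_test = rng.standard_normal((n_test, n_bsp))
T_test = B_test @ W_true

# One regression model per TMP node: model k maps the (n_train x n_bsp) BSP
# matrix to a single (n_train x 1) TMP column. Least squares stands in for
# the SVR fit; the paper builds 478 SVR models in exactly this arrangement.
models = [np.linalg.lstsq(B_train, T_train[:, k], rcond=None)[0]
          for k in range(n_tmp)]

# Reconstruction: stack each model's 60 predictions into the TMP test matrix
T_hat = np.column_stack([B_test @ w for w in models])
print(T_hat.shape, np.max(np.abs(T_hat - T_test)))
```

In the paper each of the 478 models is an SVR with its own optimized (C, σ², ε), so the per-column fit above would be replaced by an SVR training plus a GA/DE/PSO hyperparameter search.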
3. The Proposed System Framework

3.1. Overview of the SVR Parameter Optimization Framework. As shown in Figure 1, in the first stage the original input data are preprocessed by scaling the training data and extracting features. In the second stage, the hyperparameters (C, σ², ε) of the SVR method are set carefully by using the GA, DE, and PSO optimization methods, and GA-SVR, DE-SVR, and PSO-SVR are applied to construct the hybrid SVR models, respectively. Finally, with the resulting hybrid SVR models, we reconstruct the transmembrane potentials (TMPs) from the body surface potentials (BSPs) of the testing samples.
3.2. Preprocessing the Data Set

3.2.1. Scaling the Training Set. During the preprocessing stage, each input variable is scaled in order to avoid potential values spanning a great numerical range and to prevent numerical difficulties. All input values, including the 358 temporal data sets of BSPs and TMPs, are linearly scaled to the range (0, 1) by the following formula:

φ′_t = (φ_t − φ_{t,min}) / (φ_{t,max} − φ_{t,min}),   (11)

where φ_t is the original potential value at each time t, φ′_t is the scaled potential value, and φ_{t,max} and φ_{t,min} are the maximum and minimum potential values at each time t.
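Equation (11) is ordinary min-max scaling; a one-function sketch with toy potential values:

```python
import numpy as np

def minmax_scale(phi):
    # (11): linearly scale the potentials at one time instant into [0, 1]
    lo, hi = phi.min(), phi.max()
    return (phi - lo) / (hi - lo)

phi_t = np.array([-90.0, -40.0, 10.0])   # toy potentials at time t (mV)
print(minmax_scale(phi_t))               # scales to 0, 0.5, 1
```

Note that the scaling is computed per time instant, so the minimum and maximum must be stored if the inverse mapping back to millivolts is needed after prediction.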
3.3. Feature Extraction by Using KPCA. Kernel principal component analysis (KPCA) is a nonlinear feature extraction method, a nonlinear variant of PCA developed by generalizing the kernel method into PCA. Specifically, KPCA first maps the original inputs into a high-dimensional feature space F using the kernel method and then carries out PCA in that feature space; the detailed theory of KPCA can be consulted in [28].

Feature extraction is a significant preprocessing task for the original input data in developing an effective SVR model. As investigated in our previous study [12], SVR with feature extraction by KPCA has superior performance
Figure 1: The framework of the proposed parameter optimization methods for the SVR method in solving the inverse ECG problem.
Figure 2: The temporal course of the TMPs for one representative source point on the epicardium (the 50th) over the 60 testing times. The TMPs reconstructed with the GA-SVR, DE-SVR, and PSO-SVR methods are all compared with the original TMPs.
to that of using PCA or the single SVR method in reconstructing the TMPs. In this paper, KPCA is therefore used to implement the feature extraction, and the dimension of the BSP dataset is properly reduced from 412 to 200, which reduces the dimensionality of the inputs and improves the generalization performance of the SVR method.
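A minimal numpy implementation of RBF-kernel PCA in the spirit of [28], applied to a toy stand-in for the BSP matrix; the function name, parameters, and data sizes are illustrative:

```python
import numpy as np

def kpca_features(X, n_components, sigma2=1.0):
    # Minimal kernel PCA with an RBF kernel: build the kernel matrix,
    # double-center it in feature space, then project onto the leading
    # eigenvectors (cf. [28]).
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma2))
    n = K.shape[0]
    ones = np.full((n, n), 1.0 / n)
    Kc = K - ones @ K - K @ ones + ones @ K @ ones   # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]      # keep the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                               # projected training features

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 8))      # toy stand-in for the 412-lead BSP data
Z = kpca_features(X, n_components=5)  # reduced inputs, e.g. 412 -> 200 in the paper
print(Z.shape)
```

The reduced features Z would then replace the raw BSP columns as inputs to the 478 SVR models.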
4 Results and Discussion
4.1. The Result of Parameter Selection. In this study, the GA, DE, and PSO algorithms were employed to seek the
6 Computational and Mathematical Methods in Medicine
Figure 3: The TMP distribution (mV; color scale from −90 to 10) on the ventricular surface at two sequential testing time points: (a) the comparison of reconstructed TMP distributions at 27 ms and (b) at 51 ms. In each part the upper row shows the TMPs from an anterior view and the lower row from a posterior view; in each row the first figure is the original TMPs and the remaining three are the TMPs reconstructed by the GA-SVR, DE-SVR, and PSO-SVR methods.
corresponding optimal hyperparameters (C, σ², and ε) of the 478 SVR models. Table 1 displays the average values and standard deviations of the three parameters across the 478 SVR models obtained with the GA, DE, and PSO optimization algorithms. From it we can find that the gaps among the optimal parameters produced by the three optimization methods are obvious. The standard deviations of ε are not listed because they are very small.
4.2. Comparison of Reconstruction Accuracy. This study compares the reconstruction performance of the three proposed intelligent optimization algorithms for selecting the hyperparameters of SVR. Using the GA-SVR, DE-SVR, and PSO-SVR hybrid models, two types of experiments for reconstructing the TMPs were conducted. The first experiment reconstructs the TMPs for one representative source point (the 50th point) on the epicardium over the 60 testing times, as depicted in Figure 2. Since the gaps among
Table 1: The averages and standard deviations of the optimal hyperparameters (C, σ², and ε) obtained with the GA, DE, and PSO optimization algorithms.

Parameter    GA                  DE                  PSO
C            1447.61 ± 594.32    7624.14 ± 913.67    5.31 ± 3.02
σ²           0.64 ± 0.55         1.59 ± 1.38         1.22 ± 1.02
ε            1.48e−05            1.80e−05            1.98e−05
Table 2: The mean values and standard deviations of the 60 testing errors between the simulated TMPs and the TMPs reconstructed by the GA-SVR, DE-SVR, and PSO-SVR methods at one representative source point (the 50th point on the epicardium).

Index                GA-SVR    DE-SVR    PSO-SVR
Mean value           2.96      2.69      1.55
Standard deviation   1.92      1.67      1.03
these three methods are so slight, we added a statistical analysis to validate the results. The mean values and standard deviations of the 60 testing errors between the simulated and the reconstructed TMPs using the GA-SVR, DE-SVR, and PSO-SVR methods are shown in Table 2. From Figure 2 and Table 2, PSO-SVR outperforms the GA-SVR and DE-SVR methods in reconstructing the TMPs for this representative source point over all the testing times. Furthermore, DE-SVR is superior to the GA-SVR method in reconstructing the TMPs.
The second experiment reconstructs the TMPs at all the heart surface points at one time instant. These inverse ECG solutions are shown in Figure 3, in which two sequential testing time points (27 and 51 ms after the first ventricular breakthrough) are presented to illustrate the performance of the GA-SVR, DE-SVR, and PSO-SVR methods. It can be seen that, among the three parameter optimization methods, the hybrid PSO-SVR performs better than the GA-SVR and DE-SVR methods, because its reconstructed solutions are much closer to the simulated TMP distributions.
Based on the simulated TMPs, we can evaluate the accuracy of the TMPs reconstructed by the three intelligent optimization algorithms at the testing times by the mean square error (MSE), the relative error (RE), and the correlation coefficient (CC):
MSE = (1/n) Σ_{i=1}^{n} [(φ_c)_i − (φ_e)_i]²,

RE = ‖φ_c − φ_e‖ / ‖φ_e‖,

CC = Σ_{i=1}^{n} [(φ_c)_i − φ̄_c][(φ_e)_i − φ̄_e] / (‖φ_c − φ̄_c‖ ‖φ_e − φ̄_e‖).   (12)
Here φ_e is the simulated TMP distribution at time t, φ_c is the reconstructed one, and φ̄_e and φ̄_c are the
Figure 4: The performance of the reconstructed TMPs over the 60 sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively: (a) the mean square error (MSE), (b) the relative error (RE), and (c) the correlation coefficient (CC) of the reconstructed TMPs, each plotted against time (ms).
mean values of φ_e and φ_c over the whole set of ventricular surface nodes at time t; n is the number of nodes on the ventricular surface.
The MSE, RE, and CC of the three sets of reconstructed TMPs over the 60 testing samples are shown in Figure 4. In contrast to GA-SVR and DE-SVR, the PSO-SVR method yields clearly better results, with lower RE and MSE and a higher CC. To validate the robustness of the three methods in reconstructing the TMPs, the quantities m-MSE, m-RE, and m-CC, that is, the mean values of MSE, RE, and CC over the testing time instants, are presented. Moreover, std-MSE, std-RE, and std-CC, that is, the standard deviations of MSE, RE, and CC over the whole set of testing samples, are also investigated. As shown in Figure 5, the mean values of the MSE, RE, and CC over the 60 testing samples obtained by the GA-SVR, DE-SVR, and PSO-SVR methods are presented, and the standard deviations of those 60 MSEs, REs, and CCs are also provided.
4.3. Comparison of Reconstruction Efficiency. In solving the inverse ECG problem, seeking the optimal parameters within the settled range is time consuming. Therefore, devising a relatively time-saving optimization method matters significantly in practical applications, particularly in the diagnosis of cardiac disease. Here we report the mean time of the three optimization methods in selecting the optimal hyperparameters of SVR. In order to forecast the 478 TMPs over the 60 testing samples on the epi- and endocardial surfaces, 478 sets of parameters (C, σ², and ε) have to be selected by each optimization method, and 478 models are built and applied to the testing data accordingly. In this study, the mean time for selecting the optimal parameters with GA, DE, and PSO is 208.27 s, 342.53 s, and 130.16 s, respectively. It can be seen that PSO converges more quickly and saves more time than GA and DE. Therefore, in terms of efficiency, the PSO-SVR method significantly outperforms GA-SVR and DE-SVR in the
Figure 5: The m-MSE, m-RE, m-CC, std-MSE, std-RE, and std-CC over the 60 testing sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively, in terms of mean ± standard deviation: (a) m-MSE and std-MSE, (b) m-RE and std-RE, and (c) m-CC and std-CC.
reconstruction of the TMPs and is the most efficient of the three optimization methods.
5. Conclusion
In the study of the inverse ECG problem, the SVR method is a powerful technique for solving the nonlinear regression problem and can serve as a promising tool for performing the inverse reconstruction of the TMPs. This study introduces three optimization methods (GA, DE, and PSO) to determine the hyperparameters of the SVR model and utilizes these models to reconstruct the TMPs from the remote BSPs. Feature extraction using KPCA is also adopted to preprocess the original input data. The experimental results demonstrate that PSO-SVR is a relatively effective method in terms of accuracy and efficiency. By comparison, the PSO-SVR method is superior to the GA-SVR and DE-SVR methods, and its reconstructed TMPs are closest to the simulated TMPs. Besides, PSO-SVR is much more efficient in determining the optimal parameters and in building the prediction model. According to these results, the PSO-SVR method can serve as a promising tool for solving the nonlinear regression problem posed by the inverse ECG problem.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (30900322, 61070063, and 61272311). This work is also supported by the Research Foundation of Education Bureau of Zhejiang Province under Grant no. Y201122145.
References
[1] M. Jiang, J. Lv, C. Wang, L. Xia, G. Shou, and W. Huang, "A hybrid model of maximum margin clustering method and support vector regression for solving the inverse ECG problem," Computing in Cardiology, vol. 38, pp. 457–460, 2011.
[2] C. Ramanathan, R. N. Ghanem, P. Jia, K. Ryu, and Y. Rudy, "Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia," Nature Medicine, vol. 10, no. 4, pp. 422–428, 2004.
[3] B. F. Nielsen, X. Cai, and M. Lysaker, "On the possibility for computing the transmembrane potential in the heart with a one shot method: an inverse problem," Mathematical Biosciences, vol. 210, no. 2, pp. 523–553, 2007.
[4] G. Shou, L. Xia, M. Jiang, Q. Wei, F. Liu, and S. Crozier, "Truncated total least squares: a new regularization method for the solution of ECG inverse problems," IEEE Transactions on Biomedical Engineering, vol. 55, no. 4, pp. 1327–1335, 2008.
[5] C. Ramanathan, P. Jia, R. Ghanem, D. Calvetti, and Y. Rudy, "Noninvasive electrocardiographic imaging (ECGI): application of the generalized minimal residual (GMRes) method," Annals of Biomedical Engineering, vol. 31, no. 8, pp. 981–994, 2003.
[6] M. Jiang, L. Xia, G. Shou, and M. Tang, "Combination of the LSQR method and a genetic algorithm for solving the electrocardiography inverse problem," Physics in Medicine and Biology, vol. 52, no. 5, pp. 1277–1294, 2007.
[7] M. Brown, S. R. Gunn, and H. G. Lewis, "Support vector machines for optimal classification and spectral unmixing," Ecological Modelling, vol. 120, no. 2-3, pp. 167–179, 1999.
[8] K. Ito and R. Nakano, "Optimizing support vector regression hyper-parameters based on cross-validation," in Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 871–876, 2005.
[9] V. N. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988–999, 1999.
[10] B. Schölkopf and A. J. Smola, Learning with Kernels [Ph.D. thesis], GMD, Birlinghoven, Germany, 1998.
[11] M. Jiang, L. Xia, G. Shou, Q. Wei, F. Liu, and S. Crozier, "Effect of cardiac motion on solution of the electrocardiography inverse problem," IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, pp. 923–931, 2009.
[12] M. Jiang, L. Zhu, Y. Wang, et al., "Application of kernel principal component analysis and support vector regression for reconstruction of cardiac transmembrane potentials," Physics in Medicine and Biology, vol. 56, no. 6, pp. 1727–1742, 2011.
[13] S. S. Keerthi, "Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1225–1229, 2002.
[14] J. S. Sartakhti, M. H. Zangooei, and K. Mozafari, "Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA)," Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 570–579, 2011.
[15] B. Üstün, W. J. Melssen, M. Oudenhuijzen, and L. M. C. Buydens, "Determination of optimal support vector regression parameters by genetic algorithms and simplex optimization," Analytica Chimica Acta, vol. 544, no. 1-2, pp. 292–305, 2005.
[16] J. Wang, L. Li, D. Niu, and Z. Tan, "An annual load forecasting model based on support vector regression with differential evolution algorithm," Applied Energy, vol. 94, pp. 65–70, 2012.
[17] K.-Y. Chen and C.-H. Wang, "Support vector regression with genetic algorithms in forecasting tourism demand," Tourism Management, vol. 28, no. 1, pp. 215–226, 2007.
[18] G. Hu, L. Hu, H. Li, K. Li, and W. Liu, "Grid resources prediction with support vector regression and particle swarm optimization," in Proceedings of the 3rd International Joint Conference on Computational Sciences and Optimization (CSO '10), vol. 1, pp. 417–422, May 2010.
[19] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
[20] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[21] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[22] G. Oliveri and A. Massa, "Genetic algorithm (GA)-enhanced almost difference set (ADS)-based approach for array thinning," IET Microwaves, Antennas and Propagation, vol. 5, no. 3, pp. 305–315, 2011.
[23] R. Storn and K. Price, "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[24] M. Varadarajan and K. S. Swarup, "Network loss minimization with voltage security using differential evolution," Electric Power Systems Research, vol. 78, no. 5, pp. 815–823, 2008.
[25] K. Zielinski, P. Weitkemper, R. Laur, and K. D. Kammeyer, "Parameter study for differential evolution using a power allocation problem including interference cancellation," Evolutionary Computation, vol. 4, pp. 16–21, 2006.
[26] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, pp. 35–57, 2007.
[27] L.-P. Zhang, H.-J. Yu, and S.-X. Hu, "Optimal choice of parameters for particle swarm optimization," Journal of Zhejiang University, vol. 6, no. 6, pp. 528–534, 2005.
[28] R. Rosipal, M. Girolami, L. J. Trejo, and A. Cichocki, "Kernel PCA for feature extraction and de-noising in nonlinear regression," Neural Computing and Applications, vol. 10, no. 3, pp. 231–243, 2001.
(RBF) was applied in this study; it has the ability to universally approximate any distribution in the feature space. With appropriate parameters, the RBF kernel usually provides better prediction performance, so it is adopted here, as shown in (5):
K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)),   (5)

where x_i and x_j are input vectors and σ² is the bandwidth of the kernel function.
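Equation (5) is straightforward to implement; the following is a minimal sketch with illustrative values only:

```python
import numpy as np

def rbf_kernel(xi, xj, sigma2):
    """RBF kernel of eq. (5): K(xi, xj) = exp(-||xi - xj||^2 / (2*sigma2))."""
    return float(np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma2)))

xi = np.array([1.0, 0.0])
xj = np.array([0.0, 0.0])
print(rbf_kernel(xi, xi, 0.5))   # 1.0 for identical inputs
print(rbf_kernel(xi, xj, 0.5))   # exp(-1) ~ 0.3679
```

Note how σ² controls the decay: a small bandwidth makes K fall off sharply with distance, while a large one makes all points look similar.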
In the above equations, three hyperparameters must be determined in advance: the penalty parameter C, the insensitive parameter ε, and the kernel function parameter σ². They heavily affect the regression accuracy and computational complexity of SVR. The penalty parameter C controls the degree to which samples whose errors exceed the given value are penalized. The insensitive parameter ε controls the width of the ε-insensitive zone used to fit the training data and affects the generalization capability: with the increase of ε, the number of support vectors decreases and the computational complexity of the algorithm is also reduced. The bandwidth σ of the kernel function has a great influence on the performance of the learning machine.
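Since the paper trains its models with LIBSVM [21], the three hyperparameters map onto LIBSVM's epsilon-SVR options. The sketch below assumes the kernel parameterization of eq. (5), so LIBSVM's gamma equals 1/(2σ²); the exact mapping the authors used is not stated, so treat this as an assumption:

```python
def libsvm_options(C, sigma2, eps):
    """Build a LIBSVM option string for epsilon-SVR (-s 3) with an RBF
    kernel (-t 2); gamma = 1/(2*sigma2) matches K = exp(-||u-v||^2/(2*sigma2))."""
    gamma = 1.0 / (2.0 * sigma2)
    return f"-s 3 -t 2 -c {C} -g {gamma} -p {eps}"

# Example with GA's average optimum from Table 1.
print(libsvm_options(1447.61, 0.64, 1.48e-05))
```

The same triple (C, σ², ε) is what each optimization algorithm below searches for.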
In this study, three optimization methods, that is, GA, DE, and PSO, are presented to determine the optimal hyperparameters of the SVR model. According to [15], the general ranges of C, σ², and ε are known; in trial runs we narrowed them to avoid blindness in the optimization process. In this paper, the set of hyperparameters (C, σ², ε) is initialized in the given ranges C ∈ [0, 10000], σ² ∈ [0, 2], and ε ∈ [0, 0.0001], within which the optimization methods (GA, DE, and PSO) seek the globally optimal solutions.
In order to measure the error uniformly, we adopt the same fitness function for all three algorithms, which plays a critical role in comparing their performance. The fitness function is defined as follows:
min f = (1/n_t) Σ_{i=1}^{n_t} (|a_i − p_i| / a_i) × 100,   (6)
where n_t is the number of training data samples, a_i is the actual TMP value of the training data, and p_i is the predicted TMP value. A solution with a smaller fitness f on the training data set has a better chance of surviving into the successive generations. LIBSVM was used as the main tool for training and validating the SVR models [21].
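A sketch of the fitness function of (6) follows; here |a_i| is used in the denominator so the measure stays positive for the negative-valued TMPs, which is an assumption on our part since (6) writes the denominator simply as a_i:

```python
import numpy as np

def fitness(actual, predicted):
    """Eq. (6): mean absolute percentage error over the n_t training samples."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(actual - predicted) / np.abs(actual)) * 100.0)

a = np.array([-80.0, -40.0, -20.0])   # actual TMPs (illustrative)
p = np.array([-76.0, -42.0, -20.0])   # predicted TMPs
print(fitness(a, p))                  # (0.05 + 0.05 + 0) / 3 * 100 = 3.33...
```

Each candidate (C, σ², ε) is scored by training an SVR model and evaluating this error; smaller is better.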
2.2. Genetic Algorithm (GA) Optimization Method. The GA [22] is a biologically motivated optimization technique guided by the concepts of biological evolution and Darwin's principle of survival of the fittest; it is a computer model of the evolution of a population of artificial individuals. In this study, for the specific problem of optimizing the hyperparameters (C, σ², ε), the process is defined as follows.

Step 1 (initialize population). The population size NP is 20 and the dimension of the parameter vectors D is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001. The probabilities of selection (stochastic universal sampling), crossover (multipoint crossover), and mutation (the mutation operator of the Breeder Genetic Algorithm) used herein are 0.9, 0.8, and 0.05, respectively.

Step 2 (encode chromosomes). According to the possible ranges of the parameters C, σ², and ε given above, the GA uses binary encoding, and each parameter is encoded by 20 bits. The search space is therefore the solution space in which each potential solution is encoded as a distinct chromosome.

Step 3. The parameters C, σ², and ε of each individual are used to build an SVR model, and the cardiac TMPs are reconstructed from the BSPs of the training data. The performance of each individual in the generation is then evaluated by the fitness function of (6). The individual with the minimum fitness value is selected, and its chromosome is preserved as the best result.

Step 4. The basic genetic operators, comprising selection, mutation, and crossover, are applied in order to obtain a new generation, in which the new individual (C, σ², ε) with the best performance is retained.

Step 5. The new best fitness value is compared with that of the stored best result, and the better of the two is kept as the updated best result.

Step 6. The process ends when the termination criterion is met, and the best chromosome is presented as the final solution; otherwise, go back to Step 4.
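Steps 1-6 can be condensed into a toy sketch. For brevity this uses real-valued genes instead of the paper's 20-bit binary encoding, tournament selection in place of stochastic universal sampling, and a cheap stand-in objective instead of training 478 SVR models; the population size, iteration count, and operator probabilities follow Step 1:

```python
import random

def toy_fitness(c, sigma2, eps):
    # Stand-in for eq. (6); the real objective trains an SVR per candidate.
    return (c - 500.0) ** 2 / 1e6 + (sigma2 - 1.0) ** 2 + eps * 1e4

BOUNDS = [(0.0, 10000.0), (0.0, 2.0), (0.0, 1e-4)]   # C, sigma^2, epsilon

def ga_optimize(fit, bounds, pop_size=20, iters=30, pc=0.8, pm=0.05, seed=42):
    rnd = random.Random(seed)
    pop = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=lambda ind: fit(*ind))
    for _ in range(iters):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents
            p1 = min(rnd.sample(pop, 2), key=lambda ind: fit(*ind))
            p2 = min(rnd.sample(pop, 2), key=lambda ind: fit(*ind))
            # uniform crossover with probability pc
            if rnd.random() < pc:
                child = [a if rnd.random() < 0.5 else b for a, b in zip(p1, p2)]
            else:
                child = p1[:]
            # mutation: resample a gene within its bounds
            for j, (lo, hi) in enumerate(bounds):
                if rnd.random() < pm:
                    child[j] = rnd.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=lambda ind: fit(*ind))
        if fit(*cand) < fit(*best):
            best = cand
    return best

C_opt, s2_opt, eps_opt = ga_optimize(toy_fitness, BOUNDS)
print(C_opt, s2_opt, eps_opt)
```

The returned triple would then parameterize one SVR model; in the paper this search is repeated for each of the 478 heart-surface nodes.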
2.3. Differential Evolution (DE) Optimization Method. Differential evolution [23] is a population-based, parallel direct search method used to approximate the globally optimal solution. As mentioned above, the optimization of the hyperparameters (C, σ², ε) can be transformed into minimizing the fitness function: min f(x), x ∈ {x_{i,G}, i = 1, 2, …, NP}, where each x_{i,G} is a three-dimensional parameter vector (C, σ², ε). Each generation then evolves by employing the evolutionary operators of mutation, crossover, and selection to produce a trial vector for each individual vector [24]; the detailed evolutionary strategy is described as follows.
Step 1 (initialize DE parameters). The population size NP is set to 20 [25] and the dimension of the parameter vectors D is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001. The mutation factor F is selected in [0.5, 1] and the crossover rate CR in [0, 1].
Step 2 (initialize population). Set G = 0 and generate an NP × D population consisting of individuals (C, σ², ε) with uniformly distributed random values within the control variable bounds of the parameter space.
Step 3. With these parameters, train the SVR models and forecast the training data sets. Then calculate the fitness of all individuals in the generation, record the minimum value of the fitness function, and set the corresponding (C, σ², ε) as x_best.
Step 4 (mutation operation). For each target vector x_{i,G}, an associated mutant vector is generated according to

v_{i,G+1} = x_best + F [(x_{r1,G} − x_{r2,G}) + (x_{r3,G} − x_{r4,G})].   (7)

The random indexes r_1, r_2, r_3, and r_4 have to be distinct from each other and from the current trial index i.
Step 5 (crossover operation). In order to increase the diversity of the perturbed parameter vectors, crossover is introduced by the following formula:

u_{j,i,G+1} = { v_{j,i,G+1},  if randb(j) ≤ CR or j = rnbr(i)
             { x_{j,i,G},    otherwise,   (8)

where u_{i,G+1} = (u_{1,i,G+1}, u_{2,i,G+1}, u_{3,i,G+1}), randb(j) ∈ [0, 1] is the jth evaluation of a uniform random number generator, and rnbr(i) is a randomly chosen integer in [1, 3], which ensures that u_{i,G+1} gets at least one element from v_{i,G+1}.
Step 6 (selection operation). After the fitness values of the new generation are calculated, the new best individual u_{best,G+1} is selected and the selection operation is performed. To decide whether or not it should replace x_best, the fitness value f(u_{best,G+1}) is compared with f(x_best):

x_best = { u_{best,G+1},  if f(u_{best,G+1}) ≤ f(x_best)
         { x_best,        otherwise.   (9)
Step 7 (termination check). If the maximum number of iterations G is reached or the fitness value f(x_best) falls within the fitness tolerance, return the recorded globally optimal parameters (C, σ², and ε); otherwise, go to Step 4.
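Steps 1-7 above can be sketched compactly. This toy illustration follows the x_best-based mutation of (7), the crossover of (8), and the greedy selection of (9), but scores candidates with a cheap stand-in objective rather than SVR training:

```python
import random

def toy_fitness(x):
    # Stand-in for eq. (6); x = (C, sigma^2, epsilon).
    c, s2, e = x
    return (c - 500.0) ** 2 / 1e6 + (s2 - 1.0) ** 2 + e * 1e4

BOUNDS = [(0.0, 10000.0), (0.0, 2.0), (0.0, 1e-4)]

def de_optimize(fit, bounds, np_=20, iters=30, F=0.7, CR=0.9, seed=1):
    rnd = random.Random(seed)
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    pop = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    best = min(pop, key=fit)
    for _ in range(iters):
        for i in range(np_):
            r1, r2, r3, r4 = rnd.sample([j for j in range(np_) if j != i], 4)
            jrand = rnd.randrange(len(bounds))          # rnbr(i) of eq. (8)
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rnd.random() <= CR or j == jrand:    # crossover, eq. (8)
                    v = best[j] + F * ((pop[r1][j] - pop[r2][j])
                                       + (pop[r3][j] - pop[r4][j]))  # eq. (7)
                    trial.append(clip(v, lo, hi))
                else:
                    trial.append(pop[i][j])
            if fit(trial) <= fit(pop[i]):               # selection, eq. (9)
                pop[i] = trial
                if fit(trial) < fit(best):
                    best = trial
    return best

best = de_optimize(toy_fitness, BOUNDS)
print(best)
```

Clipping mutants back into the bounds is our own simplification; other boundary-handling schemes (resampling, reflection) are equally common in DE practice.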
2.4. Particle Swarm Optimization (PSO) Method. Particle swarm optimization (PSO) is a bioinspired stochastic optimization technique developed by simulating the social behaviour of birds and insects that survive by maintaining swarm actions. A particle is considered a bird in a swarm consisting of a number of birds, and all particles fly through the search space by following the current optimum particle to find the final optimum solution of the optimization problem [26].
The detailed procedure for selecting the best hyperparameters (C, σ², and ε) is listed as follows.
Step 1 (particle initialization and PSO parameter setting [27]). Set the PSO parameters, including the ranges of C, σ², and ε; the number of particles is 20 and the particle dimension is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001.
Step 2. Randomly generate a primal population of particles (C, σ², ε) and velocities inside the search space.
Step 3. Construct an SVR model with the parameters in each particle and perform the prediction with the training data. Then evaluate the regression accuracy with the defined fitness function, and record the best performer, that is, the one with the minimum error, as the global best solution.
Step 4 (particle manipulation). Suppose that X_i(k) and V_i(k) are the position and velocity of particle i at step k; they are updated to X_i(k+1) and V_i(k+1) at step k+1 by applying the two equations below:

V_i(k+1) = ω V_i(k) + c_1 r_1 [p_i(k) − X_i(k)] + c_2 r_2 [p_g(k) − X_i(k)],
X_i(k+1) = X_i(k) + V_i(k+1) Δt,   (10)
where V_i ∈ [V_{i,min}, V_{i,max}], Δt is the unit time step, and ω is the inertia weight; p_i(k) and p_g(k) are the best coordinates found by particle i and by the whole swarm up to step k, respectively. The positive constants c_1 and c_2 are learning rates, while r_1 and r_2 are random numbers between 0 and 1.
Step 5. Train the SVR models using the particles (C, σ², and ε) of the new generation and forecast the training data sets. After evaluating the fitness function, pick the new best particle as the local best.
Step 6 Compare the local best and global best and choosethe better one to update the global best
Step 7 (stopping check). If the termination criterion (the number of iterations) is met or the fitness value of the global best falls within the predefined tolerance, return the recorded global best as the best parameters (C, σ², and ε); otherwise, go to Step 4.
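The PSO loop of Steps 1-7 can likewise be sketched. The velocity and position updates follow eq. (10) with a unit time step; the inertia weight and learning rates are illustrative choices, and a cheap stand-in objective again replaces SVR training:

```python
import random

def toy_fitness(x):
    # Stand-in for eq. (6); x = (C, sigma^2, epsilon).
    c, s2, e = x
    return (c - 500.0) ** 2 / 1e6 + (s2 - 1.0) ** 2 + e * 1e4

BOUNDS = [(0.0, 10000.0), (0.0, 2.0), (0.0, 1e-4)]

def pso_optimize(fit, bounds, n=20, iters=30, w=0.7, c1=1.5, c2=1.5, seed=3):
    rnd = random.Random(seed)
    X = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    V = [[0.0] * len(bounds) for _ in range(n)]
    P = [x[:] for x in X]              # personal bests p_i(k)
    g = min(P, key=fit)[:]             # global best p_g(k)
    for _ in range(iters):
        for i in range(n):
            for j, (lo, hi) in enumerate(bounds):
                r1, r2 = rnd.random(), rnd.random()
                # velocity and position updates of eq. (10), unit time step
                V[i][j] = (w * V[i][j] + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (g[j] - X[i][j]))
                X[i][j] = min(max(X[i][j] + V[i][j], lo), hi)
            if fit(X[i]) < fit(P[i]):
                P[i] = X[i][:]
                if fit(P[i]) < fit(g):
                    g = P[i][:]
    return g

g_best = pso_optimize(toy_fitness, BOUNDS)
print(g_best)
```

Because every particle is pulled toward both its personal best and the swarm's global best, PSO typically needs fewer fitness evaluations to settle than GA or DE, which is consistent with the timing comparison reported in Section 4.3.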
2.5. Simulation Protocol and Data Set. The SVR model is tested with our previously developed realistic heart-torso model [11, 12]. In this simulation protocol, a normal ventricular excitation is used as an example to calculate the data set for the SVR model. The considered ventricular excitation period, from the first breakthrough to the end, is 357 ms with a time step of 1 ms, so 358 temporal data sets of BSPs φ_B and TMPs φ_m are calculated. Sixty data sets (φ_B and φ_m) are chosen at times of 3 ms, 9 ms, 15 ms, …, 357 ms after the first ventricular breakthrough; these are used as testing samples to assess the SVR model's prediction capability and robustness. The remaining 298 of the 358 data sets are employed as training samples for the training and optimal parameter selection procedures. Moreover, during each ventricular excitation period, 412 potentials on the BSPs
and 478 potentials on the TMPs are captured. That is to say, the matrix of BSPs is divided into training data (298 × 412) and testing data (60 × 412), and the matrix of TMPs is likewise divided into training data (298 × 478) and testing data (60 × 478). Each SVR regression model is built from the mapping relation between the BSP training data (298 × 412) and one TMP training point (298 × 1). Based on all the training data, 478 different regression models were built with GA-SVR, DE-SVR, and PSO-SVR, respectively. Using the corresponding SVR model, we can reconstruct the corresponding TMPs (60 × 1) from the BSP testing data (60 × 412); over all the testing samples, the TMPs are reconstructed from all 478 regression SVR models.
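The train/test partition described above can be sketched with placeholder arrays (random data standing in for the simulated potentials). The index arithmetic assumes the 358 instants correspond to 0-357 ms, so that instant t ms is row t; the paper does not spell out this offset:

```python
import numpy as np

rng = np.random.default_rng(0)
bsps = rng.random((358, 412))   # 358 time instants x 412 body-surface leads
tmps = rng.random((358, 478))   # 358 time instants x 478 heart-surface nodes

# Testing instants at 3 ms, 9 ms, ..., 357 ms (60 samples in total).
test_idx = np.arange(3, 358, 6)
train_idx = np.setdiff1d(np.arange(358), test_idx)

X_train, X_test = bsps[train_idx], bsps[test_idx]     # (298, 412), (60, 412)
Y_train, Y_test = tmps[train_idx], tmps[test_idx]     # (298, 478), (60, 478)
# One SVR model per heart-surface node: inputs (298, 412) -> targets (298,)
y_node0 = Y_train[:, 0]
print(X_train.shape, X_test.shape, y_node0.shape)
```

Fitting one model per column of Y_train reproduces the paper's 478 independent regression problems.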
3. The Proposed System Framework
3.1. Overview of the SVR Parameter Optimization Framework. As shown in Figure 1, in the first stage the original input data are preprocessed by scaling the training data and by feature extraction. In the second stage, the hyperparameters (C, σ², and ε) of the SVR method are set carefully using the GA, DE, and PSO optimization methods, and GA-SVR, DE-SVR, and PSO-SVR are applied to construct the respective hybrid SVR models. Finally, with these effective hybrid SVR models, we reconstruct the transmembrane potentials (TMPs) from the body surface potentials (BSPs) of the testing samples.
3.2. Preprocessing the Data Set

3.2.1. Scaling the Training Set. During the preprocessing stage, each input variable is scaled in order to keep the potential values from spanning a great numerical range and to prevent numeric difficulties. Generally, all input values, including the 358 temporal data sets of BSPs and TMPs, are linearly scaled to the range (0, 1) by the following formula [see (11)]:

φ′_t = (φ_t − φ_{t,min}) / (φ_{t,max} − φ_{t,min}),   (11)
where φ_t is the original potential value at time t, φ′_t is the scaled potential value, and φ_{t,max} and φ_{t,min} are the maximum and minimum potential values at time t, respectively.
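The per-instant min-max scaling of (11) can be sketched as follows (illustrative data only):

```python
import numpy as np

def scale_per_instant(phi):
    """Eq. (11): scale each row (one time instant t) to the range (0, 1)."""
    phi = np.asarray(phi, dtype=float)
    tmin = phi.min(axis=1, keepdims=True)   # phi_{t,min}
    tmax = phi.max(axis=1, keepdims=True)   # phi_{t,max}
    return (phi - tmin) / (tmax - tmin)

bsp = np.array([[-80.0, -40.0, 0.0, 20.0],    # two instants, four leads
                [-90.0, -60.0, -30.0, 0.0]])
scaled = scale_per_instant(bsp)
print(scaled[0])
```

Scaling each time instant separately keeps a strongly depolarized instant from dominating the numeric range of a quiescent one.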
33 Feature Extraction by Using KPCA The kernel principalcomponent analysis (KPCA) is a nonlinear feature extractionmethod which is one type of a nonlinear PCA developedby generalizing the kernel method into PCA Specificallythe KPCA initially maps the original inputs into a high-dimensional feature space 119865 using the kernel method andthen calculates the PCA in the high-dimensional featurespace 119865The detailed theory of KPCA can be consulted in thepaper [28]
Feature extraction is a significant task for preprocessingthe original input data in developing an effective SVRmodelAs is investigated in our previous study [12] the SVR withfeature extraction by using KPCA has superior performances
BSPs
Scaling data set
KPCA for feature extraction
Train data
Parameters selected by
GA
Testing data
Parameters selected by
DE
Parameters selected by
PSO
Build GA-SVR
model
Build DE-SVR
model
Build PSO-SVR
model
Reconstructing TMPs
Figure 1 The framework of proposed parameters optimizationmethod for SVR method in solving the inverse ECG problem
0 50 100 150 200 250 300 350 400
0
20
Time (ms)
minus100
minus80
minus60
minus40
minus20
Original TMPsGA-SVR method
DE-SVR methodPSO-SVR method
TMP
(mV
)
Figure 2 The temporal course of the TMPs for one representativesource point on epicardium (50th) over the 60 testing timesThe reconstruction TMPs with GA-SVR DE-SVR and PSO-SVRmethods are all compared with original TMPs
to that of using PCAor the single SVRmethod in reconstruct-ing the TMPs In this paper the kernel principal componentanalysis (KPCA) was proposed to implement the featureextraction And the dimension of BSPs dataset is reducedfrom 412 to 200 properly which can reduce the dimensionsof the inputs and improve the generalization performances ofthe SVR method
4. Results and Discussion
6 Computational and Mathematical Methods in Medicine

[Figure 3: The TMP distributions (mV) on the ventricular surface at two sequential testing time points: (a) 27 ms and (b) 51 ms. The upper row shows the TMPs from an anterior view and the lower row from a posterior view; in each row the first panel is the original TMPs and the remaining three are the TMPs reconstructed by the GA-SVR, DE-SVR, and PSO-SVR methods.]

4.1. The Result of Parameter Selection. In this study, the GA, DE, and PSO algorithms were used to seek the corresponding optimal hyperparameters (C, σ², and ε) of the
478 SVR models. Table 1 reports the averages and standard deviations of the three parameters over the 478 SVR models obtained with the GA, DE, and PSO optimization algorithms. The gaps among the optimal parameters found by the three methods are obvious. The standard deviations of ε are omitted because they are very small.
4.2. Comparison of Reconstruction Accuracy. This study compares the reconstruction performance of the three intelligent optimization algorithms used to select the SVR hyperparameters. With the GA-SVR, DE-SVR, and PSO-SVR hybrid models, two types of experiments for reconstructing the TMPs were conducted. The first experiment reconstructs the TMPs for one representative source point (the 50th point) on the epicardium over the 60 testing times, as depicted in Figure 2. Since the gaps among
Table 1: The averages and standard deviations of the optimal hyperparameters (C, σ², and ε) obtained by the GA, DE, and PSO optimization algorithms.

Parameter | GA | DE | PSO
C | 1447.61 ± 594.32 | 7624.14 ± 913.67 | 5.31 ± 3.02
σ² | 0.64 ± 0.55 | 1.59 ± 1.38 | 1.22 ± 1.02
ε | 1.48e−05 | 1.80e−05 | 1.98e−05
Table 2: The mean values and standard deviations of the 60 testing errors between the simulated and reconstructed TMPs with the GA-SVR, DE-SVR, and PSO-SVR methods at one representative source point (the 50th point on the epicardium).

Index | GA-SVR | DE-SVR | PSO-SVR
Mean value | 2.96 | 2.69 | 1.55
Standard deviation | 1.92 | 1.67 | 1.03
these three methods are slight, a statistical analysis was added to validate the results. The mean values and standard deviations of the 60 testing errors between the simulated and reconstructed TMPs for the GA-SVR, DE-SVR, and PSO-SVR methods are shown in Table 2. From Figure 2 and Table 2, PSO-SVR outperforms the GA-SVR and DE-SVR methods in reconstructing the TMPs for this representative source point over all the testing times; furthermore, DE-SVR is superior to GA-SVR.
The second experiment reconstructs the TMPs on all the heart surface points at one time instant. These inverse ECG solutions are shown in Figure 3, in which two sequential testing time points (27 and 51 ms after the first ventricular breakthrough) are presented to illustrate the performance of the GA-SVR, DE-SVR, and PSO-SVR methods. Among the three parameter optimization methods, the hybrid PSO-SVR performs better than the GA-SVR and DE-SVR methods: its reconstructed solutions are much closer to the simulated TMP distributions.
Based on the simulated TMPs, the accuracy of the TMPs reconstructed by the three intelligent optimization algorithms at the testing times can be evaluated by the mean square error (MSE), the relative error (RE), and the correlation coefficient (CC):

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left[(\varphi_c)_i - (\varphi_e)_i\right]^2,$$

$$\mathrm{RE} = \frac{\left\|\varphi_c - \varphi_e\right\|}{\left\|\varphi_e\right\|},$$

$$\mathrm{CC} = \frac{\sum_{i=1}^{n}\left[(\varphi_c)_i - \overline{\varphi}_c\right]\left[(\varphi_e)_i - \overline{\varphi}_e\right]}{\left\|\varphi_c - \overline{\varphi}_c\right\|\left\|\varphi_e - \overline{\varphi}_e\right\|}. \quad (12)$$

Here, φ_e is the simulated TMP distribution at time t and φ_c is the reconstructed one. The quantities φ̄_e and φ̄_c are the
[Figure 4: The performance of the reconstructed TMPs over the 60 sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively: (a) the mean square error (MSE), (b) the relative error (RE), and (c) the correlation coefficient (CC) of the reconstructed TMPs.]
mean values of φ_e and φ_c over the ventricular surface nodes at time t, and n is the number of nodes on the ventricular surface.
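The three indexes in (12) translate directly into numpy; the following sketch (function names are ours, and the toy potential vectors are purely illustrative) computes MSE, RE, and CC for one time instant.

```python
import numpy as np

def mse(phi_c, phi_e):
    """Mean square error between reconstructed and simulated TMPs."""
    return np.mean((phi_c - phi_e) ** 2)

def re(phi_c, phi_e):
    """Relative error: norm of the residual over the norm of the truth."""
    return np.linalg.norm(phi_c - phi_e) / np.linalg.norm(phi_e)

def cc(phi_c, phi_e):
    """Correlation coefficient between the two potential distributions."""
    dc = phi_c - phi_c.mean()
    de = phi_e - phi_e.mean()
    return np.sum(dc * de) / (np.linalg.norm(dc) * np.linalg.norm(de))

phi_e = np.array([1.0, 2.0, 3.0, 4.0])   # toy simulated TMPs at one instant
phi_c = np.array([1.1, 1.9, 3.2, 3.8])   # toy reconstructed TMPs
print(mse(phi_c, phi_e), re(phi_c, phi_e), cc(phi_c, phi_e))
```

A CC close to 1 together with a small MSE and RE indicates an accurate reconstruction, which is how Figures 4 and 5 are read.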
The MSE, RE, and CC of the three sets of reconstructed TMPs over the 60 testing samples are shown in Figure 4. In contrast to GA-SVR and DE-SVR, the PSO-SVR method yields better results, with lower RE and MSE and a higher CC. To validate the robustness of the three methods in reconstructing the TMPs, the quantities m-MSE, m-RE, and m-CC, that is, the mean values of MSE, RE, and CC over the testing time instants, are presented. Moreover, std-MSE, std-RE, and std-CC, that is, the standard deviations of MSE, RE, and CC over the whole testing set, are also investigated. As shown in Figure 5, the mean values of the MSE, RE, and CC over the 60 testing samples obtained by the GA-SVR, DE-SVR, and PSO-SVR methods are presented, together with the standard deviations of those 60 MSEs, REs, and CCs.
4.3. Comparison of Reconstruction Efficiency. In solving the inverse ECG problem, seeking the optimal parameters over the prescribed range is time consuming; devising a relatively time-saving optimization method therefore matters greatly in practical applications, particularly in the diagnosis of cardiac disease. Here we report the mean time of the three optimization methods in selecting the optimal SVR hyperparameters. To forecast the 478 TMPs over 60 testing samples on the epi- and endocardial surfaces, 478 sets of parameters (C, σ², and ε) have to be selected by each optimization method, and 478 models are built accordingly and applied to the testing data. In this study, the mean time for selecting the optimal parameters with GA, DE, and PSO is 208.27 s, 342.53 s, and 130.16 s, respectively. PSO thus converges more quickly and takes less time than GA and DE. In terms of efficiency, the PSO-SVR method significantly outperforms the GA-SVR and DE-SVR in the
[Figure 5: The m-MSE, m-RE, m-CC, std-MSE, std-RE, and std-CC over the 60 testing sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively, in terms of mean ± standard deviation: (a) m-MSE and std-MSE, (b) m-RE and std-RE, and (c) m-CC and std-CC.]
reconstruction of the TMPs and is the most efficient of the three optimization methods.
5. Conclusion
In the study of the inverse ECG problem, SVR is a powerful technique for solving the nonlinear regression problem and can serve as a promising tool for the inverse reconstruction of the TMPs. This study introduces three optimization methods (GA, DE, and PSO) to determine the hyperparameters of the SVR model and uses the resulting models to reconstruct the TMPs from the remote BSPs. Feature extraction using KPCA is also adopted to preprocess the original input data. The experimental results demonstrate that PSO-SVR is the most effective method in terms of both accuracy and efficiency: its reconstructed TMPs are closest to the simulated TMPs, and it is much more efficient in determining the optimal parameters and building the prediction model. According to these results, the PSO-SVR method can serve as a promising tool for solving the nonlinear regression problem posed by the inverse ECG problem.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (30900322, 61070063, and 61272311). This work is also supported by the Research Foundation of the Education Bureau of Zhejiang Province under Grant no. Y201122145.
References
[1] M. Jiang, J. Lv, C. Wang, L. Xia, G. Shou, and W. Huang, "A hybrid model of maximum margin clustering method and support vector regression for solving the inverse ECG problem," Computing in Cardiology, vol. 38, pp. 457–460, 2011.
[2] C. Ramanathan, R. N. Ghanem, P. Jia, K. Ryu, and Y. Rudy, "Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia," Nature Medicine, vol. 10, no. 4, pp. 422–428, 2004.
[3] B. F. Nielsen, X. Cai, and M. Lysaker, "On the possibility for computing the transmembrane potential in the heart with a one shot method: an inverse problem," Mathematical Biosciences, vol. 210, no. 2, pp. 523–553, 2007.
[4] G. Shou, L. Xia, M. Jiang, Q. Wei, F. Liu, and S. Crozier, "Truncated total least squares: a new regularization method for the solution of ECG inverse problems," IEEE Transactions on Biomedical Engineering, vol. 55, no. 4, pp. 1327–1335, 2008.
[5] C. Ramanathan, P. Jia, R. Ghanem, D. Calvetti, and Y. Rudy, "Noninvasive electrocardiographic imaging (ECGI): application of the generalized minimal residual (GMRes) method," Annals of Biomedical Engineering, vol. 31, no. 8, pp. 981–994, 2003.
[6] M. Jiang, L. Xia, G. Shou, and M. Tang, "Combination of the LSQR method and a genetic algorithm for solving the electrocardiography inverse problem," Physics in Medicine and Biology, vol. 52, no. 5, pp. 1277–1294, 2007.
[7] M. Brown, S. R. Gunn, and H. G. Lewis, "Support vector machines for optimal classification and spectral unmixing," Ecological Modelling, vol. 120, no. 2-3, pp. 167–179, 1999.
[8] K. Ito and R. Nakano, "Optimizing support vector regression hyper-parameters based on cross-validation," in Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 871–876, 2005.
[9] V. N. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988–999, 1999.
[10] B. Schölkopf and A. J. Smola, Learning with Kernels [Ph.D. thesis], GMD, Birlinghoven, Germany, 1998.
[11] M. Jiang, L. Xia, G. Shou, Q. Wei, F. Liu, and S. Crozier, "Effect of cardiac motion on solution of the electrocardiography inverse problem," IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, pp. 923–931, 2009.
[12] M. Jiang, L. Zhu, Y. Wang, et al., "Application of kernel principal component analysis and support vector regression for reconstruction of cardiac transmembrane potentials," Physics in Medicine and Biology, vol. 56, no. 6, pp. 1727–1742, 2011.
[13] S. S. Keerthi, "Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1225–1229, 2002.
[14] J. S. Sartakhti, M. H. Zangooei, and K. Mozafari, "Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA)," Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 570–579, 2011.
[15] B. Üstün, W. J. Melssen, M. Oudenhuijzen, and L. M. C. Buydens, "Determination of optimal support vector regression parameters by genetic algorithms and simplex optimization," Analytica Chimica Acta, vol. 544, no. 1-2, pp. 292–305, 2005.
[16] J. Wang, L. Li, D. Niu, and Z. Tan, "An annual load forecasting model based on support vector regression with differential evolution algorithm," Applied Energy, vol. 94, pp. 65–70, 2012.
[17] K.-Y. Chen and C.-H. Wang, "Support vector regression with genetic algorithms in forecasting tourism demand," Tourism Management, vol. 28, no. 1, pp. 215–226, 2007.
[18] G. Hu, L. Hu, H. Li, K. Li, and W. Liu, "Grid resources prediction with support vector regression and particle swarm optimization," in Proceedings of the 3rd International Joint Conference on Computational Sciences and Optimization (CSO '10), vol. 1, pp. 417–422, May 2010.
[19] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
[20] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[21] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[22] G. Oliveri and A. Massa, "Genetic algorithm (GA)-enhanced almost difference set (ADS)-based approach for array thinning," IET Microwaves, Antennas and Propagation, vol. 5, no. 3, pp. 305–315, 2011.
[23] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[24] M. Varadarajan and K. S. Swarup, "Network loss minimization with voltage security using differential evolution," Electric Power Systems Research, vol. 78, no. 5, pp. 815–823, 2008.
[25] K. Zielinski, P. Weitkemper, R. Laur, and K. D. Kammeyer, "Parameter study for differential evolution using a power allocation problem including interference cancellation," Evolutionary Computation, vol. 4, pp. 16–21, 2006.
[26] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, pp. 35–57, 2007.
[27] L.-P. Zhang, H.-J. Yu, and S.-X. Hu, "Optimal choice of parameters for particle swarm optimization," Journal of Zhejiang University, vol. 6, no. 6, pp. 528–534, 2005.
[28] R. Rosipal, M. Girolami, L. J. Trejo, and A. Cichocki, "Kernel PCA for feature extraction and de-noising in nonlinear regression," Neural Computing and Applications, vol. 10, no. 3, pp. 231–243, 2001.
uniform probability distribution, random values within the control variable bounds of the parameter space.
Step 3. With these parameters, train the SVR model and forecast the training data sets. Then calculate the fitness of all individuals in the generation, record the minimum value of the fitness function, and set the corresponding (C, σ², and ε) as x_best.
Step 4 (mutation operation). For each target vector x_{i,G}, an associated mutant vector is generated according to

$$v_{i,G+1} = x_{\text{best}} + F\left[\left(x_{r_1,G} - x_{r_2,G}\right) + \left(x_{r_3,G} - x_{r_4,G}\right)\right]. \quad (7)$$

The random indexes r₁, r₂, r₃, and r₄ have to be distinct from each other and from the current trial index i.
Step 5 (crossover operation). In order to increase the diversity of the perturbed parameter vectors, crossover is introduced by the following formula:

$$u_{j,i,G+1} = \begin{cases} v_{j,i,G+1}, & \text{if } \mathrm{randb}(j) \le \mathrm{CR} \text{ or } j = \mathrm{rnbr}(i),\\ x_{j,i,G}, & \text{if } \mathrm{randb}(j) > \mathrm{CR} \text{ and } j \ne \mathrm{rnbr}(i), \end{cases} \quad (8)$$

where u_{i,G+1} = (u_{1,i,G+1}, u_{2,i,G+1}, u_{3,i,G+1}), randb(j) ∈ [0, 1] is the jth evaluation of a uniform random number generator, and rnbr(i) is a randomly chosen integer in [1, 3] that ensures u_{i,G+1} gets at least one element from v_{i,G+1}.
Step 6 (selection operation). After the fitness values of the new generation are calculated, the new best individual is selected as u_{best,G+1} and the selection operation is performed. To decide whether it replaces x_best, the fitness value f(u_{best,G+1}) is compared with f(x_best):

$$x_{\text{best}} = \begin{cases} u_{\text{best},G+1}, & f(u_{\text{best},G+1}) \le f(x_{\text{best}}),\\ x_{\text{best}}, & f(u_{\text{best},G+1}) > f(x_{\text{best}}). \end{cases} \quad (9)$$
Step 7 (termination check). If the maximum number of iterations G is reached or f(x_best) falls within the fitness tolerance, return the recorded global optimal parameters (C, σ², and ε); otherwise, go to Step 4.
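Steps 1–7 above amount to a best-centered DE loop. The sketch below is our own illustration, not the paper's code: the SVR training inside the fitness evaluation is replaced by a toy quadratic, the bounds are hypothetical, and selection is done per individual (the standard DE greedy rule) rather than only between u_best and x_best as in Step 6.

```python
import numpy as np

def de_optimize(fitness, bounds, pop_size=20, F=0.5, CR=0.9, max_gen=30, seed=0):
    """Differential evolution: mutate around the current best with two
    difference vectors (eq. 7), binomial crossover (eq. 8), greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)      # Step 2: init
    fit = np.array([fitness(x) for x in pop])               # Step 3: evaluate
    best = pop[np.argmin(fit)].copy()
    for _ in range(max_gen):
        for i in range(pop_size):
            r1, r2, r3, r4 = rng.choice(
                [j for j in range(pop_size) if j != i], 4, replace=False)
            # Step 4: mutation around the best individual, eq. (7)
            v = best + F * ((pop[r1] - pop[r2]) + (pop[r3] - pop[r4]))
            v = np.clip(v, lo, hi)
            # Step 5: binomial crossover, eq. (8); jrand forces one mutant gene
            jrand = rng.integers(dim)
            u = np.where((rng.random(dim) <= CR) | (np.arange(dim) == jrand),
                         v, pop[i])
            # Step 6: greedy selection (per-individual variant)
            fu = fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
        best = pop[np.argmin(fit)].copy()                   # track x_best
    return best

# Toy fitness standing in for the SVR training error; minimum near (10, 1, 1e-4)
bounds = np.array([[1.0, 100.0], [0.1, 10.0], [1e-5, 1e-3]])
f = lambda x: (x[0] - 10) ** 2 + (x[1] - 1) ** 2 + (x[2] - 1e-4) ** 2
best = de_optimize(f, bounds)
```

In the paper's setting, `fitness` would train an SVR with the candidate (C, σ², ε) and return its validation error.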
2.4. Particle Swarm Optimization (PSO) Method. Particle swarm optimization (PSO) is a bioinspired stochastic optimization technique developed by simulating the social behaviour of birds and insects that survive by maintaining swarm actions. A particle is regarded as a bird in a swarm, and all particles fly through the search space following the current optimum particle in order to find the final optimum solution of the optimization problem [26].

The detailed procedure for selecting the best hyperparameters (C, σ², and ε) is as follows.
Step 1. Particle initialization and PSO parameter setting [27]: set the PSO parameters, including the search scopes of C, σ², and ε; the number of particles is 20 and the particle dimension is 3. The termination criterion is set as follows: the number of iterations is 30 and the fitness tolerance is 0.001.
Step 2. Randomly generate a primal population of particles (C, σ², and ε) and velocities inside the search space.
Step 3. Construct an SVR model with the parameters in each particle and perform the prediction with the training data. Then evaluate the regression accuracy with the defined fitness function and record the best performance, the one with minimum error, as the global best solution.
Step 4 (particle manipulations). Suppose that X_i(k) and V_i(k) are the particle and velocity at step k; they are updated to X_i(k+1) and V_i(k+1) at step k+1 by applying the two equations below:

$$V_i(k+1) = \omega V_i(k) + c_1 r_1\left(p_i(k) - X_i(k)\right) + c_2 r_2\left(p_g(k) - X_i(k)\right),$$

$$X_i(k+1) = X_i(k) + V_i(k+1)\,\Delta t, \quad (10)$$
where V_i ∈ [V_{i,min}, V_{i,max}], Δt is the unit time step, and ω is the inertia weight. p_i(k) and p_g(k) are the best coordinates found by particle i and by the whole swarm up to step k, respectively. The positive constants c₁ and c₂ are learning rates, while r₁ and r₂ are random numbers between 0 and 1.
Step 5. Train the SVR model using the particles (C, σ², and ε) of the new generation and forecast the training data sets. After evaluating the fitness function, pick the new best particle as the local best.
Step 6. Compare the local best with the global best and choose the better one to update the global best.
Step 7. Stop-condition checking: if the termination criterion (the number of iterations) is met or the fitness value of the global best falls within the predefined tolerance, return the recorded global best as the best parameters (C, σ², and ε); otherwise, go to Step 4.
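The PSO loop in Steps 1–7 can be sketched as follows. As with the DE sketch, this is our own illustration: the fitness is a toy quadratic standing in for the SVR training error, and the inertia weight and learning rates are hypothetical values, not the paper's settings.

```python
import numpy as np

def pso_optimize(fitness, bounds, n_particles=20, w=0.7, c1=1.5, c2=1.5,
                 max_iter=30, seed=0):
    """PSO following Steps 1-7: each particle tracks its personal best p_i,
    the swarm tracks a global best p_g, and positions/velocities follow
    equation (10) with a unit time step."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = lo + rng.random((n_particles, dim)) * (hi - lo)   # Step 2: init
    V = np.zeros((n_particles, dim))
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])          # Step 3: evaluate
    g = pbest[np.argmin(pbest_fit)].copy()                 # global best
    for _ in range(max_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step 4: velocity and position updates, eq. (10) with dt = 1
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        fit = np.array([fitness(x) for x in X])
        improved = fit < pbest_fit                         # Step 5: local bests
        pbest[improved] = X[improved]
        pbest_fit[improved] = fit[improved]
        g = pbest[np.argmin(pbest_fit)].copy()             # Step 6: global best
    return g

# Toy fitness standing in for the SVR training error
bounds = np.array([[1.0, 100.0], [0.1, 10.0], [1e-5, 1e-3]])
f = lambda x: (x[0] - 10) ** 2 + (x[1] - 1) ** 2
best = pso_optimize(f, bounds)
```

Because every particle is pulled toward both its personal best and the swarm's global best, the search typically needs fewer fitness evaluations to converge than GA or DE, which matches the timing comparison reported in Section 4.3.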
2.5. Simulation Protocol and Data Set. The SVR model is tested with our previously developed realistic heart-torso model [11, 12]. In this simulation protocol, a normal ventricular excitation is used as an example to calculate the data set for the SVR model. The considered ventricular excitation period, from the first breakthrough to the end, is 357 ms with a time step of 1 ms; thus 358 temporal data sets of BSPs φ_B and TMPs φ_m are calculated. Sixty data sets (φ_B and φ_m) are chosen at 3 ms, 9 ms, 15 ms, ..., 357 ms after the first ventricular breakthrough and are used as testing samples to assess the prediction capability and robustness of the SVR models. The remaining 298 of the 358 data sets are employed as training samples for the training and optimal parameter selection procedures. Moreover, during each ventricular excitation period, 412 potentials on the BSPs
and 478 potentials on the TMPs are captured. That is, the matrix of BSPs is divided into training data (298 × 412) and testing data (60 × 412), and the matrix of TMPs is divided into training data (298 × 478) and testing data (60 × 478). Each SVR regression model is built from the mapping relations between the BSP training data (298 × 412) and one TMP training point (298 × 1). Based on all the training data, 478 different regression models were built with GA-SVR, DE-SVR, and PSO-SVR, respectively. Using the corresponding SVR model, the corresponding TMPs (60 × 1) can be reconstructed from the BSP testing data (60 × 412); for all testing samples, the TMPs are reconstructed from all 478 regression SVR models.
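The per-output modelling scheme (one ε-SVR per TMP node, 478 in total) can be sketched as below. We use scikit-learn's `SVR` as a stand-in for the paper's LIBSVM setup [21], random arrays in place of the real data, and only 3 of the 478 nodes so the sketch runs quickly; note that scikit-learn parameterizes the RBF kernel by gamma rather than σ², an assumption of this illustration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.random((298, 412))   # stand-in for BSP training data
Y_train = rng.random((298, 478))   # stand-in for TMP training data
X_test = rng.random((60, 412))     # stand-in for BSP testing data

# One SVR model per TMP node; in the paper the hyperparameters (C, sigma^2,
# epsilon) for each of the 478 models come from the GA/DE/PSO search.
n_nodes = 3   # 478 in the paper; kept tiny here for speed
models = [SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=1e-4)
              .fit(X_train, Y_train[:, j]) for j in range(n_nodes)]

# Reconstruct the TMPs at each modelled node for the 60 testing samples
Y_pred = np.column_stack([m.predict(X_test) for m in models])
print(Y_pred.shape)
```

Each column of `Y_pred` corresponds to one reconstructed TMP node over the 60 testing time instants, mirroring the (60 × 1) reconstructions described above.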
3. The Proposed System Framework

3.1. Overview of the SVR Parameter Optimization Framework. As shown in Figure 1, in the first stage the original input data are preprocessed by scaling the training data and extracting features. In the second stage, the hyperparameters (C, σ², and ε) of the SVR method are set carefully by the GA, DE, and PSO optimization methods, and GA-SVR, DE-SVR, and PSO-SVR are applied to construct the respective hybrid SVR models. Finally, with these hybrid SVR models, the transmembrane potentials (TMPs) are reconstructed from the body surface potentials (BSPs) of the testing samples.
3.2. Preprocessing the Data Set

3.2.1. Scaling the Training Set. During the preprocessing stage, each input variable is scaled to keep the potential values from spanning a wide numerical range and to prevent numerical difficulties. All input values, including the 358 temporal data sets of BSPs and TMPs, are linearly scaled to the range (0, 1) by formula (11):
$$\varphi'_t = \frac{\varphi_t - \varphi_{t,\min}}{\varphi_{t,\max} - \varphi_{t,\min}}, \quad (11)$$
where φ_t is the original potential value at time t, φ′_t is the scaled potential value, φ_{t,max} is the maximum potential value at time t, and φ_{t,min} is the minimum potential value at time t.
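Equation (11) is ordinary min–max scaling applied per time instant; a short numpy sketch (the function name and toy values are ours):

```python
import numpy as np

def minmax_scale(phi):
    """Scale each row (one time instant) of a potentials matrix to (0, 1),
    following equation (11): subtract the row minimum, divide by the range."""
    pmin = phi.min(axis=1, keepdims=True)
    pmax = phi.max(axis=1, keepdims=True)
    return (phi - pmin) / (pmax - pmin)

phi = np.array([[-80.0, -20.0, 20.0],
                [-90.0,  10.0, 10.0]])   # toy potentials, rows = time instants
print(minmax_scale(phi))
```

After scaling, every time instant's potentials occupy the same numerical range, which prevents the large negative TMP values (around −90 mV) from dominating the kernel computations.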
5 Conclusion
In the study of the inverse ECG problem SVR methodis a powerful technique to solve the nonlinear regressionproblem and can serve as a promising tool for performingthe inverse reconstruction of the TMPsThis study introducesthree optimization methods (GA DE and PSO) to deter-mine the hyperparameters of the SVR model and utilizesthese models to reconstruct the TMPs from the remoteBSPs Feature extraction using the KPCA is also adopted topreprocess the original input data The experimental resultsdemonstrate that PSO-SVR is a relatively effective methodin terms of accuracy and efficiency By comparison PSO-SVR method is superior to GA-SVR and DE-SVR methodswhose reconstructed TMPs are close to the simulated TMPsBesides PSO-SVR is much more efficient in determining theoptimal parameters and in building the predicting modelAccording to these results the PSO-SVR method can serveas a promising tool to solve the nonlinear regression problemof the inverse ECG problem
Acknowledgments
This work is supported by the National Natural ScienceFoundation of China (30900322 61070063 and 61272311)This work is also supported by the Research Foundation
of Education Bureau of Zhejiang Province under Grant noY201122145
References
[1] M Jiang J Lv C Wang L Xia G Shou and W HuangldquoA hybrid model of maximum margin clustering method andsupport vector regression for solving the inverse ECGproblemrdquoComputing in Cardiology vol 38 pp 457ndash460 2011
[2] C Ramanathan R N Ghanem P Jia K Ryu and Y RudyldquoNoninvasive electrocardiographic imaging for cardiac electro-physiology and arrhythmiardquo Nature Medicine vol 10 no 4 pp422ndash428 2004
[3] B F Nielsen X Cai and M Lysaker ldquoOn the possibility forcomputing the transmembrane potential in the heart with a oneshot method an inverse problemrdquo Mathematical Biosciencesvol 210 no 2 pp 523ndash553 2007
[4] G Shou L Xia M Jiang Q Wei F Liu and S CrozierldquoTruncated total least squares a new regularization method forthe solution of ECG inverse problemsrdquo IEEE Transactions onBiomedical Engineering vol 55 no 4 pp 1327ndash1335 2008
[5] C Ramanathan P Jia R Ghanem D Calvetti and Y RudyldquoNoninvasive electrocardiographic imaging (ECGI) applica-tion of the generalized minimal residual (GMRes) methodrdquoAnnals of Biomedical Engineering vol 31 no 8 pp 981ndash9942003
[6] M Jiang L Xia G Shou and M Tang ldquoCombination ofthe LSQR method and a genetic algorithm for solving theelectrocardiography inverse problemrdquo Physics in Medicine andBiology vol 52 no 5 pp 1277ndash1294 2007
Computational and Mathematical Methods in Medicine 9
[7] M Brown S R Gunn and H G Lewis ldquoSupport vectormachines for optimal classification and spectral unmixingrdquoEcological Modelling vol 120 no 2-3 pp 167ndash179 1999
[8] K Ito and R Nakano ldquoOptimizing Support Vector regressionhyper-parameters based on cross-validationrdquo in Proceedings ofthe International Joint Conference onNeural Networks vol 3 pp871ndash876 2005
[9] V N Vapnik ldquoAn overview of statistical learning theoryrdquo IEEETransactions on Neural Networks vol 10 no 5 pp 988ndash9991999
[10] B Scholkopf and A J Smola Learning with kernels [PhDthesis] GMD Birlinghoven Germany 1998
[11] M Jiang L Xia G ShouQWei F Liu and S Crozier ldquoEffect ofcardiac motion on solution of the electrocardiography inverseproblemrdquo IEEE Transactions on Biomedical Engineering vol 56no 4 pp 923ndash931 2009
[12] M Jiang L Zhu Y Wang et al ldquoApplication of kernel prin-cipal component analysis and support vector regression forreconstruction of cardiac transmembrane potentialsrdquo Physics inMedicine and Biology vol 56 no 6 pp 1727ndash1742 2011
[13] S S Keerthi ldquoEfficient tuning of SVM hyperparameters usingradiusmargin bound and iterative algorithmsrdquo IEEE Transac-tions on Neural Networks vol 13 no 5 pp 1225ndash1229 2002
[14] J S Sartakhti M H Zangooei and K Mozafari ldquoHepatitisdisease diagnosis using a novel hybridmethod based on supportvectormachine and simulated annealing (SVM-SA)rdquoComputerMethods and Programs in Biomedicine vol 108 no 2 pp 570ndash579 2011
[15] B Ustun W J Melssen M Oudenhuijzen and L M CBuydens ldquoDetermination of optimal support vector regressionparameters by genetic algorithms and simplex optimizationrdquoAnalytica Chimica Acta vol 544 no 1-2 pp 292ndash305 2005
[16] J Wang L Li D Niu and Z Tan ldquoAn annual load forecastingmodel based on support vector regression with differentialevolution algorithmrdquo Applied Energy vol 94 pp 65ndash70 2012
[17] K-Y Chen and C-H Wang ldquoSupport vector regression withgenetic algorithms in forecasting tourism demandrdquo TourismManagement vol 28 no 1 pp 215ndash226 2007
[18] G Hu L Hu H Li K Li and W Liu ldquoGrid resourcesprediction with support vector regression and particle swarmoptimizationrdquo in Proceedings of the 3rd International JointConference on Computational Sciences and Optimization (CSOrsquo10) vol 1 pp 417ndash422 May 2010
[19] N Cristianini and J Taylor An Introduction to Support VectorMachines Cambridge University Press Cambridge UK 2000
[20] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004
[21] C C Chang andC J Lin ldquoLIBSVM a library for support vectormachinesrdquo 2001 httpwwwcsientuedutwsimcjlinlibsvm
[22] G Oliveri and A Massa ldquoGenetic algorithm (GA)-enhancedalmost difference set (ADS)-based approach for array thinningrdquoIET Microwaves Antennas and Propagation vol 5 no 3 pp305ndash315 2011
[23] R Storn and K Price ldquoDifferential evolutionmdasha simple andefficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[24] M Varadarajan and K S Swarup ldquoNetwork loss minimizationwith voltage security using differential evolutionrdquoElectric PowerSystems Research vol 78 no 5 pp 815ndash823 2008
[25] K Zielinski P Weitkemper R Laur and K D KammeyerldquoParameter study for differential evolution using a power alloca-tion problem including interference cancellationrdquo EvolutionaryComputation vol 4 pp 16ndash21 2006
[26] R Poli K James and B Tim ldquoParticle swarm optimizationrdquoSwarm Intelligence vol 1 pp 35ndash57 2007
[27] L-P Zhang H-J Yu and S-X Hu ldquoOptimal choice of param-eters for particle swarm optimizationrdquo Journal of ZhejiangUniversity vol 6 no 6 pp 528ndash534 2005
[28] R Rosipal M Girolami L J Trejo and A Cichocki ldquoKernelPCA for feature extraction and de-noising in nonlinear regres-sionrdquoNeural Computing and Applications vol 10 no 3 pp 231ndash243 2001
Computational and Mathematical Methods in Medicine 5
and 478 potentials on the TMPs are captured. That is, the matrix of BSPs is divided into training data (298 × 412) and testing data (60 × 412), and the matrix of TMPs is likewise divided into training data (298 × 478) and testing data (60 × 478). Each SVR regression model is built from the mapping between the BSP training data (298 × 412) and one TMP training point (298 × 1). Based on all the training data, 478 different regression models were built by using GA-SVR, DE-SVR, and PSO-SVR, respectively. Using the corresponding SVR model, we can reconstruct the corresponding TMPs (60 × 1) from the BSP testing data (60 × 412). For all testing samples, the TMPs are reconstructed from all 478 SVR regression models.
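The one-model-per-output scheme described above can be sketched as follows. For brevity, this illustration fits an RBF kernel ridge regressor in place of ε-SVR (a stand-in, not the paper's implementation) and uses small synthetic matrices in place of the 298 × 412 / 298 × 478 data sets:

```python
import numpy as np

def rbf_kernel(A, B, sigma2):
    # K[i, j] = exp(-||A[i] - B[j]||^2 / (2 * sigma2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma2))

def fit_outputs(X_train, Y_train, sigma2=1.0, lam=1e-3):
    """Fit one kernel regressor per output column (478 TMP nodes in the paper).

    The dual coefficients for all outputs share one kernel matrix,
    so they can be solved in a single linear system."""
    K = rbf_kernel(X_train, X_train, sigma2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_train)
    return alpha

def predict(X_test, X_train, alpha, sigma2=1.0):
    return rbf_kernel(X_test, X_train, sigma2) @ alpha

rng = np.random.default_rng(0)
X_tr = rng.normal(size=(30, 8))   # stand-in for the 298 x 412 BSP training data
Y_tr = rng.normal(size=(30, 5))   # stand-in for the 298 x 478 TMP training data
X_te = rng.normal(size=(6, 8))    # stand-in for the 60 x 412 BSP testing data

alpha = fit_outputs(X_tr, Y_tr)
Y_hat = predict(X_te, X_tr, alpha)
print(Y_hat.shape)  # one reconstructed TMP column per node
```

In the paper each of the 478 outputs gets its own tuned (C, σ², ε), whereas this sketch shares one (σ², λ) pair across outputs purely to keep the example short.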
3. The Proposed System Framework

3.1. Overview of the SVR Parameter Optimization Framework. As shown in Figure 1, in the first stage the original input data are preprocessed by scaling the training data and extracting features. In the second stage, the hyperparameters (C, σ², and ε) of the SVR method are selected carefully using the GA, DE, and PSO optimization methods, and the GA-SVR, DE-SVR, and PSO-SVR hybrid models are constructed, respectively. Finally, with these hybrid SVR models, we reconstruct the transmembrane potentials (TMPs) from the body surface potentials (BSPs) of the testing samples.
3.2. Preprocessing the Data Set
3.2.1. Scaling the Training Set. During the preprocessing stage, each input variable is scaled to keep the potential values from spanning a large numerical range and to prevent numerical difficulties. All input values, including the 358 BSP and TMP temporal data sets, are linearly scaled to the range (0, 1) by the following formula [see (11)]:

\varphi'_t = \frac{\varphi_t - \varphi_{t,\min}}{\varphi_{t,\max} - \varphi_{t,\min}},   (11)

where \varphi_t is the original potential value at each time t, \varphi'_t is the scaled potential value, and \varphi_{t,\max} and \varphi_{t,\min} are the maximum and minimum potential values at each time t.
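The per-time-instant min-max scaling of (11) can be sketched in a few lines of NumPy (the array shapes here are illustrative, not the paper's 358 × 412 data):

```python
import numpy as np

def minmax_scale(phi):
    """Scale each row (one time instant t) of `phi` to [0, 1],
    following Eq. (11): phi' = (phi - min) / (max - min)."""
    phi = np.asarray(phi, dtype=float)
    t_min = phi.min(axis=1, keepdims=True)
    t_max = phi.max(axis=1, keepdims=True)
    return (phi - t_min) / (t_max - t_min)

# Example: 3 time instants, 5 body surface electrodes
bsp = np.array([[-10.0, 0.0, 5.0, 20.0, 30.0],
                [ -5.0, 1.0, 2.0,  3.0,  4.0],
                [  0.0, 0.5, 1.0,  1.5,  2.0]])
scaled = minmax_scale(bsp)
print(scaled.min(), scaled.max())  # -> 0.0 1.0
```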
3.3. Feature Extraction by Using KPCA. Kernel principal component analysis (KPCA) is a nonlinear feature extraction method, a nonlinear variant of PCA obtained by applying the kernel method to PCA. Specifically, KPCA first maps the original inputs into a high-dimensional feature space F using the kernel method and then performs PCA in that feature space F. The detailed theory of KPCA can be found in [28].

Feature extraction is an important step in preprocessing the original input data when developing an effective SVR model. As investigated in our previous study [12], SVR with KPCA-based feature extraction shows superior performance
Figure 1: The framework of the proposed parameter optimization methods for the SVR method in solving the inverse ECG problem (BSPs, scaling of the data set, KPCA feature extraction, parameter selection by GA/DE/PSO, construction of the GA-SVR/DE-SVR/PSO-SVR models, and reconstruction of the TMPs).
Figure 2: The temporal course of the TMPs (in mV) for one representative source point on the epicardium (the 50th) over the 60 testing times. The TMPs reconstructed with the GA-SVR, DE-SVR, and PSO-SVR methods are all compared with the original TMPs.
to that of PCA or a single SVR model in reconstructing the TMPs. In this paper, KPCA is therefore used to implement the feature extraction, and the dimension of the BSP data set is reduced from 412 to 200, which lowers the dimensionality of the inputs and improves the generalization performance of the SVR method.
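The KPCA projection used for this dimension reduction can be sketched as follows (an illustrative NumPy reimplementation, not the authors' code; the sizes are toy-scale rather than the actual 412 to 200 reduction):

```python
import numpy as np

def kpca(X, n_components, sigma2=1.0):
    """Project X onto the leading principal components in RBF feature space."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma2))          # RBF kernel matrix
    # Center the kernel matrix in feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecomposition; keep the leading components
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors so feature-space principal axes have unit norm
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                         # n x n_components features

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 12))   # stand-in for the 358 x 412 BSP inputs
Z = kpca(X, n_components=5)     # stand-in for the 412 -> 200 reduction
print(Z.shape)
```

The reduced features Z would then replace the raw BSPs as the SVR inputs.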
4. Results and Discussion

4.1. The Result of Parameter Selection. In this study, the GA, DE, and PSO algorithms were used to seek the
Figure 3: The TMP distributions on the ventricular surface at two sequential testing time points (color scale in mV, from −90 to 10). (a) Comparison of the reconstructed TMP distributions at 27 ms; (b) comparison at 51 ms. The upper row shows the TMPs from an anterior view and the lower row from a posterior view. In each row, the first figure is the original TMPs, and the remaining three are the TMPs reconstructed by the GA-SVR, DE-SVR, and PSO-SVR methods.
corresponding optimal hyperparameters (C, σ², and ε) of the 478 SVR models. Table 1 displays the average values and standard deviations of the three parameters over the 478 SVR models obtained with the GA, DE, and PSO optimization algorithms. The gaps among the optimal parameters found by the three optimization methods are evident. The standard deviations of ε are omitted because they are very small.
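A minimal global-best PSO loop of the kind used to tune (C, σ², ε) is sketched below. The objective is a synthetic stand-in for the cross-validation error actually minimized when tuning SVR, and searching in log-space is a common choice assumed here, not stated in the paper:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in objective over (log10 C, log10 sigma^2, log10 eps);
# in practice this would be the SVR cross-validation error.
target = np.array([0.7, 0.1, -4.7])   # hypothetical optimum, for illustration
objective = lambda p: ((p - target) ** 2).sum()
bounds = np.array([[-2, 4], [-2, 2], [-6, -3]], dtype=float)
best, best_f = pso(objective, bounds)
print(np.round(best, 2), best_f)
```

GA and DE would plug into the same outer loop: only the population update rule changes, while the objective (cross-validation error of the SVR) stays the same.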
4.2. Comparison of Reconstruction Accuracy. This study compares the reconstruction performance of the three proposed intelligent optimization algorithms for selecting the hyperparameters of SVR. Using the GA-SVR, DE-SVR, and PSO-SVR hybrid models, two types of experiments for reconstructing the TMPs were conducted. The first experiment reconstructs the TMPs for one representative source point (the 50th point) on the epicardium over the 60 testing times, as depicted in Figure 2. Since the gaps among
Table 1: The averages and standard deviations of the optimal hyperparameters (C, σ², and ε) obtained by the GA, DE, and PSO optimization algorithms.

Parameter   GA                  DE                  PSO
C           1447.61 ± 594.32    7624.14 ± 913.67    5.31 ± 3.02
σ²          0.64 ± 0.55         1.59 ± 1.38         1.22 ± 1.02
ε           1.48e−05            1.80e−05            1.98e−05
Table 2: The mean values and standard deviations of the 60 testing errors between the simulated TMPs and the TMPs reconstructed by the GA-SVR, DE-SVR, and PSO-SVR methods at one representative source point (the 50th point on the epicardium).

Index                GA-SVR   DE-SVR   PSO-SVR
Mean value           2.96     2.69     1.55
Standard deviation   1.92     1.67     1.03
these three methods are slight, a statistical analysis was added to validate the results. The mean values and standard deviations of the 60 testing errors between the simulated and reconstructed TMPs obtained by the GA-SVR, DE-SVR, and PSO-SVR methods are shown in Table 2. From Figure 2 and Table 2, PSO-SVR outperforms the GA-SVR and DE-SVR methods in reconstructing the TMPs for this representative source point over all testing times. Furthermore, DE-SVR is superior to GA-SVR in reconstructing the TMPs.
The second experiment reconstructs the TMPs on all the heart surface points at one time instant. These inverse ECG solutions are shown in Figure 3, in which two sequential testing time points (27 and 51 ms after the first ventricular breakthrough) are presented to illustrate the performance of the GA-SVR, DE-SVR, and PSO-SVR methods. Among the three parameter optimization methods, the hybrid PSO-SVR performs best, as its reconstructed solutions are much closer to the simulated TMP distributions.
Based on the simulated TMPs, the accuracy of the TMPs reconstructed by the three intelligent optimization algorithms at the testing times can be evaluated by the mean square error (MSE), the relative error (RE), and the correlation coefficient (CC):

\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} \left((\varphi_c)_i - (\varphi_e)_i\right)^2,

\mathrm{RE} = \frac{\|\varphi_c - \varphi_e\|}{\|\varphi_e\|},

\mathrm{CC} = \frac{\sum_{i=1}^{n} [(\varphi_c)_i - \bar{\varphi}_c]\,[(\varphi_e)_i - \bar{\varphi}_e]}{\|\varphi_c - \bar{\varphi}_c\|\,\|\varphi_e - \bar{\varphi}_e\|}.   (12)
Here, \varphi_e is the simulated TMP distribution at time t, and \varphi_c is the reconstructed TMP distribution. The quantities \bar{\varphi}_e and \bar{\varphi}_c are the
Figure 4: The performance of the reconstructed TMPs over the 60 sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively: (a) the mean square error (MSE), (b) the relative errors (RE), and (c) the correlation coefficients (CC) of the reconstructed TMPs.
mean values of \varphi_e and \varphi_c over the ventricular surface nodes at time t, and n is the number of nodes on the ventricular surface.
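The three criteria in (12) translate directly into code; the sketch below is a plain NumPy rendering of the formulas (the sample vectors are made up for illustration, not taken from the paper's data):

```python
import numpy as np

def mse(phi_c, phi_e):
    # Mean square error over the n surface nodes
    return np.mean((phi_c - phi_e) ** 2)

def re(phi_c, phi_e):
    # Relative error: norm of the residual over norm of the reference
    return np.linalg.norm(phi_c - phi_e) / np.linalg.norm(phi_e)

def cc(phi_c, phi_e):
    # Correlation coefficient between reconstructed and simulated TMPs
    dc, de = phi_c - phi_c.mean(), phi_e - phi_e.mean()
    return (dc @ de) / (np.linalg.norm(dc) * np.linalg.norm(de))

phi_e = np.array([-85.0, -80.0, -20.0, 5.0, 10.0])  # simulated TMPs at one instant
phi_c = np.array([-83.0, -81.0, -22.0, 6.0,  9.0])  # reconstructed TMPs
print(mse(phi_c, phi_e), re(phi_c, phi_e), cc(phi_c, phi_e))
```

A perfect reconstruction gives MSE = 0, RE = 0, and CC = 1, which is why lower MSE/RE and higher CC indicate a better method in Figures 4 and 5.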
The MSE, RE, and CC of the three sets of reconstructed TMPs over the 60 testing samples are shown in Figure 4. Compared with GA-SVR and DE-SVR, the PSO-SVR method yields better results, with lower RE and MSE and a higher CC. To validate the robustness of the three methods in reconstructing the TMPs, the quantities m-MSE, m-RE, and m-CC, that is, the mean values of MSE, RE, and CC over the testing time instants, are presented. Moreover, std-MSE, std-RE, and std-CC, that is, the standard deviations of MSE, RE, and CC over all testing samples, are also investigated. As shown in Figure 5, the mean values of the MSE, RE, and CC over the 60 testing samples obtained by the GA-SVR, DE-SVR, and PSO-SVR methods are presented, together with the standard deviations of those 60 MSEs, REs, and CCs.
4.3. Comparison of Reconstruction Efficiency. In solving the inverse ECG problem, seeking the optimal parameters over the search range is time consuming. A relatively time-saving optimization method therefore matters greatly in practical applications, particularly in the diagnosis of cardiac disease. Here we measure the mean time of the three optimization methods in selecting the optimal hyperparameters of SVR. To forecast the 478 TMPs over the 60 testing samples on the epi- and endocardial surfaces, 478 sets of parameters (C, σ², and ε) have to be selected by each optimization method, and 478 models are built accordingly. In this study, the mean times for selecting the optimal parameters with GA, DE, and PSO are 208.27 s, 342.53 s, and 130.16 s, respectively. PSO thus converges more quickly and saves more time than GA and DE. Therefore, in terms of efficiency, the PSO-SVR method significantly outperforms GA-SVR and DE-SVR in the
Figure 5: The m-MSE, m-RE, m-CC, std-MSE, std-RE, and std-CC over the 60 testing sampling times using GA-SVR, DE-SVR, and PSO-SVR, respectively, in terms of mean ± standard deviation: (a) m-MSE and std-MSE, (b) m-RE and std-RE, and (c) m-CC and std-CC.
reconstruction of the TMPs and is the most efficient of the three optimization methods.
5. Conclusion

In the study of the inverse ECG problem, the SVR method is a powerful technique for solving the nonlinear regression problem and can serve as a promising tool for performing the inverse reconstruction of the TMPs. This study introduces three optimization methods (GA, DE, and PSO) to determine the hyperparameters of the SVR model and uses these models to reconstruct the TMPs from the remote BSPs. Feature extraction using KPCA is also adopted to preprocess the original input data. The experimental results demonstrate that PSO-SVR is the most effective method in terms of accuracy and efficiency: it is superior to the GA-SVR and DE-SVR methods, its reconstructed TMPs are closest to the simulated TMPs, and it is much more efficient in determining the optimal parameters and building the prediction model. According to these results, the PSO-SVR method can serve as a promising tool for solving the nonlinear regression problem of the inverse ECG problem.
Acknowledgments

This work is supported by the National Natural Science Foundation of China (30900322, 61070063, and 61272311) and by the Research Foundation of Education Bureau of Zhejiang Province under Grant no. Y201122145.
References
[1] M. Jiang, J. Lv, C. Wang, L. Xia, G. Shou, and W. Huang, "A hybrid model of maximum margin clustering method and support vector regression for solving the inverse ECG problem," Computing in Cardiology, vol. 38, pp. 457–460, 2011.
[2] C. Ramanathan, R. N. Ghanem, P. Jia, K. Ryu, and Y. Rudy, "Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia," Nature Medicine, vol. 10, no. 4, pp. 422–428, 2004.
[3] B. F. Nielsen, X. Cai, and M. Lysaker, "On the possibility for computing the transmembrane potential in the heart with a one shot method: an inverse problem," Mathematical Biosciences, vol. 210, no. 2, pp. 523–553, 2007.
[4] G. Shou, L. Xia, M. Jiang, Q. Wei, F. Liu, and S. Crozier, "Truncated total least squares: a new regularization method for the solution of ECG inverse problems," IEEE Transactions on Biomedical Engineering, vol. 55, no. 4, pp. 1327–1335, 2008.
[5] C. Ramanathan, P. Jia, R. Ghanem, D. Calvetti, and Y. Rudy, "Noninvasive electrocardiographic imaging (ECGI): application of the generalized minimal residual (GMRes) method," Annals of Biomedical Engineering, vol. 31, no. 8, pp. 981–994, 2003.
[6] M. Jiang, L. Xia, G. Shou, and M. Tang, "Combination of the LSQR method and a genetic algorithm for solving the electrocardiography inverse problem," Physics in Medicine and Biology, vol. 52, no. 5, pp. 1277–1294, 2007.
[7] M. Brown, S. R. Gunn, and H. G. Lewis, "Support vector machines for optimal classification and spectral unmixing," Ecological Modelling, vol. 120, no. 2-3, pp. 167–179, 1999.
[8] K. Ito and R. Nakano, "Optimizing support vector regression hyper-parameters based on cross-validation," in Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 871–876, 2005.
[9] V. N. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988–999, 1999.
[10] B. Scholkopf and A. J. Smola, Learning with Kernels [Ph.D. thesis], GMD, Birlinghoven, Germany, 1998.
[11] M. Jiang, L. Xia, G. Shou, Q. Wei, F. Liu, and S. Crozier, "Effect of cardiac motion on solution of the electrocardiography inverse problem," IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, pp. 923–931, 2009.
[12] M. Jiang, L. Zhu, Y. Wang, et al., "Application of kernel principal component analysis and support vector regression for reconstruction of cardiac transmembrane potentials," Physics in Medicine and Biology, vol. 56, no. 6, pp. 1727–1742, 2011.
[13] S. S. Keerthi, "Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1225–1229, 2002.
[14] J. S. Sartakhti, M. H. Zangooei, and K. Mozafari, "Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA)," Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 570–579, 2011.
[15] B. Ustun, W. J. Melssen, M. Oudenhuijzen, and L. M. C. Buydens, "Determination of optimal support vector regression parameters by genetic algorithms and simplex optimization," Analytica Chimica Acta, vol. 544, no. 1-2, pp. 292–305, 2005.
[16] J. Wang, L. Li, D. Niu, and Z. Tan, "An annual load forecasting model based on support vector regression with differential evolution algorithm," Applied Energy, vol. 94, pp. 65–70, 2012.
[17] K.-Y. Chen and C.-H. Wang, "Support vector regression with genetic algorithms in forecasting tourism demand," Tourism Management, vol. 28, no. 1, pp. 215–226, 2007.
[18] G. Hu, L. Hu, H. Li, K. Li, and W. Liu, "Grid resources prediction with support vector regression and particle swarm optimization," in Proceedings of the 3rd International Joint Conference on Computational Sciences and Optimization (CSO '10), vol. 1, pp. 417–422, May 2010.
[19] N. Cristianini and J. Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
[20] A. J. Smola and B. Scholkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[21] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[22] G. Oliveri and A. Massa, "Genetic algorithm (GA)-enhanced almost difference set (ADS)-based approach for array thinning," IET Microwaves, Antennas and Propagation, vol. 5, no. 3, pp. 305–315, 2011.
[23] R. Storn and K. Price, "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[24] M. Varadarajan and K. S. Swarup, "Network loss minimization with voltage security using differential evolution," Electric Power Systems Research, vol. 78, no. 5, pp. 815–823, 2008.
[25] K. Zielinski, P. Weitkemper, R. Laur, and K. D. Kammeyer, "Parameter study for differential evolution using a power allocation problem including interference cancellation," Evolutionary Computation, vol. 4, pp. 16–21, 2006.
[26] R. Poli, K. James, and B. Tim, "Particle swarm optimization," Swarm Intelligence, vol. 1, pp. 35–57, 2007.
[27] L.-P. Zhang, H.-J. Yu, and S.-X. Hu, "Optimal choice of parameters for particle swarm optimization," Journal of Zhejiang University, vol. 6, no. 6, pp. 528–534, 2005.
[28] R. Rosipal, M. Girolami, L. J. Trejo, and A. Cichocki, "Kernel PCA for feature extraction and de-noising in nonlinear regression," Neural Computing and Applications, vol. 10, no. 3, pp. 231–243, 2001.
6 Computational and Mathematical Methods in Medicine
(a)
(mV)minus90 minus80 minus70 minus60 minus50 minus40 minus30 minus20 minus10 0 10
(b)
Figure 3 The TMP distribution on the ventricular surface attwo sequential testing time points (a) shows the comparison ofreconstructed TMP distributions when the time point is 27ms and(b) shows the comparison of reconstructed TMP distributions whenthe time point is 51ms Moreover the upper row shows the TMPsfrom an anterior view and the lower from a posterior view In eachrow the first figure is the original TMPs and the rest three are thereconstructed TMPs by using the GA-SVR DE-SVR and PSO-SVRmethods
corresponding optimal hyperparameter (119862 1205902 and 120576) of the
478 SVR models Table 1 displays the average value and thestandard deviations of three parameters in 478 SVR modelsby using GA DE and PSO optimization algorithms From itwe can find that the gaps among these optimal parameters areobvious by using three different optimization methods Thestandard deviations of 120576 are neglected because they are verysmall
42 Comparison of Reconstruction Accuracy This study com-pares the reconstruction performance based on the proposedthree intelligent optimization algorithms for selecting thehyperparameters of SVR By using GA-SVR DE-SVR andPSO-SVR hybrid models two types of experiments forreconstructing the TMPs have been conducted The firstexperiment reconstructs the TMPs for one representativesource point (the 50th point) on the epicardium over 60testing times as is depicted in Figure 2 Since the gaps among
Table 1 The average and the standard deviations of optimal hyper-parameters (119862 120590
2 and 120576) by using GA DE and PSO optimization
algorithms
Parameter MethodGA DE PSO
119862 144761 plusmn 59432 762414 plusmn 91367 531 plusmn 302
1205902
064 plusmn 055 159 plusmn 138 122 plusmn 102
120576 148119890 minus 005 180119890 minus 005 198119890 minus 005
Table 2: The mean values and standard deviations of the 60 testing errors between the simulated TMPs and the reconstructed TMPs at one representative source point (the 50th point on the epicardium) by using the GA-SVR, DE-SVR, and PSO-SVR methods.

Index              | GA-SVR | DE-SVR | PSO-SVR
Mean value         | 2.96   | 2.69   | 1.55
Standard deviation | 1.92   | 1.67   | 1.03
these three methods are so slight, a statistical analysis is added to validate the results. The mean values and standard deviations of the 60 testing errors between the simulated TMPs and the TMPs reconstructed by using the GA-SVR, DE-SVR, and PSO-SVR methods are given in Table 2. From Figure 2 and Table 2, the PSO-SVR outperforms the GA-SVR and DE-SVR methods in reconstructing the TMPs for one representative source point over all the testing times. Furthermore, the DE-SVR is superior to the GA-SVR method in reconstructing the TMPs.
The second experiment reconstructs the TMPs on all the heart surface points at one time instant. These inverse ECG solutions are shown in Figure 3, in which two sequential testing time points (27 and 51 ms after the first ventricular breakthrough) are presented to illustrate the performances of the GA-SVR, DE-SVR, and PSO-SVR methods. It can be seen that, among the three parameter optimization methods, the hybrid PSO-SVR performs better than the GA-SVR and DE-SVR methods because its reconstructed solutions are much closer to the simulated TMP distributions.
Based on the simulated TMPs, we can evaluate the accuracy of the TMPs reconstructed by these three intelligent optimization algorithms at each testing time using the mean square error (MSE), the relative error (RE), and the correlation coefficient (CC):
\[
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\varphi_c - \varphi_e\right)^2,
\qquad
\mathrm{RE} = \frac{\left\|\varphi_c - \varphi_e\right\|}{\left\|\varphi_e\right\|},
\qquad
\mathrm{CC} = \frac{\sum_{i=1}^{n}\left[(\varphi_c)_i - \bar{\varphi}_c\right]\left[(\varphi_e)_i - \bar{\varphi}_e\right]}{\left\|\varphi_c - \bar{\varphi}_c\right\| \left\|\varphi_e - \bar{\varphi}_e\right\|}.
\tag{12}
\]
Here φ_e is the simulated TMP distribution at time t and φ_c is the reconstructed TMP distribution; φ̄_e and φ̄_c are the mean values of φ_e and φ_c over the whole ventricular surface at time t, and n is the number of nodes on the ventricular surface.

Figure 4: The performances of the reconstructed TMPs over 60 sampling times by using GA-SVR, DE-SVR, and PSO-SVR, respectively. (a) The mean square error (MSE) of the reconstructed TMPs. (b) The relative errors (RE) of the reconstructed TMPs. (c) The correlation coefficient (CC) of the reconstructed TMPs.
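For concreteness, the three metrics in (12) can be computed as follows. This is a small sketch with our own variable names, assuming φ_c and φ_e are given as NumPy arrays over the n surface nodes:

```python
import numpy as np

def tmp_metrics(phi_c, phi_e):
    """MSE, RE, and CC between a reconstructed TMP distribution (phi_c)
    and the simulated one (phi_e) at a single time instant."""
    phi_c = np.asarray(phi_c, dtype=float)
    phi_e = np.asarray(phi_e, dtype=float)
    mse = np.mean((phi_c - phi_e) ** 2)                           # mean square error
    re = np.linalg.norm(phi_c - phi_e) / np.linalg.norm(phi_e)    # relative error
    dc = phi_c - phi_c.mean()                                     # deviation from surface mean
    de = phi_e - phi_e.mean()
    cc = np.sum(dc * de) / (np.linalg.norm(dc) * np.linalg.norm(de))  # correlation coefficient
    return mse, re, cc
```

A perfect reconstruction gives MSE = 0, RE = 0, and CC = 1; a reconstruction that is correct only up to a positive scale factor still gives CC = 1 but a nonzero RE.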
The MSE, RE, and CC of the three sets of reconstructed TMPs over the 60 testing samples can be found in Figure 4. In contrast to GA-SVR and DE-SVR, the PSO-SVR method yields better results, with a lower RE and MSE and a higher CC. To validate the robustness of the three methods in reconstructing the TMPs, the quantities m-MSE, m-RE, and m-CC, that is, the mean values of MSE, RE, and CC over the testing time instants, are presented. Moreover, std-MSE, std-RE, and std-CC, that is, the standard deviations of MSE, RE, and CC over the whole testing samples, are also investigated. As shown in Figure 5, the mean values of the MSE, RE, and CC over the 60 testing samples obtained by the GA-SVR, DE-SVR, and PSO-SVR methods are presented, and the standard deviations of those 60 MSEs, REs, and CCs are also provided.
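The robustness summaries above reduce each per-time-instant metric series to its mean and standard deviation; a trivial sketch (the sample values below are hypothetical):

```python
import numpy as np

def robustness_summary(metric_over_time):
    """m-X and std-X: the mean and standard deviation of one metric
    (MSE, RE, or CC) evaluated at each of the testing time instants."""
    a = np.asarray(metric_over_time, dtype=float)
    return float(a.mean()), float(a.std())
```

Applying this to the 60 per-time MSE, RE, and CC series yields the six quantities plotted in Figure 5.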
4.3. Comparison of Reconstruction Efficiency

In solving the inverse ECG problem, seeking the optimal parameters within the specified range is time consuming. Therefore, devising a relatively time-saving optimization method matters significantly in practical applications, particularly in the diagnosis of cardiac disease. Here we report the mean time of the three optimization methods in selecting the optimal hyperparameters of SVR. In order to forecast the 478 TMPs over 60 testing samples on the epi- and endocardial surfaces, 478 sets of parameters (C, σ², and ε) have to be selected by each optimization method, and 478 models are built accordingly. In this study, the mean time for selecting the optimal parameters by adopting GA, DE, and PSO is 208.27 s, 342.53 s, and 130.16 s, respectively. It can be seen that PSO converges more quickly and consumes less time than GA and DE. Therefore, in terms of efficiency, the PSO-SVR method significantly outperforms the GA-SVR and DE-SVR in the
reconstruction of the TMPs and is the most efficient of the three optimization methods.

Figure 5: The m-MSE, m-RE, m-CC, std-MSE, std-RE, and std-CC over the 60 testing sampling times by using GA-SVR, DE-SVR, and PSO-SVR, respectively, in terms of mean ± standard deviation. (a) m-MSE and std-MSE. (b) m-RE and std-RE. (c) m-CC and std-CC.
5. Conclusion

In the study of the inverse ECG problem, the SVR method is a powerful technique for solving the nonlinear regression problem and can serve as a promising tool for performing the inverse reconstruction of the TMPs. This study introduces three optimization methods (GA, DE, and PSO) to determine the hyperparameters of the SVR model and utilizes these models to reconstruct the TMPs from the remote BSPs. Feature extraction using KPCA is also adopted to preprocess the original input data. The experimental results demonstrate that PSO-SVR is a relatively effective method in terms of both accuracy and efficiency. By comparison, the PSO-SVR method is superior to the GA-SVR and DE-SVR methods, and its reconstructed TMPs are closer to the simulated TMPs. Besides, PSO-SVR is much more efficient in determining the optimal parameters and in building the prediction model. According to these results, the PSO-SVR method can serve as a promising tool for solving the nonlinear regression problem posed by the inverse ECG problem.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (30900322, 61070063, and 61272311). This work is also supported by the Research Foundation of Education Bureau of Zhejiang Province under Grant no. Y201122145.
Computational and Mathematical Methods in Medicine 7
0 10 20 30 40 50 600
002
004
006
008
01
012
014
Time (ms)
Mea
n sq
uare
d er
ror (
MSE
)
MSE of GA-SVR methodMSE of DE-SVR methodMSE of PSO-SVR method
(a)
Rela
tive e
rror
s (RE
)
RE of GA-SVR methodRE of DE-SVR methodRE of PSO-SVR method
0 10 20 30 40 50 60Time (ms)
0
02
04
06
08
1
12
14
(b)
Cor
relat
ion
coeffi
cien
t (CC
)
CC of GA-SVR methodCC of DE-SVR methodCC of PSO-SVR method
0 10 20 30 40 50 60Time (ms)
0
02
04
06
08
1
12
minus02
(c)
Figure 4 The performances of the reconstructed TMPs over 60 sampling times by using GA-SVR DE-SVR and PSO-SVR respectively(a) The mean square error (MSE) of the reconstructed TMPs (b) The relative errors (RE) of the reconstructed TMPs (c) The correlationcoefficient (CC) of the reconstructed TMPs
mean value of 120593119890and 120593
119888over the whole ventricular surface
nodes at time 119905 119899 is the number of nodes on the ventricularsurface
The MSE RE and CC of the three sets of reconstructedTMPs over the 60 testing samples can be found in Figure 4In contrast to GA-SVR and DE-SVR the PSO-SVR methodcan yield rather better results with lower RE MSE and ahigher CC To validate the robustness of the three methodsin reconstructing the TMPs the quantities m-MSE m-REand m-CC that is the mean value of MSE RE and CC atdifferent testing time instants are presented Moreover std-MSE std-RE and std-CC that is the standard deviation ofMSE RE and CC over the whole testing samples are alsoinvestigated As shown in Figure 5 the mean value of theMSE RE and CC over the 60 testing samples obtained byGA-SVR DESVR and PSO-SVRmethods are presented andthe standard deviations of those 60 MSEs REs and CCs arealso provided
43 Comparison of Reconstruction Efficiency In solving theinverse ECG problem seeking the optimal parameters inthe settled range is time consuming Therefore how to suc-cessfully excogitate a relatively timesaving optimal methodmatters significantly in the practical application particularlyin the diagnosis of cardiac disease Here we figure out themean time of the three optimizationmethods in selecting theoptimal hyperparameters of SVR In order to forecast the 478TMPs over 60 testing samples on an epi- and endocardialsurface 478 sets of parameters (119862 120590
2 and 120576) have to beselected in each optimization method and with the testingdata 478models are built accordingly In this study themeantime for selecting the optimal parameters by adopting GADE and PSO is 20827 s 34253 s and 13016 s respectively Itcan be seen that PSO can lead to a convergence more quicklyand lessen more time than GA and DE Therefore accordingto the comparison of their efficiencies PSO-SVR methodsignificantly outperforms the GA-SVR and DE-SVR in the
8 Computational and Mathematical Methods in Medicine
GA-SVR DE-SVR PSO-SVR0
001
002
003
004
005
006m
-MSE
and
std-M
SE
(a)
GA-SVR DE-SVR PSO-SVR
05
04
03
02
01
0
m-R
E an
d std
-RE
(b)
GA-SVR DE-SVR PSO-SVR
11
1
09
08
07
06
05
04
m-C
C an
d std
-CC
(c)
Figure 5The m-MSE m-RE m-CC std-MSE std-RE and std-CC of 60 testing sampling times by using GA-SVR DE-SVR and PSO-SVRrespectively in terms of mean plusmn standard deviation (a) m-MSE and std-MSE (b) m-RE and std-RE and (c) m-CC and std-CC
reconstruction of the TMP which is the most efficient oneamong these three optimization methods
5 Conclusion
In the study of the inverse ECG problem SVR methodis a powerful technique to solve the nonlinear regressionproblem and can serve as a promising tool for performingthe inverse reconstruction of the TMPsThis study introducesthree optimization methods (GA DE and PSO) to deter-mine the hyperparameters of the SVR model and utilizesthese models to reconstruct the TMPs from the remoteBSPs Feature extraction using the KPCA is also adopted topreprocess the original input data The experimental resultsdemonstrate that PSO-SVR is a relatively effective methodin terms of accuracy and efficiency By comparison PSO-SVR method is superior to GA-SVR and DE-SVR methodswhose reconstructed TMPs are close to the simulated TMPsBesides PSO-SVR is much more efficient in determining theoptimal parameters and in building the predicting modelAccording to these results the PSO-SVR method can serveas a promising tool to solve the nonlinear regression problemof the inverse ECG problem
Acknowledgments
This work is supported by the National Natural ScienceFoundation of China (30900322 61070063 and 61272311)This work is also supported by the Research Foundation
of Education Bureau of Zhejiang Province under Grant noY201122145
References
[1] M Jiang J Lv C Wang L Xia G Shou and W HuangldquoA hybrid model of maximum margin clustering method andsupport vector regression for solving the inverse ECGproblemrdquoComputing in Cardiology vol 38 pp 457ndash460 2011
[2] C Ramanathan R N Ghanem P Jia K Ryu and Y RudyldquoNoninvasive electrocardiographic imaging for cardiac electro-physiology and arrhythmiardquo Nature Medicine vol 10 no 4 pp422ndash428 2004
[3] B F Nielsen X Cai and M Lysaker ldquoOn the possibility forcomputing the transmembrane potential in the heart with a oneshot method an inverse problemrdquo Mathematical Biosciencesvol 210 no 2 pp 523ndash553 2007
[4] G Shou L Xia M Jiang Q Wei F Liu and S CrozierldquoTruncated total least squares a new regularization method forthe solution of ECG inverse problemsrdquo IEEE Transactions onBiomedical Engineering vol 55 no 4 pp 1327ndash1335 2008
[5] C Ramanathan P Jia R Ghanem D Calvetti and Y RudyldquoNoninvasive electrocardiographic imaging (ECGI) applica-tion of the generalized minimal residual (GMRes) methodrdquoAnnals of Biomedical Engineering vol 31 no 8 pp 981ndash9942003
[6] M Jiang L Xia G Shou and M Tang ldquoCombination ofthe LSQR method and a genetic algorithm for solving theelectrocardiography inverse problemrdquo Physics in Medicine andBiology vol 52 no 5 pp 1277ndash1294 2007
Computational and Mathematical Methods in Medicine 9
[7] M Brown S R Gunn and H G Lewis ldquoSupport vectormachines for optimal classification and spectral unmixingrdquoEcological Modelling vol 120 no 2-3 pp 167ndash179 1999
[8] K Ito and R Nakano ldquoOptimizing Support Vector regressionhyper-parameters based on cross-validationrdquo in Proceedings ofthe International Joint Conference onNeural Networks vol 3 pp871ndash876 2005
[9] V N Vapnik ldquoAn overview of statistical learning theoryrdquo IEEETransactions on Neural Networks vol 10 no 5 pp 988ndash9991999
[10] B Scholkopf and A J Smola Learning with kernels [PhDthesis] GMD Birlinghoven Germany 1998
[11] M Jiang L Xia G ShouQWei F Liu and S Crozier ldquoEffect ofcardiac motion on solution of the electrocardiography inverseproblemrdquo IEEE Transactions on Biomedical Engineering vol 56no 4 pp 923ndash931 2009
[12] M Jiang L Zhu Y Wang et al ldquoApplication of kernel prin-cipal component analysis and support vector regression forreconstruction of cardiac transmembrane potentialsrdquo Physics inMedicine and Biology vol 56 no 6 pp 1727ndash1742 2011
[13] S S Keerthi ldquoEfficient tuning of SVM hyperparameters usingradiusmargin bound and iterative algorithmsrdquo IEEE Transac-tions on Neural Networks vol 13 no 5 pp 1225ndash1229 2002
[14] J S Sartakhti M H Zangooei and K Mozafari ldquoHepatitisdisease diagnosis using a novel hybridmethod based on supportvectormachine and simulated annealing (SVM-SA)rdquoComputerMethods and Programs in Biomedicine vol 108 no 2 pp 570ndash579 2011
[15] B Ustun W J Melssen M Oudenhuijzen and L M CBuydens ldquoDetermination of optimal support vector regressionparameters by genetic algorithms and simplex optimizationrdquoAnalytica Chimica Acta vol 544 no 1-2 pp 292ndash305 2005
[16] J Wang L Li D Niu and Z Tan ldquoAn annual load forecastingmodel based on support vector regression with differentialevolution algorithmrdquo Applied Energy vol 94 pp 65ndash70 2012
[17] K-Y Chen and C-H Wang ldquoSupport vector regression withgenetic algorithms in forecasting tourism demandrdquo TourismManagement vol 28 no 1 pp 215ndash226 2007
[18] G Hu L Hu H Li K Li and W Liu ldquoGrid resourcesprediction with support vector regression and particle swarmoptimizationrdquo in Proceedings of the 3rd International JointConference on Computational Sciences and Optimization (CSOrsquo10) vol 1 pp 417ndash422 May 2010
[19] N Cristianini and J Taylor An Introduction to Support VectorMachines Cambridge University Press Cambridge UK 2000
[20] A J Smola and B Scholkopf ldquoA tutorial on support vectorregressionrdquo Statistics and Computing vol 14 no 3 pp 199ndash2222004
[21] C C Chang andC J Lin ldquoLIBSVM a library for support vectormachinesrdquo 2001 httpwwwcsientuedutwsimcjlinlibsvm
[22] G Oliveri and A Massa ldquoGenetic algorithm (GA)-enhancedalmost difference set (ADS)-based approach for array thinningrdquoIET Microwaves Antennas and Propagation vol 5 no 3 pp305ndash315 2011
[23] R Storn and K Price ldquoDifferential evolutionmdasha simple andefficient heuristic for global optimization over continuousspacesrdquo Journal of Global Optimization vol 11 no 4 pp 341ndash359 1997
[24] M Varadarajan and K S Swarup ldquoNetwork loss minimizationwith voltage security using differential evolutionrdquoElectric PowerSystems Research vol 78 no 5 pp 815ndash823 2008
[25] K Zielinski P Weitkemper R Laur and K D KammeyerldquoParameter study for differential evolution using a power alloca-tion problem including interference cancellationrdquo EvolutionaryComputation vol 4 pp 16ndash21 2006
[26] R Poli K James and B Tim ldquoParticle swarm optimizationrdquoSwarm Intelligence vol 1 pp 35ndash57 2007
[27] L-P Zhang H-J Yu and S-X Hu ldquoOptimal choice of param-eters for particle swarm optimizationrdquo Journal of ZhejiangUniversity vol 6 no 6 pp 528ndash534 2005
[28] R Rosipal M Girolami L J Trejo and A Cichocki ldquoKernelPCA for feature extraction and de-noising in nonlinear regres-sionrdquoNeural Computing and Applications vol 10 no 3 pp 231ndash243 2001
Submit your manuscripts athttpwwwhindawicom
Stem CellsInternational
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MEDIATORSINFLAMMATION
of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Behavioural Neurology
EndocrinologyInternational Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Disease Markers
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
BioMed Research International
OncologyJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Oxidative Medicine and Cellular Longevity
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
PPAR Research
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Immunology ResearchHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
ObesityJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Computational and Mathematical Methods in Medicine
OphthalmologyJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Diabetes ResearchJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Research and TreatmentAIDS
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Gastroenterology Research and Practice
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Parkinsonrsquos Disease
Evidence-Based Complementary and Alternative Medicine
Volume 2014Hindawi Publishing Corporationhttpwwwhindawicom
8 Computational and Mathematical Methods in Medicine
GA-SVR DE-SVR PSO-SVR0
001
002
003
004
005
006m
-MSE
and
std-M
SE
(a)
GA-SVR DE-SVR PSO-SVR
05
04
03
02
01
0
m-R
E an
d std
-RE
(b)
GA-SVR DE-SVR PSO-SVR
11
1
09
08
07
06
05
04
m-C
C an
d std
-CC
(c)
Figure 5The m-MSE m-RE m-CC std-MSE std-RE and std-CC of 60 testing sampling times by using GA-SVR DE-SVR and PSO-SVRrespectively in terms of mean plusmn standard deviation (a) m-MSE and std-MSE (b) m-RE and std-RE and (c) m-CC and std-CC
reconstruction of the TMP which is the most efficient oneamong these three optimization methods
5 Conclusion
In the study of the inverse ECG problem SVR methodis a powerful technique to solve the nonlinear regressionproblem and can serve as a promising tool for performingthe inverse reconstruction of the TMPsThis study introducesthree optimization methods (GA DE and PSO) to deter-mine the hyperparameters of the SVR model and utilizesthese models to reconstruct the TMPs from the remoteBSPs Feature extraction using the KPCA is also adopted topreprocess the original input data The experimental resultsdemonstrate that PSO-SVR is a relatively effective methodin terms of accuracy and efficiency By comparison PSO-SVR method is superior to GA-SVR and DE-SVR methodswhose reconstructed TMPs are close to the simulated TMPsBesides PSO-SVR is much more efficient in determining theoptimal parameters and in building the predicting modelAccording to these results the PSO-SVR method can serveas a promising tool to solve the nonlinear regression problemof the inverse ECG problem
Acknowledgments
This work is supported by the National Natural ScienceFoundation of China (30900322 61070063 and 61272311)This work is also supported by the Research Foundation
of Education Bureau of Zhejiang Province under Grant noY201122145
References
[1] M Jiang J Lv C Wang L Xia G Shou and W HuangldquoA hybrid model of maximum margin clustering method andsupport vector regression for solving the inverse ECGproblemrdquoComputing in Cardiology vol 38 pp 457ndash460 2011
[2] C Ramanathan R N Ghanem P Jia K Ryu and Y RudyldquoNoninvasive electrocardiographic imaging for cardiac electro-physiology and arrhythmiardquo Nature Medicine vol 10 no 4 pp422ndash428 2004
[3] B F Nielsen X Cai and M Lysaker ldquoOn the possibility forcomputing the transmembrane potential in the heart with a oneshot method an inverse problemrdquo Mathematical Biosciencesvol 210 no 2 pp 523ndash553 2007
[4] G Shou L Xia M Jiang Q Wei F Liu and S CrozierldquoTruncated total least squares a new regularization method forthe solution of ECG inverse problemsrdquo IEEE Transactions onBiomedical Engineering vol 55 no 4 pp 1327ndash1335 2008
[5] C Ramanathan P Jia R Ghanem D Calvetti and Y RudyldquoNoninvasive electrocardiographic imaging (ECGI) applica-tion of the generalized minimal residual (GMRes) methodrdquoAnnals of Biomedical Engineering vol 31 no 8 pp 981ndash9942003
[6] M Jiang L Xia G Shou and M Tang ldquoCombination ofthe LSQR method and a genetic algorithm for solving theelectrocardiography inverse problemrdquo Physics in Medicine andBiology vol 52 no 5 pp 1277ndash1294 2007
Computational and Mathematical Methods in Medicine 9
[7] M Brown S R Gunn and H G Lewis ldquoSupport vectormachines for optimal classification and spectral unmixingrdquoEcological Modelling vol 120 no 2-3 pp 167ndash179 1999
[8] K Ito and R Nakano ldquoOptimizing Support Vector regressionhyper-parameters based on cross-validationrdquo in Proceedings ofthe International Joint Conference onNeural Networks vol 3 pp871ndash876 2005
[9] V N Vapnik ldquoAn overview of statistical learning theoryrdquo IEEETransactions on Neural Networks vol 10 no 5 pp 988ndash9991999
[10] B Scholkopf and A J Smola Learning with kernels [PhDthesis] GMD Birlinghoven Germany 1998
[11] M Jiang L Xia G ShouQWei F Liu and S Crozier ldquoEffect ofcardiac motion on solution of the electrocardiography inverseproblemrdquo IEEE Transactions on Biomedical Engineering vol 56no 4 pp 923ndash931 2009
[12] M Jiang L Zhu Y Wang et al ldquoApplication of kernel prin-cipal component analysis and support vector regression forreconstruction of cardiac transmembrane potentialsrdquo Physics inMedicine and Biology vol 56 no 6 pp 1727ndash1742 2011
[13] S S Keerthi ldquoEfficient tuning of SVM hyperparameters usingradiusmargin bound and iterative algorithmsrdquo IEEE Transac-tions on Neural Networks vol 13 no 5 pp 1225ndash1229 2002
[7] M. Brown, S. R. Gunn, and H. G. Lewis, "Support vector machines for optimal classification and spectral unmixing," Ecological Modelling, vol. 120, no. 2-3, pp. 167–179, 1999.
[8] K. Ito and R. Nakano, "Optimizing support vector regression hyper-parameters based on cross-validation," in Proceedings of the International Joint Conference on Neural Networks, vol. 3, pp. 871–876, 2005.
[9] V. N. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988–999, 1999.
[10] B. Scholkopf and A. J. Smola, Learning with Kernels [Ph.D. thesis], GMD, Birlinghoven, Germany, 1998.
[11] M. Jiang, L. Xia, G. Shou, Q. Wei, F. Liu, and S. Crozier, "Effect of cardiac motion on solution of the electrocardiography inverse problem," IEEE Transactions on Biomedical Engineering, vol. 56, no. 4, pp. 923–931, 2009.
[12] M. Jiang, L. Zhu, Y. Wang et al., "Application of kernel principal component analysis and support vector regression for reconstruction of cardiac transmembrane potentials," Physics in Medicine and Biology, vol. 56, no. 6, pp. 1727–1742, 2011.
[13] S. S. Keerthi, "Efficient tuning of SVM hyperparameters using radius/margin bound and iterative algorithms," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1225–1229, 2002.
[14] J. S. Sartakhti, M. H. Zangooei, and K. Mozafari, "Hepatitis disease diagnosis using a novel hybrid method based on support vector machine and simulated annealing (SVM-SA)," Computer Methods and Programs in Biomedicine, vol. 108, no. 2, pp. 570–579, 2011.
[15] B. Ustun, W. J. Melssen, M. Oudenhuijzen, and L. M. C. Buydens, "Determination of optimal support vector regression parameters by genetic algorithms and simplex optimization," Analytica Chimica Acta, vol. 544, no. 1-2, pp. 292–305, 2005.
[16] J. Wang, L. Li, D. Niu, and Z. Tan, "An annual load forecasting model based on support vector regression with differential evolution algorithm," Applied Energy, vol. 94, pp. 65–70, 2012.
[17] K.-Y. Chen and C.-H. Wang, "Support vector regression with genetic algorithms in forecasting tourism demand," Tourism Management, vol. 28, no. 1, pp. 215–226, 2007.
[18] G. Hu, L. Hu, H. Li, K. Li, and W. Liu, "Grid resources prediction with support vector regression and particle swarm optimization," in Proceedings of the 3rd International Joint Conference on Computational Sciences and Optimization (CSO '10), vol. 1, pp. 417–422, May 2010.
[19] N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, Cambridge, UK, 2000.
[20] A. J. Smola and B. Scholkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.
[21] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[22] G. Oliveri and A. Massa, "Genetic algorithm (GA)-enhanced almost difference set (ADS)-based approach for array thinning," IET Microwaves, Antennas and Propagation, vol. 5, no. 3, pp. 305–315, 2011.
[23] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[24] M. Varadarajan and K. S. Swarup, "Network loss minimization with voltage security using differential evolution," Electric Power Systems Research, vol. 78, no. 5, pp. 815–823, 2008.
[25] K. Zielinski, P. Weitkemper, R. Laur, and K. D. Kammeyer, "Parameter study for differential evolution using a power allocation problem including interference cancellation," Evolutionary Computation, vol. 4, pp. 16–21, 2006.
[26] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, pp. 35–57, 2007.
[27] L.-P. Zhang, H.-J. Yu, and S.-X. Hu, "Optimal choice of parameters for particle swarm optimization," Journal of Zhejiang University, vol. 6, no. 6, pp. 528–534, 2005.
[28] R. Rosipal, M. Girolami, L. J. Trejo, and A. Cichocki, "Kernel PCA for feature extraction and de-noising in nonlinear regression," Neural Computing and Applications, vol. 10, no. 3, pp. 231–243, 2001.