INTERNATIONAL JOURNAL OF APPLIED ENGINEERING RESEARCH, DINDIGUL
Volume 2, No 1, 2010
Copyright 2010 All rights reserved Integrated Publishing Association
RESEARCH ARTICLE ISSN - 0976-4259
Artificial Neural Network based Hydro Electric Generation Modelling
Deepika Yadav, R. Naresh, Veena Sharma
Electrical Engineering Department, National Institute of Technology, Hamirpur, [email protected]
ABSTRACT
Hydropower generation is a function of the discharge of the generating units and the difference between the forebay and tailrace levels of the reservoir, and is subject to penstock head losses and to the generating unit efficiency factor, which in turn is a function of the reservoir level and reservoir capacity. Actual plant data from the Sewa Hydroelectric Project Stage-II, a run-of-the-river project with an installed capacity of 120 MW situated in the Jammu region, has been used. In this paper two hydroelectric models, a reservoir level versus capacity model and a head loss versus water discharge rate model, have been studied; these are used for calculating the net head and can further be used for online implementation of Availability Based Tariff, in which day-ahead scheduling is done. To accomplish this task, data from the plant is employed for training, validating and testing the artificial neural network (ANN) models, which provide accurate results. These models are then compared with multiple regression models in terms of their prediction accuracy, mean square error and regression R value.
Keywords: Artificial neural network, ANOVA analysis, Levenberg-Marquardt algorithm,
hydroelectric generation models
1. Introduction
In hydroelectric power plants, for calculating energy and for online implementation of availability based tariff in the Indian power industry [Bhushan 2005; Christensen et al. 1988; Geetha et al. 2008; Deshmukh et al. 2008], there is always a need to develop models for each hydroelectric plant, as each one is unique to its location and requirement. In most cases, the parameters to be estimated belong to a non-linear model with respect to control and state variables.
The data used here has been taken from the Sewa Hydroelectric Project Stage-II, a run-of-the-river project which fulfils part of the irrigation requirements of the state of Jammu and Kashmir. The power house is located in a village called Mashka near the junction of the Sewa and Ravi rivers. The project will generate 533.52 million units in a 90% dependable year and also provide 120 MW of peaking capacity to the power system of the northern region. This plant has a small reservoir with a maximum storage capacity of 0.9174 million cubic meters (MCM) and an average storage capacity of 0.2234 MCM. Elevation is measured above mean sea level through level sensors; the maximum reservoir level of this plant is 1200 meters and the average measured reservoir level is 1184 meters. The project envisages a 53 meter high concrete gravity dam and a 10,020 meter long head race tunnel, and its power house will be equipped with 3 x 40 MW vertical Pelton turbine units with a rated net head of 560 m. Its geographical coordinates, latitude 32°36'38"N to 32°41'00"N and longitude 75°48'46"E to 75°55'38"E, are shown in figure 1 (www.nhpcindia.com).
Hydroelectric scheduling of the plants requires a judicious modeling of each of the hydro
electric plant for an improved efficiency, optimum use of water resources and to arrest
Neurons are arranged in layers: an input layer, hidden layer(s) and an output layer. There is no specific rule that dictates the number of hidden layers. The function is largely established by the connections between the elements of the network. In the input layer, each neuron is designated to one of the input parameters. The network learns by applying a back-propagation algorithm, which compares the neural network's simulated output values to the actual values and calculates a prediction error. The error is then back-propagated through the network and the weights are adjusted as the network attempts to decrease the prediction error by optimizing the weights that contribute most to the error. The training or learning of the network occurs through the introduction of cycles of data patterns (epochs or iterations) to the network. One problem with neural network training is the tendency of the network to memorize the training data after an extended learning phase. If the network over-learns the training data, it is more difficult for it to generalize to a data set that was not seen during training. Therefore, it is common practice to divide the data set into a learning data set that is used to train the network and a validation data set that is used to test network performance. Figure 2 shows the representation of a neural network with inputs ai, weights wi, a hidden layer, and an output z [Beale et al. 1996]. In the present study, the neural network fitting tool (nftool) of MATLAB 7.5 [Beale et al. 1996] has been used.
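The training loop described above (forward pass, error back-propagation, weight update, and a held-out validation set to guard against over-learning) can be sketched in Python. This is a minimal NumPy illustration on synthetic stand-in data, not the plant data or the MATLAB nftool setup used in the paper; the network size, learning rate and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one scaled input, one target.
x = rng.uniform(-1, 1, (200, 1))
y = 0.5 * x + 0.2 * x**2 + rng.normal(0, 0.01, (200, 1))

# 60/20/20 split, mirroring the train/validate/test division in the text.
xtr, xva = x[:120], x[120:160]
ytr, yva = y[:120], y[120:160]

# One hidden layer of 5 tanh units, linear output.
W1 = rng.normal(0, 0.5, (1, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

best_val, lr = np.inf, 0.05
for epoch in range(2000):
    h, out = forward(xtr)
    err = out - ytr                              # prediction error
    # Back-propagate the error and descend the gradient.
    gW2 = h.T @ err / len(xtr); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)               # tanh derivative
    gW1 = xtr.T @ dh / len(xtr); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    # Track validation error: rising validation MSE signals over-learning.
    val_mse = float(np.mean((forward(xva)[1] - yva) ** 2))
    best_val = min(best_val, val_mse)

print(round(best_val, 4))
```

In a production setting the loop would stop once the validation MSE fails to improve for several epochs, which is how the toolbox implements early stopping.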
Figure 2: Feed-forward neural network model
This research employed supervised learning, where the target values for the output are presented to the network in order for the network to update its weights. Supervised learning attempts to match the output of the network to values that have already been defined. After training, network verification is applied, in which only the input values are presented to the network so that the success of the training can be established. An algorithm that trains an ANN 10 to 100 times faster than the usual back-propagation algorithm is the Levenberg-Marquardt algorithm. While back-propagation is a steepest descent algorithm, the Levenberg-Marquardt algorithm is a variation of Newton's method [Hagan et al. 1994]. In this paper, the Levenberg-Marquardt algorithm, which is an approximation to Newton's method, has been employed. Suppose a function V(x) is to be minimized with respect to the parameter vector x; then Newton's method would be
$\Delta x = -[\nabla^2 V(x)]^{-1} \nabla V(x)$    (1)
where $\nabla^2 V(x)$ is the Hessian matrix and $\nabla V(x)$ is the gradient. If we assume $V(x)$ is a sum of squares of errors, given by

$V(x) = \sum_{i=1}^{N} e_i^2(x)$    (2)

then

$\nabla V(x) = 2 J^T(x)\, e(x)$    (3)

$\nabla^2 V(x) = 2 J^T(x) J(x) + S(x)$    (4)

where $e(x)$ is the error vector and $J(x)$ is the Jacobian matrix given by

$J(x) = \begin{bmatrix} \dfrac{\partial e_1(x)}{\partial x_1} & \dfrac{\partial e_1(x)}{\partial x_2} & \cdots & \dfrac{\partial e_1(x)}{\partial x_n} \\ \dfrac{\partial e_2(x)}{\partial x_1} & \dfrac{\partial e_2(x)}{\partial x_2} & \cdots & \dfrac{\partial e_2(x)}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial e_N(x)}{\partial x_1} & \dfrac{\partial e_N(x)}{\partial x_2} & \cdots & \dfrac{\partial e_N(x)}{\partial x_n} \end{bmatrix}$    (5)

and $S(x)$ is given by

$S(x) = 2 \sum_{i=1}^{N} e_i(x)\, \nabla^2 e_i(x)$    (6)

Neglecting the second-order derivatives of the error vector, i.e., assuming that $S(x) \approx 0$, the Hessian matrix is given by

$\nabla^2 V(x) = 2 J^T(x) J(x)$    (7)

and substituting (7) and (3) into (1) we obtain the Gauss-Newton update, given by

$\Delta x = -[J^T(x) J(x)]^{-1} J^T(x)\, e(x)$    (8)
The advantage of Gauss-Newton over the standard Newton's method is that it does not require the calculation of second-order derivatives. Nevertheless, the matrix $J^T(x)J(x)$ may not be invertible. This is overcome with the Levenberg-Marquardt algorithm, which consists in finding the update given by

$\Delta x = -[J^T(x) J(x) + \mu I]^{-1} J^T(x)\, e(x)$    (9)

When the scalar $\mu$ is very small or null, the Levenberg-Marquardt algorithm becomes Gauss-Newton, which should provide faster convergence, while for higher values of $\mu$, when the first term within the brackets of (9) is negligible with respect to the second, the algorithm becomes steepest descent. Hence, the Levenberg-Marquardt algorithm provides a nice compromise between the speed of Gauss-Newton and the guaranteed convergence of steepest descent [Hagan et al. 1994]. Here, the performance function is the sum of squared errors between the target outputs and the network's simulated outputs (as is typical in training feed-forward networks), and it governs the adjustment of $\mu$: $\mu$ is decreased after each successful step (reduction in the performance function) and is increased only when a tentative step would increase the performance function. In this way, the performance function is always reduced at
each iteration of the algorithm. This algorithm appears to be the fastest method for training
moderate-sized feed forward neural networks.
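The adaptive scheme of equations (1)-(9) can be illustrated with a small Levenberg-Marquardt loop on a least-squares problem. The exponential model, starting point and data below are hypothetical; the update rule is eq. (9), with the damping factor decreased on a successful step (toward Gauss-Newton) and increased on a failed one (toward steepest descent), as described above.

```python
import numpy as np

# Fit y = a * exp(b * t) by Levenberg-Marquardt (hypothetical data).
t = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * t)                 # noiseless synthetic targets

def residuals(p):
    a, b = p
    return a * np.exp(b * t) - y           # error vector e(x)

def jacobian(p):
    a, b = p
    # Columns: de/da and de/db, as in eq. (5).
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

p, mu = np.array([1.0, 0.0]), 1e-2
for _ in range(100):
    e, J = residuals(p), jacobian(p)
    # Eq. (9): dx = -[J^T J + mu I]^(-1) J^T e
    step = np.linalg.solve(J.T @ J + mu * np.eye(2), J.T @ e)
    trial = p - step
    if np.sum(residuals(trial) ** 2) < np.sum(e ** 2):
        p, mu = trial, mu * 0.1            # success: toward Gauss-Newton
    else:
        mu *= 10.0                         # failure: toward steepest descent

print(np.round(p, 3))                      # approaches a = 2.0, b = -1.5
```

The same decrease/increase policy on $\mu$ is what MATLAB's `trainlm` applies to the sum-of-squared-errors performance function.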
3. Reservoir Level versus Capacity Model
The modeling of water stored in a reservoir forms a crucial part of any hydroelectric operational study because it determines the gross head of each plant. This model relates the elevation to the volume of water stored in the reservoir.
3.1 Modelling Using ANN
In this model of reservoir level (m) versus capacity (MCM), actual river data was used to train the neural network model using the nftool of MATLAB. To use this tool, the data is first converted into a MATLAB data file and provided as input to nftool. The tool takes 60% of the data for training, 20% for validation and 20% for testing, so that the same data is not used for both training and testing. It requires many runs to converge or to reach the expected training performance. Once the system was trained, it was tested with the remaining sample of data. Figure 3 shows the performance accuracy employing different numbers of hidden nodes.
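The 60/20/20 random division can be sketched as follows; the sample count and seed are hypothetical, and nftool performs an equivalent random split internally.

```python
import numpy as np

# Hypothetical: split N samples 60/20/20 into train/validation/test sets.
rng = np.random.default_rng(1)
N = 100
idx = rng.permutation(N)                 # shuffle indices once

n_tr, n_va = int(0.6 * N), int(0.2 * N)
train = idx[:n_tr]                       # 60% for training
val = idx[n_tr:n_tr + n_va]              # 20% for validation
test = idx[n_tr + n_va:]                 # remaining 20% for testing

print(len(train), len(val), len(test))   # 60 20 20
```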
Figure 3: Accuracy versus number of trials with different hidden neurons
Table 1: Performance evaluation of training, validation and testing

| Number of Hidden Neurons | Operation | MSE Best | MSE Worst | MSE Average | R Best | R Worst | R Average | Std. dev. MSE | Std. dev. R |
|---|---|---|---|---|---|---|---|---|---|
| 2 | Training | 0.15 | 3.0573 | 1.21259 | 0.999 | 0.985 | 0.9948 | 1.36880 | 0.00582 |
| 2 | Validation | 0.12 | 5.5908 | 1.42512 | 0.999 | 0.996 | 0.9978 | 0.00154 | 0.00154 |
| 2 | Testing | 0.02 | 2.5515 | 0.96882 | 0.999 | 0.989 | 0.9963 | 0.00427 | 0.00427 |
| 3 | Training | 0.03 | 5.1939 | 1.72241 | 0.999 | 0.967 | 1.7224 | 2.36757 | 0.01391 |
| 3 | Validation | 0.74 | 22.247 | 5.86132 | 0.999 | 0.962 | 0.9907 | 0.01573 | 0.01573 |
In order to check the sensitivity of neural network performance, initialization of connection
weights, training, validation and testing operations have been performed with five
independent random trials for weight initialization as listed in table 1. The comparison of the
mean squared error values indicates the average squared difference between outputs and
targets, which is used to assess the network performance and is given as
$\mathrm{MSE} = \frac{1}{M} \sum_{m=1}^{M} \left( y(m) - d(m) \right)^2$    (10)
where y(m) and d(m) are the network output and the desired output at any sample m, respectively, and M is the length of the investigated data sets. In table 1, the standard deviation and correlation coefficient R values indicate how close the model is to the actual values. In other words, R provides a measure of how well future outcomes are likely to be predicted by the model; hence it is desired that the R square values be very high, i.e., close to 1. The performance was evaluated in terms of the correlation coefficient R, computed as
$R^2 = 1 - \dfrac{\sum_{m=1}^{M} (y_m - \hat{y}_m)^2}{\sum_{m=1}^{M} (y_m - \bar{y})^2}$    (11)
where $y_m$ is the observed dependent variable, $\hat{y}_m$ is the fitted dependent variable for the independent variable $x_m$, $\bar{y} = \frac{1}{M}\sum_{m} y_m$ is the mean, and $x_m$ is the independent variable in the mth trial. $\sum_{m=1}^{M} (y_m - \bar{y})^2$ represents the total sum of squares, while $\sum_{m=1}^{M} (y_m - \hat{y}_m)^2$ represents the residual sum of squares.
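Equations (10) and (11) are straightforward to compute; the target and output vectors below are hypothetical illustrations, not the plant data.

```python
import numpy as np

# Hypothetical targets d(m) and network outputs y(m), e.g. levels in m.
d = np.array([1184.0, 1186.0, 1190.0, 1195.0, 1200.0])
y = np.array([1184.2, 1185.7, 1190.4, 1194.8, 1199.9])

mse = np.mean((y - d) ** 2)               # eq. (10)
ss_res = np.sum((d - y) ** 2)             # residual sum of squares
ss_tot = np.sum((d - d.mean()) ** 2)      # total sum of squares
r_square = 1.0 - ss_res / ss_tot          # eq. (11)

print(round(float(mse), 4), round(float(r_square), 6))  # 0.068 0.998023
```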
R square is a measure of the explanatory power of the model. Here, for the best model chosen, R square is 0.999939, 0.999902 and 0.988785 for training, validation and testing respectively, as shown in figure 4(a).
In figure 4(a) the dashed line is the perfect-fit line, where outputs and targets are equal to each other. The circles are the data points and the coloured line represents the best fit between outputs and targets. It is important to note that the circles gather along the dashed line, so the outputs are not far from the targets. From the graph, it can be seen that the best performance, with 99% accuracy, is achieved with 5 hidden neurons on the third trial for this model. Figure 4(b) depicts the training, validation and test mean square errors for the Levenberg-Marquardt algorithm with 5 hidden neurons. The training stops when the MSE does not change significantly. The comparison of actual and predicted results by the neural network model for the reservoir level versus reservoir
Table 1 (continued):

| Number of Hidden Neurons | Operation | MSE Best | MSE Worst | MSE Average | R Best | R Worst | R Average | Std. dev. MSE | Std. dev. R |
|---|---|---|---|---|---|---|---|---|---|
| 3 | Testing | 0.11 | 54.645 | 18.9078 | 54.64 | 0.966 | 0.9857 | 0.98576 | 0.01442 |
| 4 | Training | 0.13 | 1.5519 | 0.43776 | 0.999 | 0.992 | 0.9977 | 0.62321 | 0.00316 |
| 4 | Validation | 0.18 | 1.5 | 0.63667 | 0.999 | 0.995 | 0.9982 | 0.54863 | 0.00192 |
| 4 | Testing | 0.10 | 0.6541 | 0.44774 | 0.999 | 0.995 | 0.9987 | 0.21082 | 0.00196 |
| 5 | Training | 0.01 | 0.8933 | 0.26670 | 0.999 | 0.996 | 0.9987 | 0.36300 | 0.00155 |
| 5 | Validation | 0.06 | 1.3630 | 0.52426 | 0.999 | 0.970 | 0.5242 | 0.49348 | 0.01291 |
| 5 | Testing | 0.13 | 7.52 | 1.97857 | 0.999 | 0.988 | 2.4399 | 3.12214 | 0.00476 |
capacity model has been shown in figure 5(a), which clearly reveals that the neural network model has tracked the experimental data closely. The error plot between actual and predicted results by the neural network model is shown in figure 5(b), and the errors are within quite acceptable limits.
Figure 4 (a): Regression plots for actual and predicted results by the feed-forward neural network model for training, validation, testing samples and all data. (b): Training, validation and testing mean square errors for the Levenberg-Marquardt algorithm with 5 neurons.
Figure 5 (a): Plot of actual and predicted results for the reservoir level versus capacity model using the Levenberg-Marquardt algorithm. (b): Error plot between actual and predicted results by the neural network model.
3.2 Modelling Using ANOVA
Analysis of variance [Draper et al. 1998 and Myers et al. 1993] is a parametric procedure used to determine the statistical significance of the difference between the means of two or more groups of values. ANOVA techniques are applied to polynomials of different degrees by fitting them to the data for reservoir level and reservoir capacity.
Figure 6: ANOVA results obtained for reservoir level versus capacity
Figure 7: Plot for R Value, Standard error and Accuracy
The graphical curves of fit obtained are shown in figure 6(a). In figure 6(b) the residuals of the ANOVA model for different polynomials have been plotted. From the analysis of these graphical results it is evident that the higher-order polynomial provides a better fit in this case. Further, the regression R value, standard error and accuracy have been plotted in figure 7 for different degrees of polynomials. The details of the ANOVA analysis approach can be found in [Naresh et al. 2009].
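The polynomial-fitting step of this procedure can be sketched as follows. The level/capacity samples are synthetic stand-ins (the actual plant data is not reproduced here), and `numpy.polyfit` stands in for the regression step; comparing residual sums of squares across degrees mirrors how the ANOVA ranks the candidate polynomials.

```python
import numpy as np

# Hypothetical level (m) / capacity (MCM) samples following a quadratic law.
level = np.linspace(1184.0, 1200.0, 20)
capacity = 0.2 + 0.04 * (level - 1184.0) + 0.001 * (level - 1184.0) ** 2
x = level - level.mean()          # centre the regressor to condition the fit

results = {}
for deg in (1, 2, 3):
    coeffs = np.polyfit(x, capacity, deg)          # least-squares polynomial
    resid = capacity - np.polyval(coeffs, x)       # fit residuals
    results[deg] = float(np.sum(resid ** 2))       # residual sum of squares
    print(deg, round(results[deg], 8))
```

Here the degree-2 fit drives the residual sum of squares to essentially zero, which is the signal the ANOVA uses to stop raising the polynomial order.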
4. Head Loss versus Water Discharge Rate Model
When determining head (falling water), both gross (static) head and net (dynamic) head must be considered. Gross head is the vertical distance between the top of the penstock and the point where the water hits the turbine. Net head is gross head minus the pressure or head losses due to friction and turbulence in the penstock. These head losses depend on the type, diameter and length of the penstock piping, and the number of bends or elbows. Gross head can be used to estimate power availability and determine general feasibility, but net head is used to calculate the actual power available. In addition, electricity demand and generation scheduling depend on the water released through the generators and/or gates during the next day. The amount is determined by the water level of the reservoir on the preceding day and the water arriving from runoff. In this section the head loss (m) versus water discharge rate (m³/s) model analysis is outlined.
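The net-head and power relations described above can be written directly. The discharge, gross head and efficiency figures below are hypothetical, chosen only to be of the same order as the plant's 40 MW units and 560 m rated net head.

```python
# Net head and available power from the quantities described above.
RHO, G = 1000.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

def net_head(gross_head_m, head_loss_m):
    """Net head = gross head minus penstock head losses."""
    return gross_head_m - head_loss_m

def hydro_power_mw(q_m3s, h_net_m, efficiency=0.9):
    """P = rho * g * Q * H_net * eta, converted to MW."""
    return RHO * G * q_m3s * h_net_m * efficiency / 1e6

h = net_head(565.0, 5.0)                 # hypothetical: ~560 m net head
print(round(hydro_power_mw(8.0, h), 2))  # ~39.55 MW for Q = 8 m^3/s
```

With these assumed figures a single unit lands close to its 40 MW rating, which is why an accurate head-loss model matters for day-ahead scheduling.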
4.1 Modelling Using ANN
To evaluate neural network performance, initialization of connection weights, training,
validation and testing has been performed with five independent random trials for weight
initialization as listed in table 2. Figure 8 shows the performance plot employing different
numbers of hidden units. It can be noted from the graph that as the number of neurons in the hidden layer increases, the accuracy also increases. From the graph, it can be seen that the best accuracy is achieved with 5 hidden neurons on the third trial.
Figure 8: Accuracy versus number of trials with different hidden neurons
Table 2: Performance evaluation of training, validation and testing

| Number of Hidden Neurons | Operation | MSE Best | MSE Worst | MSE Average | R Best | R Worst | R Average | Std. dev. MSE | Std. dev. R |
|---|---|---|---|---|---|---|---|---|---|
| 2 | Training | 0.00590 | 0.039 | 0.022 | 0.999 | 0.9999 | 0.99995 | 0.01296 | 2.91E- |
| 2 | Validation | 0.00495 | 0.187 | 0.063 | 0.999 | 0.9988 | 0.99960 | 0.07951 | 0.00048 |
| 2 | Testing | 0.00607 | 0.704 | 0.156 | 0.999 | 0.9997 | 0.99991 | 0.30692 | 0.00012 |
| 3 | Training | 9.85E- | 0.008 | 0.003 | 0.999 | 0.9999 | 0.99998 | 0.00440 | 7.80E- |
| 3 | Validation | 1.74E- | 0.012 | 0.003 | 0.999 | 0.9999 | 0.99999 | 0.00516 | 1.22E- |
| 3 | Testing | 1.83E- | 0.073 | 0.016 | 0.073 | 1.83E- | 0.01624 | 0.03209 | 0.03209 |
| 4 | Training | 1.11E- | 0.000 | 0.000 | 0.999 | 0.9999 | 0.99999 | 0.00027 | 4.47E- |
| 4 | Validation | 0.00036 | 0.007 | 0.003 | 0.999 | 0.9999 | 0.99998 | 0.00351 | 0.00351 |
| 4 | Testing | 2.12E- | 0.000 | 0.000 | 0.999 | 0.9999 | 0.99999 | 0.00016 | 0.00016 |
| 5 | Training | 2.41E- | 5.485 | 1.218 | 0.999 | 0.9906 | 0.99781 | 2.39617 | 0.00407 |
| 5 | Validation | 0.00300 | 1.299 | 0.307 | 0.999 | 0.9980 | 0.99952 | 0.56078 | 0.00082 |
| 5 | Testing | 0.00182 | 18.86 | 5.436 | 0.999 | 0.9999 | 0.99548 | 8.23760 | 0.00564 |
In this network, training is provided for 100 epochs; the minimum MSE for the best model in the case of training is 2.41E-08, as shown in figure 9(b). The training stops when the MSE does not change significantly. Validation vectors are used to stop training early if the network performance on the validation vectors fails to improve, or remains the same, for max fail epochs in a row; for this model the MSE noted for validation is 3.01E-03, as in figure 9(b). Test vectors are used as a further check that the network is generalizing well, but they do not have any effect on training; the minimum MSE for this model in the case of testing is 2.14E-03, as shown in figure 9(b). For the best model chosen, R square is 0.999999, 0.999997 and 0.999997 for training, validation and testing, as shown in figure 9(b). As per the model, 99% of the variation in the dependent variable has been explained by the independent variable.
Figure 9 (a): Regression plots for actual and predicted results by the feed-forward neural network model for training, validation, testing samples and all data. (b): Training, validation and testing mean square errors for the Levenberg-Marquardt algorithm with 5 neurons.
Figure 10 (a): Plot of actual and predicted results for the head loss versus water discharge rate model using the Levenberg-Marquardt algorithm. (b): Error plot between actual and predicted results by the neural network model.
The comparison between actual and predicted results by the neural network model is shown above in figure 10(a). From figure 10(b) it can be seen that small errors are present at the beginning of the predicted results, and afterwards the neural network model predicts quite well.
4.2 Modelling Using ANOVA
Figure 11: ANOVA results obtained for head loss versus water discharge rate
It can be noted from figure 11(a) that the differences between the cubic, 4th, 5th and 6th degree polynomials are small, and the predicted data converges with the actual data for all of them. The best fit was obtained for the 3rd order model, with 99.9893% accuracy and the minimum standard error among all, as shown in figure 12. Also, this model provides a good
significant fit, with a regression R value of 0.99. More details regarding this model can be found in [Naresh et al. 2009].
Figure 12: Plot for R Value, Standard error and Accuracy
5. Conclusion
In this paper two hydroelectric models, a reservoir level versus capacity model and a head loss versus water discharge rate model, have been developed using artificial neural networks. During the modelling process 60% of the available data is used for training, 20% for cross-validation and 20% for testing; the data for each class are chosen randomly from the data set. Further, the developed models are compared with the ANOVA approach. The results predicted by the ANN and ANOVA analyses agree closely with the actual results for the Sewa river. However, the ANN proved to be a particularly useful tool for predicting and estimating the non-linear relationship between the variables because of its adaptive capability. These models help in the effective utilization of the available water in the face of uncertainty, especially for optimal generation scheduling of the run-of-the-river project in a multipurpose context, and can also be used as an effective input to a decision support system for the real-time operation of reservoir systems, resulting in increased power production and enhanced revenue earnings in the planning and management of a water resources project.
6. Acknowledgment
The authors are grateful for the kind support rendered by the Electronics division of BHEL
Bangalore, India for carrying out this work.
7. References
1. Beale, M., Demuth, H.B., and Hagan, M.T., 1996: Neural Network Design. PWS Publishing Co., Boston, Massachusetts, USA.
2. Bhushan et al., 2004: The Indian medicine. In IEEE Power Engineering Society General Meeting, Vol. 2, pp. 2336-2339.
3. Buizza and Taylor, 2002: Neural network load forecasting with weather ensemble predictions. IEEE Trans. on Power Systems, 17(3), pp. 626-632.
4. Christensen and Soliman, 1988: Optimization of the production of hydroelectric power systems with a variable head. Journal of Optimization Theory and Applications, Springer Netherlands, 58(2), pp. 301-317.
5. Deshmukh et al., 2008: Optimal Generation Scheduling under ABT using Forecasted Load and Frequency. In Proceedings of POWERCON 2008 & 2008 IEEE Power India Conference (Joint International Conference), New Delhi, pp. 1-6.
6. Draper and Smith, 1998: Applied Regression Analysis, 3rd edition. John Wiley & Sons, New York.
7. Fauset, 1994: Fundamentals of Neural Networks, 3rd edition. Prentice-Hall International, New York.
8. Senjyu et al., 2002: One-hour-ahead load forecasting using neural network. IEEE Trans. on Power Systems, 17(1), pp. 113-118.
9. Geetha and Jayashankar, 2008: Generation Dispatch with Storage and Renewables under Availability Based Tariff. In Proceedings of TENCON 2008, IEEE Region 10 Conference, pp. 1-6.
10. Hagan and Menhaj, 1994: Training feedforward networks with the Marquardt algorithm. IEEE Trans. on Neural Networks, 5(6), pp. 989-993.
11. Subramanian et al., 1999: Reservoir Inflow Forecasting Using Neural Networks. In Proceedings of the American Power Conference, Chicago, US, 61(1), pp. 220-226.
12. Myers and Walpole, 1993: Probability and Statistics for Engineers and Scientists, 5th edition. Macmillan Publishing Company, New York.
13. Naresh et al., 2009: Hydro Electric Generation Models for Hydro Power Station Studies. In Proceedings of National Conference TACT-09, NIT Hamirpur, India, 16-17 March 2009, pp. 261-264, ISBN: 978-93-80043-10-4.
14. Randall and Tagliarini, 2002: Using feed forward neural networks to model the effect of precipitation on the water levels of the Northeast Cape Fear River. In Proceedings of the IEEE Southeast Conference, Columbia, US, pp. 338-342.
15. Wood and Wollenberg, 1984: Power Generation, Operation, and Control, 2nd edition. John Wiley & Sons, New York.