
A Comparative Study for the Estimation of Parameters in Nonlinear Models

Kuldeep Kumar and M. A. Alsaleh

School of Information Technology, Bond University, Gold Coast, Queensland 4229, Australia

Transmitted by John Casti

ABSTRACT

The most commonly used numerical optimization techniques include the Gauss-Newton method, the Newton-Raphson method, gradient methods (including the methods of steepest ascent and descent), and the Marquardt algorithm. Kumar [1] has recently proposed a new technique based on optimum exponential regression. Another noniterative procedure proposed in this paper is based on the principle of internal regression. In this paper, we compare these methods using real data sets.

1. INTRODUCTION

There is a vast literature on optimization methods for estimating the parameters of nonlinear models. Various concepts of optimization methods, properties of the estimators, and interpretation of the estimates are described in detail in Bard [2], Goldfeld and Quandt [3], Fletcher [4], and Powell [5]. Ratkowsky [6] studied the properties of the estimators of nonlinear models commonly used by scientists in agricultural research, biology, engineering, and other applied disciplines. He also examined nonlinear behavior and the consequences of intrinsic nonlinearity. Goldfeld and Quandt [3] made a numerical comparison of several optimization algorithms and examined nonlinear estimation problems. Bard [2] also looked into the validity of the estimates, making inferences, and examining special models in the form of differential equations.

APPLIED MATHEMATICS AND COMPUTATION 77:179-183 (1996). © Elsevier Science Inc., 1996.


Exponential regression is a relatively new nonlinear regression method for estimating the parameters of nonlinear models, proposed by Kumar [1]. It does not require initial estimates for all the parameters, and one does not have to worry about the singularity of the Hessian matrices. It also requires fewer iterations than other optimization methods. The idea behind exponential regression is first to obtain the nonlinear parameters by means of an iterative procedure and then to estimate the remaining linear parameters in the model by ordinary least squares.
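As a hedged illustration of this separate-the-linear-parameters idea (a minimal sketch under our own assumptions, not necessarily Kumar's [1] exact algorithm), consider model (3.1) below, $y = \theta_1(1 - \theta_2 \exp(\theta_3 x))$: with the single nonlinear parameter $\theta_3$ held fixed, the model is linear in $\theta_1$ and $-\theta_1\theta_2$, so those coefficients can be obtained by ordinary least squares while only $\theta_3$ is searched iteratively. The data, search bounds, and starting model below are chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative data: y = 1 - exp(-x) (Example 1 in Section 3).
x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([0.000, 0.632, 0.865, 0.950, 0.982])

def rss_given_theta3(theta3):
    # For fixed theta3, y = b0 + b1*exp(theta3*x) is linear in (b0, b1):
    # fit those two coefficients by ordinary least squares, return the RSS.
    X = np.column_stack([np.ones_like(x), np.exp(theta3 * x)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

# One-dimensional search over the single nonlinear parameter only
# (the search bounds are an assumption made for this illustration).
theta3 = minimize_scalar(rss_given_theta3, bounds=(-5.0, -0.01), method="bounded").x
X = np.column_stack([np.ones_like(x), np.exp(theta3 * x)])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
theta1, theta2 = b0, -b1 / b0   # map (b0, b1) back to (theta1, theta2)
print(theta1, theta2, theta3)   # close to 1, 1, -1 for these data
```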

The method of internal regression is an interesting method for estimating the parameters of nonlinear models. It was originally proposed by Hartley [7], and it does not require iteration. The method is described in Section 2. In Section 3 we compare the various methods using real and simulated data sets, and finally, we draw conclusions in Section 4.

2. METHOD OF INTERNAL REGRESSION

The first-order linear difference equation

$y_i - y_{i-1} = b\,y_i + a$  (2.1)

is capable of generating exactly the exponential law

$y = \alpha (1 - e^{kx})$.  (2.2)

More generally, if y satisfies exactly the difference equation

$y_{i+1} - y_i = -\tfrac{1}{2} b\,(y_{i+1} + y_i) + a$  (2.3)

then it also satisfies

$y_i = \alpha (1 - f e^{k x_i})$,  (2.4)

where $\alpha = a/b$, $k = -2 \tanh^{-1}(\tfrac{1}{2} b)$, and $f$ is a constant of integration. (For unit spacing of the $x_i$, (2.3) rearranges to $y_{i+1} - \alpha = \frac{2-b}{2+b}(y_i - \alpha)$, so $e^{k} = \frac{2-b}{2+b}$, which is equivalent to $k = -2\tanh^{-1}(\tfrac{1}{2}b)$.) We can write (2.3) as

$y_{i+1} = \dfrac{2-b}{2+b}\, y_i + \dfrac{2a}{2+b}.$

This equation can be treated as a linear autoregressive equation and fitted using ordinary least squares. The estimates of $a$ and $b$ can then be used to solve for $\alpha$ and $k$. The method was originally proposed by Hartley [7].
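A minimal sketch of this recipe in Python is given below; it is our own illustration, not the exact computation behind the tables in Section 3. It assumes equally spaced $x$ values, converts $k$ from a per-step rate to a per-unit-$x$ rate, and recovers the constant $f$ by an extra least-squares step through the origin that the text above does not spell out.

```python
import numpy as np

def internal_regression_fit(x, y):
    """Fit y = alpha*(1 - f*exp(k*x)) by the internal regression recipe above."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = x[1] - x[0]                       # common spacing (assumed constant)

    # Fit the autoregression y_{i+1} = c1*y_i + c0 by ordinary least squares.
    X = np.column_stack([y[:-1], np.ones(len(y) - 1)])
    c1, c0 = np.linalg.lstsq(X, y[1:], rcond=None)[0]

    # Invert c1 = (2 - b)/(2 + b) and c0 = 2a/(2 + b) for b and a.
    b = 2.0 * (1.0 - c1) / (1.0 + c1)
    a = c0 * (2.0 + b) / 2.0

    alpha = a / b                          # asymptote
    k = -2.0 * np.arctanh(b / 2.0) / dx    # per-step rate rescaled per unit of x

    # Constant of integration from alpha - y_i = alpha*f*exp(k*x_i),
    # estimated by a least-squares fit through the origin.
    z = np.exp(k * x)
    f = np.sum(z * (alpha - y)) / (alpha * np.sum(z * z))
    return alpha, f, k

# Example 1 data from Section 3: y = 1 - exp(-x) sampled at x = 0, ..., 4.
print(internal_regression_fit([0, 1, 2, 3, 4],
                              [0.000, 0.632, 0.865, 0.950, 0.982]))
# Returns values close to (1, 1, -1), in line with the internal regression row of Table 1.
```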


3. COMPARISON OF VARIOUS METHODS

In this section, we compare the various procedures for estimating the parameters of nonlinear models. The comparative study is based on the following factors:

(i) the reliability of a method, measured by the probability that convergence to the minimum does take place;
(ii) the number of iterations required before convergence is reached;
(iii) the approximation to the true values of the parameters, measured in terms of the residual sum of squares.

All the calculations were done using the SAS statistical software system.

EXAMPLE 1: Let us consider the model

$y = \theta_1 (1 - \theta_2 \exp(\theta_3 x))$.  (3.1)

For simplicity, let $\theta_1 = 1$, $\theta_2 = 1$, and $\theta_3 = -1$, so the model reduces to

$y = 1 - \exp(-x)$.  (3.2)

Let the data be

x    0      1      2      3      4
y    0.000  0.632  0.865  0.950  0.982

We have fitted model (3.2) using the Gauss-Newton method, the Newton-Raphson method, the Marquardt method, the steepest descent method, and the newly proposed methods of internal regression and exponential regression. Table 1 summarizes the results obtained by the various methods.
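The runs reported in Table 1 were carried out in SAS. As a rough cross-check only (our own sketch, not the SAS output), a Marquardt-type fit of model (3.1) to these data can be reproduced with an off-the-shelf Levenberg-Marquardt routine such as scipy.optimize.least_squares; the starting values below are an arbitrary choice for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

x = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([0.000, 0.632, 0.865, 0.950, 0.982])

def residuals(theta):
    # Residuals of model (3.1): y - theta1*(1 - theta2*exp(theta3*x)).
    t1, t2, t3 = theta
    return y - t1 * (1.0 - t2 * np.exp(t3 * x))

# Levenberg-Marquardt (the Marquardt method), started from a rough guess.
fit = least_squares(residuals, x0=[0.5, 0.5, -0.5], method="lm")
print(fit.x, np.sum(fit.fun ** 2))   # estimates and residual sum of squares
```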

TABLE 1

(COMPARISONS AMONG THE VARIOUS METHODS)

Method                   θ1      θ2      θ3        S(θ)      No. of       Convergence
                                                             iterations   met?
Gauss-Newton             1       1       -0.9996   0         6            yes
Newton-Raphson           0.898   1.02    -2        0.0308    4            yes
Marquardt                1       1       -0.9996   0         7            yes
Steepest descent         0.937   1.02    -1.49     0.0116    51           no
Internal regression      1       1       -1        0         NA           NA
Exponential regression   1       1       -1        0         NA           NA


From Table 1, it is clear that the methods of internal regression and exponential regression work quite well. Among the optimization methods, the Gauss-Newton method performed best, followed closely by the Marquardt method. Although the Newton-Raphson method requires the fewest iterations, the estimated θ is not very close to the true values. The steepest descent method fails to converge even after 51 iterations.

EXAMPLE 2: Consider the data given in Hartley [7]. The data are taken from a fertilizer experiment in which $y_i$ represents the yield of wheat corresponding to the fertilizer rate $x_i$:

x_i    0     4     8     12    16    20
y_i    127   151   379   421   460   426

The same model (3.1) is used here. If we apply the principle of internal regression, the estimate of θ comes out to be

$\hat{\theta} = [520.41 \;\; 0.850 \;\; -0.104]^{T}$.

A comparative study of the internal regression method with the other optimization methods is given in Table 2.

It can be observed from Table 2 that the error sum of squares for internal regression is 10340.62, which is lower than that of any of the optimization methods. The Gauss-Newton, Newton-Raphson, and Marquardt methods give identical results, with the Newton-Raphson method converging fastest, followed by the Marquardt method.

TABLE 2
(COMPARISONS AMONG THE VARIOUS METHODS)

Method                   θ1       θ2      θ3        S(θ)       No. of       Convergence
                                                               iterations   met?
Gauss-Newton             523.31   0.814   -0.0998   13390.09   13           yes
Newton-Raphson           523.31   0.814   -0.0998   13390.09   9            yes
Marquardt                523.31   0.814   -0.0998   13390.09   11           yes
Steepest descent         500.00   0.810   -0.112    13550.11   51           no
Internal regression      520.41   0.850   -0.104    10340.62   NA           NA
Exponential regression   525.57   0.830   -0.112    11450.6    NA           NA


4. CONCLUSION

In all the preceding examples, as well as in various simulation studies, we observed that internal regression performs well compared with the optimization methods for the kind of models considered in this paper. The parameter estimates obtained by internal regression are very close to the true values of the parameters. The advantage of internal regression over the optimization methods is that it does not require iterations to reach the optimum; hence, one does not have to worry about sensitivity to the initial estimates.

For exponential models, the method of steepest descent is not recommended for general use because of its poor convergence properties. The Newton-Raphson method has much better convergence properties, and it works particularly well if a close initial estimate of the optimal point is used. The methods of Marquardt and Gauss-Newton were found to perform best for least-squares problems. In particular, the Marquardt method performs better with a poor initial estimate than the Gauss-Newton method.

REFERENCES

1. K. Kumar, Optimum exponential regression with one non-linear term, Applied Mathematics and Computation 50:51-58 (1992).
2. Y. Bard, Nonlinear Parameter Estimation, Academic Press, New York, 1974.
3. S. M. Goldfeld and R. E. Quandt, Nonlinear Methods in Econometrics, North-Holland, Amsterdam, 1972.
4. R. Fletcher, Practical Methods of Optimization, Vol. 1: Unconstrained Optimization, Wiley, New York, 1980.
5. M. J. D. Powell, A survey of numerical methods for unconstrained optimization, in Perspectives on Optimization: A Collection of Expository Articles, Addison-Wesley, Philippines, 1972.
6. D. A. Ratkowsky, Handbook of Nonlinear Regression Models, Marcel Dekker, New York, 1990.
7. H. O. Hartley, The estimation of non-linear parameters by internal least squares, Biometrika 35:32-45 (1948).