
Proceedings of International Joint Conference on Neural Networks, Montreal, Canada, July 31 - August 4, 2005

Processing Landsat TM Data Using Complex-Valued NRBF Neural Network

Xiaoli Tao
Department of Electrical and Computer Engineering
University of Massachusetts Dartmouth
Dartmouth, MA 02747
E-mail: [email protected]

Howard E. Michel
Department of Electrical and Computer Engineering
University of Massachusetts Dartmouth
Dartmouth, MA 02747
E-mail: [email protected]

Abstract- This paper describes a novel classification technique: a Complex-Valued Normalized Radial Basis Function (NRBF) neural network classifier. Complex-valued weights are used in the supervised learning part of NRBF neural networks. Unlike the original NRBF neural network, the proposed model adds another activation function at the output. This new model improves the classification ability of NRBF neural networks regardless of the learning method used in the unsupervised part. The classifier was tested with satellite multi-spectral image data. Classification results show that the new model is more accurate and powerful than the conventional NRBF model and solves classification problems more efficiently.

I. INTRODUCTION

Classification of satellite multi-spectral images allows people to take a multitude of spectral band data and obtain a thorough understanding of the types of land cover in the area of study. More importantly, it provides comprehensive information to improve the condition of the area. However, conventional statistical models are not sufficient to describe remote sensing data because of their complicated structures. Since the end of the 1980s, many artificial neural network models have been used in the classification of remote sensing data. Among these models, the MLP (Multi-Layer Perceptron) has been the most popular. However, remote sensing data always involve many samples (due to multiple bands) and, as we know, the MLP is computationally inefficient because of its multiple layers and back-propagation training. Compared with the MLP, RBF neural networks have only a single hidden layer, which results in exponentially decreasing computational complexity [1]. In recent years, several complex-valued neural networks have been proposed and demonstrated [2-5]. Complex-valued neural networks are extended versions of conventional real-valued neural networks; they deal with information in complex space using complex weighting factors and complex nonlinear functions. In [6], the authors used complex-valued neurons in an RBF neural network and demonstrated their application. We propose a new Complex-Valued Normalized Radial Basis Function (NRBF) neural network classifier. In our experiments, we use different band combinations to explore the effect of each on the ability of the NRBF, with both real-valued and complex-valued weights, to perform classification. The experimental results show that the complex-valued NRBF is superior to the real-valued NRBF in most cases. Finally, we extract different parts of the original image with the help of the MultiSpecW32 software, based on the combination of bands 2, 3, and 4. Classification is performed on these extracted parts by the proposed models, and the results are compared and analyzed.

II. CLASSIFICATION METHOD

RBF neural networks have a strong biological background. In the cerebral cortex, locally regulated and folded receptive fields are characteristic of the brain's response. Based on this characteristic, Moody and Darken [7] proposed a new neural network structure, which is referred to as the RBF neural network. Figure 1 shows the basic topological structure of an RBF neural network.

Fig. 1. Topological structure of RBF neural networks

A. RBF Neural Network

In RBF neural networks, radial basis functions are embedded into a two-layer feed-forward neural network. The network has a set of inputs and a set of outputs. Between the inputs and outputs there is a layer of processing units referred to as hidden units, each implemented with a radial basis function. The nodes of the hidden layer generate a local response to the input stimulus through the radial basis functions. The output layer of an RBF neural network realizes a linear weighted combination of the outputs of the hidden basis functions.

There is a large class of radial basis functions. The following functions are of particular interest in the study of RBF networks:

1. Multiquadrics: $\phi(r) = (r^2 + c^2)^{1/2}$ for some $c > 0$ and $r \in \mathbb{R}$

2. Inverse multiquadrics: $\phi(r) = (r^2 + c^2)^{-1/2}$ for some $c > 0$ and $r \in \mathbb{R}$

3. Gaussian functions: $\phi(r) = \exp\left(-\frac{r^2}{2\sigma^2}\right)$ for some $\sigma > 0$ and $r \in \mathbb{R}$

Based on the RBF neural network structure and the chosen radial basis function, if $f_l(X_j)$ is the $l$th output of the output layer and $\phi_i(X_j)$ is the output of the $i$th radial basis function, then the whole network forms the mapping

$$f_l(X_j) = \sum_{i=1}^{N_r} \lambda_{i,l}\,\phi_i(X_j) \qquad (1)$$

where $X_j$ is an M-dimensional feature vector, $N_r$ is the number of hidden units, and $\lambda_{i,l}$ is the connection weight between the $i$th hidden unit and the $l$th output unit. This weight shows the contribution of the hidden unit to the corresponding output unit.
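For concreteness, the mapping in Equation (1) can be sketched in a few lines of Python, assuming Gaussian basis functions. This is an illustrative sketch rather than the implementation used in the experiments, and all names are ours.

```python
import numpy as np

def rbf_forward(x, centers, sigma, weights):
    """Eq. (1): f_l(x) = sum_i lambda_{i,l} * phi_i(x).

    x:       (M,) feature vector
    centers: (Nr, M) basis-function centers
    sigma:   shared Gaussian width (an assumption; widths may differ per unit)
    weights: (Nr, K) connection weights lambda_{i,l}
    """
    # Hidden layer: local Gaussian response of each unit to the input.
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    # Output layer: linear weighted combination of the hidden responses.
    return phi @ weights
```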

B. NRBF Neural Network

In some approaches, the output of the hidden layer is normalized by the sum of all the radial basis function components, just as in the Gaussian-mixture estimation model [8]. The following equation shows the normalized radial basis functions:

$$\Phi_i(X) = \frac{\phi_i(X)}{\sum_{k=1}^{N_r} \phi_k(X)} \qquad (2)$$

A network obtained using the normalized form of the basis functions is called a normalized RBF (NRBF) neural network and has some important properties. The normalization bounds the hidden outputs to the range between 0 and 1, so in classification applications they can be interpreted as probability values indicating which hidden unit is most activated [9]. Moreover, the normalization changes the localized behavior of the basis functions to non-localized behavior. Using the normalized form, let $f_l(X_j)$ be the $l$th output of the output layer and $\Phi_i(X_j)$ be the normalized radial basis functions. Equation (1) then becomes

$$f_l(X_j) = \sum_{i=1}^{N_r} \lambda_{i,l}\,\Phi_i(X_j) \qquad (3)$$

From the above descriptions, we know that NRBF neural networks realize a nonlinear mapping from the input space to the output space through a linear combination of nonlinear basis functions.
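In code, the only change relative to the un-normalized network of Equation (1) is the division by the summed hidden responses; a minimal sketch, under the same assumptions as the previous one:

```python
import numpy as np

def nrbf_forward(x, centers, sigma, weights):
    """Eq. (3), using the normalized basis functions of Eq. (2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    # Eq. (2): normalized responses lie in (0, 1) and sum to 1, so they
    # can be read as the probability that each hidden unit is activated.
    Phi = phi / np.sum(phi)
    return Phi @ weights
```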

C. NRBF Neural Network Training

We have shown that various basis functions can be used in an NRBF network, but we specifically selected Gaussian basis functions, as they are the most popular and widely used. Gaussians are fast-decaying functions, so it can be assumed that not all basis function units contribute significantly to the network output. If we fix the number of hidden units, two sets of parameters remain to be determined: the centers $c_i$ and widths $\sigma_i$ of each basis function, and the connection weights $W$ between the output layer and the hidden layer. Consequently, learning in NRBF neural networks can be divided into two separate stages. When $c_i$ and $\sigma_i$ are known, the output is obtained through only the linear weighted combination of the hidden layer. The K-means clustering method, which is simple and fast, has been the most popular method for estimating the centers. However, the clustering result of K-means is very sensitive to its initial estimate, and convergence often occurs at local minima; multiple restarts are usually needed to acquire an appropriate clustering solution. In our previous work [10], we suggested using the spectral clustering method to partially avoid this problem. But the spectral method involves computing the inverse matrix of the input data, which is computationally expensive when the input data set is massive, and in some cases the classification results are still not satisfying even with the spectral method. More efficient approaches need to be explored. Based on the NRBF neural network, we develop a novel classification technique, the CVNRBF (Complex-Valued Normalized Radial Basis Function) neural network classifier, in the next section.
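A plausible sketch of this two-stage procedure follows. It assumes Gaussian basis functions with a single shared width set by a common heuristic (the paper does not fix one), uses scikit-learn's KMeans with multiple restarts for the unsupervised stage, and solves the supervised stage by ordinary least squares. It is a sketch under these assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_nrbf(X, Y, n_hidden, n_restarts=10):
    """Two-stage NRBF training: unsupervised centers, then linear weights.

    X: (P, M) training inputs;  Y: (P, K) one-hot target outputs.
    """
    # Stage 1: estimate centers by K-means; multiple restarts (n_init)
    # partially compensate for sensitivity to the initial estimate.
    km = KMeans(n_clusters=n_hidden, n_init=n_restarts).fit(X)
    centers = km.cluster_centers_
    # Shared width from the mean inter-center distance (a rough heuristic).
    sigma = np.mean(np.linalg.norm(centers[:, None] - centers[None, :], axis=2))
    # Stage 2: with centers and width fixed, the output is linear in the
    # weights, so they follow from an ordinary least-squares fit.
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    phi = np.exp(-d2 / (2.0 * sigma ** 2))
    Phi = phi / phi.sum(axis=1, keepdims=True)       # Eq. (2)
    weights, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return centers, sigma, weights
```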

D. Complex-valued NRBF Neural Network

Fig. 2. Topological structure of complex-valued NRBF NN


An advantage of the complex-valued RBF network is that the output in the hidden layer is always a positive number. The first part of complex-valued training is the same as for the traditional RBF neural network; in our experiments, both K-means and spectral clustering methods were adopted. Once the outputs of the hidden units have been obtained by these methods, only the connection weights between the hidden layer and the output layer remain to be trained. This differs from the traditional complex-valued RBF network [6], since we add one more activation function in the architecture, as shown in Figure 2. Because the mapping from the hidden layer to the output layer is therefore nonlinear, we cannot use a linear optimization method to find the optimal solution. We instead use gradient descent to minimize the error function. Gradient descent does not guarantee convergence to globally optimal network parameters, but it does appear to converge to reasonable solutions in practice. Using gradient descent to minimize the error signal, we can change the phase of each connection weight through the following derivation.

Assume we have a group of input vectors

$\{X_p \mid X_p \in \mathbb{C}^N, p = 1, 2, \ldots, K\}$ whose targets are real-valued, $\{y_p \mid y_p \in \mathbb{R}, p = 1, 2, \ldots, K\}$, where N is the dimension of the input vector. We can then define the following energy function as the error function:

$$E = \frac{1}{2}\sum_{p=1}^{K}\left(y_p - |f(X_p)|\right)^2 \qquad (4)$$

Here, $f(X_p)$ is the output of the network for each corresponding input vector $X_p$. As we have discussed, a variable in the complex domain has two independent components, a magnitude and a phase. In the complex-valued NRBF neural network, the normalized output of the hidden units for each input vector $X_p$ can be defined as follows (for generality, we keep the hidden output in complex-valued form in case complex values are needed in the hidden layer):

$$\Phi_M(X_p) = \lambda_{Mp}\, e^{i\varphi_{Mp}} \qquad (5)$$

Additionally, the weights between the hidden layer and the output layer can be defined as

$$W_M = e^{i\theta_M}$$

Since both the hidden outputs and the outputs are normalized, the magnitude of each weight is 1. This definition shows one advantage of complex-valued weights: in the learning process, only the phase is changed. This implies that we do not need any boundary-limitation rule for the weights, which is usually needed in real-valued artificial neural networks. It is interesting to note that using the phase instead of the magnitude of the complex number allows the neuron to mimic physical neurons, where the timing of arriving pulses is important. Therefore, using gradient descent to minimize the error signal, we change the phase of each connection weight through the derivation:

$$\Delta\theta_j = -\eta\,\frac{\partial E}{\partial \theta_j} = -\eta\,\frac{\partial}{\partial \theta_j}\left[\frac{1}{2}\sum_{p=1}^{K}\left(y_p - |f(X_p)|\right)^2\right] \qquad (6)$$

From the architecture of the NRBF neural network, we have the detailed expression of the output before the activation function is applied:

$$f(X_p) = \sum_{M=1}^{N} \Phi_M(X_p)\, W_M = \sum_{M=1}^{N} \lambda_{Mp}\, e^{i\varphi_{Mp}}\, e^{i\theta_M} = \sum_{M=1}^{N} \lambda_{Mp}\, e^{i(\theta_M + \varphi_{Mp})} \qquad (7)$$

Here, N is the number of hidden units in the network, and $\eta$ in (6) is the learning step. Expanding (7), we have

$$f(X_p) = \sum_{M=1}^{N}\left(\lambda_{Mp}\cos(\theta_M + \varphi_{Mp}) + i\,\lambda_{Mp}\sin(\theta_M + \varphi_{Mp})\right) \qquad (8)$$

Plugging (8) into (6), we have

$$\Delta\theta_j = -\eta\,\frac{\partial}{\partial \theta_j}\left\{\frac{1}{2}\sum_{p=1}^{K}\left(y_p - \sqrt{\left[\sum_{M=1}^{N}\lambda_{Mp}\cos(\theta_M + \varphi_{Mp})\right]^2 + \left[\sum_{M=1}^{N}\lambda_{Mp}\sin(\theta_M + \varphi_{Mp})\right]^2}\;\right)^2\right\} \qquad (9)$$

Let

$$\Theta_p = \left[\sum_{M=1}^{N}\lambda_{Mp}\cos(\theta_M + \varphi_{Mp})\right]^2 + \left[\sum_{M=1}^{N}\lambda_{Mp}\sin(\theta_M + \varphi_{Mp})\right]^2$$

Then

$$\Delta\theta_j = -\eta\,\frac{\partial E}{\partial \theta_j} = \eta\sum_{p=1}^{K}\left(y_p - \sqrt{\Theta_p}\right)\frac{1}{2\sqrt{\Theta_p}}\,\frac{\partial \Theta_p}{\partial \theta_j} \qquad (10)$$

Therefore, we need to compute the derivative

$$\frac{\partial \Theta_p}{\partial \theta_j} = 2\left[\sum_{M=1}^{N}\lambda_{Mp}\cos(\theta_M + \varphi_{Mp})\right]\left(-\lambda_{jp}\sin(\theta_j + \varphi_{jp})\right) + 2\left[\sum_{M=1}^{N}\lambda_{Mp}\sin(\theta_M + \varphi_{Mp})\right]\lambda_{jp}\cos(\theta_j + \varphi_{jp}) \qquad (11)$$


Finally, we have the full expression for the change of the phase of each connection weight:

$$\Delta\theta_j = \eta\sum_{p=1}^{K}\frac{y_p - \sqrt{\Theta_p}}{\sqrt{\Theta_p}}\left\{\left[\sum_{M=1}^{N}\lambda_{Mp}\cos(\theta_M + \varphi_{Mp})\right]\left(-\lambda_{jp}\sin(\theta_j + \varphi_{jp})\right) + \left[\sum_{M=1}^{N}\lambda_{Mp}\sin(\theta_M + \varphi_{Mp})\right]\lambda_{jp}\cos(\theta_j + \varphi_{jp})\right\} \qquad (12)$$
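The update (12) is compact to implement because complex arithmetic reproduces the bracketed term directly: $\partial|f(X_p)|/\partial\theta_j = \mathrm{Re}\{\overline{f(X_p)}\, i\, \Phi_j(X_p) W_j\}/|f(X_p)|$. The following sketch trains the phases of one output unit by gradient descent; the names and hyperparameter values are ours, and a small constant guards against division by a zero magnitude.

```python
import numpy as np

def train_phases(Phi, y, n_epochs=200, eta=0.1, seed=0):
    """Phase-only gradient descent for one output unit, per Eq. (12).

    Phi: (P, N) complex hidden outputs lambda_{Mp} * exp(i * phi_{Mp})
         (real hidden outputs are the special case phi_{Mp} = 0).
    y:   (P,) real targets for this output unit.
    Returns the phases theta; the weights are W_M = exp(i * theta_M).
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, Phi.shape[1])
    for _ in range(n_epochs):
        W = np.exp(1j * theta)              # unit-magnitude weights
        f = Phi @ W                         # Eq. (7)
        mag = np.abs(f) + 1e-12             # sqrt(Theta_p), guarded
        # d|f_p|/d theta_j: the braced term of Eq. (12) over sqrt(Theta_p).
        grad = np.real(np.conj(f)[:, None] * (1j * Phi * W[None, :])) / mag[:, None]
        # Delta theta_j = eta * sum_p (y_p - |f_p|) * d|f_p|/d theta_j
        theta += eta * np.sum((y - mag)[:, None] * grad, axis=0)
    return theta
```

Only the phases move during training, so no weight-magnitude bound is ever needed, exactly as argued above.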

III. MULTI-CLASS CLASSIFICATION

Data processing involves multi-class classification, for which many efficient methods have been recommended [11]. In our NRBF neural network approach, the output of each sample must be initialized. Here, we adopted a simple but efficient method to initialize the output of each class: if all the training and testing samples are to be separated into K classes, then the dimension of the output vector is also defined as K. The output vector is a unit vector with one entry equal to "1" and the others equal to "0", in which the "1" indicates the classification result of the input. After the neural network is trained according to the above methods, the output for each testing sample can be computed. Finally, we use the following decision function to determine which class the testing sample actually belongs to:

$$g_l(X_j) = \begin{cases} 1 & \text{if } f_l(X_j) = \max_k f_k(X_j) \\ 0 & \text{otherwise} \end{cases}$$

For example, if the output is (0.9, 0.3, 0.4, 0.5, 0.7), then the decided output of this sample is (1, 0, 0, 0, 0): the sample belongs to the first class.
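This winner-take-all rule is trivial to implement; a minimal sketch with illustrative names:

```python
import numpy as np

def decide(outputs):
    """Map network outputs to a one-hot class vector (winner-take-all)."""
    onehot = np.zeros_like(outputs)
    onehot[np.argmax(outputs)] = 1.0
    return onehot

# decide(np.array([0.9, 0.3, 0.4, 0.5, 0.7])) -> [1, 0, 0, 0, 0]: class 1.
```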

IV. EXPERIMENTAL RESULTS

Remote sensing data are collected by the Landsat 5 satellite and the more recently launched Landsat 7 satellite. The Landsat satellites carry an onboard sensor referred to as the Thematic Mapper (TM). The TM sensor records the surface reflectance of electromagnetic (EM) radiation from the sun in seven discrete bands that are divided among the visible, infrared, and thermal-infrared spectral regions. Table 1 lists the band descriptions. Since 1972, Landsat satellites have provided high-resolution multi-spectral imagery. The TM bands were selected to maximize their capabilities for detecting different types of Earth resources [12].

The study area used in this experiment is a satellite image of New England. The data were obtained by the Landsat 7 ETM+ sensor on July 7, 1999. The data set measures 6600 x 6000 pixels for bands 1-7, except band 6, which measures 3300 x 3000 pixels because of its different resolution. In the following experiment, we use all of the bands except band 6. We extracted the training samples and the testing samples from the data set based on remote sensing expert knowledge. The number of training samples used is 520; the number of testing samples is 448. To avoid over-fitting, these samples were extracted from different positions for each cluster. Following the multi-class scheme defined above, the output of each class is listed in Table 2.

TABLE 1
TM BAND DESCRIPTIONS

Band | Wavelength (µm) | Resolution (m) | Spectral Region
1 | 0.45-0.52 | 30 | Visible Blue
2 | 0.52-0.60 | 30 | Visible Green
3 | 0.63-0.69 | 30 | Visible Red
4 | 0.76-0.90 | 30 | Reflective Infrared
5 | 1.55-1.75 | 30 | Mid-Infrared
6 | 10.40-12.50 | 60 | Thermal Infrared
7 | 2.08-2.35 | 30 | Mid-Infrared

TABLE 2
OUTPUTS FOR EACH CLASS

Class | Output
Vegetation | 1 0 0 0 0
Soil (sparse vegetation) | 0 1 0 0 0
Urban land | 0 0 1 0 0
Deep water | 0 0 0 1 0
Shallow water | 0 0 0 0 1

We applied this complex-valued NRBF model to the classification of the remote sensing data. The number of input units equals the number of bands used for training. To bound each input in the range from 0 to 1, we normalize the inputs by the following formula:

$$\bar{X}_{jk} = \frac{X_{jk} - \min_{1\le i\le P} X_{ik}}{\max_{1\le i\le P} X_{ik} - \min_{1\le i\le P} X_{ik}}$$

Also, for better comparison, we compute the overall accuracy, defined as:

$$\text{Accuracy} = \frac{\text{number of samples classified correctly}}{\text{total samples}} \times 100\%$$
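Both preprocessing and scoring amount to a few lines; a minimal sketch, assuming the samples are stacked row-wise into a P x M matrix (names are ours):

```python
import numpy as np

def minmax_normalize(X):
    """Scale each band (column) of the P x M sample matrix into [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

def overall_accuracy(pred_labels, true_labels):
    """Overall accuracy: correctly classified samples / total samples, in %."""
    return 100.0 * np.mean(pred_labels == true_labels)
```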

We compare the results using real-valued weights with the results using complex-valued weights in the training of the NRBF neural network. For each NRBF structure, we also compared the results obtained with different clustering methods in the unsupervised stage; here we list only the accuracy for the K-means method. Both the training accuracy and the testing accuracy are listed in Table 3. Additionally, we use different band combinations to show the role of each band in the classification. From Table 3, we can see that the complex-weighted NRBF neural network improves the classification result in most cases compared with the conventional NRBF neural network, although not dramatically. The most important improvement is that the complex-valued network can give a good result in cases where the real-valued weight structure cannot perform the classification successfully at all. In this work, we searched for the optimal number of hidden units by brute force (testing all values in a certain range); this is effective but computationally expensive, and we will investigate a more principled approach in the future.

TABLE 3
REAL-VALUED NRBF AND COMPLEX-VALUED NRBF
(K-means clustering, accuracies in %; each row is a different combination of input bands 1, 2, 3, 4, 5, and 7; the specific band selections are not legible in the available copy)

Band combination | Complex weights: Training | Complex weights: Testing | Real weights: Training | Real weights: Testing
1 | 90.77 | 89.06 | Unclear | Unclear
2 | 97.31 | 90.63 | 97.56 | 89.13
3 | 97.88 | 91.96 | 97.50 | 91.44
4 | 96.73 | 93.97 | 99.55 | 97.99
5 | 98.85 | 95.76 | 98.52 | 95.98
6 | 93.08 | 89.96 | 95.06 | 95.16
7 | 97.12 | 94.87 | 97.94 | 93.22
8 | 96.92 | 93.97 | Unclear | Unclear
9 | 95.00 | 93.30 | 98.14 | 95.46
10 | 98.65 | 96.43 | 98.52 | 96.72
11 | 93.85 | 89.06 | 93.97 | 88.54
12 | 96.92 | 92.19 | 98.39 | 94.27
13 | 85.00 | 83.26 | Unclear | Unclear
14 | 96.73 | 95.76 | Unclear | Unclear
15 | 98.85 | 94.42 | 98.26 | 93.37
16 | 98.46 | 95.31 | 97.43 | 93.97

To compare the classification ability of the new model with the old one more intuitively, we use images to show the classification results. Typically, with the help of the MultiSpecW32 software, TM bands 4, 3, and 2 can be combined into false-color composite images in which band 4 is displayed as red, band 3 as green, and band 2 as blue. With this band combination, vegetation appears in shades of red, with brighter reds indicating more vigorously growing vegetation. Sparse vegetation areas range from green to brown depending on moisture and organic matter content. Urban areas appear blue-gray. Deep, clear water is dark blue to black, while sediment-laden or shallow waters appear lighter in color. Using this method, we generated the satellite image of New England. Since the whole data set is too massive, we extract only parts of the image. Figure 3 shows the whole image; each white patch indicates one test in our experiment.


Fig. 3. Positions for different tests

Since we cannot list every possible class, some areas are classified into the most probable class. Figure 4 shows three tests. For clearer comparison, the color bar for each class in the classification result is chosen to be close to the colors in the original composite image. For each test, the first image is the original composite; image (a) shows the classification result of the old model with the K-means method, and image (b) shows the result of the new model with the K-means method. All tests use the same color bar. Comparing these figures, it is easy to see that image (b) is closer to the original image than image (a) in each test. And since these tests were extracted at random positions in the whole image, the conclusion appears to be general.

V. CONCLUSIONS

In this paper, we described a novel classification technique, the CVNRBF (Complex-Valued Normalized Radial Basis Function) neural network classifier. We demonstrated that using complex-valued weights to optimize NRBF neural networks is well suited to remote sensing data: it not only keeps the good properties of NRBF neural networks but also improves the computational power of traditional RBF neural networks. From the results of the classification of the New England satellite image data, we can see that it improves the accuracy of the classification regardless of the clustering method used. It also overcomes the traditional NRBF neural network's stronger requirements on the input data. In [10], we built an NRBF neural network for this classification task that combined the NRBF neural network with the spectral clustering method, and the classification result was improved. But in some cases the classification results were still not satisfying, and, as stated above, the spectral method involves computing the inverse matrix of the input data, which is computationally expensive for massive input data sets, so more efficient approaches are needed. From the current results, we conclude that the complex-weighted NRBF neural network is an efficient and more powerful replacement.

Color bar

(a) (b)
Test 1

REFERENCES

[1] S. Haykin. Neural Networks: A Comprehensive Foundation. IEEE Computer Society Press, 1994.

[2] A. J. Noest. "Phase neural networks," Neural Information Processing Systems, D. Z. Anderson, ed., American Institute of Physics, New York, pp. 541-59, 1988.

[3] A. Hirose. "Proposal of fully complex-valued neural networks," IJCNN'92 Baltimore Proc. IV, pp. 152-157, 1992.

[4] A. Hirose. "Dynamics of fully complex-valued neural networks," Electron. Lett., vol. 28, pp. 1492-1493, 1992.

[5] T. Nitta. "A complex numbered version of the back-propagation algorithm," WCNN'93 Portland Proc. III, pp. 576-579, 1993.

[6] S. Chen, S. McLaughlin, and B. Mulgrew. "Complex-valued radial basis function network, Part I: Network architecture and learning algorithms," EURASIP Signal Processing Journal, vol. 35, pp. 19-31, 1994.

[7] J. Moody and C. Darken. "Fast learning in networks of locally-tuned processing units," Neural Computation, vol. 1, pp. 281-294, 1989.

[8] I. Cha and S. A. Kassam. "RBFN restoration of nonlinearly degraded images," IEEE Trans. on Image Processing, vol. 5, no. 6, pp. 964-975, 1996.

[9] R. J. Howlett and L. C. Jain. Radial Basis Function Networks 1 and 2, Physica-Verlag, Heidelberg, 2001.

[10] X.-L. Tao and H. E. Michel. "Classification of multi-spectral satellite image data using improved NRBF neural networks," Proceedings of SPIE, vol. 5267, 2003.

[11] E. Mayoraz and E. Alpaydin. "Support vector machines for multiclass classification," Proceedings of the International Workshop on Artificial Neural Networks, IDIAP Technical Report 98-06, 1999.

[12] R. A. Schowengerdt. Remote Sensing: Models and Methods for Image Processing. Academic Press, 1997.

(a) (b)
Test 4

(a) (b)
Test 9

Fig. 4. Three comparisons of the two methods
