
SHORT NOTE

Multiresolution and Multispectral Data Fusion Using Discrete Wavelet Transform with IRS Images: Cartosat-1, IRS LISS III and LISS IV

Anil Z. Chitade & S. K. Katiyar

Received: 29 September 2010 / Accepted: 14 June 2011 / Published online: 6 July 2011 / © Indian Society of Remote Sensing 2011

Abstract Image fusion techniques integrate complementary information from multiple image sensor data such that the new images are more suitable for human visual perception and for computer-based processing tasks such as extraction of detailed information. As an important class of image fusion algorithms, pixel-level image fusion can combine the spectral information of coarse-resolution imagery with finer spatial resolution imagery. Ideally, the method used to merge data sets with high spatial and high spectral resolution should not distort the spectral characteristics of the high spectral resolution data. This paper describes the Discrete Wavelet Transform (DWT) algorithm for the fusion of two images using different spectral transform methods and nearest-neighbour resampling techniques. It investigates the performance of fusing the high spatial resolution Cartosat-1 (PAN) image with the LISS IV sensor image, and the Cartosat-1 (PAN) image with the LISS III sensor image, of the Indian Remote Sensing satellites. The visual and statistical analysis of the fused images shows that the DWT method performs well in terms of geometric, radiometric, and spectral fidelity.

Keywords IRS (Indian Remote Sensing) · Image fusion · Wavelet transform · Cartosat-1 · LISS III · DWT · Pixel · Spectral fidelity

Introduction

Remotely sensed images are used for various natural resource mapping tasks. The availability of high spatial resolution and high spectral resolution satellite images has widened the scope for the use of image fusion techniques, which are now applied in a number of fields such as remote sensing, medical imaging, digital camera vision, and military applications. Hence, evaluating the performance of different fusion techniques objectively, systematically, and quantitatively has been recognized as an urgent requirement.

Remote sensing image fusion aims at integrating the information conveyed by data acquired in different portions of the electromagnetic spectrum at different spatial, temporal, and spectral resolutions, so that multifarious multitemporal, multiresolution, and multifrequency image data can be obtained for purposes such as feature extraction, modeling, and classification. A single kind of image may not provide all the information necessary for detecting an object by human or computer vision. The composite or fused image is therefore more useful for human perception as well as for automatic computer analysis tasks such as feature extraction, segmentation, and object recognition. Various applications that would benefit from the use

J Indian Soc Remote Sens (March 2012) 40(1):121–128. DOI 10.1007/s12524-011-0140-0

A. Z. Chitade (*) : S. K. Katiyar, Department of Civil Engg, Maulana Azad National Institute of Technology Bhopal, Bhopal 462051 MP, India. e-mail: [email protected] URL: www.manit.ac.in

S. K. Katiyar, e-mail: [email protected]


of this image fusion technique include display systems in surveillance, aviation, remote sensing, automated machine vision, and medical imaging.

A variety of image fusion methods have been developed; the most popular are those based on the IHS transformation, principal component analysis, and the wavelet transform. Merging information from different imaging sensors involves two distinct steps (Chavez and Bowell 1988). First, the digital images from both sensors are geometrically registered to one another. Next, the information contents, i.e. spatial and spectral, are mixed together to generate a single data set that contains the best of both sets. The prime objective of image fusion using any technique is not to distort the spectral characteristics of the data; this is important for calibration purposes and for ensuring that targets that are spectrally separable in the original data remain separable in the merged data set (Chavez and Bowell 1988).

A number of models have been suggested to achieve image merging. A forward-reverse RGB-to-IHS transform has been used (Welch and Ehlers 1987), replacing I (from transformed TM data) with the SPOT PAN image; however, this technique is limited to three bands (R, G, and B). Chavez et al. (1991) used forward-reverse principal component transforms, with the SPOT image replacing PC-1. Another technique (Schowengerdt 1980a, b) combines a high-frequency image derived from the high spatial resolution image with the high spectral resolution LANDSAT-TM image.

The objective of this paper is to present the results of the discrete wavelet transform method for merging the information contents of the Cartosat-1 (PAN) and LISS-IV sensor images, and of the Cartosat-1 (PAN) and LISS-III sensor images, of the Indian Remote Sensing satellites.

Study Area and Data Sources

Investigations of the present work were carried out for Bhopal city and the adjoining area of Madhya Pradesh state, India. Different spatial resolution data products of Indian Remote Sensing satellite sensors, as listed in Table 1, were used. In order to cover the maximum range of land-use features, the analysis focused on the urban land use of Bhopal city using ERDAS Imagine v9.1 software.

Image Fusion Techniques

In any remotely sensed image, spatial resolution and spectral resolution are contradictory factors: for a given signal-to-noise ratio, a higher spectral resolution is often achieved at the cost of lower spatial resolution (Schowengerdt). Image fusion techniques are therefore useful for integrating a higher spatial resolution image with a higher spectral resolution image.

There are different algorithms for the fusion of images. The standard techniques and a number of recently developed algorithms are Intensity-Hue-Saturation (IHS) fusion, colour normalization (CN) spectral sharpening, Gram-Schmidt spectral sharpening, the Brovey transform, Principal Component (PC) fusion, multiplicative fusion, Ehlers fusion, and wavelet fusion. This research focuses on the wavelet transform method for resolution merge.

Discrete Wavelet Transform

Wavelet Resolution Merge

Wavelet resolution merge allows multispectral images of relatively low spatial resolution to be sharpened using a co-registered panchromatic image of relatively higher resolution. In addition to PAN-multispectral image sharpening, this algorithm can also be used to merge any two images by fusing information from

Table 1 Details of Indian Remote Sensing satellite datasets for the study area

S. No.  Date of acquisition  Satellite sensor         Ground sampling distance (GSD)
1       6 April 2006         Cartosat-1 (IRS-P5) PAN  2.5 m
2       10 February 2005     IRS-1C LISS III          23.5 m
3       29 January 2010      LISS IV                  5.8 m



several sensors into one composite image. Fusion can be carried out at four different levels: signal, pixel, feature, and symbolic. The present algorithm works at the pixel level.

Wavelet Theory

Wavelet-based image reduction uses short, discrete wavelets instead of long continuous waves, so the new transform is much more local. The wavelet can be parameterized as a finite-size moving window. A key element of using wavelets is the selection of the base waveform to be used, i.e. the basis. The input signal (image) is broken down into successively smaller multiples of this basis. Wavelets are derived waveforms having many mathematically useful characteristics that make them preferable to simple sine or cosine functions. For example, wavelets are discrete; that is, they have a finite length, as opposed to sine waves, which are continuous and infinite in length. Once the basis waveform is mathematically defined, a family of multiples can be created with incrementally increasing frequency: related wavelets of twice the frequency, three times the frequency, four times the frequency, and so on. Once the waveform family is defined, the image can be decomposed by applying a coefficient to each of the waveforms; the wavelets themselves are rarely even calculated (Shensa 1992).

The signal processing properties of the discrete wavelet transform (DWT) are determined by the choice of high-pass (band-pass) filter (Shensa 1992). For the commonly used fast discrete wavelet decomposition algorithm (Mallat 1989), a shift of the input image can produce large changes in the values of the wavelet decomposition coefficients. Once selected, the wavelets are applied to the input image recursively via a pyramid algorithm or filter bank. After filtering at any level, the low-pass image is passed to the next finer filtering stage in the filter bank; the high-pass images are retained for later image reconstruction.

2D Discrete Wavelet Transform (DWT)

A 2D discrete wavelet transform of an image yields four components: the approximation coefficients Wφ; the horizontal coefficients Wψ^H, which show the variation along the columns; the vertical coefficients Wψ^V, which show the variation along the rows; and the diagonal coefficients Wψ^D, which show the variation along the diagonals. hφ and hψ are, respectively, the low-pass and high-pass wavelet filters used for decomposition. The rows of the image are convolved with the low-pass and high-pass filters and the result is downsampled along the columns, yielding two sub-images whose horizontal resolutions are reduced by a factor of two. Both sub-images are then filtered column-wise with the same low-pass and high-pass filters and downsampled along the rows. Thus, for each input image, we obtain four sub-images, each reduced by a factor of four compared to the original image. This is shown schematically in Fig. 1.
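The row-then-column filtering described above can be sketched in a few lines. The following is a minimal illustrative implementation, not taken from the paper: it uses unnormalized Haar filters (pairwise averages and differences) for one decomposition level, and the labelling of the detail sub-bands as horizontal/vertical follows one common convention.

```python
# Single-level 2D Haar DWT sketch (pure Python, illustrative only).
# Rows are filtered and downsampled along the columns, then each
# sub-image is filtered column-wise and downsampled along the rows,
# yielding the four quarter-size components described in the text.

def haar_pairs(vec):
    """Unnormalized Haar step: pairwise averages (low) and differences (high)."""
    low = [(vec[i] + vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
    high = [(vec[i] - vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
    return low, high

def dwt2(image):
    """One decomposition level for an image with even width and height."""
    # 1) filter each row, downsample along the columns
    lo_rows, hi_rows = [], []
    for row in image:
        lo, hi = haar_pairs(row)
        lo_rows.append(lo)
        hi_rows.append(hi)
    # 2) filter each sub-image column-wise, downsample along the rows
    def columns(img):
        cols = list(map(list, zip(*img)))            # transpose
        lows, highs = zip(*(haar_pairs(c) for c in cols))
        return (list(map(list, zip(*lows))),         # transpose back
                list(map(list, zip(*highs))))
    approx, vert = columns(lo_rows)   # W_phi, W_psi^V
    horiz, diag = columns(hi_rows)    # W_psi^H, W_psi^D
    return approx, horiz, vert, diag

image = [[52, 55, 61, 66],
         [63, 59, 55, 90],
         [62, 59, 68, 113],
         [63, 58, 71, 122]]
approx, horiz, vert, diag = dwt2(image)
# each component is 2x2: the image is reduced by a factor of four
```

Each returned component has half the width and half the height of the input, matching the factor-of-four reduction stated above.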

2D Inverse Discrete Wavelet Transform (DWT)

The reduced components of the input image are passed as input to the low-pass and high-pass reconstruction filters h̃φ and h̃ψ, as shown in Fig. 2. The sequence of steps is the reverse of that in the DWT.
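The reversal can be sketched as follows. This is an illustrative reconstruction for the unnormalized Haar case only (for Haar pairs a = (x0+x1)/2 and d = (x0-x1)/2, the inverse is x0 = a + d, x1 = a - d); the hard-coded 1×1 components below correspond to one decomposition level of the 2×2 image [[9, 7], [5, 3]].

```python
# Inverse of the single-level Haar DWT sketch: undo the column-wise
# step first, then the row-wise step (the reverse order of the DWT).

def haar_unpairs(low, high):
    """Invert the Haar pair step: x0 = a + d, x1 = a - d."""
    vec = []
    for a, d in zip(low, high):
        vec.extend([a + d, a - d])
    return vec

def idwt2(approx, horiz, vert, diag):
    def uncolumns(low_img, high_img):
        lo_cols = list(map(list, zip(*low_img)))     # transpose
        hi_cols = list(map(list, zip(*high_img)))
        cols = [haar_unpairs(lo, hi) for lo, hi in zip(lo_cols, hi_cols)]
        return list(map(list, zip(*cols)))           # transpose back
    lo_rows = uncolumns(approx, vert)   # undo the column-wise step
    hi_rows = uncolumns(horiz, diag)
    # undo the row-wise step
    return [haar_unpairs(lo, hi) for lo, hi in zip(lo_rows, hi_rows)]

# components of the 2x2 image [[9, 7], [5, 3]] after one Haar level
restored = idwt2([[6]], [[1]], [[2]], [[0]])
# restored == [[9, 7], [5, 3]]: perfect reconstruction
```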

Fig. 1 Schematic diagram of the discrete wavelet transform (DWT). (Source: ERDAS Imagine 9.1 field guide)



Methodology

The basic principle of the decomposition is that an image can be separated into high-frequency and low-frequency components, which together contain all of the information in the original image. Equivalently, a high-pass filter can be applied to the image and the corresponding low-frequency image derived from it. Any image can thus be broken into various low- and high-frequency components using suitable high- and low-pass filters. The wavelet family can be thought of as a high-pass filter, so wavelet-based high- and low-frequency images can be created from any input image.

It should be noted that the high-resolution image (PAN) is a single band, so the substitution image from the multispectral image must also be a single band. Tools are available to compress the multispectral image into a single band for substitution, using the IHS transform or the PC transform. The wavelet resolution merge is shown in Fig. 3.
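The substitution step at the heart of the merge can be illustrated in one dimension. This sketch is not from the paper: it assumes unnormalized Haar filters, a single image row, and a multispectral band already resampled so that one MS pixel covers two PAN pixels. The PAN row is split into approximation and detail coefficients, the approximation is replaced by the MS values, and the row is reconstructed, so the result carries the MS radiometry together with the PAN high-frequency detail.

```python
# Minimal 1-D sketch of the substitution step in wavelet resolution merge.

def haar_decompose(row):
    """Split a row into approximation (averages) and detail (differences)."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Rebuild the full-resolution row from approximation + detail."""
    row = []
    for a, d in zip(approx, detail):
        row.extend([a + d, a - d])
    return row

pan_row = [100, 104, 120, 116, 90, 94, 80, 76]   # high spatial resolution
ms_row = [60, 70, 50, 40]                        # one band, half resolution

_, pan_detail = haar_decompose(pan_row)
fused_row = haar_reconstruct(ms_row, pan_detail)
# fused_row carries the MS radiometry with the PAN edge detail
```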

Image-to-image geometric registration was used to register LISS-III and LISS-IV, taking Cartosat-1 as the reference image. Because original remote sensing images always carry geometric distortion, an accurate geo-registration process, which determines the transformation providing the most accurate match between two or more images, is a necessary preliminary step to image fusion. Before rectification of LISS-III and LISS-IV, the images were first digitally enlarged by a factor of two in both directions to generate a pixel size similar to that of the Cartosat-1 PAN data. For accurate geo-registration, 23 ground control points (GCPs), well distributed over the study area, were collected using a DGPS (Magellan Promark3) survey. The images were georeferenced using 2nd-order polynomial transformation functions with a root mean square (RMS) error of 0.41. All bands of the original image were used for the fusion process, with the output format set to unsigned 8-bit, to ensure image conformity and avoid any loss of information. All image processing operations were performed using ERDAS Imagine v9.1. The second step is the integration/fusion of the geo-registered images. The fused images were then smoothed with a 3 by 3 low-pass filter to eliminate the blockiness introduced by the 2× digital enlargement (Chavez 1986). The Cartosat-1 PAN image was the master image and LISS-III and LISS-IV were the slaves for geometric registration. The Discrete Wavelet Transform method was used for the fusion of Cartosat-1 PAN with LISS-IV and of Cartosat-1 PAN with LISS-III (Figs. 4 and 5).

Fig. 2 Inverse discrete wavelet transform, DWT−1. (Source: ERDAS Imagine 9.1 field guide)

Fig. 3 Wavelet resolution merge. (Source: ERDAS Imagine 9.1 field guide)
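The two preprocessing operations above, the 2× digital enlargement and the 3 by 3 low-pass smoothing, can be sketched as follows. The details are assumptions for illustration: the paper does not state the enlargement method (nearest-neighbour replication is used here) or the kernel weights (a simple 3×3 mean is used here).

```python
# Sketch of the 2x enlargement and 3x3 low-pass smoothing steps.

def enlarge_2x(image):
    """Duplicate every pixel in both directions (nearest neighbour)."""
    out = []
    for row in image:
        wide = [v for v in row for _ in (0, 1)]  # duplicate along the row
        out.append(wide)
        out.append(list(wide))                   # duplicate the row itself
    return out

def smooth_3x3(image):
    """3x3 mean filter; edge pixels average whatever neighbours exist."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[a][b]
                    for a in range(max(0, i - 1), min(h, i + 2))
                    for b in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

ms = [[10, 20],
      [30, 40]]
big = enlarge_2x(ms)          # 4x4, blocky
smoothed = smooth_3x3(big)    # blockiness reduced
```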

Analysis

The spectral characteristics of the data sets generated using DWT with the different spectral transform algorithms are compared with the spectral characteristics of the original LISS-III and LISS-IV data. The comparison is made statistically.

Fig. 4 Source images and fusion results (Cartosat-1 + LISS-III)

Results and Discussion

The statistical analysis of the fused images was done by calculating the parameters suggested by Chavez et al., which are given in Tables 2 and 3.

For all the images, we studied the statistical parameters of the histogram, especially the standard deviation, since its value is correlated with the ability to recognize different entities. This statistical check is necessary in order to examine spectral information preservation. The fused image (Cartosat-1 + LISS-III) produced by this method with the Principal Component spectral transform presents exactly the same minimum and maximum values as the original multispectral image for all bands, which indicates no variation between the original multispectral image and the fused image produced by this method. The standard deviation values barely change; for example, the standard deviation of the first band increases from 75.018 (original multispectral image) to 75.119 (fused image). The spatial resolution of the fused image with this combination is exactly 2.5 m, the same as the original Cartosat-1 image. Similarly, we fused Cartosat-1 with LISS-IV, applied the same algorithm to the entire image for all bands, and then calculated the statistical parameters.

Fig. 5 Colour composite of fused image for Cartosat-1 and LISS-IV using discrete wavelet transform
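The per-band statistics reported in Tables 2 and 3 can be computed as sketched below. The estimator choices are assumptions, since the paper does not state them: population standard deviation and skewness as the third standardized moment.

```python
# Per-band histogram statistics of the kind reported in Tables 2 and 3.
from collections import Counter

def band_stats(values):
    n = len(values)
    mean = sum(values) / n
    ordered = sorted(values)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    mode = Counter(values).most_common(1)[0][0]
    var = sum((v - mean) ** 2 for v in values) / n   # population variance
    sd = var ** 0.5
    cov = sd / mean                                  # coefficient of variation
    skew = (sum((v - mean) ** 3 for v in values) / n) / sd ** 3
    return {"min": min(values), "max": max(values), "mean": mean,
            "median": median, "mode": mode, "sd": sd,
            "cov": cov, "skewness": skew}

# tiny toy band, not real image data
stats = band_stats([0, 1, 1, 2, 4])
```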

Conclusions

Based on the above investigations, the conclusions are as follows:

For the Cartosat-1 and LISS-III fusion, the image produced by DWT with the Principal Component spectral transform presents exactly the same minimum and

Table 2 Statistical parameters of the original and fused images with different spectral transform (ST) techniques (Cartosat-1 + LISS-III)

Image                        Band  Min  Max  Mean     Median  Mode  Std. dev.  Coeff. of variation  Skewness
Cartosat-1 PAN               1     0    746  85.285   107.82  0     57.661     0.6760               1.479
LISS-III                     1     0    255  75.534   60      0     75.018     0.9931               1.004
                             2     0    255  94.122   84      0     86.259     0.9164               1.091
                             3     0    255  97.778   88      0     87.277     0.8926               1.120
Fused image, single-band ST  1     0    255  105.216  103     103   69.112     0.6568               0.032
                             2     1    255  130.185  135     151   75.033     0.5763               −0.277
                             3     1    255  135.380  42      141   73.980     0.5464               −0.076
Fused image, PC ST           1     0    255  74.974   59      0     75.119     1.0019               0.998
                             2     0    255  93.375   83      0     85.760     0.9184               1.088
                             3     0    255  97.027   88      0     86.746     0.8940               1.118
Fused image, IHS ST          1     0    255  74.967   58      0     74.967     1.0000               1.000
                             2     0    255  93.60    83      0     85.967     0.9184               1.088
                             3     0    255  97.197   88      0     86.951     0.8945               1.117

Table 3 Statistical parameters of the original and fused images with different spectral transform (ST) techniques (Cartosat-1 + LISS-IV)

Image                        Band  Min  Max  Mean    Median  Mode  Std. dev.  Coeff. of variation  Skewness
Cartosat-1 PAN               1     0    825  95.876  112.79  0     53.184     0.55471              1.80
LISS-IV                      1     0    128  51.840  64      0     27.724     0.53479              1.86
                             2     0    239  91.821  108     0     46.704     0.50864              1.96
                             3     0    233  98.514  117     0     47.798     0.48518              2.06
Fused image, single-band ST  1     0    128  51.187  63      0     27.525     0.53773              1.85
                             2     0    238  90.982  107     0     46.376     0.50972              1.96
                             3     0    232  97.660  116     0     47.499     0.48637              2.05
Fused image, PC ST           1     0    128  51.187  63      0     27.525     0.53773              1.85
                             2     0    238  90.982  107     0     46.376     0.50972              1.96
                             3     0    232  97.660  116     0     47.499     0.48637              2.06
Fused image, IHS ST          1     0    150  51.154  63      0     27.468     0.53696              1.86
                             2     0    238  90.924  107     0     46.358     0.50985              1.96
                             3     0    231  97.592  116     0     47.479     0.48650              2.05



maximum values (0 and 255) as the original multispectral image for all bands, which indicates no variation between the original multispectral image and the fused image produced by this method; the standard deviation varies only in the decimal places. Similarly, the fused image (Cartosat-1 + LISS-IV) produced by DWT with the Principal Component spectral transform and with the single-band spectral transform presents approximately the same minimum and maximum values (a difference of one) as the original multispectral image for all bands, which indicates very little variation between the original multispectral image and the fused image produced by this method; the standard deviation varies by at most one.

All the fusion techniques improve the resolution and the spectral result. The wavelet transform preserves the statistical parameters of the original images almost exactly. The statistical analysis of the spectral characteristics of the data indicates that the result generated by DWT with the Principal Component spectral transform is the least distorted: the low-frequency noise problem is removed automatically and colours are well preserved in the fused image, although some unrealistic artifacts of the spatial improvement are observed.

In the discrete wavelet transform, the algorithm downsamples the high spatial resolution input image by a factor of two at each iteration. The low spatial resolution image will substitute exactly for an image only if the input images have relative pixel sizes differing by a power of two (2×, 4×, 8×, …). Any other pixel-size ratio can result in degradation of the substitution image that may not be overcome by subsequent wavelet sharpening.
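This constraint can be checked numerically. The following quick sketch (not from the paper) tests whether a pixel-size ratio is a power of two, using the GSD values from Table 1:

```python
# Check whether a coarse/fine pixel-size ratio is a power of two,
# i.e. whether dyadic DWT downsampling can match the two grids exactly.
import math

def dyadic_levels(coarse_gsd, fine_gsd):
    """Number of DWT levels if the ratio is a power of two, else None."""
    ratio = coarse_gsd / fine_gsd
    levels = math.log2(ratio)
    return round(levels) if abs(levels - round(levels)) < 1e-9 else None

print(dyadic_levels(5.0, 2.5))    # ratio 2.0 -> 1 level, exact substitution
print(dyadic_levels(5.8, 2.5))    # LISS IV vs Cartosat-1: ratio 2.32 -> None
print(dyadic_levels(23.5, 2.5))   # LISS III vs Cartosat-1: ratio 9.4 -> None
```

Neither LISS III nor LISS IV has a pixel size that is an exact power-of-two multiple of the Cartosat-1 GSD, which is why the prior resampling step in the methodology matters.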

Acknowledgement The authors are especially thankful to Dr. R. P. Singh, Director, MANIT Bhopal (M.P.), for his kind permission to use the resources and satellite data in the Remote Sensing and GIS Center of the Civil Engineering Department, Maulana Azad National Institute of Technology Bhopal, and to Dr. S. Sriniwas Rao, Sr. Scientist, National Remote Sensing Center, Hyderabad (A.P.), for his valuable guidance in this research paper.

References

Chavez, P. (1986). Digital merging of Landsat TM and digitized NHAP data for 1:24,000 scale image mapping. Photogrammetric Engineering and Remote Sensing, 52, 1637–1646.

Chavez, P. S., & Bowell, J. A. (1988). Comparison of the spectral information content of Landsat Thematic Mapper and SPOT for three different sites in the Phoenix, Arizona region. Photogrammetric Engineering and Remote Sensing, 54(12), 1699–1708.

Chavez, P. S., Jr., et al. (1991). Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogrammetric Engineering and Remote Sensing, 57(3), 295–303.

Mallat, S. (1989). Multifrequency channel decompositions of images and wavelet models. IEEE Transactions on Signal Processing, 37(12), 2091–2110.

Schowengerdt, R. A. (1980a). Remote Sensing: Models and Methods for Image Processing (3rd ed.). Academic Press/Elsevier, pp. 379–380.

Schowengerdt, R. A. (1980b). Remote Sensing: Models and Methods for Image Processing (2nd ed.). Academic Press/Elsevier, pp. 357–387.

Shensa, M. J. (1992). The discrete wavelet transform: Wedding the à trous and Mallat algorithms. IEEE Transactions on Signal Processing, 40, 2464–2482.

Welch, R., & Ehlers, M. (1987). Merging multiresolution SPOT HRV and Landsat TM data. Photogrammetric Engineering and Remote Sensing, 53(3), 301–303.
