

SIViP (2009) 3:137–144
DOI 10.1007/s11760-008-0065-4

ORIGINAL PAPER

A modified statistical approach for image fusion using wavelet transform

S. Arivazhagan · L. Ganesan · T. G. Subash Kumar

Received: 31 March 2007 / Revised: 9 February 2008 / Accepted: 8 May 2008 / Published online: 4 June 2008
© Springer-Verlag London Limited 2008

Abstract The fusion of images is an important technique within many disparate fields such as remote sensing, robotics and medical applications. For image fusion, selecting the required region from the input images is a vital task. Recently, wavelet-based fusion techniques have been used effectively to integrate the perceptually important information generated by different imaging systems about the same scene. In this paper, a modified wavelet-based region-level fusion algorithm for multi-spectral and multi-focus images is discussed. Here, the low-frequency sub-bands are combined, not averaged, based on the edge information present in the high-frequency sub-bands, so that blur in the fused image can be eliminated. The absolute mean and standard deviation of each image patch over a 3 × 3 window in the high-frequency sub-bands are computed as activity measurements and are used to integrate the approximation band. The performance of the proposed algorithm is evaluated using entropy, fusion symmetry and peak signal-to-noise ratio, and is compared with recently published results. The experimental results show that the proposed algorithm performs better in many applications.

S. Arivazhagan (B)
Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Mepco Engineering College Post, Sivakasi 626 005, Tamil Nadu, India
e-mail: [email protected]

L. Ganesan
Department of Computer Science and Engineering, Alagappa Chettiar College of Engineering and Technology, Karaikudi 623 004, India

T. G. Subash Kumar
Jasmin Infotech Pvt Ltd, Chennai 600 100, India

Keywords Wavelet transform · Image fusion · Multi-focus images · Multi-spectral images · Fusion performance measure

1 Introduction

Image fusion is the process of combining two or more images of the same scene into a single image, retaining important features, so that the fused image is more suitable for the purpose of human visual perception and computer processing. Compared with the input images, the new image contains more comprehensive, more accurate and more stable information. Therefore, with the availability of multi-sensor data in many fields such as remote sensing, medical imaging, machine vision and military applications, multi-sensor image fusion has emerged as a new and promising research area [1].

The definition of multi-sensor fusion is very broad, and fusion can take place at the pixel and feature levels. Pixel-level fusion can be used to increase the information content associated with each pixel in an image through multiple image combination. Feature-level fusion can be used to increase the likelihood that a feature is retained and serves as a means of creating additional composite features. If both the spectral properties and the pixel spacing of the input images are very similar, then the two images would generally be fused at the pixel level [2]. A number of pixel-level fusion methods have been proposed for merging multi-spectral and multi-focus images. In 1990, Carper et al. [3] fused multi-spectral images using an intensity-hue-saturation-based method. In the same year, Toet [4] discussed the fusion of images using Laplacian pyramid mergers. Haeberli [5] proposed a pyramid-based technique for the fusion of multi-focus images, which is based on the fact that the intensity of the focused portion of the image will be high.


With its ability to perform multi-resolution analysis, the wavelet transform plays an important role in image feature extraction, image compression and image fusion, owing to its strong performance in handling time-frequency signals [6]. The ability of the wavelet transform to localize information as a function of space and scale comes in handy when fusing information from two images [7]. Jiang et al. [8] proposed a wavelet-based algorithm for the fusion of multi-focus images. A fusion rule that calculates the wavelet transform modulus maxima of the input images at different bandwidths and levels is discussed in [9]. A fusion algorithm based on the lifting wavelet transform has been proposed in [10]. Hill et al. [11] showed in 2002 that better fusion can be obtained if the images are fused using complex wavelets. Du et al. [12] fused satellite images of different scales using the wavelet transform. A comparative study of various image fusion methods is presented in [13]. Recently, Zheng et al. [14] used the support value transform for multi-source image fusion, where the original multi-source images are decomposed into a sequence of support-value images plus the low-frequency component. The decomposed low-frequency components of the multi-source images are merged into the resulting low-frequency components by averaging.

In such image fusion techniques, where the low-frequency components are normally combined by averaging, the fused images are blurred. In this paper, an enhanced algorithm for region-based image fusion is proposed, which combines the low-frequency sub-bands based on the edge information present in the high-frequency sub-bands, so that blur in the fused image can be eliminated.

This paper is organized as follows. Section 2 gives a brief introduction to the Discrete Wavelet Transform (DWT). Section 3 discusses the image fusion algorithm, and Sect. 4 discusses the parameters used to measure fusion performance. In Sect. 5, experimental results and discussion are provided. The performance of the proposed method is analyzed in Sect. 6, and concluding remarks are given in Sect. 7.

2 Discrete wavelet transform

Wavelets are functions generated from a single function ψ by dilations and translations [15]. The basic idea of the wavelet transform is to represent any arbitrary function as a superposition of wavelets. Any such superposition decomposes the given function into different scale levels, where each level is further decomposed with a resolution adapted to that level [16].

The DWT is equivalent to a hierarchical sub-band system in which the sub-bands are logarithmically spaced in frequency, forming an octave-band decomposition. Applying the DWT decomposes the image into four critically sub-sampled sub-bands, as shown in Fig. 1a. These four sub-bands arise from separable applications of vertical and horizontal filters. The sub-bands labeled LH1, HL1 and HH1 contain the finest-scale wavelet coefficients, i.e., the detail images, while the sub-band LL1 holds the coarse-level coefficients, i.e., the approximation image. To obtain the next coarser level of wavelet coefficients, the sub-band LL1 alone is further decomposed and critically sampled, yielding the two-level wavelet decomposition shown in Fig. 1b. Further levels are obtained by decomposing LL2 in the same way, and the process continues until some final scale is reached.

Fig. 1 Image decomposition. a One level, b two levels

The values or transformed coefficients in the approximation and detail images (sub-band images) are essential features, which are useful for image fusion. The features derived from these DWT-transformed images are used for image fusion, as discussed in the next section.
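For readers who wish to experiment, the following is a minimal sketch of the two-level decomposition described above. It assumes the PyWavelets package (our choice for illustration; the paper does not name a software library) and uses 'db4', matching the Daubechies tap-4 filter mentioned in Sect. 5.

```python
# Two-level DWT decomposition of a grayscale image (sketch).
import numpy as np
import pywt

image = np.random.rand(256, 256)  # placeholder for a real grayscale image

# wavedec2 returns the coarse approximation first, then
# (horizontal, vertical, diagonal) detail triples from the coarsest
# to the finest level -- corresponding (up to naming convention)
# to the LL2, (LH2, HL2, HH2) and (LH1, HL1, HH1) bands of Fig. 1.
coeffs = pywt.wavedec2(image, wavelet='db4', level=2)
LL2, details2, details1 = coeffs

print(LL2.shape, details2[0].shape, details1[0].shape)

# The inverse transform reconstructs the image from the sub-bands.
reconstructed = pywt.waverec2(coeffs, wavelet='db4')
```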

3 Image fusion algorithm

Since wavelet coefficients with large absolute values contain information about the salient features of images, such as edges and lines, a good fusion rule is to take the maximum (absolute value) of the corresponding wavelet coefficients [17].

The block diagram of an image fusion system is shown in Fig. 2. Here, the wavelet transform is first applied to the input images. Then the wavelet coefficients of the sub-images (i.e., the different sub-bands at different scales) are fused using fusion rules. Finally, the fused image is obtained by applying the inverse DWT to the fused wavelet coefficients. There are two types of fusion rules for fusing the sub-images: (i) pixel-based and (ii) region-based. In pixel-based fusion, wavelet coefficients are fused pixel by pixel, either by selecting the maximum (or minimum) value of the corresponding coefficients in the two sub-images, or by taking a weighted combination of the coefficients of the two sub-images. In the region-based method, the fusion rule is based on the statistical quantities of local windows of size n × n.


Fig. 2 Image fusion system: the input images f1(m, n) and f2(m, n) are transformed by the DWT, the fusion rules produce a fusion decision map and the fused wavelet coefficients, and the inverse DWT yields the fused image g(m, n)

Here, the statistical quantities of each image patch over a 3 × 3 window are computed as an activity measurement associated with the pixel centered in the window. The fusion rule is that if the activity measurement of a window in image 1 is greater (or smaller) than that of the corresponding window in image 2, then the value of the center pixel of the window in image 1 is taken as the new pixel value. Commonly used activity measurements are the average value, standard deviation and energy.
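As a concrete illustration of such an activity measure (a sketch under our own naming, not code from the paper), the per-pixel local mean and standard deviation over a 3 × 3 window can be computed with box filters, using the identity var = E[x²] − E[x]² over the window:

```python
# Local mean and standard deviation over a size x size window (sketch).
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_std(band, size=3):
    """Per-pixel mean and std of `band` over a size x size window."""
    mean = uniform_filter(band, size=size)
    mean_sq = uniform_filter(band * band, size=size)
    var = np.maximum(mean_sq - mean * mean, 0.0)  # clip tiny negative round-off
    return mean, np.sqrt(var)
```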

The image fusion algorithm proposed in this paper is given as follows. The input images are first subjected to two levels of DWT decomposition. The wavelet coefficient in the low-frequency sub-bands, i.e., either LL_2^1 (of the first image) or LL_2^2 (of the second image), is chosen based on the combined edge information in the corresponding high-frequency sub-bands, i.e., LH_2^1, HL_2^1 and HH_2^1, or LH_2^2, HL_2^2 and HH_2^2. Here, the mean and standard deviation over 3 × 3 windows are used as activity measurements to find the edge information present in the high-frequency sub-bands. The 3 × 3 windows in the high-frequency sub-bands are centered at the pixel corresponding to that of LL_2. In cases where the activity measures do not satisfy the criterion given in Step 2 of the algorithm, the average of the wavelet coefficients in LL_2^1 and LL_2^2 is taken. In addition, the wavelet coefficients in the high-frequency sub-bands, i.e., either LH_m^1 or LH_m^2, HL_m^1 or HL_m^2, and HH_m^1 or HH_m^2 (where m = 1, 2 represents the level of DWT decomposition), are chosen or combined based on the standard deviation as activity measure, as given in Steps 3 and 4 of the algorithm. Then, the final fused image is obtained by applying the inverse DWT to the fused wavelet coefficients.

Algorithm

Step 1: Apply DWT decomposition (level 2) to the input images f_1(m, n) and f_2(m, n).

Step 2: For each pixel of LL_2^k(m, n) (i.e., the approximation or low-frequency sub-band), identify the 3 × 3 windows CLH_2^k(m, n), CHL_2^k(m, n) and CHH_2^k(m, n), centered at the corresponding pixels in the LH_2^k(m, n), HL_2^k(m, n) and HH_2^k(m, n) sub-bands (i.e., the detail or high-frequency sub-bands), where k = 1, 2 represents the images involved in fusion.

(a) Find CA_2^k(m, n) = |CLH_2^k(m, n)| + |CHL_2^k(m, n)| + |CHH_2^k(m, n)|.

(b) Find the mean MA_2^k(m, n) and standard deviation SA_2^k(m, n) of CA_2^k(m, n) as activity levels.

(c) Fuse the wavelet coefficients of LL_2 using the fusion rule

F(m, n) =
\begin{cases}
LL_2^1(m, n), & \text{if } MA_2^1(m, n) > MA_2^2(m, n) \text{ and } SA_2^1(m, n) > SA_2^2(m, n) \\
LL_2^2(m, n), & \text{if } MA_2^1(m, n) < MA_2^2(m, n) \text{ and } SA_2^1(m, n) < SA_2^2(m, n) \\
\{LL_2^1(m, n) + LL_2^2(m, n)\}/2, & \text{otherwise}
\end{cases}

Step 3: For each pixel of LH_m^k(m, n) (i.e., a high-frequency sub-band), find the standard deviation SD_m^k over 3 × 3 windows, where k = 1, 2 represents the images involved in fusion and m = 1, 2 represents the level of DWT decomposition. Fuse the wavelet coefficients of the LH_m^k(m, n) sub-band using the fusion rule

F(m, n) =
\begin{cases}
LH_m^1(m, n), & \text{if } SD_m^1(m, n) > SD_m^2(m, n) \\
LH_m^2(m, n), & \text{if } SD_m^1(m, n) < SD_m^2(m, n) \\
\{LH_m^1(m, n) + LH_m^2(m, n)\}/2, & \text{otherwise}
\end{cases}

Step 4: Repeat Step 3 for the other high-frequency sub-bands.

Step 5: Apply the inverse DWT to get the fused image.
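Putting the five steps together, here is a hedged end-to-end sketch of the algorithm. It reuses the local_mean_std helper shown earlier and assumes PyWavelets with 'db4'; border handling and image sizes are simplifications, and all variable names are ours rather than the paper's.

```python
# End-to-end sketch of the proposed fusion algorithm (Steps 1-5).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def local_mean_std(band, size=3):
    mean = uniform_filter(band, size=size)
    var = np.maximum(uniform_filter(band * band, size=size) - mean * mean, 0.0)
    return mean, np.sqrt(var)

def fuse_detail(d1, d2):
    """Steps 3-4: pick the coefficient with larger local std; average on ties."""
    _, s1 = local_mean_std(d1)
    _, s2 = local_mean_std(d2)
    return np.where(s1 > s2, d1, np.where(s1 < s2, d2, (d1 + d2) / 2.0))

def fuse_images(f1, f2, wavelet='db4'):
    # Step 1: two-level DWT of both images.
    c1 = pywt.wavedec2(f1, wavelet, level=2)  # [LL2, level-2 details, level-1 details]
    c2 = pywt.wavedec2(f2, wavelet, level=2)

    # Step 2(a)-(b): combined edge activity of the level-2 detail bands.
    ca1 = sum(np.abs(b) for b in c1[1])
    ca2 = sum(np.abs(b) for b in c2[1])
    ma1, sa1 = local_mean_std(ca1)
    ma2, sa2 = local_mean_std(ca2)

    # Step 2(c): choose LL2 coefficients where both activity levels agree,
    # otherwise fall back to averaging.
    pick1 = (ma1 > ma2) & (sa1 > sa2)
    pick2 = (ma1 < ma2) & (sa1 < sa2)
    ll = np.where(pick1, c1[0], np.where(pick2, c2[0], (c1[0] + c2[0]) / 2.0))

    # Steps 3-4: fuse every high-frequency sub-band at both levels.
    fused = [ll]
    for lvl in (1, 2):
        fused.append(tuple(fuse_detail(a, b) for a, b in zip(c1[lvl], c2[lvl])))

    # Step 5: the inverse DWT gives the fused image.
    return pywt.waverec2(fused, wavelet)
```

Usage would be `fused = fuse_images(img1.astype(float), img2.astype(float))` on two registered grayscale arrays of equal size.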

4 Parameters to measure the fusion performance

The standard deviation of the difference between an ideal image and the fused image can be used as a performance measure of the fusion scheme. However, in a practical situation, an ideal image may not be available. Hence, a mutual-information-based criterion is used as the measure for evaluating the performance [10]. To assess the effectiveness of the proposed fusion algorithm, entropy, fusion symmetry and peak signal-to-noise ratio are used for evaluation.

Entropy (H): The entropy of an image is a measure of the information present in the image and is defined as follows:


H = -\sum_{i=0}^{L-1} P(i) \log P(i) \quad (1)

where L denotes the number of gray levels of the image and P(i) denotes the ratio of the number of pixels with gray level i (n_i) to the total number of pixels in the image (n), i.e., P(i) = n_i / n.
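A minimal numpy sketch of Eq. (1), assuming an 8-bit grayscale image; the logarithm base is not stated in the paper, so base 2 (bits) is used here:

```python
# Entropy of an 8-bit grayscale image, Eq. (1) (sketch).
import numpy as np

def entropy(img, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist.astype(np.float64) / hist.sum()  # P(i) = n_i / n
    p = p[p > 0]                              # treat 0 * log 0 as 0
    return -np.sum(p * np.log2(p))
```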

Fusion symmetry (FS): Suppose A and B are the input images and F is the fused image; the contribution of image A (or B) to the fused image F can be found using FS to evaluate the fusion performance.

FS = \left| \frac{M_{AF}}{M_{AF} + M_{BF}} - 0.5 \right| \quad (2)

where M_{AF} denotes the mutual information of images A and F, and M_{BF} denotes the mutual information of images B and F. Mutual information measures the amount of correlation between two distributions [18]. Given two images A and F, the mutual information is defined as

M_{AF} = \sum_{x,y} P_{AF}(x, y) \log \frac{P_{AF}(x, y)}{P_A(x)\, P_F(y)} \quad (3)

where P_A(x) and P_F(y) are the probability density functions of the individual images and P_{AF}(x, y) is the joint probability density function. Estimates of the joint and marginal density functions can be obtained by simple normalization of the joint and marginal histograms of the two images.
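A sketch of Eqs. (2) and (3) under the histogram-normalization estimate just described; the function names and the 8-bit assumption are ours:

```python
# Mutual information, Eq. (3), and fusion symmetry, Eq. (2) (sketch).
import numpy as np

def mutual_information(a, f, levels=256):
    joint, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=levels,
                                 range=[[0, levels], [0, levels]])
    p_af = joint / joint.sum()             # joint density estimate P_AF
    p_a = p_af.sum(axis=1, keepdims=True)  # marginal P_A
    p_f = p_af.sum(axis=0, keepdims=True)  # marginal P_F
    pa_pf = p_a * p_f                      # outer product of marginals
    nz = p_af > 0                          # terms with P_AF = 0 contribute 0
    return np.sum(p_af[nz] * np.log(p_af[nz] / pa_pf[nz]))

def fusion_symmetry(a, b, f):
    m_af = mutual_information(a, f)
    m_bf = mutual_information(b, f)
    return abs(m_af / (m_af + m_bf) - 0.5)
```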

Peak signal-to-noise ratio (PSNR): The PSNR is defined as follows:

PSNR = 20 \log_{10} \left( \frac{255}{RMSE} \right) \quad (4)

where RMSE is the root mean square error. Suppose R is the standard reference image and F is the fused result image; then the RMSE is defined as

RMSE = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ R(i, j) - F(i, j) \right]^2 } \quad (5)

Better fusion performance is characterized by higher entropy, lower fusion symmetry, smaller RMSE and higher PSNR.
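A one-function sketch of Eqs. (4) and (5), assuming an 8-bit reference image R and a fused image F of equal size:

```python
# PSNR via RMSE, Eqs. (4)-(5) (sketch; assumes an 8-bit dynamic range).
import numpy as np

def psnr(reference, fused):
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    rmse = np.sqrt(np.mean(diff ** 2))    # Eq. (5)
    return 20.0 * np.log10(255.0 / rmse)  # Eq. (4)
```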

5 Experimental results and discussion

The proposed fusion algorithm, explained in Sect. 3, is applied to a set of multi-focus and multi-spectral images. For the purpose of comparison, and to show the effectiveness of the proposed method, the same set of input images is subjected to the image fusion algorithm used in [1], but with a separable orthogonal wavelet transform (the Daubechies tap-4 filter), hereafter called the old method, in which the LL or approximation sub-bands of the DWT-decomposed images are fused by a simple pixel-by-pixel averaging technique. In our proposed method, by contrast, the approximation sub-bands are fused using information from the corresponding detail sub-bands, following the modified algorithm discussed in Sect. 3.

5.1 Fusion of multi-focus images:

The proposed fusion algorithm is applied to a set of multi-focus images, namely (i) clock, (ii) book and (iii) bottle, and the results obtained are shown in Fig. 3, where columns (a) and (b) show the multi-focus input images, column (c) shows the corresponding fused image obtained with the old method, and column (d) shows the fused image obtained using the proposed method.

The right portions of the images in Fig. 3a are distinct, whereas the left portions are blurred. On the contrary, the right portions of the images in Fig. 3b are blurred, whereas the left portions are distinct. Figure 3c shows the fused image using the old method, where both portions are distinct, but at the same time the left and right portions are slightly blurred compared to the distinct portions in Fig. 3b and Fig. 3a, respectively. Figure 3d shows the fused image obtained using the proposed method, where the visual quality of the image is better than that of the fused image obtained using the old method.

5.2 Fusion of multi-focus microscopic images:

Since the microscope has a limited depth of field, the image cannot be completely focused [19]. Various parts of the object are out of focus in each image when the images are acquired at different distances using a microscope [20]. The proposed fusion algorithm is applied to a set of multi-focus microscopic images, namely (i) cell and (ii) texture, and the results obtained are shown in Fig. 4, where columns (a) and (b) show the multi-focus input images, column (c) shows the corresponding fused image obtained with the old method, and column (d) shows the fused image obtained using the proposed method.

Figure 4a, b show a set of microscopic images at various depths. Figure 4c shows the fused image obtained using the old method, where all portions of the cell are distinct but at the same time blurred. Figure 4d shows the fused image obtained using the proposed method, where the visual quality of the image is better.

5.3 Fusion of infrared and visible images:

The proposed fusion algorithm is applied to the infrared (IR) and visible images obtained from [21].


Fig. 3 Results of multi-focus image fusion for clock, book and bottle images. a, b Multi-focus images; c fused image (old method); d fused image (proposed method)

Fig. 4 Results of multi-focus microscopic image fusion. a, b Multi-focus microscopic images; c fused image (old method); d fused image (proposed method)

Fig. 5 Results of fusion of infrared and visible images. a Infrared image; b visible image; c fused image (old method); d fused image (proposed method)

Figure 5a shows the infrared image, in which the contour of a person can be seen but the bridge cannot. In Fig. 5b, a visible-light image, the contour of the bridge can be seen, whereas the person cannot. Figure 5c shows the fused image obtained using the old method, in which the person appears distinct while the edges of the bridge are blurred.


Fig. 6 Results of fusion of FIR and NIR images. a FIR image; b NIR image; c fused image (old method); d fused image (proposed method)

Fig. 7 Results of fusion of aero and satellite images. a Aero image; b satellite image; c fused image (old method); d fused image (proposed method)

Fig. 8 Results of fusion of MMW and visible images. a MMW image; b visible image; c fused image (old method); d fused image (proposed method)

Figure 5d shows the fused image obtained using the proposed method, in which both the person and the bridge are visible, with clearer edges than in the old method.

5.4 Fusion of FIR and NIR images:

Figure 6a, b show a pair of far infrared (FIR) and near infrared (NIR) images. The FIR image is bright, the objects in it are easily recognized, and its contrast is higher. In contrast, most parts of the NIR image are dim and it is difficult to recognize the objects, although the roads and the background in the image are bright. Figure 6c is the fused image obtained using the old method, which contains features from both the FIR and NIR images and is also bright. Figure 6d shows the fused image obtained using the proposed method, in which the visual quality is improved over the old method.

5.5 Fusion of aero and satellite images:

Figure 7a, b show an aero image and a satellite image, respectively. The contrast of the aero image is higher and its edges are distinct, whereas the contrast of the satellite image is lower and its edges are blurred. Figure 7c is the fused image obtained using the old method. Figure 7d is the fused image obtained using the proposed method, which contains the features of both the aero and satellite images. Further, its contrast is higher, and the edges of the objects can be recognized more easily than in the fused image shown in Fig. 7c.

5.6 Concealed weapon detection:

The effectiveness of the proposed algorithm can be demonstrated by extending it to the application of concealed weapon detection. Figure 8a, b show MMW and visible images of a group of persons. Figure 8c shows the fused image obtained using the old method, in which the image is darkened. Figure 8d shows the fused image obtained using the proposed method, in which the quality of the image is much improved. Moreover, the weapon, i.e., the pistol concealed by the third person from the left, can easily be identified visually.

From all the pictorial results given in Figs. 3–8, it is observed that the fusion results obtained with the old method are blurred. This is mainly due to the simple averaging of approximation coefficients. The proposed algorithm, by contrast, chooses or combines the approximation coefficients based on the edge information present in the high-frequency sub-bands, and produces better fusion results, as evident in the pictorial results.


Table 1 Performance of multi-focus image fusion

Sl. no.  Images   Old method                          Proposed method                     Visual quality
                  Entropy  Fusion symmetry  PSNR      Entropy  Fusion symmetry  PSNR      Old method  Proposed method
1        Clock    5.0707   0.0038           31.2046   5.0678   0.0837           33.3894   Blurred     Good
2        Book     4.9910   0.0025           29.5388   5.0545   0.0078           38.0545   Blurred     Good
3        Bottle   5.1832   0.0002           27.3026   4.2371   0.0002           31.2725   Blurred     Good
4        Texture  5.0588   0                30.1730   5.1075   0.0143           32.6585   Blurred     Good
5        Cell     3.2771   0.0584           –         3.2772   0                –         Blurred     Good

Table 2 Performance of multi-spectral image fusion

Sl. no.  Image 1   Image 2    Old method                Proposed method           Visual quality
                              Entropy  Fusion symmetry  Entropy  Fusion symmetry  Old method  Proposed method
1        Infrared  Visible    4.6324   0.0043           5.0097   0.0407           Good        Improved
2        Far-IR    Near-IR    5.2025   0.0016           5.2313   0.0019           Good        Good
3        Aero      Satellite  4.4074   0.0012           4.5187   0.0732           Good        Good
4        MMW       Visible    4.0874   0.0011           4.5136   0.0066           Dim         Improved

6 Performance evaluation and analysis

The performance of the old and proposed fusion methods is compared using objective fidelity criteria, namely entropy, fusion symmetry and PSNR.

Table 1 shows the performance of the fusion of multi-focus images, where the old method denotes the algorithm proposed in [1]. Theoretically, the entropy and PSNR of the fused image should be high, whereas the fusion symmetry should be small.

From Table 1, it is clear that when the multi-focus clock and bottle images are fused, the entropy is higher and the fusion symmetry lower in the old method, whereas the PSNR and the visual quality of the fused image are much improved in the proposed method. Although the fusion symmetry is lower in the old method for the multi-focus book and microscopic texture images, the entropy, PSNR and visual quality of the fused image are much improved in the proposed method. In the fusion of the microscopic multi-focus cell image, the fusion symmetry is lower and the entropy higher in the proposed method, and the visual quality of the fused image is much improved over the old method. Since the reference image (which can be formed by cutting out the required portions of the input images) is not available, the PSNR is not reported.

Table 2 shows the performance of the fusion of multi-spectral images. The fusion is performed on the sets of (i) infrared and visible, (ii) far infrared and near infrared, (iii) aero and satellite, and (iv) MMW and visible images. The entropy of the fused images is higher in the proposed method, while the fusion symmetry is lower in the old method. However, the visual quality of the fused images is better in the proposed method than in the old method.

7 Conclusion

In this paper, a new improved fusion algorithm based on the wavelet transform is presented. Fusion performance was evaluated using objective fidelity criteria, namely entropy, fusion symmetry and PSNR, and the proposed method was compared with a recent algorithm. All fusion performance measures of the proposed method are comparable to or better than those obtained with the old method. This is mainly due to the fusion of approximation coefficients based on the edge information in the high-frequency sub-bands instead of simple averaging of approximation coefficients. The strength of the proposed method is further emphasized by the larger number of multi-focus and multi-spectral images used. Moreover, the visual quality of all the fused images obtained using the proposed method shows very good improvement, as is evident from the results shown.

References

1. Bin, L., Jiaxiong, P.: Image fusion method based on short support symmetric non-separable wavelet. Int. J. Wavel. Multi-Resolut. Inf. Process. 2(1), 87–98 (2004)


2. Cohen, A., Kovacevic, J.: Wavelets: the mathematical background. Proc. IEEE 84(4), 514–522 (1996)

3. Carper, J.W., Lilles, T.M., Kiefer, R.W.: The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multi-spectral image data. Photogramm. Eng. Remote Sens. 56, 459–467 (1990)

4. Toet, A.: Hierarchical image fusion. Mach. Vis. Appl. 3(1), 1–11 (1990)

5. Haeberli, P.: A multi-focus method for controlling depth of field(1994). http://www.sgi.com/grafica/depth

6. Bruce, L.M., Cheriyadat, A., Burns, M.: Wavelets: getting perspective. IEEE Potentials, pp. 24–27 (2003)

7. Daubechies, I.: The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inf. Theory 36(5), 961–1005 (1990)

8. Zhi-guo, J., Dong-bing, H., Jin, C., Xiao-kuan, Z.: A wavelet-based algorithm for multi-focus micro-image fusion. In: Proceedings of the Third International Conference on Image and Graphics (ICIG'04) (2004)

9. Qu, G., Zhang, D., Yan, P.: Medical image fusion by wavelet transform modulus maxima. Opt. Express 9(4), 184 (2001)

10. Ranjith, T., Ramesh, C.: A lifting wavelet transform based algorithm for multi-sensor image fusion. CRL Tech. J. 3(3), 19–22 (2001)

11. Hill, P., Canagarajah, N., Bull, D.: Image fusion using complex wavelets. In: BMVC, pp. 487–496 (2002)

12. Du, Y., Vachon, P.W., Vander Sanden, J.J.: Satellite image fusion with multi-scale wavelet analysis for marine applications. Can. J. Remote Sens. 29(1), 14–23 (2003)

13. Wang, Z., Ziou, D., Armenakis, C., Li, D., Li, Q.: A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 43(6), 1392–1402 (2005)

14. Zheng, S., Shi, W.-Z., Liu, J., Zhu, G.-X., Tian, J.-W.: Multisource image fusion method using support value transform. IEEE Trans. Image Process. 16(7), 1831–1839 (2007)

15. Rao, R.M., Bopardikar, A.S.: Wavelet Transforms: Introduction to Theory and Applications. Pearson Education Asia (2000)

16. Antonini, M., Barlaud, M., Mathieu, P., Daubechies, I.: Image coding using wavelet transform. IEEE Trans. Image Process. 1(2), 205–220 (1992)

17. Nikolov, S.G., Bull, D.R., Canagarajah, C.N., Halliwell, M., Wells, P.N.T.: Fusion of 2-D images using their multi-scale edges. In: ICPR 2000, pp. 3045–3048 (2000)

18. Haykin, S.: Communication Systems, 4th edn. Wiley, New York (2001)

19. Image Analyzer plugins. http://meesoft.logicnet.dk/Analyzer/plugins/

20. RVC-Image Archiving software producers. http://www.rvc.nl/en/microscopy/multifocus.html

21. Fusion of visible and infrared images. http://www.geocities.com/alexjouan/image_fusion.htm
