WAVELET BASED IMAGE FUSION Umed Paliwal 1032774




Page 1: Wavelet based image fusion

WAVELET BASED IMAGE FUSION

Umed Paliwal

1032774

Page 2: Wavelet based image fusion

Image Fusion

What? Image fusion combines information from multiple images of the same scene.

Why? The result of image fusion is a new image that is more suitable for human and machine perception. The fused image is more informative than any of the input images.

Remote Sensing

In remote sensing applications, the increasing availability of spaceborne sensors motivates the development of image fusion algorithms. Several situations in image processing require both high spatial and high spectral resolution in a single image, yet most available equipment cannot provide such data convincingly. Image fusion techniques allow the integration of different information sources: the fused image can combine the complementary spatial and spectral resolution characteristics of its inputs.

In satellite imaging, two types of images are available. The panchromatic image acquired by the satellite is transmitted at the maximum available resolution, while the multispectral data are transmitted at a coarser resolution, usually two to four times lower. At the receiver station, the panchromatic image is merged with the multispectral data to convey more information.

Page 3: Wavelet based image fusion

Image Fusion

Conventional Approaches: If we have a PAN image with high spatial resolution and a colored image with poor resolution, we can merge them to get a colored image with high resolution.

PCA

• Apply the principal component transform to the RGB image and replace the first component with the PAN image.

IHS Transform

• Convert the MS image from RGB into IHS.

• Replace I component with PAN.

• Convert back to RGB.
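These substitution schemes can be sketched in a few lines of Python. The sketch below uses the fast additive form of IHS pan-sharpening, in which the intensity I is taken as the per-pixel band mean, so replacing I with PAN is equivalent to adding (PAN - I) to every band; the function name and toy arrays are illustrative, not from the slides.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Fast IHS pan-sharpening sketch.
    ms:  (H, W, 3) multispectral image, upsampled to the PAN grid
    pan: (H, W)    panchromatic image"""
    I = ms.mean(axis=2)               # intensity component (band mean)
    return ms + (pan - I)[..., None]  # inject PAN detail into every band

ms = np.ones((4, 4, 3)) * 0.5   # flat, low-detail multispectral image
pan = np.full((4, 4), 0.8)      # high-resolution panchromatic image
fused = ihs_pansharpen(ms, pan)
# the intensity of the fused image now equals the PAN image everywhere
```

A full IHS implementation would convert RGB to IHS explicitly, substitute PAN for I, and convert back; the additive form above gives the same result for the linear intensity model.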

Image source: Zhang et al., Wavelet based image fusion techniques — An introduction, review and comparison, ISPRS Journal of Photogrammetry & Remote Sensing 62 (2007) 249–263.

Page 4: Wavelet based image fusion

Wavelet Transform

Image: Image detail results from high contrast between features, for example a light rooftop against dark ground. High contrast in the spatial domain corresponds to high values in the frequency domain.

Transform: Wavelet transforms are essentially extensions of the idea of high-pass filtering. The process of applying the DWT can be represented as a bank of filters. At each level of decomposition, the signal is split into high-frequency and low-frequency components; the low-frequency components can be decomposed further until the desired resolution is reached. The conventional DWT can be applied using either a decimated or an undecimated algorithm.
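The filter-bank view can be illustrated with a single level of the Haar DWT (a minimal numpy sketch; a real implementation such as PyWavelets supports many wavelet families and boundary modes, and the function name here is illustrative):

```python
import numpy as np

def haar_dwt_1d(x):
    """One level of the Haar DWT: split a signal into a low-frequency
    (average) half and a high-frequency (difference) half."""
    x = np.asarray(x, dtype=float)
    low = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail (high-pass)
    return low, high

signal = np.array([4.0, 4.0, 8.0, 8.0])
low, high = haar_dwt_1d(signal)
# constant pairs carry no detail, so the high-pass output is zero;
# further levels would decompose `low` again, as the text describes
```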

Image source: G. Pajares, J. M. de la Cruz, A wavelet-based image fusion tutorial, Pattern Recognition 37 (2004) 1855–1872.

(a) Image at first decomposition level; (b) image at second decomposition level.

Page 5: Wavelet based image fusion

Transform

In the decimated algorithm, the signal is downsampled after each level of transformation. In the case of a two-dimensional image, downsampling is performed by keeping one out of every two rows and columns, making the transformed image one quarter of the original size and half the original resolution. The decimated algorithm can therefore be represented visually as a pyramid, where the spatial resolution becomes coarser as the image becomes smaller.

The wavelet and scaling filters are one-dimensional, necessitating a two-stage process for each level in the multiresolution analysis: the filtering and downsampling are first applied to the rows of the image and then to its columns. This produces four images at the lower resolution: one approximation image and three wavelet coefficient, or detail, images. A, HD, VD, and DD are the sub-images produced after one level of transformation.

The A sub-image is the approximation image and results from applying the scaling, or low-pass, filter to both rows and columns; a subsequent level of transformation would be applied only to this sub-image. The HD sub-image contains the horizontal details (from low-pass on rows, high-pass on columns), the VD sub-image contains the vertical details (from high-pass on rows, low-pass on columns), and the DD sub-image contains the diagonal details (from high-pass, or wavelet, filtering on both rows and columns).

The decimated algorithm is not shift-invariant, which means that it is sensitive to shifts of the input image. The decimation process also has a negative impact on the linear continuity of spatial features that do not have a horizontal or vertical orientation.

Source: G. Pajares, J. M. de la Cruz, A wavelet-based image fusion tutorial, Pattern Recognition 37 (2004) 1855–1872.
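The rows-then-columns procedure can be sketched for the Haar wavelet as follows (a minimal numpy example with illustrative function names, not the slides' implementation; the sub-image naming follows the text above):

```python
import numpy as np

def haar_step(x, axis):
    """Haar low- and high-pass filtering plus downsampling along one axis."""
    a = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar_dwt2(img):
    """One decomposition level: filter the rows first, then the columns,
    yielding A (approximation), HD, VD and DD at half the resolution."""
    lo_r, hi_r = haar_step(img, axis=1)   # low/high-pass on rows
    A, HD = haar_step(lo_r, axis=0)       # low rows -> low/high columns
    VD, DD = haar_step(hi_r, axis=0)      # high rows -> low/high columns
    return A, HD, VD, DD

img = np.arange(16.0).reshape(4, 4)
A, HD, VD, DD = haar_dwt2(img)
# each sub-image is 2x2: one quarter of the original size, as the text says
```

Because the Haar steps are orthonormal, the four sub-images together hold exactly the energy of the original image, which is one way to sanity-check such an implementation.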

Page 6: Wavelet based image fusion

Image Fusion

The wavelet transform contains the low-high bands, the high-low bands and the high-high bands of the image at different scales, plus the low-low band of the image at the coarsest level. Except for the low-low band, which has all positive transform values, the other bands contain values that fluctuate around zero; larger transform values in these bands correspond to sharper brightness changes and thus to the salient features in the image such as edges, lines, and region boundaries.

After applying the wavelet transform, we have the approximation image at the coarsest resolution and the detail images at all resolutions for both input images. The next step is to merge the corresponding approximation or detail coefficients from Image 1 and Image 2 element-wise using a chosen fusion method. Some of the fusion methods are:

• Maximum – choose the maximum of the two elements.

• Minimum – choose the minimum of the two elements.

• Mean – take the mean of the two elements.

• Image 1 – take the element from Image 1.

• Image 2 – take the element from Image 2.

• Random – pick one of the two elements at random.
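These element-wise rules are easy to express directly on coefficient arrays (a minimal numpy sketch; the rule names are illustrative):

```python
import numpy as np

# Element-wise fusion rules for two coefficient arrays of equal shape.
RULES = {
    "max":  np.maximum,                # keep the stronger response
    "min":  np.minimum,                # keep the weaker response
    "mean": lambda a, b: (a + b) / 2,  # average the two
    "img1": lambda a, b: a,            # take Image 1's coefficient
    "img2": lambda a, b: b,            # take Image 2's coefficient
}

a = np.array([1.0, -3.0, 2.0])   # coefficients from Image 1
b = np.array([2.0, -1.0, 0.5])   # coefficients from Image 2
fused = RULES["max"](a, b)
```

For detail coefficients, a common variant selects the coefficient with the larger absolute value rather than the plain maximum, since salient features show up as large magnitudes of either sign.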

Page 7: Wavelet based image fusion

Methodology

The steps involved in fusion of images through wavelet transform are given below.

• Get the images to be fused

• Apply the wavelet transform to both images with the chosen wavelet at the desired level

• Get the approximation and detail coefficients for both images

• Merge the coefficients with the desired fusion rule

• Apply the inverse discrete wavelet transform to the merged coefficients to get the fused image
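The steps above can be assembled into a minimal end-to-end sketch. This assumes a single-level Haar transform (a simplification of the multi-level biorthogonal decomposition used in the results) with the max rule for the approximation and the min rule for the details; all names are illustrative.

```python
import numpy as np

def haar_step(x, axis):
    """Haar low/high-pass filtering plus downsampling along one axis."""
    a = np.take(x, np.arange(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, np.arange(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def haar_istep(lo, hi, axis):
    """Inverse of haar_step: upsample and recombine the two sub-bands."""
    shape = list(lo.shape)
    shape[axis] *= 2
    out = np.empty(shape)
    even = [slice(None)] * lo.ndim
    odd = [slice(None)] * lo.ndim
    even[axis] = slice(0, None, 2)
    odd[axis] = slice(1, None, 2)
    out[tuple(even)] = (lo + hi) / np.sqrt(2)
    out[tuple(odd)] = (lo - hi) / np.sqrt(2)
    return out

def dwt2(img):
    lo, hi = haar_step(img, axis=1)   # rows first
    A, HD = haar_step(lo, axis=0)     # then columns
    VD, DD = haar_step(hi, axis=0)
    return A, HD, VD, DD

def idwt2(A, HD, VD, DD):
    lo = haar_istep(A, HD, axis=0)
    hi = haar_istep(VD, DD, axis=0)
    return haar_istep(lo, hi, axis=1)

def fuse(img1, img2):
    """Wavelet fusion: max rule for the approximation, min for details."""
    c1, c2 = dwt2(img1), dwt2(img2)
    A = np.maximum(c1[0], c2[0])
    details = [np.minimum(d1, d2) for d1, d2 in zip(c1[1:], c2[1:])]
    return idwt2(A, *details)

img = np.arange(16.0).reshape(4, 4)
same = fuse(img, img)
# fusing an image with itself must reproduce it exactly, which checks
# that the forward and inverse transforms are consistent
```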

Image source: H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235–245.

Page 8: Wavelet based image fusion

GUI

Page 9: Wavelet based image fusion

Results

Image 1 Image 2 Fused Image

Image 1 is blurred on the left and Image 2 in the middle. The fused image is of better quality than both input images. The images are decomposed using biorthogonal wavelets at four levels. The approximation images are merged by taking the maximum and the detail coefficients by taking the minimum.

Page 10: Wavelet based image fusion

References

G. Pajares, J. M. de la Cruz, A wavelet-based image fusion tutorial, Pattern Recognition 37 (2004) 1855–1872.

V.P.S. Naidu, J.R. Raol, Pixel-level image fusion using wavelets and principal component analysis, Defence Science Journal 58 (3) (2008) 338–352.

E.J. Stollnitz, T.D. DeRose, D.H. Salesin, Wavelets for Computer Graphics: Theory and Applications, Morgan Kaufmann Publishers Inc.

H. Li, B.S. Manjunath, S.K. Mitra, Multisensor image fusion using the wavelet transform, Graphical Models and Image Processing 57 (3) (1995) 235–245.