Loza, A. T., Bull, D. R., Canagarajah, C. N., & Achim, A. M. (2007). Statistical image fusion with generalised Gaussian and alpha-stable distributions. In 15th International Conference on Digital Signal Processing (DSP 2007), Wales, UK (pp. 268-271). Institute of Electrical and Electronics Engineers (IEEE). https://doi.org/10.1109/ICDSP.2007.4288570

Peer reviewed version. Link to publication record in Explore Bristol Research (University of Bristol). This document is made available in accordance with publisher policies. Please cite only the published version using the reference above. Full terms of use are available at http://www.bristol.ac.uk/red/research-policy/pure/user-guides/ebr-terms/




STATISTICAL IMAGE FUSION WITH GENERALISED GAUSSIAN AND ALPHA-STABLE DISTRIBUTIONS

Artur Łoza, Alin Achim, David Bull and Nishan Canagarajah

Department of Electrical and Electronic Engineering, University of Bristol, UK

Email: [artur.loza, alin.achim, dave.bull, nishan.canagarajah]@bristol.ac.uk

ABSTRACT

This paper describes a new methodology for multimodal image fusion based on non-Gaussian statistical modelling of wavelet coefficients of the input images. The use of families of generalised Gaussian and alpha-stable distributions for modelling image wavelet coefficients is investigated and methods for estimating distribution parameters are proposed. Improved techniques for image fusion are developed by incorporating these models into the weighted average image fusion algorithm. The superior performance of the proposed methods is demonstrated using multimodal image datasets.

Index Terms— Image fusion, statistical modelling, multimodal

1. INTRODUCTION

The purpose of image fusion is to combine information from multiple images of the same scene into a single image that ideally contains all the important features from each of the original images. The resulting fused image will thus be more suitable for human and machine perception or for further image-processing tasks. Many image fusion schemes have been developed in the past. As is the case with many recently proposed techniques, our developments are made using the wavelet transform, which constitutes a powerful framework for implementing image fusion algorithms [1, 2]. Specifically, methods based on multiscale decompositions consist of three main steps: first, the set of images to be fused is analysed by means of the wavelet transform; then the resulting wavelet coefficients are fused through an appropriately designed rule; and finally, the fused image is synthesised from the processed wavelet coefficients through the inverse wavelet transform. This process is depicted in Fig. 1.

The majority of early image fusion approaches, although effective, have not been based on strict mathematical foundations. Only in recent years have more rigorous approaches been proposed, including those based on estimation theory [3]. A Bayesian fusion method based on a Gaussian image model was proposed in [4]. Recent work on non-Gaussian modelling for image fusion was presented in [2], where the prototype image fusion method [5], which combines images based on "match and saliency" measures (correlation and variance), was modified and applied to images modelled by symmetric α-stable distributions. In this paper we extend the work presented in [2]. We discuss different possibilities for reformulating and modifying the original Weighted Average (WA) method [5] in order to cope with more appropriate statistical model assumptions

The authors are grateful for the financial support offered to project 2.1 'Image and video sensor fusion' by the UK MOD Data and Information Fusion Defence Technology Centre.


Fig. 1. Pixel-based image fusion scheme using the DT-CWT.

like the generalised Gaussian and the alpha-stable. We use a relatively novel framework, that of Mellin transform theory, in order to estimate all statistical parameters involved in the fusion algorithms.

The paper is organised as follows: In Section 2, we provide the necessary preliminaries on generalised Gaussian and alpha-stable processes and present results on the modelling of image subband coefficients. Section 3 describes the modified WA algorithms for wavelet-domain image fusion, which are based on heavy-tailed models. Section 4 compares the performance of the new algorithms with that of other conventional fusion techniques applied to sequences of multimodal test images. Finally, in Section 5 we conclude the paper with a short summary and suggest areas of future research.

2. STATISTICAL MODELLING OF MULTIMODAL IMAGE WAVELET COEFFICIENTS

2.1. The Generalized Gaussian Distribution

The generalised Gaussian density function proposed in [6] is given by

f_{s,p}(x) = \frac{1}{Z(s,p)} \, e^{-|x/s|^p}   (1)

where Z(s,p) = 2\Gamma(1/p)\,s/p is a normalisation constant and \Gamma(t) = \int_0^\infty e^{-u} u^{t-1}\,du is the well-known Gamma function.

In (1), the scale parameter s models the width of the probability density function (pdf) peak (standard deviation), while the shape parameter p is inversely proportional to the decreasing rate of the peak. The Generalised Gaussian Distribution (GGD) model includes the Gaussian and Laplacian pdfs as special cases, corresponding to p = 2 and p = 1, respectively.

The advantage of GGD models consists in the availability of analytical expressions for their pdfs as well as of simple and efficient parameter estimators. On the other hand, Symmetric Alpha-Stable (SαS) distributions are much more flexible and rich; for example, the wider alpha-stable family is also able to capture skewed characteristics.
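As an illustration of the density in (1) and its special cases, a minimal NumPy/SciPy sketch (the function name is ours):

```python
import numpy as np
from scipy.special import gamma

def ggd_pdf(x, s, p):
    """Generalised Gaussian density of eq. (1):
    f(x) = exp(-|x/s|**p) / Z(s, p), with Z(s, p) = 2*Gamma(1/p)*s/p."""
    Z = 2.0 * gamma(1.0 / p) * s / p
    return np.exp(-np.abs(x / s) ** p) / Z
```

Setting p = 2 recovers a Gaussian with standard deviation s/sqrt(2), and p = 1 a Laplacian with scale s, as stated above.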


1-4244-0882-2/07/$20.00 ©2007 IEEE



Fig. 2. (a) Normal probability plot for a hyperspectral image (band 1); (b)-(c) logarithmic probability density plots of the modelling results for hyperspectral image wavelet coefficients (1st level, 2nd orientation): (b) band 1 (SαS: α = 1.3572, γ = 9.5536; GG: s = 7.8149, p = 0.79128) and (c) band 2 (SαS: α = 1.3602, γ = 8.9976; GG: s = 6.9525, p = 0.77123); the SαS fit, GGD fit and empirical data are overlaid in each plot.

2.2. Alpha-Stable Distributions

The SαS distribution is best defined by its characteristic function

\varphi(\omega) = \exp(j\delta\omega - \gamma|\omega|^\alpha),   (2)

where α is the characteristic exponent, taking values 0 < α ≤ 2, δ (−∞ < δ < ∞) is the location parameter, and γ (γ > 0) is the dispersion of the distribution. For values of α in the interval (1, 2], the location parameter δ corresponds to the mean of the SαS distribution. The dispersion parameter γ determines the spread of the distribution around its location parameter δ, similarly to the variance of the Gaussian distribution. The smaller the characteristic exponent α, the heavier the tails of the SαS density. Gaussian processes are stable processes with α = 2, while Cauchy processes result when α = 1. In fact, no closed-form expressions for the general SαS pdfs are known except for these two special cases.
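The characteristic function (2) is straightforward to evaluate numerically; the sketch below (names ours) also confirms the two special cases just mentioned, since α = 2 gives the Gaussian characteristic function exp(−γω²) and α = 1, δ = 0 gives the Cauchy one exp(−γ|ω|):

```python
import numpy as np

def sas_cf(w, alpha, disp, delta=0.0):
    """SaS characteristic function of eq. (2):
    phi(w) = exp(j*delta*w - disp*|w|**alpha)."""
    return np.exp(1j * delta * w - disp * np.abs(w) ** alpha)
```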

One consequence of heavy tails is that only moments of order less than α exist for the non-Gaussian alpha-stable family members, i.e., E|X|^p < ∞ only for p < α. However, Fractional Lower Order Moments (FLOM) of SαS random variables can be defined and are given by [7]:

E|X|^p = C(p,\alpha)\,\gamma^{p/\alpha} \quad \text{for } -1 < p < \alpha   (3)

where C(p,\alpha) = \dfrac{2^{p+1}\,\Gamma\left(\frac{p+1}{2}\right)\Gamma\left(-\frac{p}{\alpha}\right)}{\alpha\sqrt{\pi}\,\Gamma\left(-\frac{p}{2}\right)}.

2.3. Modelling Results of Wavelet Subband Coefficients

The dataset used in this study, Aviris, contains images selected from the public AVIRIS 92AV3C hyperspectral database [8]. In this work, we analyse pairs of manually selected bands. We proceed in two steps: first, we assess whether the data deviate from the normal distribution and whether they have heavy tails. To determine this, we make use of normal probability plots. An example of such a plot is shown in Fig. 2(a), demonstrating the heavy-tailed characteristic of an image. Then, we check whether the data are in the stable or generalised Gaussian domains of attraction by estimating the characteristic exponent α and shape parameter p, respectively, directly from the data, with the use of Maximum Likelihood (ML) methods. On analysing the examples shown in Fig. 2(b)-(c), one can observe that the SαS distribution is superior to the generalised Gaussian distribution because it provides a better fit to both the mode and the tails of the empirical density of the actual data. Nevertheless, the figure demonstrates that the coefficients of different subbands and decomposition levels exhibit various degrees of non-Gaussianity. Our modelling results, briefly shown in this section, clearly point to the need for the design of fusion rules that take into consideration the non-Gaussian, heavy-tailed character of the data in order to achieve close-to-optimal image fusion performance.

3. MODEL-BASED WEIGHTED AVERAGE SCHEMES

In this section we show how the WA method can be reformulated and modified in order to cope with more appropriate statistical model assumptions like the generalised Gaussian and the alpha-stable. For completeness of the presentation, we first recall the original method (based on [5] and [2]):

1. Decompose each input image into subbands.

2. For each highpass subband pair X, Y:

   a) Compute saliency measures σ_x and σ_y.

   b) Compute the matching coefficient

      M = \frac{2\sigma_{xy}}{\sigma_x^2 + \sigma_y^2},   (4)

      where σ_{xy} stands for the covariance between X and Y.

   c) Calculate the fused coefficients using the formula Z = W_x X + W_y Y as follows:

      • if M > T (T = 0.75), then W_min = 0.5(1 − (1 − M)/(1 − T)) and W_max = 1 − W_min (weighted average mode, including mean mode for M = 1);
      • else W_min = 0 and W_max = 1 (selection mode);
      • if σ_x > σ_y, then W_x = W_max and W_y = W_min; else W_x = W_min and W_y = W_max.

3. Average the coefficients in the lowpass residual.

4. Reconstruct the fused image from the processed subbands and the lowpass residual.

Essentially, the algorithm shown above considers two different modes for fusion: selection and averaging. The overall fusion rule is determined by two measures: a match measure that determines which of the two modes is to be employed, and a saliency measure that determines which wavelet coefficient in the pair will be copied into the fused subband (selection mode), or which coefficient will be assigned the larger weight (weighted average mode). In the following we show how the saliency measures can be estimated adaptively, in the context of Mellin transform theory, for both GG and SαS distributions.
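Steps 2(a)-(c) of the WA rule can be sketched for a single subband pair as follows. This is a global-statistics simplification: the paper computes the measures in local 3×3 windows, and Section 3.1 replaces the variance and covariance by model-based quantities. The function name is ours:

```python
import numpy as np

def wa_fuse(X, Y, T=0.75):
    """Weighted-average fusion rule of Burt & Kolczynski [5] for one
    highpass subband pair (global saliency/match version)."""
    sx2, sy2 = X.var(), Y.var()                      # saliency: variances
    sxy = ((X - X.mean()) * (Y - Y.mean())).mean()   # covariance
    M = 2.0 * sxy / (sx2 + sy2)                      # match measure, eq. (4)
    if M > T:                                        # weighted average mode
        w_min = 0.5 * (1.0 - (1.0 - M) / (1.0 - T))
    else:                                            # selection mode
        w_min = 0.0
    w_max = 1.0 - w_min
    wx, wy = (w_max, w_min) if sx2 > sy2 else (w_min, w_max)
    return wx * X + wy * Y
```

When the subbands match well (M > T) the result blends both coefficients; otherwise the coefficient from the more salient subband is selected outright.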

Proc. of the 2007 15th Intl. Conf. on Digital Signal Processing (DSP 2007)



3.1. Saliency Estimation Using Mellin Transform

Following the arguments in [9], the use of the Mellin transform has recently been proposed as a powerful tool for deriving novel parameter estimation methods based on log-cumulants [10]. Let f be a function defined over ℝ⁺. The Mellin transform of f is defined as

\Phi(z) = \mathcal{M}[f(u)](z) = \int_0^{+\infty} u^{z-1} f(u)\,du   (5)

where z is the complex variable of the transform. By analogy with the way in which common statistics are deduced based on the Fourier transform, the following rth-order second-kind cumulants can be defined based on the Mellin transform [10]:

k_r = \left.\frac{d^r \Psi(z)}{dz^r}\right|_{z=1}   (6)

where Ψ(z) = log(Φ(z)). Following the analogy further, the method of log-moments can be applied in order to estimate the two parameters of the pdf f (GGD or SαS). To be able to do this, the first two second-kind cumulants are required. These can be estimated empirically from N samples x_i as follows:

\hat{k}_1 = \frac{1}{N}\sum_{i=1}^{N} \log(x_i) \quad \text{and} \quad \hat{k}_2 = \frac{1}{N}\sum_{i=1}^{N} \left(\log(x_i) - \hat{k}_1\right)^2   (7)

3.1.1. Log-moment Estimation of the GG Model

In this section we show how the saliency and match measures (4) can be computed for samples coming from GG distributions. Specifically, the variance terms appearing in the denominator of (4) need to be estimated differently, depending on which member of the GG family is considered. By plugging the expression of the GG pdf given by (1) into (5), and after some straightforward manipulations, one gets

\Psi(z) = \log\Phi(z) = z\log s + \log\Gamma(z/p) - \log p   (8)

which is the second-kind second characteristic function of a GG density. Calculating the first- and second-order second-kind cumulants (6) gives

k_1 = \left.\frac{d\Psi(z)}{dz}\right|_{z=1} = \log s + \frac{\psi_0(1/p)}{p}   (9)

and

k_2 = F(p) = \left.\frac{d^2\Psi(z)}{dz^2}\right|_{z=1} = \frac{\psi_1(1/p)}{p^2}   (10)

respectively, where \psi_n(t) = \frac{d^{n+1}}{dt^{n+1}}\log\Gamma(t) is the polygamma function. The shape parameter p is estimated by computing the inverse of the function F. Then, p can be substituted back into the equation for k_1 in order to find s (and consequently the saliency measure), or an ML estimate of s can be computed from the data x as

s = \left(\frac{p}{N}\sum_{n=1}^{N} |x_n|^p\right)^{1/p} = \sigma\left(\frac{\Gamma(1/p)}{\Gamma(3/p)}\right)^{1/2}   (11)

3.1.2. Log-moment Estimation of the SαS Model

For the case of SαS, we obtain the following results for the second-kind cumulants of the SαS model (see [2] for detailed derivations):

k_1 = \frac{\alpha-1}{\alpha}\,\psi(1) + \frac{\log\gamma}{\alpha}   (12)

and

k_2 = \frac{\pi^2}{12}\cdot\frac{\alpha^2+2}{\alpha^2}   (13)

The estimation process now simply involves solving (13) for α and substituting back into (12) to find the value of the dispersion parameter γ (the saliency measure). We should note that this method of estimating SαS parameters was first proposed in [11]. Here, we have used an alternative derivation of the method (based on Mellin transform properties), originally proposed in [2].
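This step is closed-form once the empirical log-cumulants are available: (13) gives α = sqrt((π²/6)/(k2 − π²/12)), and (12) gives log γ = α·k1 − (α − 1)ψ(1), with ψ(1) = −0.5772… (Euler's constant). A sketch with names of our choosing, sanity-checked on standard Cauchy samples (α = 1, γ = 1):

```python
import numpy as np
from scipy.stats import cauchy

EULER = 0.5772156649015329            # psi(1) = -EULER

def sas_log_moment_fit(x):
    """Estimate (alpha, gamma) of a SaS law from the empirical
    log-cumulants via eqs. (12)-(13)."""
    lx = np.log(np.abs(np.asarray(x, dtype=float)))
    lx = lx[np.isfinite(lx)]
    k1, k2 = lx.mean(), lx.var()
    alpha = np.sqrt((np.pi ** 2 / 6.0) / (k2 - np.pi ** 2 / 12.0))
    disp = np.exp(alpha * k1 + (alpha - 1.0) * EULER)
    return alpha, disp

# sanity check: standard Cauchy is SaS with alpha = 1, gamma = 1
x = cauchy.rvs(size=200_000, random_state=np.random.default_rng(1))
alpha_hat, gamma_hat = sas_log_moment_fit(x)
```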

Since in the case of SαS distributions classical second-order moments and correlation cannot be used, new match and saliency measures need to be defined. In [2] we proposed the use of a symmetrised and normalised version of the covariation, which enables us to define a new match measure for SαS random vectors. The symmetric covariation coefficient that we used for this purpose can be defined simply as

\mathrm{Corr}_\alpha(X,Y) = \frac{[X,Y]_\alpha\,[Y,X]_\alpha}{[X,X]_\alpha\,[Y,Y]_\alpha}   (14)

where the covariation of X with Y is defined in terms of the previously introduced FLOM by [12]:

[X,Y]_\alpha = \frac{E(X Y^{<p-1>})}{E(|Y|^p)}\,\gamma_Y   (15)

where x^{<p>} = |x|^p\,\mathrm{sign}(x). It can be shown that the symmetric covariation coefficient is bounded, taking values between −1 and 1. In our implementation, the matching and saliency measures are computed locally, in a square-shaped neighbourhood of size 3×3 around each reference coefficient.
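Note that in (14) the dispersions [X,X]_α = γ_X and [Y,Y]_α = γ_Y cancel against the γ factors of (15), leaving a product of two FLOM ratios that can be estimated directly from samples. A global (non-windowed) sketch with p = 1, a common choice when α > 1 (function names ours):

```python
import numpy as np

def signed_power(x, q):
    """x^{<q>} = |x|**q * sign(x)."""
    return np.abs(x) ** q * np.sign(x)

def sym_covariation_coeff(x, y, p=1.0):
    """Symmetric covariation coefficient, eqs. (14)-(15); the dispersions
    cancel, so only FLOM ratios remain. Requires 0 < p < alpha."""
    a = np.mean(x * signed_power(y, p - 1.0)) / np.mean(np.abs(y) ** p)
    b = np.mean(y * signed_power(x, p - 1.0)) / np.mean(np.abs(x) ** p)
    return a * b
```

By construction the coefficient is scale-invariant (e.g. it equals 1 for y = 2x as well as for y = x) and vanishes when the sign patterns of the two subbands are unrelated.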

4. RESULTS

In this section, we show results obtained using the model-based approach to image fusion described in this paper. As an example, we chose to illustrate the fusion of images from the AVIRIS dataset (see Section 2.3). Apart from the original WA fusion method [5] and the commonly used choose-max (MAX) scheme [1], we have included the two algorithms described in this paper, i.e. the weighted average schemes based on SαS and on GGD modelling of wavelet coefficients, and two particular cases of these, corresponding to the Cauchy (CAU) and Laplacian (LAP) densities, respectively. Two computational metrics were used to evaluate the quality of fusion: a quality index measuring similarity (in terms of luminance, contrast and structure) between the input images and the fused image [13] (Q1); and the Petrovic metric [14] (Q2), measuring the amount of edge information transferred from the source images to the fused image.

The metric values, averaged over 10 pairs of dataset images, are presented in Table 1. The results obtained show that the two metrics rank GGD and LAP as the best and MAX as the worst fusion methods. The rankings of the remaining methods vary; however, the differences between the metric values are small.

The experimental results are shown in Fig. 3. Although qualitative evaluation by visual inspection is highly subjective, it seems that the best results are achieved by the GGD-based technique. In general, both modelling approaches to fusion resulted in more consistent fused images compared to the slightly 'rugged' surfaces obtained with MAX. It appears that our systems perform like feature detectors, retaining the features that are clearly distinguishable in each of the input images.

Table 1. Quality rankings of fusion methods in decreasing order

Q1:  GGD    LAP    CAU    SAS    WA     MAX
     0.928  0.928  0.928  0.927  0.926  0.920

Q2:  GGD    LAP    WA     SAS    CAU    MAX
     0.798  0.798  0.795  0.792  0.791  0.786




Fig. 3. Examples of the original (BAND 1, BAND 2) and fused images (MAX, WA, LAP, GGD, CAU, SAS)

5. CONCLUSIONS AND FUTURE WORK

In this paper, we proposed new statistical model-based image fusionmethods by reformulating the well-known WA scheme in order toaccount for the heavy-tailed nature of data.

We have shown through modelling experiments that the images used in this work and their corresponding wavelet coefficients have highly non-Gaussian characteristics that can be accurately described by GGD or SαS statistical models.

In the multiscale domain, we employed the local dispersion of wavelet coefficients as the saliency measure, while symmetric covariation coefficients were computed in order to account for the similarities between corresponding patterns in the pair of subbands to be fused. A similar approach has been applied to GGD parameter estimation, resulting in a novel estimator based on the variance of the logarithmically scaled random variable.

The fusion results show that in general the best performance isachieved by the GGD-based fusion methods followed by the SαS-based techniques.

An interesting direction in which this work could be extended is the development of algorithms that will additionally capture the inherent dependencies of wavelet coefficients across scales. This could be achieved by the use of multivariate statistical models. Research in this direction is under way and will be presented in a future communication.

6. REFERENCES

[1] S. G. Nikolov, P. Hill, D. Bull, and N. Canagarajah, "Wavelets for image fusion," in Wavelets in Signal and Image Analysis, A. Petrosian and F. Meyer, Eds., pp. 213-244. Kluwer Academic Publishers, 2001.

[2] A. M. Achim, C. N. Canagarajah, and D. R. Bull, "Complex wavelet domain image fusion based on fractional lower order moments," in Proc. of the 8th International Conference on Information Fusion, Philadelphia, PA, USA, 25-29 July 2005.

[3] R. S. Blum, "On multisensor image fusion performance limits from an estimation theory perspective," Information Fusion, vol. 7, no. 3, pp. 250-263, Sep. 2006.

[4] R. Sharma and M. Pavel, "Adaptive and statistical image fusion," Society for Information Display Digest, vol. 17, no. 5, pp. 969-972, May 1996.

[5] P. Burt and R. Kolczynski, "Enhanced image capture through fusion," in Proc. 4th International Conference on Computer Vision, Berlin, 1993, pp. 173-182.

[6] S. G. Mallat, "A theory for multiresolution signal decomposition: the wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 674-692, July 1989.

[7] C. L. Nikias and M. Shao, Signal Processing with Alpha-Stable Distributions and Applications, John Wiley and Sons, 1995.

[8] "Airborne visible/infrared imaging spectrometer," available online at http://aviris.jpl.nasa.gov/.

[9] B. Epstein, "Some applications of the Mellin transform in statistics," The Annals of Mathematical Statistics, vol. 19, pp. 370-379, Sep. 1948.

[10] J. M. Nicolas, "Introduction aux statistiques de deuxieme espece: applications des log-moments et des log-cumulants a l'analyse des lois d'images radar" [Introduction to second-kind statistics: application of log-moments and log-cumulants to the analysis of radar image distributions], Traitement du Signal, vol. 19, pp. 139-167, 2002.

[11] X. Ma and C. L. Nikias, "Parameter estimation and blind channel identification in impulsive signal environment," IEEE Trans. Signal Processing, vol. 43, no. 12, pp. 2884-2897, Dec. 1995.

[12] G. Samorodnitsky and M. S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman and Hall, New York, 1994.

[13] N. Cvejic, A. Łoza, C. N. Canagarajah, and D. R. Bull, "A similarity metric for assessment of image fusion algorithms," International Journal of Signal Processing, vol. 2, no. 2, pp. 178-182, 2005.

[14] V. S. Petrovic and C. S. Xydeas, "Sensor noise effects on signal-level image fusion performance," Information Fusion, vol. 4, pp. 167-183, 2003.
