Accepted Manuscript

Fusion of Hyperspectral and Panchromatic Images Using an Average Filter and a Guided Filter

Jiahui Qu, Yunsong Li, Wenqian Dong

PII: S1047-3203(18)30011-7
DOI: https://doi.org/10.1016/j.jvcir.2018.01.006
Reference: YJVCI 2118
To appear in: J. Vis. Commun. Image R.
Received Date: 23 February 2017
Revised Date: 20 October 2017
Accepted Date: 11 January 2018

Please cite this article as: J. Qu, Y. Li, W. Dong, Fusion of Hyperspectral and Panchromatic Images Using an Average Filter and a Guided Filter, J. Vis. Commun. Image R. (2018), doi: https://doi.org/10.1016/j.jvcir.2018.01.006

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


Fusion of Hyperspectral and Panchromatic Images

Using an Average Filter and a Guided filter

Jiahui Qu a,b, Yunsong Li a,b,*, Wenqian Dong a

a Xidian University, State Key Lab. of Integrated Service Networks, School of Telecommunications Engineering, No.2, South Taibai Street, Hi-Tech Development Zone, Xi'an, China, 710071
b Xidian University, Joint Laboratory of High Speed Multi-source Image Coding and Processing, School of Telecommunications Engineering, No.2, South Taibai Street, Hi-Tech Development Zone, Xi'an, China, 710071

Abstract. The fusion of hyperspectral and panchromatic images aims to generate a fused image with both high spatial and high spectral resolution. This paper proposes a novel hyperspectral pansharpening method using an average filter and a guided filter. Building on traditional component substitution methods, we first propose a new and simple method that extracts the spatial information of the HS image by average filtering. Then, to reduce the significant spectral distortion, a guided filter is used to obtain more detailed spatial information from a sharpened version of the PAN image. An appropriate injection gains matrix is generated by selecting the optimal value of a tradeoff coefficient, and the spatial detail is finally injected into each band of the interpolated HS image to produce the fused image. Experimental results demonstrate that the proposed method obtains more spatial information and preserves more spectral information in both subjective and objective evaluations.

Keywords: Hyperspectral (HS) image, panchromatic (PAN) image, guided filter, average filter, component substitution (CS).

* Corresponding Author, E-mail: [email protected]

1 Introduction

Image fusion is a process that synthesizes and extracts the information of two or more different images to obtain improved information through a certain algorithm [1]. Remote sensing image fusion is an important branch of image fusion; it aims to combine the information of images with different spectral and spatial resolutions to obtain more useful information [2]. Low spatial resolution hyperspectral (HS) images and high spatial resolution panchromatic (PAN) images are provided by various sensors. The HS image has low spatial resolution but high spectral resolution, while the PAN image has high spatial resolution but low spectral resolution. Hence, the fusion of hyperspectral and panchromatic images is a meaningful technology, because it can produce a fused image with the high spectral resolution of the former and the high spatial resolution of the latter.

A large number of methods have been proposed for the fusion of hyperspectral and panchromatic images [3]. They can be broadly separated into five classes: component substitution (CS) algorithms, multiresolution analysis (MRA) algorithms, matrix factorization algorithms, Bayesian algorithms, and hybrid algorithms [4]. CS and MRA algorithms are traditional fusion methods. CS approaches convert the HS image into another data space in which spectral and spatial information are separated; the spatial component is then replaced by the PAN image [5]. The CS family contains algorithms such as the generalized intensity-hue-saturation (GIHS) [6], principal component analysis (PCA) [7], [8], Gram-Schmidt (GS) [9], and GS Adaptive (GSA) [10] methods. These CS methods have three main advantages: (1) high fidelity of the spatial details [11], (2) fast and simple implementation [6], and (3) good robustness [11]. However, the CS techniques also have a serious shortcoming: they can generate significant spectral distortion [12].


MRA approaches inject the spatial information of the PAN image into the HS image. They include algorithms such as the smoothing filter-based intensity modulation (SFIM) [13], the MTF-Generalized Laplacian Pyramid (MTF-GLP) [14], the MTF-GLP with High Pass Modulation (MTF-GLP-HPM) [15], and the decimated wavelet transform (DWT) [16] methods. The advantages of the MRA methods are spectral consistency and temporal coherence [17]; their main shortcoming is complicated implementation [3].

Matrix factorization, Bayesian, and hybrid approaches have been proposed more recently. Matrix factorization and Bayesian algorithms are model-based methods. They perform well but incur high computational cost; examples include the coupled nonnegative matrix factorization (CNMF) [18], the Bayesian HySure [19], the Bayesian Sparsity promoted Gaussian prior (Bayesian Sparse) [20], [21], and the Bayesian Naive Gaussian prior (Bayesian Naive) [22] methods. Hybrid methods integrate concepts from different methods into one, such as the guided filter PCA (GFPCA) [23] method. The GFPCA method preserves the spectral information well; however, it produces considerable blurring because the spatial information is not sufficiently integrated into the fused product.

This paper proposes a novel hyperspectral image fusion method with an average filter and a guided filter (AFGF) to address the problems mentioned above. Based on the component substitution model, we first propose a simple and effective approach that retrieves the spatial information of the HS image by average filtering. Then, to overcome the serious spectral distortion of component substitution methods, a guided filter is used to extract more detailed spatial information from a sharpened version of the PAN image. Finally, an appropriate injection gains matrix is presented to reduce the spectral and spatial distortion. Experimental results illustrate that using the enhanced PAN image achieves better results, and that the proposed fusion method performs best in terms of both subjective and objective evaluations.

2 Related Work

2.1 Component Substitution Pansharpening Methods

Component substitution (CS) is a popular and classical pansharpening approach. The CS technique projects the HS image into another space to separate spectral and spatial information [12]. Then the transformed spatial information is substituted by the histogram-matched PAN image. Finally, the fused image is obtained by applying the inverse spectral transformation [3]. Many CS pansharpening methods have been extended from multispectral images to hyperspectral images.

A general formulation of the CS method is given by [24]

HS_k^F = \widetilde{HS}_k + g_k (P - I)    (1)

where HS_k^F and \widetilde{HS}_k are the kth band of the final fused image and of the interpolated HS image, respectively, P is the PAN image, g_k is the injection gain, and I represents the spatial information of the HS image, defined as

I = \sum_{i=1}^{m} \omega_i \widetilde{HS}_i    (2)

where m is the number of HS image bands and \omega_i is the weight that measures the proportional spectral overlap between each HS band and the PAN image [12]. The weighting factor \omega_i is generally calculated as 1/m.
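The general formulation above can be sketched in NumPy as follows. This is an illustrative sketch only: the array shapes, the synthetic data, and the GIHS-style unit gains are our assumptions, not values from the paper.

```python
import numpy as np

def cs_fuse(hs_up, pan, g):
    """General CS fusion, Eq. (1): HS_k^F = HS_k + g_k * (P - I).

    hs_up : (m, H, W) interpolated HS image
    pan   : (H, W) histogram-matched PAN image
    g     : (m,) per-band injection gains
    """
    m = hs_up.shape[0]
    w = np.full(m, 1.0 / m)              # equal weights, Eq. (2): w_i = 1/m
    I = np.tensordot(w, hs_up, axes=1)   # intensity I = sum_i w_i * HS_i
    detail = pan - I                     # spatial detail to inject
    return hs_up + g[:, None, None] * detail

# Tiny synthetic example (values are illustrative only).
rng = np.random.default_rng(0)
hs = rng.random((4, 8, 8))
pan = rng.random((8, 8))
fused = cs_fuse(hs, pan, g=np.ones(4))  # GIHS-style gains g_k = 1
print(fused.shape)  # (4, 8, 8)
```

With g set to zero the input HS image is returned unchanged, which makes the role of the injected detail term explicit.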

In CS methods, the weight \omega_i and the injection gain g_k of the generalized intensity-hue-saturation (GIHS) [6] approach are defined as \omega_i = 1/m and g_k = 1. In [9], the weight \omega_i and the injection gain g_k of the Gram-Schmidt (GS) orthogonalization method are defined as \omega_i = 1/m and g_k = cov(\widetilde{HS}_k, I) / var(I), where cov(A, B) denotes the covariance operation and var(C) denotes the variance operation. In [10], an enhanced GS Adaptive (GSA) method is proposed, in which the weight \omega_i is obtained by minimizing the mean-square error (MSE) between the estimated spatial information I and the decimated PAN image. In addition, in [26], an adaptive IHS method is introduced, in which the weighting factor \omega_i is calculated by solving the following optimization problem

\min_{\omega_1, \ldots, \omega_n} \| P - \sum_{i=1}^{m} \omega_i \widetilde{HS}_i \|^2  s.t. \omega_1 \geq 0, \ldots, \omega_n \geq 0    (3)

In [26], the injection gain g_k is a weighting matrix produced from the edges of the PAN image and is given by

g_k = \exp( -\lambda / ( |\nabla P|^4 + \eta ) )    (4)

where \lambda and \eta are the tradeoff parameters, and \nabla P is the gradient of the PAN image.

2.2 Guided Filter

A guided filter was proposed by He et al. [27]. The guided filter is an edge-preserving smoothing filter derived from a local linear model. It has numerous image processing applications, such as detail enhancement, HDR compression, image matting, and dehazing. It can transfer structures from the guidance image G to the output image O, while preserving gradients and avoiding reversal artifacts. These advantages make it well suited to image fusion.

In [27], the guided filter assumes that the output image O is a linear transformation of the guidance image G in a local window w_k centered at pixel k, i.e.,

O_i = a_k G_i + b_k,  \forall i \in w_k    (5)

where w_k is a window of size (2r+1) \times (2r+1) and the coefficients a_k and b_k are assumed to be constant in w_k. (a_k, b_k) can be estimated by minimizing the squared difference between the output image O and the input image I in the window w_k:

E(a_k, b_k) = \sum_{i \in w_k} \left( (a_k G_i + b_k - I_i)^2 + \epsilon a_k^2 \right)    (6)


where \epsilon is a regularization parameter. The coefficients a_k and b_k can be estimated directly by linear ridge regression [28]

a_k = \frac{ \frac{1}{|w|} \sum_{i \in w_k} G_i I_i - \mu_k \bar{I}_k }{ \sigma_k^2 + \epsilon }    (7)

b_k = \bar{I}_k - a_k \mu_k    (8)

where |w| is the number of pixels in w_k, \mu_k and \sigma_k^2 are the mean and variance of G in w_k, and \bar{I}_k is the mean of I in w_k. The value of the output pixel O_i changes when it is computed in different windows, because several local windows centered at different pixels k contain pixel i. The coefficients a_k and b_k are therefore averaged, and the final guided filter output is computed as

O_i = \bar{a}_i G_i + \bar{b}_i    (9)

where \bar{a}_i = \frac{1}{|w|} \sum_{k \in w_i} a_k and \bar{b}_i = \frac{1}{|w|} \sum_{k \in w_i} b_k. In this paper, we use the following formula to represent the guided filter operation

O = f(I, G)    (10)

where f represents the guided filter function, G is the guidance image, I is the input image, and O is the output image.
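Under the local linear model above, Eqs. (5)-(9) reduce to a sequence of box (mean) filters. The following NumPy sketch assumes gray-scale images; the window radius r, the regularization eps, and the test data are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def box_mean(img, r):
    """Mean of img over a (2r+1)x(2r+1) window, with edge padding."""
    p = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # integral image with zero border
    n = 2 * r + 1
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / (n * n)

def guided_filter(I, G, r=4, eps=1e-6):
    """O = f(I, G), Eqs. (5)-(9): local linear model O = a*G + b."""
    mu_G = box_mean(G, r)                    # mean of guidance in each window
    mu_I = box_mean(I, r)                    # mean of input in each window
    var_G = box_mean(G * G, r) - mu_G ** 2   # variance of guidance
    cov_GI = box_mean(G * I, r) - mu_G * mu_I
    a = cov_GI / (var_G + eps)               # Eq. (7)
    b = mu_I - a * mu_G                      # Eq. (8)
    return box_mean(a, r) * G + box_mean(b, r)   # Eq. (9): averaged a, b

# Self-guided filtering of a random image acts as edge-preserving smoothing.
rng = np.random.default_rng(1)
img = rng.random((32, 32))
out = guided_filter(img, img, r=4, eps=1e-2)
print(out.shape)  # (32, 32)
```

Because the whole filter is built from window means, its cost is independent of the window radius, which is one reason the guided filter is attractive for fusion pipelines.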

3 Proposed Method

Fig. 1 shows the main processing steps of the proposed hyperspectral pansharpening technique with average and guided filters. The proposed approach consists of three main steps. First, the spatial information of the HS image is extracted by average filtering. Then, a guided filter is applied to obtain more detailed spatial information from the sharpened PAN image. Finally, the injection gains matrix is generated.

Traditional component substitution methods rely on substituting the spatial information component of the HS image with the PAN image, which generates serious spectral distortion. To overcome this problem, the proposed method uses the guided filter to transfer the spatial information from the enhanced PAN image to the HS image without causing spectral distortion. The injection gains matrix of the proposed method is also generated to restrain the spectral and spatial distortion. The proposed method is based on the component substitution approach. To obtain the spatial information I of the HS image, the common approach is to solve an optimization problem; we instead present a simple method that uses an average filter to reduce the amount of computation.

3.1 Extracting the Spatial Information of the HS Image Using an Average Filter

Following traditional component substitution approaches, the first step of the proposed method obtains the spatial information of the HS image. First, the low resolution HS image is interpolated to match the scale of the PAN image. We denote this upsampled image by \widetilde{HS}.


Fig. 1 Flowchart of the proposed approach

Then, all bands of the \widetilde{HS} image are decomposed into two-scale representations using an average filter. The base layer of the kth band can be described as

B_{HS}^k = \widetilde{HS}_k * W    (11)

where B_{HS} represents the base layer of the HS image, B_{HS}^k the base layer of the kth band, \widetilde{HS}_k the kth band of the interpolated HS image, and W the average filter. The detail layer is obtained by subtracting the base layer from the HS image:

D_{HS} = \widetilde{HS} - B_{HS}    (12)

where D_{HS} represents the detail layer of the HS image, containing the details of each band. Finally, the spatial information of the HS image is obtained by summing the detail layers of all bands:

I = \sum_{k=1}^{m} D_{HS}^k    (13)

where I is the spatial information of the HS image, m is the number of bands, and D_{HS}^k is the detail layer of the kth band.
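The two-scale decomposition of Eqs. (11)-(13) can be sketched as follows. This is an illustrative NumPy sketch; the 3 × 3 average window and the synthetic data are our assumptions.

```python
import numpy as np

def extract_spatial_info(hs_up, win=3):
    """Eqs. (11)-(13): I = sum_k (HS_k - HS_k * W) for an average filter W."""
    r = win // 2
    m, H, Wd = hs_up.shape
    I = np.zeros((H, Wd))
    for k in range(m):
        band = np.pad(hs_up[k], r, mode='edge')
        # Base layer B_k = HS_k * W (moving average, Eq. (11)).
        base = np.zeros((H, Wd))
        for dy in range(win):
            for dx in range(win):
                base += band[dy:dy + H, dx:dx + Wd]
        base /= win * win
        # Detail layer D_k (Eq. (12)), accumulated over bands (Eq. (13)).
        I += hs_up[k] - base
    return I

rng = np.random.default_rng(2)
hs = rng.random((5, 16, 16))
I = extract_spatial_info(hs, win=3)
print(I.shape)  # (16, 16)
```

For a spatially constant image every detail layer vanishes, so I is zero, which matches the intent of the decomposition: I carries only high-frequency spatial structure.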

3.2 Transferring Spatial Details Using a Guided Filter

In this step, we transfer the spatial details from the PAN image to the spatial information I of the HS image using a guided filter, in order to reduce the spectral distortion. As shown in Fig. 1, the PAN image is first sharpened to enhance its details. The detail enhancement can be accomplished using the LoG (Laplacian of Gaussian) operator as follows. First, a Gaussian filter is applied to the PAN image to reduce noise:

PAN_g = PAN * G    (14)

where G is a Gaussian filter. Then the Laplace operator is used to sharpen and enhance the image PAN_g:

PAN_s = PAN_g - \nabla^2 PAN_g    (15)

where PAN_s represents the enhanced PAN image, and \nabla^2 PAN_g represents the Laplacian of PAN_g.


The spatial detail of PAN_s is clearer than that of the original PAN image. Next, the guided filter is applied to the spatial information I of the HS image, with the sharpened image PAN_s serving as the guidance image:

I_{GF} = f(I, PAN_s)    (16)

where I_{GF} is the obtained spatial detail, I is the input image, and f represents the guided filter function.
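The sharpening step of Eqs. (14)-(15) can be sketched as below. This is a NumPy illustration only: the 5 × 5 Gaussian kernel, its σ, and the 4-neighbor Laplacian stencil are our assumptions. The result PAN_s would then serve as the guidance image in Eq. (16).

```python
import numpy as np

def conv2(img, kernel):
    """'Same'-size 2-D convolution with edge padding."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode='edge')
    H, W = img.shape
    out = np.zeros((H, W))
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * p[dy:dy + H, dx:dx + W]
    return out

def sharpen_pan(pan, sigma=1.0):
    """Eqs. (14)-(15): Gaussian smoothing, then Laplacian sharpening."""
    ax = np.arange(-2, 3)
    g1 = np.exp(-ax ** 2 / (2 * sigma ** 2))
    gauss = np.outer(g1, g1) / np.outer(g1, g1).sum()   # 5x5 Gaussian G
    pan_g = conv2(pan, gauss)                           # Eq. (14)
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
    return pan_g - conv2(pan_g, lap)                    # Eq. (15): PAN_s

rng = np.random.default_rng(3)
pan = rng.random((32, 32))
pan_s = sharpen_pan(pan)
print(pan_s.shape)  # (32, 32)
```

Subtracting the Laplacian boosts edges while the preceding Gaussian keeps the operator from amplifying pixel noise, which is the usual motivation for the LoG combination.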

3.3 Generating the Injection Gains Matrix

This step generates the injection gains matrix. The injection gains matrix g_k of the proposed method is designed to reduce the spectral and spatial distortion as follows. To preserve the spectral information, the proportions between each pair of HS image bands should remain unchanged. Thus, the injection gain g_k can be defined as

g_k = \frac{ \widetilde{HS}_k }{ (1/m) \sum_{k=1}^{m} \widetilde{HS}_k }    (17)

where m is the number of bands and \widetilde{HS}_k is the kth band of the interpolated image \widetilde{HS}. Then, we introduce a tradeoff coefficient \gamma to restrain the spatial distortion:

g_k = \gamma \frac{ \widetilde{HS}_k }{ (1/m) \sum_{k=1}^{m} \widetilde{HS}_k }    (18)

The optimal value of the tradeoff coefficient \gamma is selected by spatial data analysis, as shown in Section 4. Finally, the fused image HS^F is obtained by injecting the spatial detail I_{GF} into the interpolated image \widetilde{HS} band by band:

HS_k^F = \widetilde{HS}_k + g_k I_{GF}    (19)
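Eqs. (17)-(19) can be sketched as follows. Here the gains are interpreted as per-pixel band ratios; this interpretation, the small constant guarding against division by zero, and the synthetic data are our assumptions.

```python
import numpy as np

def inject_details(hs_up, I_gf, gamma=0.2):
    """Eqs. (17)-(19): band-ratio injection gains scaled by gamma."""
    mean_band = hs_up.mean(axis=0)            # (1/m) * sum_k HS_k
    g = gamma * hs_up / (mean_band + 1e-12)   # Eq. (18), per pixel and band
    return hs_up + g * I_gf[None, :, :]       # Eq. (19)

rng = np.random.default_rng(4)
hs = rng.random((4, 8, 8)) + 0.5
I_gf = rng.random((8, 8))
fused = inject_details(hs, I_gf, gamma=0.2)
print(fused.shape)  # (4, 8, 8)
```

Because each band's gain is proportional to that band's own intensity, brighter bands receive proportionally more detail, which keeps the band-to-band ratios, and hence the spectral signature, approximately unchanged.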

4 Experimental Results and Analysis

In this section, the experimental results of the proposed hyperspectral pansharpening technique with an average filter and a guided filter (AFGF) are compared with six state-of-the-art image fusion methods: the principal component analysis (PCA) method [7], [8], the Gram-Schmidt adaptive (GSA) method [10], the hybrid method based on a guided filter and PCA (GFPCA) [23], the coupled nonnegative matrix factorization (CNMF) method [18], the MTF-GLP with High Pass Modulation (MTF-GLP-HPM) method [15], and the Bayesian Sparsity promoted Gaussian prior (Bayesian Sparse) method [20], [21]. Several quality measures are used to assess the performance of the different hyperspectral image fusion methods.

4.1 Quality Measures

In the experiments, four well-known indexes are adopted to evaluate the image fusion methods: the cross correlation (CC) [3], the spectral angle mapper (SAM) [29], the root mean squared error (RMSE) [3], and the erreur relative globale adimensionnelle de synthèse (ERGAS) [30]. These quality measures reflect the similarity between the obtained fused HS image F and the reference HS image R.

1) Cross correlation (CC)

The CC, a spatial index, measures the geometric distortion between the obtained fused HS image F and the reference HS image R. It is defined as

CC(F, R) = \frac{1}{m} \sum_{k=1}^{m} CCS(F_k, R_k)    (20)

where m is the number of HS image bands, F_k and R_k are the kth band of the fused image and the reference image, and CCS is the cross correlation of a single-band image, defined as

CCS(M, N) = \frac{ \sum_{i=1}^{n} (M_i - \bar{M})(N_i - \bar{N}) }{ \sqrt{ \sum_{i=1}^{n} (M_i - \bar{M})^2 \sum_{i=1}^{n} (N_i - \bar{N})^2 } }    (21)

where M_i and N_i are the pixel values of one band of the fused image and the reference image, \bar{M} and \bar{N} are the corresponding means, and n is the number of pixels in one band. The ideal value of the CC is 1.
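A direct NumPy implementation of Eqs. (20)-(21) might look like the following sketch; the synthetic data are illustrative only.

```python
import numpy as np

def ccs(M, N):
    """Single-band cross correlation, Eq. (21)."""
    dM, dN = M - M.mean(), N - N.mean()
    return (dM * dN).sum() / np.sqrt((dM ** 2).sum() * (dN ** 2).sum())

def cc(F, R):
    """Eq. (20): mean of per-band cross correlations; F, R are (m, H, W)."""
    return np.mean([ccs(Fk, Rk) for Fk, Rk in zip(F, R)])

rng = np.random.default_rng(5)
R = rng.random((3, 16, 16))
print(round(cc(R, R), 4))  # 1.0 for identical images
```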

2) Spectral angle mapper (SAM)

The SAM, a spectral index, reflects the spectral distortion between the obtained fused HS image F and the reference HS image R. It is defined as

SAM(F, R) = \frac{1}{n} \sum_{i=1}^{n} SAM(F_i, R_i)    (22)

where n is the number of pixels in one band, and F_i and R_i are the spectral vectors of the fused image and the reference image at pixel i. Given spectral vectors A, B \in R^m,

SAM(A, B) = \arccos \left( \frac{ \langle A, B \rangle }{ \|A\|_2 \|B\|_2 } \right)    (23)

where \langle A, B \rangle = A^T B is the inner product of A and B. The ideal value of the SAM is 0.

3) Root mean squared error (RMSE)

The RMSE is a global index measuring the difference between the obtained fused image F and the reference image R. It is defined as

RMSE(F, R) = \frac{ \|F - R\|_F }{ \sqrt{n \cdot m} }    (24)

where \|A\|_F = \sqrt{trace(A^T A)} is the Frobenius norm of A, n is the number of pixels in one band, and m is the number of HS image bands. The smaller the RMSE value, the more similar the fused image is to the reference image. The optimal value of the RMSE is 0.

4) Erreur relative globale adimensionnelle de synthèse (ERGAS)

The ERGAS, also a global index, reflects the overall quality of the obtained fused image F. It is defined as

ERGAS(F, R) = 100 \, d \sqrt{ \frac{1}{m} \sum_{k=1}^{m} \left( \frac{RMSE_k}{\mu_k} \right)^2 }    (25)

where d is the ratio between the pixel sizes of the PAN image and the HS image, RMSE_k = \|F_k - R_k\|_F / \sqrt{n}, m is the number of HS image bands, and \mu_k is the mean of the kth band of the reference image R. The optimal value of the ERGAS is 0.

4.2 Analysis of the Influence of Parameter \gamma

In the proposed method, the parameter \gamma controls the quantity of the injected spatial information and restrains the spatial distortion of the fused HS image. Thus, the tradeoff coefficient \gamma is tuned using the indexes CC, RMSE, and ERGAS, which reflect the degree of spatial distortion; a larger CC and smaller RMSE and ERGAS values indicate a better fused result. To analyze the influence of \gamma, the proposed method is run on the Moffett Field dataset with different \gamma settings. Table 1 shows the objective performance of the proposed method with different tradeoff coefficient settings. As shown in Table 1, when the tradeoff coefficient \gamma is 0.2, the fused results achieve the best performance. We have also tested the fusion performance of the proposed method on other hyperspectral images and found that \gamma = 0.2 also yields the optimal results there. Therefore, in the proposed method, the tradeoff coefficient \gamma is set to 0.2.

Table 1. Objective performance of the proposed method with different tradeoff coefficient settings.

0 0.1 0.2 0.3 0.4 0.5

CC 0.9344 0.9591 0.9660 0.9617 0.9505 0.9355

RMSE 0.0344 0.0276 0.0255 0.0293 0.0371 0.0469

ERGAS 5.8745 4.6899 4.2779 4.8319 6.0891 7.7109


4.3 Experimental Results

To assess the performance of the proposed average filtering and guided filtering based fusion method (AFGF), we test two kinds of semi-synthetic datasets: hyperspectral remote sensing images and hyperspectral images of natural scenes. According to [27], since the guided filter transfers the detailed structures in the proposed method, a suitable filter size r, which should be neither too large nor too small, and a small blur degree \epsilon are preferred. According to the image pixel size, the parameters are set to r = 58 and \epsilon = 10^{-6}. In the experiments, the pixel values of every test image are normalized to the range 0-1.0 to reduce the amount of computation.

1) Hyperspectral remote sensing images

The first experiment is performed on a semi-synthetic dataset derived from the Moffett Field dataset. This hyperspectral remote sensing dataset was acquired by the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) operated by NASA/JPL [4] and contains a mixed urban and rural scene. The HS images comprise 224 bands acquired in the spectral range 0.4-2.5 μm. Fig. 2(a) shows an HS image that is used as the reference image. According to Wald's protocol [31], the simulated HS image is obtained by blurring and downsampling the reference image with a ratio of 5, and the simulated PAN image is obtained by averaging the bands in the visible range of the reference image. Fig. 2(b) shows the simulated PAN image. The HS image is 37 × 79 with 100 m spatial resolution, and the PAN image is 185 × 395 with 20 m spatial resolution.


Fig. 2 Moffett Field dataset fusion results. (a) Reference HS image. (b) Synthetic PAN image. (c) GSA method. (d) PCA method. (e) MTF-GLP-HPM method. (f) GFPCA method. (g) CNMF method. (h) Bayesian Sparse method. (i) AFGF method without sharpening the PAN. (j) AFGF method.

Fig. 2 shows the Moffett Field dataset fusion results obtained by each method. From Fig. 2(c)-(d), it can be observed that the results produced by the GSA and PCA methods have high fidelity in rendering the spatial information but generate significant spectral distortion; moreover, these methods may add some extra gaps, especially at the top of the fused images. The result of the MTF-GLP-HPM method has better spectral performance but introduces serious spatial distortion in some regions. Fig. 2(f) shows that the GFPCA method generates considerable blurring in some areas and insufficient spatial detail. Fig. 2(g) shows that the CNMF method obtains better spectral performance, but the added spatial information is limited in some details. From Fig. 2(h), it can be seen that the Bayesian Sparse method works well, but the river at the bottom of the fused image lacks spatial detail. Fig. 2(i) shows the result of the AFGF method when the PAN image is not sharpened as in Section 3.2. A comparison of Fig. 2(i) and Fig. 2(j) reveals that the full AFGF method performs better in the spatial aspect; thus, it is necessary to sharpen the PAN image in the proposed framework. By comparison, the result shown in Fig. 2(j) demonstrates that the proposed method performs best in both spectral and spatial aspects.

Table 2. Objective quantitative assessments of the Moffett Field dataset fusion results shown in Fig. 2.

Indexes  GSA      PCA      MTF-GLP-HPM  GFPCA    CNMF     Bayesian Sparse  AFGF not sharpened  AFGF
CC       0.9530   0.9066   0.8919       0.9130   0.9393   0.9523           0.9489              0.9660
SAM      9.4454   12.0214  7.0966       12.2348  8.2791   9.4260           9.3379              7.3007
RMSE     0.0324   0.0444   0.1339       0.0424   0.0347   0.0313           0.0311              0.0255
ERGAS    5.3450   7.2843   28.8234      7.0149   5.8157   5.3132           5.3177              4.2779

The objective quantitative assessments of the different methods are shown in Table 2. For the Moffett Field dataset, the CC index of the proposed method is the largest, its RMSE and ERGAS values are the smallest, and its SAM value ranks second. These results demonstrate that the AFGF method delivers the best fusion performance.


Fig. 3 Hyperion dataset fusion results. (a) Reference HS image. (b) Synthetic PAN image. (c) GSA method. (d) PCA method. (e) MTF-GLP-HPM method. (f) GFPCA method. (g) CNMF method. (h) Bayesian Sparse method. (i) AFGF method without sharpening the PAN. (j) AFGF method.

Another semi-synthetic dataset, derived from the Hyperion instrument, is used to assess the fusion performance of the proposed approach. This hyperspectral dataset was acquired by the EO-1 spacecraft operated by NASA [4] and provides a class of Earth observation data. These hyperspectral images consist of 242 bands covering the range 0.4-2.5 μm. Fig. 3(a) shows the reference HS image. According to Wald's protocol [31], the synthetic HS image is produced by blurring and downsampling the reference HS image with a ratio of 3, and the synthetic PAN image is obtained by averaging the bands in the visible range. Fig. 3(b) shows the synthetic PAN image. The HS image is 58 × 72 with 30 m spatial resolution, and the PAN image is 174 × 216 with 10 m spatial resolution.

The Hyperion dataset fusion results achieved by the different methods are shown in Fig. 3. As can be observed from Fig. 3(c) and (d), the results achieved by the GSA and PCA methods show serious spectral distortion. Fig. 3(e) shows that the MTF-GLP-HPM method obtains better performance in the spatial aspect but exhibits slight spectral distortion. As shown in Fig. 3(f), the GFPCA result is significantly blurred. Fig. 3(g) and (h) show that the CNMF and Bayesian Sparse methods work well, but certain areas of the fused images lack spatial detail. As in the first experiment, the fused result of the AFGF method without PAN sharpening is shown in Fig. 3(i). Comparing it with Fig. 3(j), it can be seen that the AFGF method using the enhanced PAN image contains more spatial detail and is less spectrally distorted. Overall, the proposed method is most similar to the reference HS image in both spatial and spectral aspects.

Table 3. Objective quantitative assessments of the Hyperion dataset fusion results shown in Fig. 3.

Indexes   GSA      PCA      MTF-GLP-HPM   GFPCA    CNMF     Bayesian Sparse   AFGF (not sharpened)   AFGF
CC        0.9507   0.9502   0.9420        0.8880   0.8969   0.8795            0.9403                 0.9508
SAM       2.4700   2.4891   2.4803        3.2906   2.4830   3.5371            2.4121                 2.4123
RMSE      0.0061   0.0106   0.0068        0.0159   0.0118   0.0129            0.0105                 0.0069
ERGAS     2.7031   3.4075   2.6705        5.1613   3.8485   4.2760            4.0567                 2.6472

Table 3 reports the objective evaluation results for the Hyperion dataset. From Table 3, it


can be seen that the AFGF method obtains the best values for the CC, SAM, and ERGAS indexes, while its RMSE value is second best. These results confirm the superior performance of the proposed method.
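The four indexes used throughout the experiments can be computed as sketched below, based on the standard definitions of SAM [29] and ERGAS [30]; this is not the authors' evaluation code. `ratio` is the resolution ratio between the HS and PAN images, and CC is computed here as a single global correlation (per-band averaging is also common).

```python
import numpy as np

def cc(ref, fus):
    # Global cross correlation between reference and fused images.
    return np.corrcoef(ref.ravel(), fus.ravel())[0, 1]

def sam_deg(ref, fus, eps=1e-12):
    # Mean spectral angle (degrees) between per-pixel spectra of (H, W, B) cubes.
    dot = (ref * fus).sum(axis=2)
    denom = np.linalg.norm(ref, axis=2) * np.linalg.norm(fus, axis=2) + eps
    ang = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return np.degrees(ang.mean())

def rmse(ref, fus):
    return np.sqrt(np.mean((ref - fus) ** 2))

def ergas(ref, fus, ratio):
    # ERGAS = 100/ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2).
    band_rmse = np.sqrt(((ref - fus) ** 2).mean(axis=(0, 1)))
    band_mean = ref.mean(axis=(0, 1))
    return 100.0 / ratio * np.sqrt(np.mean((band_rmse / band_mean) ** 2))
```

Lower SAM, RMSE, and ERGAS and higher CC indicate a fused image closer to the reference.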

2) Hyperspectral images of natural scenes

The third experiment is conducted on another semi-synthetic dataset, the Manchester University dataset [32]. This dataset contains 50 hyperspectral images of natural scenes and was provided by David H. Foster et al. The images, almost all taken under a clear sky, were acquired during the summers of 2002 and 2003 and cover mixed rural and urban scenes. Each image has 33 bands in the 400-720 nm spectral range.

Fig. 4(a) shows an HS image from this database, which serves as the reference image. The original HS image is 1344 × 1024 pixels; a 600 × 800 region is used for this experiment. According to Wald's protocol [31], the simulated HS image is acquired by blurring the reference image and then downsampling it with a ratio of 5, and the simulated PAN image is generated by averaging the spectral bands of the reference image. The simulated HS image is 160 × 120 and the simulated PAN image is 800 × 600.



Fig. 4. Fusion results for the hyperspectral image of natural scenes. (a) Reference HS image. (b) GSA method. (c) PCA method. (d) MTF-GLP-HPM method. (e) GFPCA method. (f) CNMF method. (g) Bayesian Sparse method. (h) AFGF method without sharpening the PAN image. (i) AFGF method.

The fusion results for the hyperspectral image of natural scenes are shown in Fig. 4(b)-(i). By comparison, the results of the GSA, PCA, GFPCA, and CNMF methods produce spectral distortion, especially in the area of the red ball. In addition, the result of the CNMF method shown in Fig. 4(f) is slightly blurred. As shown in Fig. 4(d), the MTF-GLP-HPM method suffers serious spectral and spatial distortion. The fused result of the Bayesian Sparse method achieves better spectral performance, but the spatial information in some areas appears slightly fuzzy. Fig. 4(h), produced without a sharpened PAN image, is less sharp than Fig. 4(i). This proves


once again that sharpening the PAN image within the proposed framework matters. The AFGF method achieves better results in both spatial detail and spectral information. Its superior performance is also apparent from the objective evaluation results in Table 4: for the proposed method, the SAM, RMSE, and ERGAS indexes are the smallest and the CC is the largest. Overall, for the hyperspectral image of natural scenes, the proposed method achieves superior results.

Table 4. Objective quantitative assessments of the hyperspectral image of natural scenes fusion results shown in

Fig. 4.

Indexes   GSA      PCA      MTF-GLP-HPM   GFPCA    CNMF     Bayesian Sparse   AFGF (not sharpened)   AFGF
CC        0.9963   0.9939   0.7697        0.9894   0.9928   0.9953            0.9963                 0.9969
SAM       2.2476   2.4590   4.1117        3.7124   3.2902   2.0441            1.9333                 1.9333
RMSE      0.0103   0.0152   0.3026        0.0139   0.0090   0.0104            0.0091                 0.0078
ERGAS     1.6775   2.0699   128.8305      2.9613   2.5918   1.9240            1.7001                 1.5658

5 Conclusions

This paper introduces a new hyperspectral image fusion method that uses an average filter and a guided filter and is based on the component substitution approach. To reduce the computational cost, we first propose a simple scheme that uses the average filter to obtain the spatial information of the HS image I. Subsequently, the PAN image is sharpened to enhance its spatial detail. To avoid spectral distortion, a guided filter is


used to transfer the spatial information of the enhanced PAN image into the spatial information of the HS image I. An injection gains matrix is then generated to reduce spectral distortion and restrain spatial distortion. Experimental results demonstrate that the proposed method is more effective than state-of-the-art fusion methods in both subjective and objective evaluations. The proposed algorithm has been applied to hyperspectral pansharpening; whether it can also be applied to the fusion of infrared and visible images remains for future research.
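The pipeline summarized above can be sketched as follows. The guided filter follows He et al. [27]; everything else (the unsharp-mask sharpening, the ratio-based injection gains, and all parameter values) is a simplified stand-in for the paper's exact formulation, included only to make the data flow concrete.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, r=4, eps=1e-4):
    """Guided image filtering (He et al. [27]): output = a*guide + b,
    with a, b estimated in (2r+1)x(2r+1) local windows."""
    size = 2 * r + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def afgf_sketch(hs_up, pan, r=4, eps=1e-4):
    """Conceptual AFGF-style fusion of an upsampled HS cube (H, W, B)
    and a PAN image (H, W). Simplified; not the authors' exact method."""
    size = 2 * r + 1
    intensity = hs_up.mean(axis=2)                  # HS spatial component
    low = uniform_filter(intensity, size)           # average filter: cheap low-pass
    pan_sharp = pan + (pan - uniform_filter(pan, size))   # unsharp-mask stand-in
    # Guided filter transfers spatial detail from the sharpened PAN.
    transferred = guided_filter(pan_sharp, intensity, r, eps)
    detail = transferred - low
    # Ratio-based injection gains (one gain map per band), an assumed choice.
    gains = hs_up / (intensity[..., None] + 1e-6)
    return hs_up + gains * detail[..., None]
```

The ratio gains modulate the injected detail band by band, which is one common way component substitution methods limit spectral distortion.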

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their insightful comments and suggestions, which have greatly improved this paper. This work was supported by the National Natural Science Foundation of China under Grants 61222101, 61272120, 61301287, 61301291, and 61350110239.

References

[1]. C. Pohl and J. L. van Genderen, “Multisensor image fusion in remote sensing: concepts, methods and applications,” International Journal of Remote Sensing, vol. 19, no. 5, 823-854, 1998. DOI: 10.1080/014311698215748.

[2]. J. Zhang, “Multi-source remote sensing data fusion: status and trends,” International Journal of Image and Data Fusion, vol. 1, no. 1, 5-24, 2010. DOI: 10.1080/19479830903561035.


[3].Laetitia Loncan, Luis B. Almeida, José M. Bioucas-Dias, Xavier Briottet, Jocelyn Chanussot,

Nicolas Dobigeon, Sophie Fabre, Wenzhi Liao, Giorgio A. Licciardi, Miguel Simões, Jean-Yves

Tourneret, Miguel A. Veganzones, Gemine Vivone, Qi Wei, and Naoto Yokoya, “Hyperspectral

pansharpening: a review,” IEEE Geoscience Remote Sensing Magazine, vol.3, no. 3, 27-46, 2015.

DOI: 10.1109/MGRS.2015.2440094.

[4].A. Mookambiga, and V. Gomathi, “Comprehensive review on fusion techniques for spatial

information enhancement in hyperspectral imagery,” Multidimensional Systems and Signal

Processing, vol.27, no.4, 863-889, 2016. DOI: 10.1007/s11045-016-0415-2.

[5].Christine Pohl, and John van Genderen, “Structuring contemporary remote sensing image fusion,”

International Journal of Image and Data Fusion, vol.6, no.1, 3-21, 2015. DOI:

10.1080/19479832.2014.998727.

[6].T.-M. Tu, S.-C. Su, H.-C. Shyu, and P. S. Huang, “A new look at IHS-like image fusion methods,”

Information Fusion, vol. 2, no. 3, 177–186, 2001. DOI: 10.1016/S1566-2535(01)00036-7.

[7]. P. S. Chavez and A. Y. Kwarteng, “Extracting spectral contrast in Landsat thematic mapper image

data using selective principal component analysis,” Photogrammetric Engineering & Remote

Sensing, vol. 55, no. 3, 339–348, 1989.

[8].V. P. Shah, N. Younan, and R. L. King, “An efficient pan-sharpening method via combined

adaptive PCA approach and contourlets,” IEEE Transactions on Geoscience and Remote Sensing,

vol. 56, no. 5, 1323–1335, 2008. DOI: 10.1109/TGRS.2008.916211.

[9]. C. Laben and B. Brower, “Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening,” U.S. Patent 6 011 875, Jan. 4, 2000.

[10].B. Aiazzi, S. Baronti, and M. Selva, “Improving component substitution pansharpening through

multivariate regression of MS+pan data,” IEEE Transactions on Geoscience and Remote Sensing,

vol. 45, no. 10, 3230–3239, 2007. DOI: 10.1109/TGRS.2007.901007.


[11].S. Baronti, B. Aiazzi, M. Selva, A. Garzelli, and L. Alparone, “A theoretical analysis of the

effects of aliasing and misregistration on pansharpened imagery,” IEEE Journal of Selected

Topics in Signal Processing, vol. 5, no. 3, 446–453, June 2011. DOI:

10.1109/JSTSP.2011.2104938.

[12].C. Thomas, T. Ranchin, L. Wald, and J. Chanussot, “Synthesis of multispectral images to high

spatial resolution: A critical review of fusion methods based on remote sensing physics,” IEEE

Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, 1301–1312, 2008. DOI:

10.1109/TGRS.2007.912448.

[13].J. G. Liu, “Smoothing filter based intensity modulation: A spectral preserve image fusion

technique for improving spatial details,” International Journal of Remote Sensing, vol. 21, no. 18,

3461–3472, 2000. DOI: 10.1080/014311600750037499.

[14].B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva, “MTF-tailored multiscale fusion of

high-resolution MS and pan imagery,” Photogrammetric Engineering and Remote Sensing, vol.

72, no. 5, 591–596, 2006. DOI: 10.14358/PERS.72.5.591.

[15]. G. Vivone, R. Restaino, M. D. Mura, G. Licciardi, and J. Chanussot, “Contrast and error-based

fusion schemes for multispectral image pansharpening,” IEEE Geoscience and Remote Sensing

Letters, vol. 11, no. 5, 930–934, 2014. DOI: 10.1109/LGRS.2013.2281996.

[16]. S. Mallat, “A theory for multiresolution signal decomposition: The wavelet representation,”

IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, 674–693, 1989.

DOI: 10.1109/34.192463.

[17]. B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva, “25 years of pansharpening: A

critical review and new developments,” in Signal Image Processing for Remote Sensing, 2nd ed.,

C. H. Chen, Ed. Boca Raton, FL: CRC Press, ch. 28, 533–548, 2011.

[18].N. Yokoya, T. Yairi, and A. Iwasaki, “Coupled nonnegative matrix factorization unmixing for


hyperspectral and multispectral data fusion,” IEEE Transactions on Geoscience and Remote

Sensing, vol. 50, no. 2, 528–537, 2012. DOI: 10.1109/TGRS.2011.2161320.

[19].M. Simões, J. B. Dias, L. Almeida, and J. Chanussot, “A convex formulation for hyperspectral

image superresolution via subspace-based regularization,” IEEE Transactions on Geoscience and

Remote Sensing, vol. 53, no. 6, 3373-3388, 2015. DOI: 10.1109/TGRS.2014.2375320.

[20]. Q. Wei, N. Dobigeon, and J.-Y. Tourneret, “Bayesian fusion of multi-band images,” IEEE

Journal of Selected Topics in Signal Processing, vol. 9, no. 6, 1117–1127, 2015. DOI:

10.1109/JSTSP.2015.2407855.

[21]. Q. Wei, J. M. B. Dias, N. Dobigeon, and J.-Y. Tourneret, “Hyperspectral and multispectral image

fusion based on a sparse representation,” IEEE Transactions on Geoscience and Remote Sensing,

vol. 53, no. 7, 3658–3668, 2015. DOI: 10.1109/TGRS.2014.2381272.

[22]. Q. Wei, N. Dobigeon, and J.-Y. Tourneret, “Fast fusion of multi-band images based on solving a Sylvester equation,” IEEE Transactions on Image Processing, vol. 24, no. 11, 4109-4121, 2015.

DOI: 10.1109/TIP.2015.2458572.

[23].W. Liao, X. Huang, F. Coillie, S. Gautama, A. Pizurica, W. Philips, H. Liu, T. Zhu, M. Shimoni,

G. Moser, and D. Tuia, “Processing of multiresolution thermal hyperspectral and digital color

data: Outcome of the 2014 IEEE GRSS data fusion contest,” IEEE Journal of Selected Topics in

Applied Earth Observations and Remote Sensing, vol. 8, no. 6, 2984–2996, 2015. DOI:

10.1109/JSTARS.2015.2420582.

[24].G. Vivone, L. Alparone, J. Chanussot, M. D. Mura, A. Garzelli, and G. Licciardi, “A critical

comparison among pansharpening algorithms,” IEEE Transactions on Geoscience and Remote

Sensing, vol. 53, no. 5, 2565–2586, 2015. DOI: 10.1109/TGRS.2014.2361734.

[25].Luciano Alparone, Bruno Aiazzi, Stefano Baronti, and Andrea Garzelli, Remote Sensing Image

Fusion, CRC Press Taylor & Francis Group, 2015.


[26].S. Rahmani, M. Strait, D. Merkurjev, M. Moeller, and T. Wittman, “An Adaptive IHS Pan-

Sharpening Method,” IEEE Geoscience and Remote Sensing Letters, 746-750, 2010. DOI:

10.1109/LGRS.2010.2046715.

[27].K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and

Machine Intelligence, Vol.35, No.6, 1397-1409, 2013. DOI: 10.1109/TPAMI.2012.213.

[28].T. Hastie, R. Tibshirani, and J.H. Friedman, The Elements of Statistical Learning, Springer, 2003.

[29].R. H. Yuhas, A. F. H. Goetz, and J. W. Boardman. “Discrimination among semi-arid landscape

endmembers using the Spectral Angle Mapper (SAM) algorithm,” In Proceeding Summaries 3rd

Annual JPL Airborne Geoscience Workshop, 147–149, 1992.

[30]. L. Wald, Data Fusion: Definitions and Architectures — Fusion of Images of Different Spatial Resolutions, Les Presses de l'École des Mines, Paris, France, 2002.

[31].L. Wald, T. Ranchin, and M. Mangolini, “Fusion of satellite images of different spatial

resolutions: Assessing the quality of resulting images,” Photogrammetric Engineering and

Remote Sensing, vol. 63, no. 6, 691-699, 1997.

[32]. D. H. Foster, K. Amano, S. M. C. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” Journal of the Optical Society of America A: Optics, Image Science, and Vision, vol. 23, no. 10, 2006. DOI: 10.1364/JOSAA.23.002359.


Highlights:

An average filtering and guided filtering based fusion method is proposed.

We utilize the average filter to extract the spatial information of the HS image.

The PAN image is sharpened to enhance the spatial detail.

The guided filter is utilized to transfer spatial information.

The proposed method is effective for hyperspectral pansharpening.