
Super-Resolution of Hyperspectral Images: Use of Optimum Wavelet Filter Coefficients and Sparsity Regularization




1728 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 53, NO. 4, APRIL 2015

Super-Resolution of Hyperspectral Images: Use of Optimum Wavelet Filter Coefficients and Sparsity Regularization

Rakesh C. Patel and Manjunath V. Joshi

Abstract—Hyperspectral images (HSIs) have high spectral resolution, but they suffer from low spatial resolution. In this paper, a new learning-based approach for super-resolution (SR) using the discrete wavelet transform (DWT) is proposed. The novelty of our approach lies in designing an application-specific wavelet basis (filter coefficients). An initial estimate of SR is obtained by using these filter coefficients while learning the high-frequency details in the wavelet domain. The final solution is obtained using a sparsity-based regularization framework, in which the image degradation and the sparseness of SR are estimated using the estimated wavelet filter coefficients (EWFCs) and the initial SR estimate, respectively. The advantage of the proposed algorithm lies in 1) the use of EWFCs to represent an optimal point spread function to model the image acquisition process; 2) the use of a sparsity prior to preserve neighborhood dependencies in the SR image; and 3) avoiding the use of registered images while learning the initial estimate. Experiments are conducted on three different kinds of images. Visual and quantitative comparisons confirm the effectiveness of the proposed method.

Index Terms—Hyperspectral, regularization, sparsity, super-resolution (SR), wavelet.

I. INTRODUCTION

HYPERSPECTRAL imaging sensors collect information in the form of a reflectance spectrum in hundreds of contiguous narrow bands simultaneously. Being spectrally overdetermined, they provide ample spectral information to identify and differentiate spectrally unique materials [1]. Hence, they are widely used in a wide range of military and civilian applications. These applications require high spectral and high spatial resolution data. Unfortunately, there is a tradeoff between the spectral and the spatial resolutions of the HSI sensor [2].

Many times, it is not feasible to capture high-resolution (HR) images due to implementation limitations such as higher requirements of memory, transmission bandwidth, power, and camera cost. Since HR imaging leads to better analysis, classification, and interpretation, one may look for an algorithmic approach [i.e., super-resolution (SR)] to obtain HR images. The problem of SR has been attempted by many researchers since the early 1980s. There are many ways to improve the spatial resolution of hyperspectral images (HSIs)

Manuscript received August 22, 2013; revised December 30, 2013 and May 7, 2014; accepted June 28, 2014.

The authors are with the Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar 382007, India (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TGRS.2014.2346811

such as SR reconstruction using a few low-resolution (LR) images [3]–[5], pan-sharpening fusion using a coincident HR image [e.g., panchromatic (PAN)] followed by SR mapping (SRM) [6], [7], SRM after spectral unmixing [8]–[10], wavelet-based methods [11]–[13], etc. Depending on the number of LR images involved, the SR method is called multiframe [3]–[5] or single-frame SR [6]–[10]. Multiframe SR methods use subpixel-shifted LR observations of the same scene to obtain SR results, while single-frame approaches learn the detail information from an image database that has a large number of HR or LR–HR training images. Accurate registration of the LR images is critical in multiframe SR, since the method exploits the nonredundancy available in the subpixel-shifted LR observations. When working with remotely sensed images, it is often difficult to obtain subpixel-shifted LR observations of the same scene. Therefore, in remote sensing, single-frame SR image mapping has become a popular area of research.

Several SR techniques have been proposed based on the single-frame SRM technique [6]–[10]. SRM-based techniques exploit spatial information by making use of a coincident HR image (e.g., a PAN image) [6], [7] or an unmixing model that describes the spatial distribution of the contents of mixed pixels [8]–[10]. An algorithm proposed by Foody in [6] uses a simple regression-based approach to enhance the spatial resolution of an LR HSI using a coincident HR image. An improved result is obtained by using an SR mapping technique in which the locations of landcover classes are predicted by fitting class membership contours, which reduces blockiness in the final SR output. Nguyen et al. [7] used a fused image as an additional source of information for SRM using a Hopfield neural network. The need for a secondary HR coincident image is the limitation of these algorithms. In a different approach, Gu et al. [8] proposed an SR algorithm that uses an indirect approach based on spectral mixture analysis, in which learning-based SRM is performed using a back propagation neural network (BPNN). A set of HR training images unassociated with the test image is used for training the BPNN. An advantage of unmixing-based methods is that no supplementary source of information associated with the LR test image is required.

There are a considerable number of techniques in which wavelet decomposition is used to increase the spatial resolution of remote sensing images [11]–[13]. These methods decompose images into a number of new images, each having a different spatial resolution. The need for coincident HR auxiliary information is the main limitation of these methods. In addition,

0196-2892 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


PATEL AND JOSHI: SR OF HSIs: USE OF WFCs AND SPARSITY REGULARIZATION 1729

all these methods use a fixed wavelet basis such as Db4 in their implementation, and they require accurate coregistration to achieve acceptable results. Li et al. [14] characterized the wavelet coefficients by a mixed Gaussian distribution, and the dependencies between the coarser and the finer scale wavelet coefficients were modeled as a prior by using the universal hidden Markov model. Recently, learning-based SR approaches for single wideband and multiband images have been explored by researchers to solve the SR problem [9], [15]–[19]. These methods use a database of HR images or LR–HR image pairs in order to learn the high-frequency details for SR. The use of sparsity as a prior for regularization of ill-posed problems has been validated by many researchers [18], [20], [21].

Most of the earlier research on SR of HSIs assumes implicitly or explicitly that the point spread function (PSF) of a sensor is the same over the entire spatial and spectral regions, depending only on the position of the detectors. However, in practice, the PSF depends on various factors of a hyperspectral imager, such as the fill factor of the charge-coupled device (CCD) array, camera gain, zoom factor, imaging wavelength, etc. [22]. The effect of diffraction is significant at higher wavelengths in a hyperspectral imager. This results in a spatially and spectrally varying PSF of the degradation function. In this paper, we address the problem of single-frame image SR using a learning-based approach in the wavelet domain, where we obtain high-frequency contents from HR training images unassociated with the test image. This eliminates the need for registration while obtaining these frequencies. The novelty of our approach lies in estimating the wavelet filter coefficients (WFCs) that take care of the spectrally varying PSF. Here, we are not considering the spatially varying PSF, which is quite involved, as it requires the estimation of the PSF at every pixel. The estimated WFCs (EWFCs) are then used to learn high-frequency details in a given band in the wavelet domain, obtaining an initial estimate of the SR image. The final SR image is obtained using sparsity-based regularization with an observation model constructed using the EWFCs.

For the estimation of optimum WFCs, LR–HR pairs of HSIs, which are referred to as the training images, can be created by changing either the configurations [23] or the height of the HS imager, a one-time and offline operation. Once a database is created, the LR images captured by a hyperspectral imager can be super-resolved using our approach. This is greatly beneficial, as one can capture low spatial resolution HSIs (although the imager is capable of capturing HR HSIs) and reduce the memory, transmission bandwidth, and power requirements. The efficacy of the proposed method is tested by conducting experiments on three different data sets, and the results are compared with four different methods [18], [21], [24], [25]. The outline of this paper is as follows.

In Section II, a block diagram description of the proposed approach that gives an overview of the implemented algorithm is presented. Section III describes the estimation of optimum WFCs. A wavelet-based learning scheme to obtain the initial estimate of SR is described in Section IV. Section V addresses the regularization of the initially estimated SR image using sparse representation as a prior model. The experimental results are discussed in Section VI. Some concluding remarks are drawn in Section VII.

Fig. 1. Detailed block diagram of proposed approach for HSI SR.

II. BLOCK DIAGRAM DESCRIPTION OF THE PROPOSED APPROACH

Given an LR test image and a database of LR–HR HSIs, the proposed technique is implemented using the following steps, also illustrated in Fig. 1.

1) Form a training database of registered LR and HR HSIs.
2) Reduce the dimensionality of the HSIs using PCA.
3) Estimate the optimum WFCs using the database created in step 2.
4) Using the EWFCs, obtain the initial SR estimate using DWT-based learning.
5) Use regularization to obtain the final SR PCA image and subsequently the SR HSI after inverse PCA.

We first create a database of registered LR–HR pairs of HSIs (which contains materials/objects of interest in sufficient amount). In practice, if it is not feasible to capture LR–HR pairs from the imager, one may use only the HR training images, and the LR images are obtained by simulation, using the PSF of the imager obtained by one of the methods described in [26].

To reduce the computational burden, we first apply dimensionality reduction on the LR–HR training images having B bands using PCA [27]. This transformation concentrates most of the spectral variability of the HSI data in the first few κ principal components (PCs). It is reasonable to assume that the spectral signature of the materials/objects of interest is present in sufficient amounts in a reasonable number of spectral bands. Note that the number of PCA components retained (κ) is application dependent, and



it can be increased at the cost of computational speed; the information loss and the reconstruction error may be made arbitrarily small in order to take care of classification accuracy.
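As an illustration of this dimensionality-reduction step, PCA over the spectral bands and its inverse can be sketched with NumPy. This is a minimal sketch, not the authors' implementation; the cube size and κ in the usage below are arbitrary example values.

```python
import numpy as np

def pca_reduce(hsi, kappa):
    """Reduce an HSI cube (rows x cols x B bands) to its first `kappa`
    principal components. Returns the PC images plus the basis and mean
    needed for the inverse transform."""
    rows, cols, B = hsi.shape
    X = hsi.reshape(-1, B).astype(float)      # each pixel is a B-dim vector
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)       # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]         # sort by descending variance
    basis = eigvecs[:, order[:kappa]]         # B x kappa projection basis
    pcs = (Xc @ basis).reshape(rows, cols, kappa)
    return pcs, basis, mean

def pca_inverse(pcs, basis, mean):
    """Approximate reconstruction from kappa PCs (inverse PCA)."""
    rows, cols, kappa = pcs.shape
    X = pcs.reshape(-1, kappa) @ basis.T + mean
    return X.reshape(rows, cols, -1)
```

With κ = B the transform is lossless; retaining fewer components trades reconstruction error for speed, as described above.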

The database of LR–HR PCA images can now be used in DWT-based learning to obtain an initial estimate of the SR HSI. However, DWTs having conventional bases such as Haar, Daubechies, or Coiflet are not optimized over the class of images. This motivates us to estimate the WFCs optimized for a group of HSIs before learning the initial SR estimate. Using the LR–HR PCA data sets of training images, optimum WFCs are estimated for each PCA band individually. These coefficients are then used in DWT-based learning to obtain the initial SR estimate for the κ test (LR) HSIs, which are also in the PCA domain. These filter coefficients are also used to define the PSF/degradation in the observation model that is used in regularization to obtain the final SR image.

One may note that there is no need for registration while learning the initial SR estimate, as we use only the HR database while obtaining it. This gives us the freedom to include additional HR training HSIs in the database (i.e., N + 1 to Q) after the dimensionality reduction, as shown in Fig. 1. This inclusion enhances the accuracy of the initial SR estimate. Applying the inverse DWT gives us the initial SR estimates of the LR HSIs in the PCA domain.

Our method of obtaining initial SR estimates does not consider the contextual dependencies among pixels, as it is patch based. This results in artifacts in the initial SR image around the patch boundaries. Hence, regularization based on sparsity as a prior is performed in order to obtain the final solution for each PCA component. The observation model used in regularization is constructed for each LR image using κ sets of EWFCs. A patch-based approach obtains the sparse coefficients using the initially estimated SR PCA components, and they represent the dependence of an SR patch on its nearby patches. Note that, although the individual PCA bands are uncorrelated, spatial dependence exists within the pixels of a PCA image [28]. Since our final cost function is differentiable, a simple optimization technique such as gradient descent is used to minimize it. This results in a final SR HSI in the PCA domain; applying the inverse PCA transformation yields the SR HSI.

III. ESTIMATION OF WFCs

Most of the earlier research uses DWTs having fixed bases [11]–[13], [25], [29] to increase the spatial resolution of images and hence does not guarantee optimum performance. In contrast, in the proposed approach, we derive optimal WFCs, which are then used in learning the initial SR estimate. Here, the DWT basis coefficients are not explicitly specified; instead, they are computed from the signal (images in our work) itself.

The estimation of WFCs is carried out for each PCA component separately. We first take the DWT of the HR PCA images of the N training pairs and perform one- and two-level wavelet decomposition for magnification factors (q) of 2 and 4, respectively. To estimate the WFCs, we use the fact that the coarser part of the DWT-transformed HR PCA image should be close to the LR PCA image in the mean-squared sense for all N pairs in the database. The use of EWFCs for initial estimate learning ensures a better approximation to the SR.

As a compromise between computational burden and performance, we have chosen a design length of 4 for the wavelet filter. For details of the wavelet transform (WT), one may refer to [30], [31]. Here, we describe the procedure to estimate the WFCs for one of the PCA bands of the HSI. The same procedure is repeated for all κ bands.

Consider an LR PCA image having a size of M × M, and let the corresponding HR PCA image be of size 2M × 2M for q = 2. We write a system of equations that must be solved to find the low-pass (LP) WFCs l = (l0, l1, l2, l3). The high-pass (HP) filter coefficients h = (h0, h1, h2, h3) can then be determined from l. The DWT transformation matrix W2M is given by

$$W_{2M} = \begin{bmatrix}
l_3 & l_2 & l_1 & l_0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & l_3 & l_2 & l_1 & l_0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
l_1 & l_0 & 0 & 0 & 0 & 0 & \cdots & l_3 & l_2 \\
h_3 & h_2 & h_1 & h_0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 0 & h_3 & h_2 & h_1 & h_0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
h_1 & h_0 & 0 & 0 & 0 & 0 & \cdots & h_3 & h_2
\end{bmatrix}_{2M \times 2M}.$$

In this matrix, the upper half and lower half rows are the LP (L) and HP (H) filter matrices, respectively, each of size M × 2M. Since the 2-D DWT is separable, the 2-D transform Wt of a 2-D image I of size 2M × 2M is

$$W_t = W_{2M}\, I\, W_{2M}^T = \begin{bmatrix} LIL^T & LIH^T \\ HIL^T & HIH^T \end{bmatrix}. \qquad (1)$$
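The structure of W2M can be made concrete with a short NumPy sketch: each filter row is shifted right by two positions, with periodic wrap-around in the last rows, and the resulting matrix is orthogonal whenever l and h satisfy the conditions derived in this section. The filter values in the usage below are illustrative, not taken from the paper's experiments.

```python
import numpy as np

def dwt_matrix(l, h, M):
    """Assemble the 2M x 2M DWT transformation matrix W_2M: the upper M
    rows carry the low-pass filter l and the lower M rows the high-pass
    filter h, each row shifted by 2 with periodic wrap-around (so the
    last low-pass row reads l1 l0 0 ... l3 l2, as in the matrix above)."""
    W = np.zeros((2 * M, 2 * M))
    for k in range(M):
        for j in range(4):
            col = (2 * k + j) % (2 * M)   # periodic extension at the boundary
            W[k, col] = l[3 - j]          # low-pass (L) block
            W[M + k, col] = h[3 - j]      # high-pass (H) block
    return W
```

For any filter of the parameterized family derived below (and h built from l as described later in this section), W @ W.T is the identity, confirming orthonormality.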

To derive the filter coefficients, we consider the coarser part LIL^T of size M × M and design a transformation matrix W2M that has the desired filter coefficients. Considering the orthonormality constraints, we have

$$\sum_{n=0}^{3} l_n^2 = 1, \quad \text{and} \quad l_0 l_2 + l_1 l_3 = 0. \qquad (2)$$

For the LP response, using the discrete-time Fourier transform (DTFT) of the sequence l, we obtain

$$H(0) = \sum_{n=0}^{3} l_n = 1, \quad \text{and} \quad H(\pi) = \sum_{n=0}^{3} (-1)^n l_n = 0. \qquad (3)$$

However, H(0) in (3) violates the distance-preserving property of orthogonal matrices [32]. To satisfy the orthonormality conditions and the LP condition, we must have

$$\sum_{n=0}^{3} l_n = \pm\sqrt{2} \qquad (4)$$

and

to satisfy the right-hand side of (2), we must have

$$[l_2,\, l_3]^T = c\,[-l_1,\, l_0]^T \quad \text{for } c \neq 0. \qquad (5)$$

Equation (5) and H(π) in (3) lead to

$$l_1 = \frac{1-c}{1+c}\, l_0 \quad \text{for } c \neq -1. \qquad (6)$$



Using the left-hand side of (2), together with (5) and (6), we obtain

$$l = \left(\frac{1+c}{\sqrt{2}\,(1+c^2)},\ \frac{1-c}{\sqrt{2}\,(1+c^2)},\ \frac{-c(1-c)}{\sqrt{2}\,(1+c^2)},\ \frac{c(1+c)}{\sqrt{2}\,(1+c^2)}\right). \qquad (7)$$

A unique solution for the coefficient c is obtained by solving the following optimization problem:

$$\varepsilon = \arg\min_{\forall c} \sum_{P=1}^{N} \sum_{m_1=1}^{M} \sum_{n_1=1}^{M} \left(Y^{(P)}_{m_1,n_1} - B^{(P)}_{m_1,n_1}\right)^2 \qquad (8)$$

where Y^(P) is the Pth LR PCA training image, B^(P) is the coarser part of the DWT-transformed [i.e., LIL^T in (1)] HR PCA training image number P, and m1, n1 indicate spatial locations. Here, Y^(P)_{m1,n1} is known, and B^(P)_{m1,n1} can be expressed in terms of c. Equation (8) is convex; hence, a simple optimization technique such as gradient descent can be used to find the optimum value of c, which, in turn, can be used to determine the optimum l [see (7)].

The HP filter coefficients can now be obtained, so as to satisfy the orthonormality condition of the matrix W2M, as h = (l3, −l2, l1, −l0). It may be noted that the LP filter coefficients derived from (8) are optimal in the mean-squared sense. Therefore, the corresponding HP filter should yield better edge details in the SR image.
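The whole estimation pipeline of this section, parameterizing l by the single scalar c per (7) and minimizing the cost of (8) over training pairs, can be sketched as follows. A dense 1-D grid search stands in here for the gradient descent used by the authors; since the cost is convex in c, both find the same minimizer. The grid range is an illustrative assumption.

```python
import numpy as np

def lowpass_from_c(c):
    """Length-4 low-pass filter parameterized by the scalar c per Eq. (7);
    satisfies the orthonormality and low-pass conditions for any c != -1."""
    d = np.sqrt(2.0) * (1.0 + c * c)
    return np.array([(1 + c) / d, (1 - c) / d,
                     -c * (1 - c) / d, c * (1 + c) / d])

def analysis_lowpass(img, l):
    """Apply the low-pass rows to both dimensions (the L I L^T coarser part
    of Eq. (1)) with periodic extension, halving each image dimension."""
    def one_dim(x):
        n = x.shape[0]
        out = np.zeros((n // 2,) + x.shape[1:])
        for k in range(n // 2):
            for j in range(4):
                out[k] += l[3 - j] * x[(2 * k + j) % n]
        return out
    tmp = one_dim(img)           # filter along rows
    return one_dim(tmp.T).T      # then along columns

def estimate_c(pairs, grid=np.linspace(-0.99, 3.0, 400)):
    """Pick c minimizing Eq. (8) over (LR, HR) training pairs."""
    def cost(c):
        l = lowpass_from_c(c)
        return sum(np.sum((Y - analysis_lowpass(X, l)) ** 2)
                   for Y, X in pairs)
    return min(grid, key=cost)
```

Given LR images generated by some filter of this family, the search recovers the generating c, and `lowpass_from_c` then yields the optimum l of (7).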

IV. LEARNING INITIAL SR ESTIMATE

Learning-based approaches work well for obtaining the missing high-frequency details while super-resolving natural as well as remotely sensed images [9], [15]–[17], [19]. Our learning uses only HR HSIs in the training database, as opposed to the LR–HR image pairs used by a few researchers [25], [29]. Considering a decimation factor of 2, we use two- and three-level wavelet decomposition using the EWFCs for the test and training images, respectively. Fig. 2 shows the block schematic for learning the detail wavelet coefficients for one of the test PCA images for q = 2. Fig. 2(a) shows the subbands 0 to VI of the LR test image, whereas the dotted lines show subbands VII–IX that have to be learned. Subband 0 represents the coarser part of the DWT-transformed image, and subbands I–III and IV–VI represent the vertical (V), horizontal (H), and diagonal (D) details at levels 2 and 1, respectively. Fig. 2(b) shows the three-level wavelet decomposition of the HR training images having subbands 0^(r)–IX^(r), r = 1, 2, ..., Q. The learning procedure is described in detail in the following.

Fig. 2. Illustration of learning of detail wavelet coefficients for resolution factor q = 2 using a database of HR PCA images. (a) Two-level wavelet decomposition of test PCA image (LR observation). (b) Three-level wavelet decomposition of HR PCA training images.

Consider an LR test image of size M × M pixels. The corresponding HR image is of size 2M × 2M pixels for q = 2. Using the idea of the zero-tree concept [33], we use the minimum mean-squared error (MSE) between the known DWT coefficients of the test and training images to learn the unknown detail wavelet coefficients. Suppose φ(i, j) is the wavelet coefficient at location (i, j) in subband 0, where 0 ≤ i, j < M/4, of the LR test image. The corresponding detail coefficients in subbands I, II, and III are at locations φ(i, j + M/4), φ(i + M/4, j), and φ(i + M/4, j + M/4), respectively, and the coefficients in subbands IV, V, and VI correspond to blocks of size 2 × 2, namely {φ(p, q + M/2)}, {φ(p + M/2, q)}, and {φ(p + M/2, q + M/2)}, with p = 2i, 2i + 1 and q = 2j, 2j + 1, respectively. These coefficients of I–VI are used in learning the missing 4 × 4 blocks in subbands VII–IX. For a pixel at (i, j) in the test image at subband 0, minimization is carried out as per (9) to pick the missing block of size 4 × 4 in subband VII, which gives us the horizontal (H) edge details

$$\varepsilon = \arg\min_{\forall l,m,r} \left[\phi_I(i,\, j + M/4) - \phi^{(r)}_I(l,\, m + M/4)\right]^2 + \left[\sum_{p=2i}^{2i+1}\sum_{q=2j}^{2j+1}\phi_{IV}(p,\, q + M/2) - \sum_{l_1=2l}^{2l+1}\sum_{m_1=2m}^{2m+1}\phi^{(r)}_{IV}(l_1,\, m_1 + M/2)\right]^2. \qquad (9)$$

In this equation, r = 1, ..., Q; 0 ≤ l, m < M/4; and φ^(r)_I and φ^(r)_IV denote the wavelet coefficients of the rth training image in the Ist and IVth subbands, respectively. The corresponding wavelet detail coefficients from subband VII of the HR training image are copied into subband VII of the test image. This is repeated for every location in the subbands. Similarly, we can find the V and D edge details in subbands VIII and IX, respectively, using (9) by changing the subscripts and the displacement (M/4, M/2) corresponding to V and D appropriately. Applying the inverse DWT to this learned image gives the initial SR estimate. A similar procedure is used on all PCA images to obtain the initial SR estimate for every test image in the PCA domain. If the error (ε) is quite large, it signifies that the 4 × 4 patch does not have a corresponding HR representation in the database. To avoid such spurious learning, we consider the DWT coefficients only when the error (ε) is less than a chosen threshold.
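One step of this matching, for a single coarse location and the H-detail subband VII, can be sketched as below. This is a hedged illustration of the minimization in (9): the subbands are passed as pre-extracted arrays, their shapes are illustrative assumptions, and the `threshold` argument implements the spurious-learning guard described above.

```python
import numpy as np

def learn_detail_block(phi_I_test, phi_IV_test, train_I, train_IV, train_VII,
                       threshold=np.inf):
    """Pick the training position (l, m, r) minimizing the matching error
    of Eq. (9): squared difference of the level-2 subband-I coefficient
    plus squared difference of the summed 2x2 subband-IV block. Returns
    the corresponding 4x4 subband-VII block, or zeros when the best error
    exceeds `threshold`.

    phi_I_test:  scalar subband-I coefficient of the test image
    phi_IV_test: 2x2 subband-IV block of the test image
    train_I:   (Q, S, S)   subband-I coefficients of Q training images
    train_IV:  (Q, 2S, 2S) subband-IV coefficients
    train_VII: (Q, 4S, 4S) subband-VII detail coefficients
    """
    Q, S, _ = train_I.shape
    best, best_pos = np.inf, None
    s_test = phi_IV_test.sum()
    for r in range(Q):
        for l in range(S):
            for m in range(S):
                e = (phi_I_test - train_I[r, l, m]) ** 2
                e += (s_test - train_IV[r, 2*l:2*l+2, 2*m:2*m+2].sum()) ** 2
                if e < best:
                    best, best_pos = e, (r, l, m)
    if best > threshold:
        return np.zeros((4, 4))          # no reliable HR match: skip learning
    r, l, m = best_pos
    return train_VII[r, 4*l:4*l+4, 4*m:4*m+4]
```

Repeating this for every coarse location (and analogously for subbands VIII and IX) fills the missing detail subbands before the inverse DWT.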



V. FINAL SOLUTION USING REGULARIZATION

Further refinement of the initial estimate is performed using regularization, which obtains a better solution by using prior information about the solution. This requires a data fitting term and a regularization term. Hence, we first model the image formation in order to obtain the data fitting term.

A. Observation Model

Let Yβ, β = 1, 2, ..., κ, be the PCA-transformed LR HSI test image of size M × M, and let Zβ be the corresponding SR PCA image of size qM × qM. Assuming a linear model, the LR observation yβ can be expressed as

$$y_\beta = A_\beta z_\beta + n_\beta \qquad (10)$$

where yβ and zβ represent the lexicographically ordered vectors of size M² × 1 and q²M² × 1, respectively, and Aβ is the degradation matrix of size M² × q²M² that takes care of the degradation, including the aliasing caused by downsampling. In earlier research works on SR, a matrix with fixed entries having q² nonzero entries of value 1/q² in every row was used as the degradation model [34] for all bands of multispectral images and HSIs [21], [35]–[37]. This means that an LR pixel is the average of the light intensity that falls on the HR pixels, assuming that the entire area of a pixel acts as the light-sensing area and that the fill factor of the CCD array is unity for all spectral bands. In practice, this is not true, and the incorporation of an improved degradation model leads to a better solution.

In our approach, we use the estimated LP WFCs l to construct the degradation matrix Aβ. Instead of considering the LR pixel as the average of HR pixels, we represent it as a linear combination of HR pixels, i.e., zβ weighted appropriately using the estimated LP WFCs. In this case, for an integer factor q, the matrix Aβ consists of q⁴ nonzero elements along each row at appropriate locations, according to the 2-D convolution between the HR image and the estimated LP WFCs l [see LIL^T in (1)]. Thus, the LR intensity represents the weighted average of the HR intensities over a neighborhood of q⁴ pixels, corrupted with additive noise. Here, the noise nβ is an independent and identically distributed (i.i.d.) vector with zero mean and variance σ²_n and has the same size as yβ. It is important to note that we estimate the LP WFCs for each PCA image separately; hence, we use a degradation operator optimized for each band. In addition, the model has an overlap of HR pixels, horizontally and vertically. This relaxes the assumption that an LR pixel is strictly defined by the specific detector area only.
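Constructing Aβ from the estimated LP filter can be sketched as follows for q = 2, where each row carries the q⁴ = 16 separable 2-D filter weights. Periodic boundary handling is an assumption of this sketch; the paper does not specify the border treatment.

```python
import numpy as np

def degradation_matrix(l, M, q=2):
    """Build the M^2 x (qM)^2 degradation matrix A_beta for q = 2: each LR
    pixel is a weighted combination of a 4x4 HR neighborhood, using the
    separable 2-D weights l[3-a] * l[3-b] of the L I L^T operator in (1),
    with periodic extension at the image border."""
    assert q == 2, "this sketch covers the q = 2 case only"
    N = q * M
    A = np.zeros((M * M, N * N))
    for i in range(M):
        for j in range(M):
            row = i * M + j
            for a in range(4):
                for b in range(4):
                    r = (2 * i + a) % N
                    c = (2 * j + b) % N
                    A[row, r * N + c] += l[3 - a] * l[3 - b]
    return A
```

Unlike the fixed 1/q² averaging model, each row here holds q⁴ = 16 filter-dependent weights, and neighboring rows overlap horizontally and vertically, as described above.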

B. Sparsity as a Prior

SR is an ill-posed inverse problem: there are an infinite number of solutions to (10). Hence, the selection of an appropriate model as prior information and the use of regularization help to obtain a better solution. The use of sparsity as a prior for SR has been explored by many researchers [18], [20] using trained dictionaries of HR and LR patches. In our work, we do not require dictionary training, since we use a dictionary constructed from the initial estimate itself.

Since our objective is to preserve spatial correlation, we consider that a patch in the SR image can be represented as a sparse linear combination of the other patches, mostly nearby ones. By imposing the condition that the final solution should have the same sparseness as the ground truth (GT), we obtain an SR solution that preserves the spatial dependencies. The GT is not available, but we do have a close approximation to the SR image in the form of the initial estimate, and we use it to obtain the necessary sparse coefficients.

Suppose D_Hp ∈ R^(n×K) represents an overcomplete dictionary of K atoms (K ≫ n) formed by considering all the patches (except the measurement patch) in the initially estimated SR PCA image, represented as vectors. Let x_p ∈ R^K be the sparse approximation over this dictionary. The atoms represent the lexicographically ordered patches in the initial estimate. Then, a measurement vector z_p ∈ R^n, i.e., a patch of the initial estimate, can be represented as a linear combination of a few atoms from the dictionary D_Hp, i.e., a sparse linear combination of other patches in the image. Thus, z_p can be written as z_p = D_Hp x_p, where x_p has very few nonzero entries, i.e., x_p is sparse. Note that D_Hp has column vectors excluding the patch under consideration, i.e., z_p. Given z_p and D_Hp, x_p can be obtained by solving the l1 minimization using a standard optimization tool such as linear programming, by posing the problem as

be obtained by solving the l1 minimization using a standardoptimization tool such as linear programming, by posing theproblem as

minxp∈RK

‖xp‖l1 s.t. zp=DHpxp, where ‖xp‖l1 =

K∑i=1

|xpi|. (11)

Using the aforementioned formulation, we find the sparse coefficients for every patch of the initial estimate in terms of the other patches in the image, for the considered PCA band. In order to remove unwanted abrupt variations across the patches of the SR image, we consider patches overlapped horizontally and vertically by 2 pixel rows and columns, respectively, while forming the dictionary.
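As an illustration of how (11) can be solved by linear programming, the sketch below writes the basis-pursuit problem as an LP by splitting xp into its positive and negative parts. The dictionary D here is random for self-containment; in the paper, its columns would be the (overlapping) patches of the initial SR estimate.

```python
import numpy as np
from scipy.optimize import linprog

# Basis-pursuit form of (11): min ||x||_1  s.t.  z = D x, written as a linear
# program by splitting x = u - v with u, v >= 0.
rng = np.random.default_rng(1)
n, K = 16, 40                            # patch length n, dictionary size K >> n (toy values)
D = rng.standard_normal((n, K))
x_true = np.zeros(K)
x_true[[3, 17, 31]] = [1.2, -0.7, 0.5]   # a genuinely sparse combination
z = D @ x_true                           # "measurement" patch

c = np.ones(2 * K)                       # objective: sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([D, -D])                # D u - D v = z
res = linprog(c, A_eq=A_eq, b_eq=z, bounds=(0, None))
x = res.x[:K] - res.x[K:]
assert np.allclose(D @ x, z, atol=1e-6)          # equality constraint satisfied
assert np.count_nonzero(np.abs(x) > 1e-8) <= n   # basic LP solution is sparse
```

The split into u and v is the standard LP reformulation of the l1 objective; any LP solver returning a basic feasible solution yields at most n nonzero entries.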

C. Regularization With Sparsity Coefficients

Considering one of the PCA bands, regularization is carried out as follows. The sparse coefficients obtained for every patch from the initial SR image serve as weights for the final SR image. The regularization uses a patch-based approach, which imposes the constraint of sparsity on the final solution. Using a data-fitting term and a sparsity prior term, the cost function for a single patch of PCA band β can be written as

εβ = arg min_{zp} ‖yp − Ap zp‖₂² + λ‖yp − Ap DHp xp‖₂²,   (12)

where yp is the LR test patch, xp is the sparse coefficient vector, which is already estimated, zp is the SR patch to be estimated, and Ap is the degradation matrix taking care of aliasing. Here, DHp is the dictionary of SR atoms that has to be estimated, and λ represents the weightage given to the sparsity term, chosen empirically. Equation (12) is constructed for each patch, and the final cost consists of the sum of these. Here, Ap is similar to Aβ explained in Section V-A, except for a size difference according to the lengths of the LR and SR patches. It may be noted that since we are regularizing the PCA-transformed images, we expect better spectral consistency in the final solution. Inverse PCA gives us the SR image in the spatial domain.
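The summed per-patch cost can be minimized by gradient descent. The sketch below is a simplified toy version: patches are non-overlapping (the paper uses a 2-pixel overlap), the degradation Ap is a plain q × q block average standing in for the EWFC-derived operator, and the sparse coefficients are random stand-ins for those estimated via (11). The dictionary term DHp xp is expanded as a weighted sum of the other patches of the SR image being estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2                        # magnification factor (toy value)
S, P = 16, 8                 # SR image size and SR patch size
n_side = S // P
npatch = n_side ** 2

def degrade(patch):
    """Stand-in for A_p: average q x q blocks (the paper derives A_p from EWFCs)."""
    return patch.reshape(P // q, q, P // q, q).mean(axis=(1, 3))

def upsample(r):
    """Adjoint of `degrade`, applied to an LR residual."""
    return np.kron(r, np.ones((q, q))) / q ** 2

def patches(img):
    return [img[i:i + P, j:j + P] for i in range(0, S, P) for j in range(0, S, P)]

z_init = rng.random((S, S))                 # initial SR estimate (stand-in)
y = [degrade(p) for p in patches(z_init)]   # simulated LR test patches

# random sparse coefficient vectors standing in for the output of (11)
x = []
for p in range(npatch):
    xp = np.where(rng.random(npatch) < 0.3, rng.standard_normal(npatch), 0.0)
    xp[p] = 0.0              # a patch is never represented by itself
    x.append(xp)

lam, step = 0.23, 0.01       # weightage and step size used in the experiments

def cost_and_grad(z):
    pats = patches(z)
    c, g = 0.0, np.zeros_like(z)
    for p in range(npatch):
        i, j = divmod(p, n_side)
        r1 = y[p] - degrade(pats[p])                # data term ||y_p - A_p z_p||^2
        c += np.sum(r1 ** 2)
        g[i*P:(i+1)*P, j*P:(j+1)*P] += -2.0 * upsample(r1)
        # sparsity term ||y_p - A_p D_Hp x_p||^2 with D_Hp x_p = sum_k x_p[k] z_k
        comb = sum(x[p][k] * pats[k] for k in range(npatch))
        r2 = y[p] - degrade(comb)
        c += lam * np.sum(r2 ** 2)
        gc = -2.0 * lam * upsample(r2)
        for k in range(npatch):
            ik, jk = divmod(k, n_side)
            g[ik*P:(ik+1)*P, jk*P:(jk+1)*P] += x[p][k] * gc
    return c, g

z = z_init.copy()
c0, _ = cost_and_grad(z)
for _ in range(200):
    _, g = cost_and_grad(z)
    z -= step * g
c_final, _ = cost_and_grad(z)
assert c_final <= c0         # gradient descent reduces the summed cost
```

Since the cost is quadratic in z, a small fixed step suffices for monotone decrease; the overlap used in the paper would simply make the patch-extraction operators share pixels.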


Before we proceed, we briefly explain the number of comparisons required in learning the initial estimate in our approach, as it adds to the overall computational complexity. Consider a test image of size M × M decomposed into W levels. We need to learn detail coefficients for each of the coefficients in the coarse subband. For each of the coefficients in this subband, the best matching coefficients at the finer levels in the training database can be searched by comparing ∑_{u=0}^{W−1} 2^{2u} coefficients within each of the detail subbands. Considering the subband corresponding to H details, we search for the best matching wavelet coefficients in the entire H subband of all the HR training images. The size of the subband at the Wth level is M/2^W × M/2^W, and it consists of (M/2^W)² coefficients. These coefficients are compared with the corresponding H coefficients at all (M/2^W)² locations in each of the Q HR training images in the database. Thus, the number of comparisons required for learning the H, V, and D details is 3Q(M/2^W)^4 ∑_{u=0}^{W−1} 2^{2u}. Although this involves a significant number of comparisons, it does not cause a computational burden in our case due to the use of high-performance computers and because the process is noniterative.
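For concreteness, the count above can be evaluated directly; the helper below uses illustrative values of M, W, and Q (not taken from the paper):

```python
def num_comparisons(M, W, Q):
    """Comparisons to learn H, V, and D details: 3Q(M/2^W)^4 * sum_{u=0}^{W-1} 2^(2u)."""
    inner = sum(2 ** (2 * u) for u in range(W))
    return 3 * Q * (M // 2 ** W) ** 4 * inner

# e.g., a 64 x 64 test image, W = 2 decomposition levels, Q = 15 HR training images
print(num_comparisons(64, 2, 15))   # 3 * 15 * 16^4 * (1 + 4) = 14745600
```

The quartic dependence on M/2^W shows why searching at a coarser level (larger W) keeps the count manageable.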

VI. EXPERIMENTS AND RESULT ANALYSIS

We performed experiments on three different data sets: 1) single-band natural images; 2) natural HSIs; and 3) remotely sensed HSIs (AVIRIS). Due to the lack of availability of true LR–HR pairs of HSIs, as a simple sanity check, the proposed SR approach is first tested on single-band but spectrally wideband natural images. Data for this experiment include three sets of size 64 × 64, 128 × 128, and 256 × 256, captured by a computer-controlled camera. These data sets are used to test the effectiveness of our method in estimating the WFCs and learning the high-frequency details. The data sets used in the experiments on hyperspectral data constitute images with 31 and 224 spectral bands, respectively. For all our experiments, the step size for the gradient descent algorithm was chosen as 0.01. The weightage of the sparsity term was set as λ = 0.23 in the regularization (12) through a trial-and-error procedure for all experiments. We show visual as well as quantitative comparisons for the experiments on all three data sets. The quantitative measures used in our experiments are MSE [34], erreur relative globale adimensionnelle de synthèse (ERGAS) [38], spectral angle mapper (SAM) [39], and Q2n [40].
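Two of these measures can be sketched from their standard definitions (the exact variants used in [38] and [39] may differ slightly in normalization, so treat this as an assumption-laden sketch): SAM is the mean per-pixel angle between reference and estimated spectra, and ERGAS aggregates band-wise relative RMSE scaled by the resolution ratio 1/q.

```python
import numpy as np

def sam_degrees(ref, est):
    """Mean spectral angle (degrees) between per-pixel spectra; ref, est: (H, W, B)."""
    r = ref.reshape(-1, ref.shape[-1])
    e = est.reshape(-1, est.shape[-1])
    cos = np.sum(r * e, axis=1) / (np.linalg.norm(r, axis=1) * np.linalg.norm(e, axis=1))
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(ref, est, q):
    """ERGAS for magnification q (resolution ratio h/l = 1/q); lower is better."""
    rmse = np.sqrt(np.mean((ref - est) ** 2, axis=(0, 1)))   # per-band RMSE
    mu = np.mean(ref, axis=(0, 1))                           # per-band mean
    return (100.0 / q) * np.sqrt(np.mean((rmse / mu) ** 2))

ref = np.random.default_rng(2).random((8, 8, 5)) + 0.5       # toy (H, W, B) cube
assert sam_degrees(ref, ref) < 1e-3 and ergas(ref, ref, 4) == 0.0
```

Both measures are zero for a perfect reconstruction, which is why lower values in Tables III and IV indicate better spectral fidelity.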

A. Experiments on Single-Band Natural Images

Here, we used a database of 100 LR–HR real-world grayscale image pairs captured by varying the optical zoom setting of a simple low-cost computer-controlled camera. PCA is not used in this experiment, as all the images are single band only. An LR image of size 64 × 64 is used as the test image, and HR images of size 128 × 128 and 256 × 256 are used to obtain SR for q = 2 and q = 4, respectively. Note that the true HR of the test image is not used in the experiments but only for computing the quantitative measure and for visual comparison.

First of all, to demonstrate the effectiveness of the initial SR estimate in obtaining the final SR, we show the regularization results obtained by using two different images as initial SR

Fig. 3. SR results for q = 4 showing the importance of the initial estimate. (a) LR image of size 64 × 64. (b) GT of size 256 × 256. (c) Bicubic interpolated image. (d) Learned SR image using EWFCs. (e) Regularization result when using the bicubic interpolated image as the initial SR estimate. (f) Regularization result when using the learned SR image as the initial SR estimate (proposed approach).

TABLE I
IMPORTANCE OF INITIAL ESTIMATE ON SINGLE-BAND "GANAPATI" IMAGE FOR q = 4 IN TERMS OF MSE BETWEEN GT AND SR IMAGES

Fig. 4. Experimental results on single-band Car image for q = 4. (a) LR test image of size 64 × 64. (b) GT of size 256 × 256. (c) Bicubic interpolation (0.0079) [24]. (d) Initial SR image using the Db4 wavelet (0.0067) [25]. (e) SR image using the Yang et al. method (0.0064) [18]. (f) Initial SR estimate using EWFCs in the proposed approach (0.0058).

estimates. Fig. 3(a) and (b) shows the LR and GT images of Lord "Ganapati". When we use the bicubically interpolated image of Fig. 3(c) as the initial SR estimate to learn the sparsity coefficients and perform regularization, we obtain the image shown in Fig. 3(e). Similarly, using Fig. 3(d), which is closer to the GT image, as the initial SR estimate in regularization, we obtain the result shown in Fig. 3(f). Comparing the images in Fig. 3(e) and (f), it is clearly observed that the SR image of the proposed method has better details. We can see that the ψ shape on the forehead of "Ganapati" is clearly visible in Fig. 3(f) when compared with that shown in Fig. 3(e). This may be because of the better sparseness obtained using the proposed approach for the initial estimate. The benefit of using learning in SR is also evident from the MSE values listed in Table I.

Finally, we discuss the experimental results on SR for the natural Car image for q = 4 shown in Fig. 4. We selected 10 LR–HR image pairs having edge and texture details matching those of the Car image to estimate the WFCs and estimated c as


−0.01. Fig. 4(a) and (b) shows the LR test image and the GT image, respectively. The image displayed in Fig. 4(c) corresponds to expansion using bicubic interpolation. To show the effectiveness of the EWFCs when compared with the fixed Db4 wavelet, we show the initial SR obtained using the fixed wavelet basis Db4 of Jiji et al. [25] in Fig. 4(d) and that obtained using the EWFCs by the proposed method in Fig. 4(f). Fig. 4(e) shows the SR image obtained using the Yang et al. method [18]. It can be observed that fading appears in the spokes of the wheel in the image of Fig. 4(c), and blockiness is also visible in the spokes in Fig. 4(d) and (e). The initial SR estimate obtained using the EWFCs in the proposed approach closely matches the GT. The values in brackets in Fig. 4 show the MSE between GT and SR for quantitative comparison. They show that the MSE when using the EWFCs in the proposed method is significantly lower when compared with fixed-basis (Db4) learning and the other methods.

B. Experiments on Hyperspectral Data Sets

Experiments are conducted on two different hyperspectral data sets. The first set consists of 31-band reflectance images of natural scenes, having a spectral range of 400–700 nm, all acquired under direct sunlight in clear or almost clear sky [41]. The HSI cubes available in [42] and [43] are used in this experiment. A cropped region of "Scene 5" of HSIs of natural scenes 2002 [42] is used as test data. Our second HSI data set comprises 224-band AVIRIS HSI cubes available in [44]. In this case, a cropped region of an urban area in Moffett Field is used as test data. After discarding a few bands having a low signal-to-noise ratio (SNR) by visual inspection, 196 bands were used for SR. Care was taken to include bands having a spectral range in accordance with that of the test image while creating the training data.

Here, we do not have true LR–HR pairs of HSIs. Hence, the LR images were created from cropped images by using filtering and downsampling operations. The remaining regions of the original as well as other HSI cubes are used to generate the training data sets. We used five sets of generated LR–HR training pairs cropped from the same HSI cube, excluding the test image cube, for estimating the WFCs. Use of the same cube ensures the inclusion of a large number of materials and objects of interest. An additional 15 HR training HSIs were included while estimating the initial SR for both experiments. These images are different for the natural HSI and the AVIRIS HSI. Since GT images are not available, we consider the original cropped HSIs of size 256 × 256 as GT and generated LR HSIs of size 128 × 128 and 64 × 64 by applying the downsampling operation by a factor of q = 2 and q = 4, respectively. The SR algorithm was then applied on the LR HSIs.
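A minimal sketch of this LR simulation, assuming the Gaussian kernel with standard deviation 0.5 (one of the options tested in the text; the paper's 5 × 5 mask support is only approximated by SciPy's truncated Gaussian):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_lr(hr_band, q=4, sigma=0.5):
    """Simulate an LR band: Gaussian low-pass followed by decimation by the factor q."""
    return gaussian_filter(hr_band, sigma)[::q, ::q]

hr = np.random.default_rng(3).random((256, 256))   # stand-in for a 256 x 256 GT band
lr = make_lr(hr)                                   # 64 x 64 LR band, as in the experiments
assert lr.shape == (64, 64)
```

Applying the low-pass filter before decimation limits the maximum spatial frequency, which is exactly the aliasing-control role the text assigns to the 5 × 5 mask.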

In order to restrict the maximum spatial frequency in the image, we use an LP filtering operation with a filter mask of size 5 × 5 before downsampling. The LP filtering operation was performed and tested using three different kernels, namely, nearest neighbor (NN), a Gaussian filter with a standard deviation of 0.5, and horizontal motion blur (5 pixels) with 30° rotation, on the 31-band natural HSI. Table II shows the estimated values of c and the LR reconstruction error (i.e., the error between the true LR and the LR obtained using the filter coefficients derived from c) for PCA bands 1 and 2. We can see that as we change the filter mask, the

TABLE II
EFFECT OF DIFFERENT PSFs ON ESTIMATION OF WFCs FOR 31-BAND NATURAL HSI FOR q = 4 (HERE κ = 3)

Fig. 5. Experimental results of SR on PCA-1 of the 31-band natural HSI for q = 4. (a) LR test image of size 64 × 64. (b) GT of size 256 × 256. (c) Bicubic interpolation [24]. (d) Jiji et al. method [25]. (e) Zhao et al. method [21]. (f) Yang et al. method [18]. (g) Proposed approach.

EWFCs also change, thus making it clear that the degradation operation plays a significant role in SR. The values of MSE are significantly reduced when using the EWFCs. This shows that the use of EWFCs for SR purposes is better, since, in general, the imaging operation may be modeled in different ways.

We next consider the visual and quantitative assessment of SR using the HSIs. We used a Gaussian filter prior to the downsampling operation in order to generate the LR HSIs. After taking the PCA, we retained the three images corresponding to the PCs with the highest variance and applied the SR algorithm on them. Results of the proposed algorithm on the 31-band natural HSI are presented for PCA-1 in Fig. 5 for q = 4. Fig. 5(a) and (b) displays the LR test image and the GT image, respectively. The SR results obtained using different methods are shown in Fig. 5(c)–(g). The SR obtained using the proposed approach, shown in Fig. 5(g), has sharper borders of the white-colored table in the top left corner, while in Fig. 5(c)–(f), these borders appear blurred. One can see that the final SR image obtained using the proposed method, displayed in Fig. 5(g), compares well with the GT. The text written on the ball is clearer in Fig. 5(g). The use of EWFCs and regularization improves the results in our approach, as evident from quantitative measures such as ERGAS, SAM, and Q2n given in Table III. These measures show that the proposed approach better preserves spatial and spectral fidelity in the SR images compared with the other methods.
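The PCA step described above (project the cube onto its highest-variance components, super-resolve those component images, then invert the transform) can be sketched as follows; the SVD-based implementation and toy cube sizes are my own, not taken from the paper:

```python
import numpy as np

def pca_forward(cube, k=3):
    """Project an (H, W, B) cube onto its k highest-variance principal components."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:k]                          # (k, B) principal directions
    return (X - mu) @ V.T, V, mu        # scores: one "PCA image" per component

def pca_inverse(scores, V, mu, shape):
    """Map (possibly super-resolved) component images back to the spectral domain."""
    return (scores @ V + mu).reshape(shape)

cube = np.random.default_rng(4).random((16, 16, 31))   # toy 31-band cube
scores, V, mu = pca_forward(cube, k=3)                 # three retained PCA images
rec = pca_inverse(scores, V, mu, cube.shape)
assert rec.shape == cube.shape
```

Working on only three component images instead of all bands is what gives the proposed approach its dimensionality-reduction advantage noted in the timing comparison.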

In Fig. 6, the SR results on remotely sensed data acquired using the AVIRIS hyperspectral imager are shown for a specific band, band 100. Results are listed for the first three PCA bands, which include 99.3% of the spectral variability of the HSI. Fig. 6(a) shows the LR test image, and the original HSI band is displayed


TABLE III
QUANTITATIVE MEASURES FOR SR OF 31-BAND NATURAL HSI FOR q = 4

Fig. 6. Experimental results of SR on the 100th band of AVIRIS data for q = 4. (a) LR test image of size 64 × 64. (b) GT of size 256 × 256. (c) Bicubic interpolation [24]. (d) Jiji et al. method [25]. (e) Zhao et al. method [21]. (f) Yang et al. method [18]. (g) Proposed approach.

TABLE IV
QUANTITATIVE EVALUATION METRICS OF AVIRIS SR FOR q = 4

in Fig. 6(b). From Fig. 6(c), we can see that bicubic interpolation blurs the image when upsampling and that the high-frequency spatial details are lost. Thus, interpolation fails to preserve the high-frequency details and hence is not suitable for solving the SR problem. The result of Zhao et al. [21] shown in Fig. 6(e) is less blurred compared with the bicubic interpolated image, but it shows artifacts and a loss of high-frequency details. The Jiji et al. [25] approach results in improved visual quality over interpolation and the Zhao et al. [21] method, but the overall contrast of the image is not preserved, as shown in Fig. 6(d). The sparsity-based SR result of the Yang et al. [18] method shown in Fig. 6(f) fails to preserve high-frequency details, as evident from the vertical lines in the middle region of the image. As shown in Fig. 6(g), the use of EWFCs and sparsity regularization results in reduced artifacts and preserves high-frequency details. It gives better visual quality, closely resembling the GT. We can see that the white patches visible in the LR observation appear grayish in the Jiji et al. method [see Fig. 6(d)], but the result is improved in the proposed approach.

As far as the quantitative comparison is concerned, it is clear from Table IV that the proposed method provides scores that are closer to the reference values when compared with the other methods. The lower value of ERGAS in the proposed method indicates less global distortion in the SR HSI. We see that, when compared

TABLE V
COMPUTATIONAL TIME IN SECONDS OF DIFFERENT ALGORITHMS FOR LR IMAGE SIZE 64 × 64 AND q = 4

with the other approaches, the remaining measures such as SAM and Q2n are better for the proposed method. The values of Q2n in Table IV indicate minimum spatial and spectral distortions by the proposed approach. Lower values of SAM indicate that the proposed method provides better spectral fidelity.

Finally, we discuss the timing complexity for SR of HSIs by a factor of 4. All algorithms were implemented using MATLAB 7.0 on an Intel Core i3 CPU M380 with an operating frequency of 2.53 GHz. A comparison of the running times of all the methods for different input images is given in Table V for each experiment. The time complexity to estimate the SR image in the Jiji et al. [25] approach is several hours because of the nondifferentiable cost function used in regularization. The methods proposed by Zhao et al. [21] and Yang et al. [18] are quite time consuming, since dictionary learning is employed in both approaches. In addition, the method of Zhao et al. [21] super-resolves each band separately without using any dimensionality reduction. The proposed approach, however, works on a reduced dimension and does not require dictionary training, and hence reduces the time complexity.

VII. CONCLUSION

We have presented an SR approach for HSIs based on the design of an adaptive wavelet basis. Quantitative comparison of score indexes shows that our method enhances spatial information without introducing significant spectral distortion.

An adaptive wavelet basis will have a positive impact on subsequent HSI processing applications, where high spatial and spectral resolution is desirable. We conclude that the wavelet basis can be tailored to take care of the variability in sensor characteristics. Our future work involves the incorporation of spectral mixing models in order to improve the estimation of the WFCs and hence the SR.

REFERENCES

[1] D. Landgrebe, “Hyperspectral image data analysis,” IEEE Signal Process. Mag., vol. 19, no. 1, pp. 17–28, 2002.

[2] G. A. Shaw and H.-h. K. Burke, “Spectral imaging for remote sensing,” Lincoln Lab. J., vol. 14, no. 1, pp. 3–28, Nov. 2003.

[3] T. Akgun, Y. Altunbasak, and R. M. Mersereau, “Superresolution reconstruction of hyperspectral images,” IEEE Trans. Image Process., vol. 14, no. 11, pp. 1860–1875, Nov. 2005.

[4] J. C. Chan, J. Ma, P. Kempeneers, and F. Canters, “Super-resolution enhancement of hyperspectral CHRIS/Proba images with a thin-plate spline nonrigid transform model,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 6, pp. 2569–2579, Jun. 2010.

[5] H. Zhang, L. Zhang, and H. Shen, “A super-resolution reconstruction algorithm for hyperspectral images,” Signal Process., vol. 92, no. 9, pp. 2082–2096, Sep. 2012.

[6] G. M. Foody, “Sharpening fuzzy classification output to refine the representation of sub-pixel land cover distribution,” Int. J. Remote Sens., vol. 19, no. 13, pp. 2593–2599, 1998.

[7] M. Q. Nguyen, P. M. Atkinson, and H. G. Lewis, “Superresolution mapping using a Hopfield neural network with fused images,” IEEE Trans. Geosci. Remote Sens., vol. 44, no. 3, pp. 736–749, Mar. 2006.

[8] Y. Gu, Y. Zhang, and J. Zhang, IEEE Trans. Geosci. Remote Sens., vol. 46, no. 5, pp. 1347–1358, May 2008.

[9] F. A. Mianji, Y. Zhang, and Y. Gu, “Resolution enhancement of hyperspectral images using a learning-based super-resolution mapping technique,” in Proc. IGARSS, 2009, pp. III-813–III-816.

[10] A. Villa, J. Chanussot, J. A. Benediktsson, and C. Jutten, “Spectral unmixing for the classification of hyperspectral images at a finer spatial resolution,” IEEE J. Sel. Topics Signal Process., vol. 5, no. 3, pp. 521–533, Jun. 2011.

[11] R. B. Gomez, A. Jazaeri, and M. Kafatos, “Wavelet-based hyperspectral and multispectral image fusion,” in Proc. SPIE, 2001, pp. 36–42.

[12] J. Nunez et al., “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Trans. Geosci. Remote Sens., vol. 37, no. 3, pp. 1204–1211, May 1999.

[13] Y. Zhang, S. De Backer, and P. Scheunders, “Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 11, pp. 3834–3843, Nov. 2009.

[14] F. Li, X. Jia, D. Fraser, and A. J. Lambert, “Super resolution for remote sensing images based on a universal hidden Markov tree model,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 3, pp. 1270–1278, Mar. 2010.

[15] W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” IEEE Comput. Graph. Appl., vol. 22, no. 2, pp. 56–65, Mar./Apr. 2002.

[16] C. V. Jiji and S. Chaudhuri, “Single-frame image super-resolution through contourlet learning,” EURASIP J. Appl. Signal Process., vol. 2006, pp. 1–11, 2006.

[17] D. P. Capel and A. Zisserman, “Super-resolution from multiple views using learnt image models,” in Proc. IEEE Comput. Soc. Conf. CVPR, 2001, vol. 2, pp. II-627–II-634.

[18] J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Trans. Image Process., vol. 19, no. 11, pp. 2861–2873, Nov. 2010.

[19] G. Chen, S.-E. Qian, J.-P. Ardouin, and W. Xie, “Super-resolution of hyperspectral imagery using complex ridgelet transform,” IJWMIP, vol. 10, no. 3, pp. 1–22, May 2012.

[20] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process., vol. 15, no. 12, pp. 3736–3745, Dec. 2006.

[21] Y. Zhao et al., “Hyperspectral imagery super-resolution by sparse representation and spectral regularization,” EURASIP J. Adv. Signal Process., vol. 2011, p. 87, Oct. 2011.

[22] A. P. Cracknell, “Synergy in remote sensing—What’s in a pixel?” Int. J. Remote Sens., vol. 19, no. 11, pp. 2025–2047, Jul. 1998.

[23] L. Kleiman, “Hyperspectral imaging: The colorimetric high ground,” Proc. SPIE, Color Imag. VIII: Process., Hardcopy, Appl., vol. 5008, pp. 249–259, Jan. 2003.

[24] R. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-29, no. 6, pp. 1153–1160, Dec. 1981.

[25] C. V. Jiji, M. V. Joshi, and S. Chaudhuri, “Single-frame image super-resolution using learned wavelet coefficients,” Int. J. Imag. Syst. Technol., vol. 14, no. 3, pp. 105–112, 2004.

[26] B. Chalmond, “PSF estimation for image deblurring,” CVGIP: Graphical Models Image Process., vol. 53, no. 4, pp. 364–372, Jul. 1991.

[27] I. T. Jolliffe, Principal Component Analysis, 2nd ed. New York, NY, USA: Springer-Verlag, 1986.

[28] R. C. Patel and M. V. Joshi, “Super-resolution of hyperspectral images using compressive sensing based approach,” ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci., vol. I-7, pp. 83–88, Jul. 2012.

[29] P. P. Gajjar and M. V. Joshi, “New learning based superresolution: Use of DWT and IGMRF prior,” IEEE Trans. Image Process., vol. 19, no. 5, pp. 1201–1213, May 2010.

[30] I. Daubechies, “Orthonormal bases of compactly supported wavelets II: Variations on a theme,” SIAM J. Math. Anal., vol. 24, no. 2, pp. 499–519, Mar. 1993.

[31] P. J. Van Fleet, Discrete Wavelet Transformations: An Elementary Approach With Applications. Hoboken, NJ, USA: Wiley, 2008.

[32] G. Strang, Linear Algebra and Its Applications. Pacific Grove, CA, USA: Brooks/Cole.

[33] J. M. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3445–3462, Dec. 1993.

[34] R. R. Schultz and R. L. Stevenson, “A Bayesian approach to image expansion for improved definition,” IEEE Trans. Image Process., vol. 3, no. 3, pp. 233–242, May 1994.

[35] M. T. Eismann and R. C. Hardie, “Application of the stochastic mixing model to hyperspectral resolution enhancement,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 9, pp. 1924–1933, Sep. 2004.

[36] M. V. Joshi, L. Bruzzone, and S. Chaudhuri, “A model-based approach to multiresolution fusion in remotely sensed images,” IEEE Trans. Geosci. Remote Sens., vol. 44, no. 9, pp. 2549–2562, Sep. 2006.

[37] H. Shen, M. K. Ng, P. Li, and L. Zhang, “Super-resolution reconstruction algorithm to MODIS remote sensing images,” Comput. J., vol. 52, no. 1, pp. 90–100, 2009.

[38] L. Wald, Data Fusion: Definitions and Architectures—Fusion of Images of Different Spatial Resolutions. Paris, France: Les Presses, École des Mines de Paris, 2002.

[39] F. A. Kruse et al., “The Spectral Image Processing System (SIPS)—Interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ., vol. 44, no. 2/3, pp. 145–163, May/Jun. 1993.

[40] A. Garzelli, F. Nencini, L. Alparone, and S. Baronti, “A new method for quality assessment of hyperspectral images,” in Proc. IGARSS, 2007, pp. 5138–5141.

[41] S. M. C. Nascimento, F. P. Ferreira, and D. H. Foster, “Statistics of spatial cone-excitation ratios in natural scenes,” J. Opt. Soc. Amer. A, vol. 19, no. 8, pp. 1484–1490, Aug. 2002.

[42] [Online]. Available: http://personalpages.manchester.ac.uk/staff/david.foster/

[43] D. H. Brainard, Hyperspectral Image Data. [Online]. Available: http://color.psych.upenn.edu/hyperspectral/

[44] AVIRIS Free Data, Jet Propulsion Lab., California Inst. Technol., Pasadena, CA, USA. [Online]. Available: http://aviris.jpl.nasa.gov/html/aviris.freedata.html

Rakesh C. Patel received the B.E. degree in instrumentation and control from Lalbhai Dalpatbhai College of Engineering (LDCE), Ahmedabad, India, and the M.E. degree in microprocessor systems and applications from Maharaja Sayajirao University, Baroda, India. He is currently working toward the Ph.D. degree at the Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India.

His areas of interest include signal processing, multispectral and hyperspectral image analysis, compressive sensing, optimization, control, microcontrollers, field-programmable gate arrays, and embedded system design. He served as an Assistant Professor for over 11 years at different technical universities across Gujarat, India. For the past two years, he has been working as an Associate Professor with LDCE.

Mr. Patel is a Life Member of the Institute of Electronics and Telecommunication Engineers (IETE).

Manjunath V. Joshi received the B.E. degree from the University of Mysore, Mysore, India, and the M.Tech. and Ph.D. degrees from the Indian Institute of Technology Bombay (IIT Bombay), Mumbai, India.

Currently, he is serving as a Professor with the Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India. He has been involved in active research in the areas of signal processing, image processing, and computer vision. He has coauthored a book entitled Motion-Free Super Resolution (Springer, New York).

Dr. Joshi was a recipient of the Outstanding Researcher Award in the Engineering Section by the Research Scholars Forum of IIT Bombay. He was also a recipient of the Best Ph.D. Thesis Award by Infineon India and the Dr. Vikram Sarabhai Award for the year 2006–07 in the field of information technology constituted by the Government of Gujarat, India.