CHAPTER 1
INTRODUCTION

Imaging is an extensive field that is evolving at a rapid rate. It provides various forms of framework for image acquisition, modification, generalization, visualization, reconstruction, and many others. It offers a wide range of reconstruction methods with differing requirements and goals, which helps in achieving the various objectives of any proposed work. Imaging is a vast and immensely vital field which
covers all aspects of the analysis, modification, compression, visualization, and generation of
images. It is a highly interdisciplinary field in which researchers from biology, medicine,
engineering, computer science, physics, and mathematics, among others, work together to
provide the best possible image. Imaging science is profoundly mathematical and challenging
from the modeling and the scientific computing point of view. There are at least two major
areas in imaging science in which applied mathematics has a strong impact: image
processing, and image reconstruction. In image processing the input is a (digital) image such
as a photograph or video frame, while in image reconstruction the input is a set of data from
which the desired image can be recovered. In the latter case, the data is limited, and its poor
information content is not enough to generate an image to start with. Image reconstruction
refers to the techniques used to create an image of the interior of a body (or region) non-
invasively, from data collected on its boundary. Image reconstruction can be seen as the
solution of a mathematical inverse problem in which the cause is inferred from the effect.
Image reconstruction can be achieved by wide range of techniques in an image processing
framework. The Traditional techniques involves transformation, iterative methods,
tomography, total variation and greedy pursuits while recovery of high dimensional sparse
signal based on a small number of linear measurements is successfully possible through
compressive sensing [1]. It provides means on how to reconstruct the signal from under-
sampled data. Image reconstruction has a history dating back to 1917, but the field still offers challenging opportunities for researchers, either to improve existing techniques or to propose new ones. The major reconstruction methods build on Radon's work of 1917, the classic paper on image reconstruction from projections. In 1972, Hounsfield developed the first commercial X-ray computed tomography scanner. The very
fundamental, classical reconstruction method is based on the Radon transform, which acquaints the researcher with the method known as back projection. The other alternative approaches that were further proposed involve the Fourier transform and iterative series-expansion methods. It further takes into account statistical estimation methods, wavelet
and other multiresolution methods. These days, signal recovery from under-sampled data or inaccurate measurements is an active trend. This goal of signal recovery can be accomplished by combining modified recovery techniques with compressive sensing.
As our modern technology-driven civilization acquires and exploits ever-increasing amounts
of data, “everyone” now knows that most of the data we acquire “can be thrown away” with
almost no perceptual loss—witness the broad success of lossy compression formats for
sounds, images, and specialized technical data [2]. The phenomenon of ubiquitous
compressibility raises very natural questions: why go to so much effort to acquire all the data
when most of what we get will be thrown away? Can we not just directly measure the part
that will not end up being thrown away? Owing to this paradigm of acquiring sparse signals at a rate significantly below the Nyquist rate, compressive sensing has attracted much attention in recent years. The field of CS has existed for around four decades. It was first used in seismology in 1970, when Claerbout and Muir gave an attractive alternative to least-squares solutions [3]. In the 1990s, Rudin, Osher, and Fatemi used total variation minimization in image processing, which is very close to l1 minimization. The idea of CS got a new life in
2004 when David Donoho, Candes, Justin Romberg, and Terence Tao gave important results
regarding the mathematical foundation of CS. A series of papers has come out in the last six years, and the field is witnessing significant advancement almost on a daily basis. The
compressive sensing theorem states that a sparse signal can be perfectly reconstructed even
though it is sampled at a rate lower than the Nyquist rate [1]. It has gained an increasing
interest due to its promising results in various applications. The goal of Compressive sensing
is to recover the sparse vector using a small number of linearly transformed measurements.
The process of acquiring compressed measurements is referred to as sensing while that of
recovering the original sparse signals from compressed measurements is called reconstruction
[4]. The reconstruction problem basically requires answering two distinct questions: how many measurements are necessary and, given these measurements, what algorithms can be used. For reconstruction, there are two popular algorithms for compressive
sensing and these are basis pursuit (BP) and Matching Pursuit (MP). A number of variants of
these techniques have been proposed. In this report Orthogonal matching pursuit method is
used for recovering the signal from inaccurate measurements. The class of greedy algorithms
solves the reconstruction problem by finding the answer, step by step, in an iterative fashion.
The idea is to select columns of the measurement matrix in a greedy fashion. At each iteration, the column that correlates most with the residual of y (the measurement vector) is selected; equivalently, the least-squares error is minimized at each iteration. That column's contribution is subtracted from y, and iterations continue on the residual until the correct set of columns is identified. This is usually achieved in K iterations, where K is the sparsity level. The stopping criterion varies from algorithm to algorithm. Most
used greedy algorithms are matching pursuit [5] and its derivative orthogonal matching
pursuits (OMP) [6] because of their low implementation cost and high speed of recovery. The
other methods can be Regularized OMP, subspace OMP, iterative thresholding algorithms,
with each having particular advantage and use. The report starts with presenting a brief
historical background of image reconstruction, CS during last four decades and the matching
pursuit techniques. It is followed by a comparison of the modified technique with
conventional sampling technique. A succinct mathematical and theoretical foundation
necessary for grasping the idea behind CS which is used with OMP is given. It then talks
about the modified technique and its simulation results. In the end, open research areas are
identified, results are justified and the report is concluded.
1.1 OBJECTIVES
The following objectives are considered for the thesis work in order to design a framework
for image reconstruction using compressed sensing and Orthogonal Matching Pursuit. The
objectives are as follows:
Implementation of the reconstruction algorithm by varying the criteria, such as energy and least-squares error, for identifying and selecting the significant components. Information about contrast and higher pixel ratio can be used as criteria.
Reconstruction of images with OMP in the presence of noise as a distorting attribute.
Recovery of the image from inaccurate and undersampled measurements via OMP with an explicit stopping rule and exact recovery condition.
Recovery of the image from inaccurate and undersampled data via ROMP with an explicit stopping rule, i.e., selecting a group of indices and then cutting the set down on the basis of maximal energy.
Generalization of the implemented technique, i.e., improving the criteria for identifying multiple significant indices for better correlation on the basis of thresholding the residual value.
Analysis of parameters like sampling ratio (M/N), PSNR, Running time and
percentage of recovery for the evaluation of the implemented technique.
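The evaluation parameters named in the last objective are straightforward to compute. A minimal sketch of the PSNR and sampling-ratio calculations follows; the function names are illustrative, not taken from the thesis:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between an image and its reconstruction."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0.0:
        return float("inf")          # perfect reconstruction
    return 10.0 * np.log10(peak ** 2 / mse)

def sampling_ratio(M, N):
    """Fraction M/N of measurements taken relative to the signal length."""
    return M / N
```

For 8-bit images the peak value is 255; a higher PSNR indicates a reconstruction closer to the original.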
CHAPTER 2
IMAGE RECONSTRUCTION USING COMPRESSIVE SENSING

Conventional approaches to sampling signals or images follow Shannon's celebrated
theorem: the sampling rate must be at least twice the maximum frequency present in the
signal (the so-called Nyquist rate). In fact, this principle underlies nearly all signal acquisition
protocols used in consumer audio and visual electronics, medical imaging devices, radio
receivers, and so on. For some signals, such as images that are not naturally bandlimited, the
sampling rate is dictated not by the Shannon theorem but by the desired temporal or spatial
resolution. However, it is common in such systems to use an antialiasing low-pass filter to
band limit the signal before sampling, and so the Shannon theorem plays an implicit role. In
the field of data conversion, for example, standard analog-to-digital converter (ADC)
technology implements the usual quantized Shannon representation: the signal is uniformly
sampled at or above the Nyquist rate [7]. This report surveys the theory of compressive
sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that
goes against the common wisdom in data acquisition. CS theory asserts that one can recover
certain signals and images from far fewer samples or measurements than traditional methods
use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals
of interest, and incoherence, which pertains to the sensing modality.
2.1 COMPRESSIVE SENSING PARADIGM
Compressive sensing (CS) has witnessed increased interest recently, owing to the high demand for fast, efficient, and inexpensive signal processing algorithms, applications, and devices. Contrary to the traditional Nyquist paradigm, the CS paradigm, banking on finding sparse solutions to underdetermined linear systems, can reconstruct signals from far fewer samples than the Nyquist sampling rate requires. The problem of a limited number of
samples can occur in multiple scenarios, e.g., when we have limitations on the number of
data capturing devices, measurements are very expensive or slow to capture such as in
radiology and imaging techniques via neutron scattering. In such situations, CS provides a
promising solution. CS exploits sparsity of signals in some transform domain and the
incoherency of these measurements with the original domain. In essence, CS combines the
sampling and compression into one step by measuring minimum samples that contain
maximum information about the signal: This eliminates the need to acquire and store large
number of samples only to drop most of them because of their minimal value. CS has seen
major applications in diverse fields, ranging from image processing to gathering geophysics
data. Most of this has been possible because of the inherent sparsity of many real world
signals like sound, image, video, etc [8]. These applications of CS are the main focus of this
report, with added attention given to the reconstruction of the image in the imaging domain.
The novel technique of CS is applied with OMP to achieve image recovery from random and inaccurate data with improved evaluation parameters. A brief comparison of these techniques with others is also provided.
Figure 2.1 Traditional data sampling and compression versus CS.
2.1.1 Sparsity
Sparsity expresses the idea that the “information rate” of a continuous time signal may be
much smaller than suggested by its bandwidth, or that a discrete-time signal depends on a
number of degrees of freedom which is comparably much smaller than its (finite) length.
More precisely, CS exploits the fact that many natural signals are sparse or compressible in
the sense that they have concise representations when expressed in the proper basis ψ. Mathematically speaking, we have a vector f ∈ R^n (such as the n-pixel image in Figure 2.2) which we expand in an orthonormal basis (such as a wavelet basis) ψ = [ψ_1 ψ_2 . . . ψ_n] as follows:

f(t) = ∑_{i=1}^{n} x_i ψ_i(t)   (2.1)
where x is the coefficient sequence of f, x_i = ⟨f, ψ_i⟩. It will be convenient to express f as ψx (where ψ is the n × n matrix with ψ_1, . . . , ψ_n as columns). The implication of sparsity is now
clear: when a signal has a sparse expansion, one can discard the small coefficients without
much perceptual loss. Natural signals such as sound, images, or seismic data can be stored in compressed form, in terms of their projections on a suitable basis [9]. When the basis is chosen properly, a large number of projection coefficients are zero or small enough to be ignored. If a signal has only s non-zero coefficients, it is said to be s-sparse. If a large number of projection coefficients are small enough to be ignored, then the signal is said to be compressible.
Well-known compressive-type bases include two-dimensional (2D) wavelets for images, localized sinusoids for music, fractal-type waveforms for spiky reflectivity data, and curvelets for wave-field propagation.
Figure 2.2 Original image and its image in Wavelet transform domain
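The expansion (2.1) and the s-sparse idea can be demonstrated numerically in a few lines. In the sketch below an orthonormal DCT basis stands in for the wavelet basis of Figure 2.2 (the DCT-II construction is standard; its use here is purely illustrative):

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II matrix; column j is the basis vector psi_j."""
    k = np.arange(n)[:, None]                   # sample index
    j = np.arange(n)[None, :]                   # frequency index
    psi = np.cos(np.pi * (2 * k + 1) * j / (2 * n))
    psi[:, 0] *= np.sqrt(1.0 / n)
    psi[:, 1:] *= np.sqrt(2.0 / n)
    return psi

n, s = 256, 8
psi = dct_basis(n)
rng = np.random.default_rng(0)

# Build a signal that is exactly s-sparse in the psi domain: f = psi x0.
x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.normal(size=s)
f = psi @ x0

# Analysis: x_i = <f, psi_i>; keep the s largest coefficients, discard the rest.
x = psi.T @ f
keep = np.argsort(np.abs(x))[-s:]
x_s = np.zeros(n)
x_s[keep] = x[keep]
f_s = psi @ x_s                                 # best s-term approximation
```

Because f is exactly s-sparse in ψ, discarding all but the s largest coefficients reproduces it with negligible error; for a merely compressible signal the same truncation would leave a small residual instead.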
2.1.2 Incoherence
Incoherence extends the duality between time and frequency and expresses the idea that
objects having a sparse representation in ψ must be spread out in the domain in which they
are acquired, just as a Dirac or a spike in the time domain is spread out in the frequency
domain. Put differently, incoherence says that unlike the signal of interest, the
sampling/sensing waveforms have an extremely dense representation in ψ [8]. Coherence
measures the maximum correlation between any two elements of two different matrices.
These two matrices might represent two different basis/representation domains. If ψ is an N × N matrix with ψ_1, . . . , ψ_N as columns and ϕ is an M × N matrix with ϕ_1, . . . , ϕ_M as rows, then the coherence μ is defined as

μ(ϕ, ψ) = √N · max |⟨ϕ_k, ψ_j⟩|   (2.2)

for 1 ≤ j ≤ N and 1 ≤ k ≤ M. It follows from linear algebra that 1 ≤ μ(ϕ, ψ) ≤ √N. In CS, we are concerned with the incoherence of the matrix used to sample/sense the signal of interest
(hereafter referred as measurement matrix ϕ) and the matrix representing a basis, in which
signal of interest is sparse (hereafter referred as representation matrix ψ). Within the CS
framework, low coherence between ϕ and ψ translates to fewer samples required for
reconstruction of signal. An example of low coherence measurement/ representation basis
pair is sinusoids and spikes that are incoherent in any dimension, and can be used for
compressively sensing signals having sparse representation in terms of sinusoids [9].
2.1.3 Requirement and interest
CS operates very differently, and performs as "if it were possible to directly acquire just the important information about the object of interest." By taking about O(S log(n/S)) random projections as in "Random Sensing," one has enough information to reconstruct the signal with accuracy at least as good as that provided by f_S, the best S-term approximation, i.e., the best compressed representation of the object. In other words, CS measurement protocols essentially translate analog data into an already compressed digital form so that one can, at least in principle, obtain super-resolved signals from just a few sensors. All that is needed after the acquisition step is to "decompress" the measured data. There is a diverse range of applications of CS. The fact that a compressible signal can be captured efficiently using a number of incoherent measurements that is proportional to its information level S ≪ n has implications that are far reaching and concern a number of possible applications:
Data compression: In some situations, the sparse basis ψ may be unknown at the
encoder or impractical to implement for data compression. As in “Random Sensing,”
however, a randomly designed ϕ can be considered a universal encoding strategy, as it
need not be designed with regards to the structure of ψ. The knowledge and ability to
implement ψ are required only for the decoding or recovery of f. [11].
Channel coding: CS principles (sparsity, randomness, and convex optimization) can be turned around and applied to design fast error-correcting codes over the reals to protect from errors during transmission.
Inverse problems: In still other situations, the only way to acquire f may be to use a
measurement system ϕ of a certain modality. However, assuming a sparse basis ψ
exists for f that is also incoherent with ϕ, then efficient sensing will be possible. One
such application involves MR angiography and other types of MR setups [12].
Data acquisition: In some important situations the full collection of n discrete-time
samples of an analog signal may be difficult to obtain (and possibly difficult to
subsequently compress). Here, it could be helpful to design physical sampling devices
that directly record discrete, low-rate incoherent measurements of the incident analog
signal.
CS in Cameras: CS has far reaching implications on compressive imaging systems
and cameras. It reduces the number of measurements hence, power consumption,
computational complexity and storage space without sacrificing the spatial resolution.
CS allows reconstruction of sparse N × N images from fewer than N² measurements. CS solves the problem by making use of the fact that in vision applications, natural images can be sparsely represented in wavelet domains [13]. The scene is measured through random projections onto an incoherent set of test functions and reconstructed by solving a convex optimization problem or by the OMP algorithm. CS measurements also decrease packet drop over communication channels. Recent works have proposed the design of terahertz imaging systems.
Medical Imaging: CS is being actively pursued for medical imaging, particularly in
magnetic resonance imaging (MRI). MR images, like angiograms, have sparsity
properties, in domains such as Fourier or wavelet basis. Generally, MRI is a costly
and time consuming process because of its data collection process which is dependent
upon physical and physiological constraints. However, the introduction of CS based
techniques has improved the image quality through reduction in the number of
collected measurements and by taking advantage of their implicit sparsity.
Biological Applications: CS can also be used for efficient and inexpensive sensing in
biological applications. Recent works show usage of CS in comparative
deoxyribonucleic acid (DNA) microarray [14]. Traditional microarray bio-sensors are
useful for detection of limited number of micro organisms. To detect greater number
of species large expensive microarrays are required. However, natural phenomena are
sparse in nature and easily compressible in some basis. DNA microarrays consist of
millions of probe spots to test a large number of targets in a single experiment. CS
gives an alternative design of compressed microarrays in which each spot contains
copies of different probe sets reducing the overall number of measurements and still
efficiently reconstructing from them.
Sparse Channel Estimation: CS has been used in communications domain for sparse
channel estimation. Adoption of multiple-antenna in communication system design
and operation at large bandwidths, possibly in gigahertz, enables sparse representation
of channels in appropriate bases. Conventional technique of training based estimation
using least-square (LS) methods may not be an optimal choice. Various recent studies
have employed CS for sparse channel estimation. Compressed channel estimation
(CCS) gives much better reconstruction using its non-linear reconstruction algorithm
as opposed to linear reconstruction of LS-based estimators. In addition to non-
linearity, CCS framework also provides scaling analysis. The use of high time
resolution over-complete dictionaries further enhances channel estimation. BP and
OMP are used to estimate multipath channels with Doppler spread ranging from mild,
like on a normal day, to severe, like on stormy days.
2.2 RECONSTRUCTION MODEL
A nonlinear algorithm is used in CS at the receiver end to reconstruct the original signal. This nonlinear reconstruction algorithm requires knowledge of a representation basis (original or transform domain) in which the signal is sparse (exact recovery) or compressible (approximate recovery). The basic CS pipeline is represented in Figure 2.3.
Figure 2.3 Compressive Sensing Flow steps
2.2.1 Sparse Image Reconstruction
The problem statement for the reconstruction process using CS involves the proper definition of the given problem, i.e., the given data or signal, its compressible form, the desired transform domain, its inefficiencies and benefits, and the acquisition of the image and its sparse form. The various segments of the problem statement are as follows.
Compressible signals: Consider a real-valued, finite-length, one-dimensional, discrete-time
signal x, which can be viewed as an N × 1 column vector in RN with elements x[n], n = 1,
2, . . . , N. (We treat an image or higher-dimensional data by vectorizing it into a long one-dimensional vector.) Any signal in R^N can be represented in terms of a basis of N × 1 vectors {ψ_i}, i = 1, . . . , N. For simplicity, assume that the basis is orthonormal. Using the N × N basis matrix ψ = [ψ_1 | ψ_2 | . . . | ψ_N] with the vectors {ψ_i} as columns, a signal x can be expressed as
x = ∑_{i=1}^{N} s_i ψ_i   or   x = ψs   (2.3)

where s is the N × 1 column vector of weighting coefficients, s_i = ⟨x, ψ_i⟩ = ψ_i^T x, and (·)^T denotes
transposition. Clearly, x and s are equivalent representations of the signal, with x in the time
or space domain and s in the ψ domain. The signal x is K-sparse if it is a linear combination of only K basis vectors; that is, only K of the s_i coefficients in (2.3) are nonzero and (N − K) are zero. The case of interest is when K ≪ N. The signal x is compressible if the representation (2.3) has just a few large coefficients and many small coefficients [15].
Transform domain: The fact that compressible signals are well approximated by K-sparse
representations forms the foundation of transform coding. In data acquisition systems (for
example, digital cameras) transform coding plays a central role: the full N-sample signal x is
acquired; the complete set of transform coefficients {s_i} is computed via s = ψ^T x; the K
largest coefficients are located and the (N − K) smallest coefficients are discarded; and the K
values and locations of the largest coefficients are encoded. Unfortunately, this sample-then-
compress framework suffers from three inherent inefficiencies. First, the initial number of
samples N may be large even if the desired K is small. Second, the set of all N transform
coefficients {si} must be computed even though all but K of them will be discarded. Third,
the locations of the large coefficients must be encoded, thus introducing an overhead.
Compressive Sensing Problem: Compressive sensing addresses these inefficiencies by
directly acquiring a compressed signal representation without going through the intermediate
stage of acquiring N samples. Consider a general linear measurement process that computes M < N inner products between x and a collection of vectors {ϕ_j}, j = 1, . . . , M, as in y_j = ⟨x, ϕ_j⟩. Arrange the measurements y_j in an M × 1 vector y and the measurement vectors ϕ_j^T as rows in an M × N matrix ϕ. Then, by substituting ψ from (2.3), y can be written as

y = ϕx = ϕψs = Θs   (2.4)

where Θ = ϕψ is an M × N matrix. The measurement process is not adaptive, meaning that ϕ
is fixed and does not depend on the signal x. The problem consists of designing a stable
measurement matrix ϕ such that the salient information in any K-sparse or compressible signal is not damaged by the dimensionality reduction from x ∈ R^N to y ∈ R^M, and a reconstruction algorithm to recover x from only M ≈ K measurements y (or about as many measurements as the number of coefficients recorded by a traditional transform coder) [16].
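The measurement model (2.4) can be sketched numerically; the sizes below are illustrative assumptions, and ψ is taken as the identity for simplicity (i.e., the signal is sparse in the canonical basis):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 256, 64, 8               # signal length, measurements, sparsity

# Sparse coefficient vector s with K nonzero entries.
psi = np.eye(N)
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
x = psi @ s                         # eq. (2.3): x = psi s

# Random Gaussian measurement matrix phi (zero mean, variance 1/N as in the text).
phi = rng.normal(0.0, np.sqrt(1.0 / N), size=(M, N))
theta = phi @ psi                   # Theta = phi psi
y = theta @ s                       # eq. (2.4): y = phi x = Theta s

print(y.shape)                      # (64,): M measurements of an N-sample signal
```

Note that sensing and compression happen in the single product y = ϕx; no intermediate N-sample acquisition is needed.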
2.3 IMPLEMENTATION OF COMPRESSIVE SENSING
The solution consists of two steps. In the first step, a stable measurement matrix ϕ is developed that ensures that the notable information in any K-sparse or compressible signal is not damaged by the dimensionality reduction from x ∈ R^N down to y ∈ R^M. In the second step, we develop a reconstruction algorithm to recover x from the measurements y. Initially, we focus on exactly K-sparse signals.
Figure 2.4 (a) Compressive sensing measurement process with (random Gaussian) measurement matrix ϕ and transform matrix ψ; the coefficient vector s is sparse with K = 4. (b) Measurement process in terms of the matrix product Θ = ϕψ, with the four columns corresponding to nonzero s_i highlighted.
2.3.1 Stable Measurement Matrix
The aim is to construct M measurements (the vector y) from which the length-N signal x can be aptly reconstructed, or equivalently its sparse coefficient vector s in the basis ψ as defined in (2.3). Clearly, reconstruction will not be possible if the measurement process damages the information in x. Unfortunately, this is the case in general: since the measurement process is linear and defined in terms of the matrices ϕ and ψ, solving for s given y in (2.4) is just a linear algebra problem with M < N equations in N unknowns, i.e., an underdetermined system. However, the K-sparsity of s comes to the rescue. In this case the measurement vector y is just a linear combination of the K columns of Θ whose corresponding s_i ≠ 0. Hence, if we knew a priori which K entries of s were nonzero, then we could form an M × K system of linear equations to solve for these nonzero entries, where now the number of equations M equals or exceeds the number of unknowns K. A necessary and sufficient condition to ensure that this M × K system is well-conditioned, and hence sports a stable inverse, is that for any vector p sharing the same K nonzero entries as s we have
1 − δ ≤ ‖Θp‖₂ / ‖p‖₂ ≤ 1 + δ   (2.5)

for some δ > 0. In words, the matrix Θ must preserve the lengths of these particular K-sparse vectors. This is the so-called restricted isometry property (RIP) [17]. For ensuring the
stability, the measurement matrix ϕ should be incoherent with the sparsifying basis ψ in the
sense that the vectors {ϕ j} cannot sparsely represent the vectors {ψ i} and vice versa. In
compressive sensing, measurement matrix ϕ is selected as a random matrix. For example, we
draw the matrix elements ϕ j,i as independent and identically distributed (iid) random variables
from a zero-mean, 1/N-variance Gaussian density (white noise). Then the measurement vector y is merely M different randomly weighted linear combinations of the elements of x. A Gaussian ϕ has two interesting and useful properties. First, ϕ is incoherent with the basis ψ = I of delta spikes with high probability, since it takes fully N spikes to represent each row of ϕ. Second, due to the properties of the i.i.d. Gaussian distribution generating ϕ, the matrix Θ = ϕψ is also i.i.d. Gaussian regardless of the choice of the (orthonormal) sparsifying basis matrix ψ. Thus, random Gaussian measurements ϕ are universal in the sense that Θ = ϕψ has the RIP with high probability for every possible ψ [18].
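The near-isometry (2.5) can be checked empirically for random K-sparse vectors. In this sketch ϕ is scaled with variance 1/M so that the expected energy ratio is exactly 1 (a normalization convenience for the experiment, slightly different from the 1/N convention above):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 512, 128, 5
phi = rng.normal(0.0, np.sqrt(1.0 / M), size=(M, N))

# Measure ||phi p||_2 / ||p||_2 over many random K-sparse vectors p.
ratios = []
for _ in range(200):
    p = np.zeros(N)
    p[rng.choice(N, size=K, replace=False)] = rng.normal(size=K)
    ratios.append(np.linalg.norm(phi @ p) / np.linalg.norm(p))

# Empirically the ratios cluster near 1, i.e. 1 - delta <= ratio <= 1 + delta
# for a small delta, which is the restricted-isometry behaviour of (2.5).
delta = max(abs(min(ratios) - 1.0), abs(max(ratios) - 1.0))
print(round(delta, 3))
```

Of course, the RIP is a statement over all K-sparse vectors simultaneously; this experiment only illustrates the concentration that makes random Gaussian matrices satisfy it with high probability.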
2.4 RECONSTRUCTION ALGORITHMS
The image reconstruction algorithm must take the M measurements in the vector y, the random measurement matrix ϕ, and the basis ψ, and reconstruct the length-N signal x or, equivalently, its sparse coefficient vector s. For K-sparse signals, since M < N, there are infinitely many vectors s′ that satisfy Θs′ = y. This is because if Θs = y, then Θ(s + r) = y for any vector r in the null space N(Θ) of Θ. Therefore, the signal reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N − K)-dimensional translated null space H = N(Θ) + s.
2.4.1 OMP for Signal Recovery
For reconstruction using compressed sensing, a number of basis pursuit algorithms based on l1-norm minimization have been used to date; these give reliable reconstruction but at the cost of slower reconstruction time. Orthogonal Matching Pursuit is a greedy pursuit algorithm for recovery that provides more rapid processing than basis pursuit, but at the cost of computational complexity. If the complexity of OMP can be reduced by certain means, then it offers a much more effective algorithm than basis pursuit in terms of reconstruction time and exact recovery. The OMP algorithm has been studied by Rauhut [19], whose work focused on recovery via random frequency measurements. The highest correlation between ϕ and the residual of y is
calculated, and one coordinate of the signal's support is produced per iteration; hence the complete signal x can be recovered over the total iterations performed by the algorithm. In its variant, compressive sampling matching pursuit (CoSaMP), a proxy of the signal is formed from y and its correlations are then computed. There has been a lot of research work on image reconstruction
using compressed sensing with Orthogonal Matching Pursuit. Major research in this area is
performed by Donoho. D. L. Donoho with Tsaig has given a stage wise orthogonal matching
pursuit for the sparse solution of underdetermined system of linear equations [20]. Tropp and
Gilbert focused on the measurement matrices such as Gaussian and Bernoulli. Inverse
problems are often solved with the help of greedy algorithms. Two popular greedy algorithms used for compressed sensing reconstruction are orthogonal least squares and orthogonal matching pursuit. The two are generally taken to be the same, but that is not true; the confusion between them was made clear by the work of Davies and Blumensath. Soussen and Gribonval's work is based on data without noise taken into account. In their work, a subset of the true support is formed from the available partial information, and they derive a condition complementary to restricted isometry for the success of greedy algorithms. This condition relaxes the coherence constraint, which is considered a necessity for the implementation of compressed sensing. T. Tony Cai and Lie Wang have reconstructed high-dimensional signals in the presence of noise with OMP. They have given OMP with explicit stopping conditions, and their work shows that reconstruction is possible under mutual incoherence of the coefficients by using OMP. Signal
Reconstruction using tree-based orthogonal matching pursuit has been performed by La and Do. Their recovery results show that OMP gives better reconstruction compared to recovery algorithms that only use the assumption of sparse representation, and their work solved linear inverse problems with a limit on the total number of measurements. Beck and Teboulle have worked on fast recovery algorithms, the iterative shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding algorithm (FISTA). In these algorithms, an optimization step is first taken on the least-squares objective without the l1 penalty; the algorithm then selects values of x using a predefined threshold, decreasing the l1 norm. The selection is based on hard or soft thresholding: soft thresholding shrinks all entries of x toward zero by a fixed amount and sets those below the threshold to zero, while hard thresholding simply assigns zero to the entries whose magnitude falls below the threshold and leaves the rest unchanged. Rebollo-Neira and Lowe have proposed an optimized orthogonal matching pursuit
reconstruction that builts upon functions selected from a dictionary. For an iteration an
approximation of signal is given that minimizes the residual norm [21]. A fast orthogonal
matching pursuit algorithm is given by Gharavi and Huang. Their proposed algorithm has a
Signal recovery from random frequency measurements has been performed by Rauhat and
Kunis. They have proved that OMP is faster than the L-norm method of reconstruction. They
have proved that for a K sparse signal computation complexity for a number of coding
applications that is very close to the non orthogonal version.
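As an illustration, the hard and soft thresholding operators described above can be sketched as follows; this is a minimal numpy sketch, not code from the cited papers:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft thresholding: zero the entries with |x| <= t and shrink the
    surviving entries toward zero by t (the operator ISTA/FISTA apply
    after the gradient step; it decreases the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    """Hard thresholding: zero the small-magnitude entries and leave the
    large ones unchanged."""
    return np.where(np.abs(x) > t, x, 0.0)
```

Both operators map a dense coefficient vector to a sparser one; only soft thresholding also shrinks the retained coefficients.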
2.4.2 OMP algorithm
Matching pursuit (MP) is an approach to computing adaptive signal representations. The principal
goal of this technique is to obtain a sparse signal representation by choosing, at each iteration,
the dictionary atom that is best adapted to approximate part of the signal (the current residual).
Nonetheless, the MP algorithm in its original form [22] does not provide at each iteration the
linear expansion of the selected atoms that best approximates the signal. A later
refinement which does provide such an approximation has been termed orthogonal matching
pursuit (OMP). The OMP approach improves upon MP in the following sense: from the
atoms selected through the MP criterion, OMP produces the set of
coefficients yielding the linear expansion that minimizes the distance to the signal, i.e. the
least mean square error. OMP is an iterative greedy algorithm that selects at each step the column
most correlated with the current residual. This column is then added to the set of
selected columns. The algorithm updates the residual by projecting the observed
measurement vector y onto the linear subspace spanned by the columns that have already been
selected, and then iterates. Compared with alternative methods, a major
advantage of OMP is its simplicity and fast implementation.
Algorithm for OMP for Signal Recovery:
INPUT: An M x N measurement matrix ϕ; an M-dimensional data vector v; the sparsity level
K of the ideal signal.
OUTPUT: An estimate ŝ in R^N for the ideal signal; a set Λt containing t elements from
{1, ..., N}; an M-dimensional approximation at of the data v; an M-dimensional residual
rt = v − at.
PROCEDURE:
1) Initialize the residual r0 = v, the index set Λ0 = ∅ and the iteration counter t = 1.
2) Find the index λt that solves the easy optimization problem
λt = argmax_{j=1,...,N} |⟨rt−1, ϕj⟩| (2.6)
If the maximum occurs for multiple indices, break the tie deterministically.
3) Augment the index set Λt = Λt−1 ∪ {λt} and the matrix of chosen atoms ϕt = [ϕt−1, ϕλt].
The convention that ϕ0 is an empty matrix is used.
4) Solve a least squares problem to obtain a new signal estimate:
xt = argmin_x ‖v − ϕt x‖2 (2.7)
5) Calculate the new approximation of the data and the new residual:
at = ϕt xt (2.8.a)
rt = v − at (2.8.b)
6) Increment t, and return to Step 2 if t < K.
7) The estimate ŝ for the ideal signal has nonzero entries at the components listed in ΛK. The
value of the estimate ŝ in component λj equals the jth component of xt.
The residual rt is always orthogonal to the columns of ϕt. Provided that the residual rt−1 is
nonzero, the algorithm selects a new atom at iteration t and the matrix ϕt has full column
rank. At iteration t, the least squares problem can be solved with marginal cost.
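The seven steps above can be sketched directly in numpy. This is a minimal illustration of the procedure, not the report's implementation; the tie-breaking of Step 2 is handled by argmax's first-index convention:

```python
import numpy as np

def omp(phi, v, K):
    """Orthogonal matching pursuit, following Steps 1-7 above:
    greedily pick the column of phi most correlated with the residual,
    then re-fit all selected columns by least squares."""
    M, N = phi.shape
    residual = v.astype(float).copy()   # Step 1: r0 = v
    support = []                        # index set Lambda_t
    coeffs = np.zeros(0)
    for _ in range(K):                  # Step 6: at most K iterations
        # Step 2: index most correlated with the current residual
        lam = int(np.argmax(np.abs(phi.T @ residual)))
        if lam not in support:          # Step 3: augment the index set
            support.append(lam)
        # Step 4: least squares over the chosen atoms
        coeffs, *_ = np.linalg.lstsq(phi[:, support], v, rcond=None)
        # Step 5: new approximation and residual
        residual = v - phi[:, support] @ coeffs
    s_hat = np.zeros(N)                 # Step 7: scatter the coefficients
    s_hat[support] = coeffs
    return s_hat
```

Because the residual stays orthogonal to every selected column, the same atom is never chosen twice while the residual is nonzero.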
CHAPTER 3
LITERATURE REVIEW
Image reconstruction is a mathematical process that generates images from the recovered data
in many different ways. Image reconstruction has a fundamental impact on image quality and
on the application for which the image is used. Compressed sensing relies on ℓ1 techniques,
which several other scientific fields have used historically. The ℓ1-norm was used in
matching pursuit in 1993 and in basis pursuit in 1998. There were theoretical results describing
when these algorithms recover sparse solutions, but the required type and number of
measurements were sub-optimal and were subsequently greatly improved by compressed sensing.
At first glance, compressed sensing might seem to violate the sampling theorem, because
compressed sensing depends on the sparsity of the signal in question and not its highest
frequency. This is a misconception, because the sampling theorem guarantees perfect
reconstruction given sufficient, not necessary, conditions. Sparse signals with high frequency
components can be highly under-sampled using compressed sensing compared to classical
fixed-rate sampling.
Image reconstruction using compressive sensing can be accomplished with different
techniques such as the TV minimization algorithm, OMP, basis pursuit, etc. The image can be
transformed into sparse form for a more economical recovery. An image can be reconstructed
with transform-code reconstruction [23], in which the reconstructed quality is decided by the
quantization level. Compressive sensing (CS) breaks this limit and states that sparse signals
can be perfectly recovered from incomplete or even corrupted information by solving a convex
optimization problem. Under the same acquisition of images, if the images are represented
sparsely enough, they can be reconstructed more accurately by CS recovery than by the inverse
transform. A modified TV operator is therefore used to enhance the image's sparse
representation and reconstruction accuracy, with the image information acquired from
transform coefficients corrupted by quantization noise in image transform coding.
Improved total variation (TV) minimization algorithms [24] recover sparse signals or images
in compressive sensing (CS) and reduce the undesirable staircase effect, either by
intra-prediction [25] or by a gradient descent method. The new method conducts
intra-prediction block by block in the CS reconstruction process and generates a residual for
the image block being decoded in the CS measurement domain. The gradient of the residual is
sparser than that of the image itself, which leads to better reconstruction quality in CS by
TV regularization. The staircase effect can also be eliminated due to effective
reconstruction of the residual. Furthermore, to suppress the blocking artifacts caused by
intra-prediction, an efficient adaptive in-loop deblocking filter was designed for
post-processing during the CS reconstruction process.
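To make the sparsity claim concrete, the (anisotropic) total variation that the TV regularizer penalizes can be computed as below; a smaller TV value for the intra-prediction residual than for the image itself is what makes the residual easier to reconstruct. This is an illustrative numpy sketch, not the cited method's code:

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute horizontal and vertical
    first differences of the image."""
    img = img.astype(float)
    dh = np.abs(np.diff(img, axis=1)).sum()  # horizontal gradient
    dv = np.abs(np.diff(img, axis=0)).sum()  # vertical gradient
    return dh + dv
```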
Block compressive sensing (BCS) can be achieved with two methods. One is called coefficient
random permutation (CRP) [26], and the other is termed adaptive sampling (AS). The CRP
method is effective in balancing the sparsity of the sampled vectors in the DCT domain of the
image, and thereby improves the CS sampling efficiency. AS is achieved by designing
an adaptive measurement matrix for CS based on the energy distribution characteristics
of the image in the DCT domain, which is effective in enhancing CS performance.
Experimental results demonstrate that these methods are efficacious in reducing the
dimension of the BCS-based image representation and/or improving the recovered image
quality.
The proposed BCS-based image representation scheme [27] could be an efficient alternative
for applications of encrypted image compression and/or robust image compression. Another
algorithm, BCS with sampling optimization, takes full advantage of the characteristics of
block compressed sensing by assigning each block a sampling rate depending on its texture
complexity. The block complexity is measured by the variance of its texture gradient: blocks
with large variance receive high sampling rates and blocks with small variance receive low ones.
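The variance-based rate assignment just described can be sketched as follows; the block size, total rate, and normalization here are illustrative assumptions rather than values from the cited paper:

```python
import numpy as np

def assign_block_rates(img, block=16, total_rate=0.3):
    """Distribute a sampling budget over image blocks in proportion to
    the variance of each block's texture gradient: complex blocks get
    high sampling rates, smooth blocks get low ones."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.abs(gx) + np.abs(gy)           # texture gradient magnitude
    H, W = img.shape
    variances = np.array([grad[r:r + block, c:c + block].var()
                          for r in range(0, H, block)
                          for c in range(0, W, block)])
    if variances.sum() == 0:                 # flat image: uniform rates
        return np.full(variances.size, total_rate)
    weights = variances / variances.sum()
    # scale so the mean rate equals total_rate, capped at full sampling
    return np.clip(weights * variances.size * total_rate, 0.0, 1.0)
```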
Orthogonal matching pursuit is a greedy pursuit algorithm for recovery that provides more
rapid processing than basis pursuit, although at the cost of its own computational complexity.
If the complexity of OMP can be reduced by certain means, it offers a much more effective
algorithm than basis pursuit in terms of reconstruction time and exact recovery. The OMP
algorithm has been studied by Rauhut [28], whose work focused on recovery from random
frequency measurements. The highest correlation between ϕ and the residual of y is calculated,
and one coordinate of the signal's support is produced per iteration; hence the complete signal
x can be recovered over the total iterations performed by the algorithm. In its alternative,
compressive sensing OMP, a proxy signal of y is generated and its correlations are then found.
There has been a great deal of research on image reconstruction using compressed sensing with
orthogonal matching pursuit. Major research in this area was performed by D. L. Donoho,
who with Tsaig gave a stagewise orthogonal matching pursuit for the sparse solution
of underdetermined systems of linear equations [29]. Tropp and Gilbert focused on
measurement matrices such as Gaussian and Bernoulli [30]. Inverse problems are often
solved with the help of greedy algorithms. Two popular greedy algorithms used for
compressed sensing reconstruction are orthogonal least squares and orthogonal matching
pursuit. Generally the two are taken to be the same, but that is not true; the confusion
between them was made clear by the work of Davies and Thomas Blumensath. Soussen and
Gribonval's work is based on data without noise taken into account. In their work a subset of
the true support is formed from the available partial information, and they derive a condition,
complementary to the restricted isometry property, for the success of greedy algorithms. This
condition relaxes the coherence constraint that is usually considered a necessity for the
implementation of compressed sensing. T. Tony Cai and Lie Wang have reconstructed high
dimensional signals in the presence of noise with OMP, giving OMP explicit stopping
conditions. Their work shows that reconstruction is possible under mutual incoherence of the
coefficients by using OMP. Signal reconstruction using tree-based orthogonal matching
pursuit has been performed by C. La and M. N. Do. Their recovery results show that OMP
gives better reconstruction than recovery algorithms that use only the assumption of sparse
representation, and their work solved linear inverse problems with a limit on the total number
of measurements. Beck and Teboulle have proposed fast recovery algorithms: the iterative
shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding
algorithm (FISTA). At each iteration the algorithm first takes a step on the optimization
problem without the ℓ1 penalty, and then selects values from x using a predefined threshold,
which decreases the ℓ1 norm. The selection is based on hard or soft thresholding. Soft
thresholding assigns the value zero to the atoms of x whose magnitude is below a certain
predefined value and shrinks the remaining entries toward zero, while hard thresholding
simply assigns zero to the entries of smaller magnitude and leaves the rest unchanged [31].
Rebollo-Neira and Lowe have proposed an optimized orthogonal matching pursuit
reconstruction that builds upon functions selected from a dictionary; at each iteration an
approximation of the signal is given that minimizes the residual norm [32]. A fast orthogonal
matching pursuit algorithm is given by Gharavi and Huang; for a number of coding
applications, its computational complexity is very close to that of the non-orthogonal version.
Signal recovery from random frequency measurements has been performed by Rauhut and
Kunis, who showed that for a K-sparse signal OMP is faster than the ℓ1-norm method of
reconstruction [33].
The OMP [34] approach improves upon MP in the following sense: from the atoms selected
through the MP criterion, OMP gives rise, at each iteration, to the set of coefficients yielding
the linear expansion that minimizes the distance to the signal. However, since it selects the
atoms according to the MP prescription, the selection criterion is not optimal in the sense of
minimizing the residual of the new approximation. OMP [35] is an iterative greedy algorithm
that selects at each step the column most correlated with the current residual. Orthogonal
matching pursuit involves approximating the signal estimates in terms of a dictionary. These
approximations are used to compute the recovery: at each iteration a recovery signal is
computed and compared with the estimates to find the maximum inner product. This
procedure is repeated until the stopping condition is met. With OMP [36], side information
has been used, the noise component has been considered in the estimates, generalized OMP
has been implemented, and more; with each implementation, a different kind of sparse
domain is taken into consideration.
The paper titled "Signal Recovery from Incomplete and Inaccurate Measurements via
Regularized Orthogonal Matching Pursuit" by Deanna Needell and Roman Vershynin
proposes regularized orthogonal matching pursuit (ROMP), which seeks to provide the
benefits of the two major approaches to sparse recovery: it combines the speed and ease of
implementation of the greedy methods with the strong guarantees of the convex programming
methods. For any measurement matrix χ that satisfies a quantitative restricted isometry
principle, ROMP recovers a signal x with O(n) nonzeros from its inaccurate measurements
in at most n iterations, where each iteration amounts to solving a least squares problem. In
particular, if the error term vanishes, the reconstruction is exact. This stability result extends
naturally to the very accurate recovery of approximately sparse signals [t].
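The regularization step that distinguishes ROMP from plain OMP can be sketched as below. This is a simplified illustration of the selection rule (among the K largest entries of the correlation proxy, choose the comparable-magnitude subset of maximal energy), not the authors' reference code:

```python
import numpy as np

def romp_select(proxy, K):
    """ROMP identification + regularization: from the K largest entries
    of |proxy|, return the subset whose magnitudes lie within a factor
    of two of each other and whose energy is maximal."""
    mags = np.abs(proxy)
    top = np.argsort(mags)[::-1][:K]         # K largest coordinates
    best, best_energy = [], -1.0
    for i in range(len(top)):
        # maximal run starting at position i with comparable magnitudes
        run = [int(top[i])]
        for j in range(i + 1, len(top)):
            if 2.0 * mags[top[j]] >= mags[top[i]]:
                run.append(int(top[j]))
            else:
                break
        energy = float((mags[run] ** 2).sum())
        if energy > best_energy:
            best, best_energy = run, energy
    return best
```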
The paper titled "Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise",
by T. Tony Cai and Lie Wang in the year 2011, considers the OMP technique for the
recovery of high dimensional sparse signals based on a small number of noisy linear
measurements. In this paper OMP, as an iterative greedy algorithm, selects at each step the
column most correlated with the current residuals, along with explicit stopping rules. It
addresses the problem of identifying the significant components in the case where some of
the nonzero components are possibly small. With these modified rules, the OMP algorithm
can ensure that no zero components are selected.
The next paper generalizes the traditional OMP technique: "Generalized Orthogonal
Matching Pursuit", proposed by Jian Wang, Seokbeop Kwon and Byonghyo Shim in the
year 2012. In this paper OMP is generalized in the sense that multiple (N) indices
are identified per iteration. Owing to the selection of multiple "correct" indices, the gOMP
algorithm finishes in a much smaller number of iterations when compared to OMP
[38].
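The per-iteration identification step of gOMP can be sketched as follows; picking `n_indices` columns at once (the paper's N) instead of a single one is the only change relative to OMP's selection. This is a minimal illustration:

```python
import numpy as np

def gomp_select(phi, residual, n_indices):
    """gOMP identification: return the n_indices columns of phi most
    correlated with the current residual (OMP is the n_indices=1 case)."""
    correlations = np.abs(phi.T @ residual)
    return np.argsort(correlations)[::-1][:n_indices]
```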
The paper titled "Signal Recovery from Random Measurements via Orthogonal Matching
Pursuit", by Joel A. Tropp, demonstrates theoretically and empirically that the greedy
algorithm OMP can reliably recover a signal with K nonzero entries in dimension N
given O(K ln N) random linear measurements of that signal. This is a massive improvement
over previous results, which require O(K²) measurements [39].
The following table summarizes the previous work done in the field of image reconstruction,
with the corresponding advantages and disadvantages. The evaluation parameters are also
discussed.

Table 3.1: Evaluation of literature review

1. "Compressive Sensing with Modified Total Variation Minimization Algorithm" (IEEE, 2010)
   Work: TV + norm-1; TV + DCT constraint + norm-1; TV + contourlet constraint + norm-1 (CS in all).
   Pros/cons: decreases hardware cost; higher directional quality (contourlet); better threshold gives better performance; visual improvement of edges.
   Parameters: PSNR; measurement matrix dimension (M).

2. "An Iterative Weighing Algorithm for Image Reconstruction in Compressive Sensing" (IEEE, 2010)
   Work: iteratively defined weighing coefficient; TV + norm-1; weight changes according to Kronecker product (CS).
   Pros/cons: enhances sparsity; loses little detail; adaptability can be improved for complicated images.
   Parameters: PSNR; ratio of data acquisition.

3. "An Image Reconstruction Algorithm Based on Compressed Sensing Using Conjugate Gradient" (IEEE, 2010)
   Work: image split into a set of atoms called a dictionary; find the similar set, remove it, get the residual and process it using conjugate gradient until the estimated signal is reached; (MP + OMP) + DWT.
   Pros/cons: simple; less memory required; better PSNR; high sampling speed.
   Parameters: PSNR; time cost.

4. "Compressive Sensing Image Reconstruction Using Multi-Wavelet Transforms" (IEEE, 2010)
   Work: CS + OMP + discrete multi-wavelet transform; sparse representation, measurement matrix, calculate the residual and continue to select the best-matching atoms.
   Pros/cons: better than DCT; faster convergence rate.
   Parameters: PSNR; M.

5. "Image Compressed Sensing Based on Wavelet Transform in Contourlet Domain" (Elsevier, vol. 91, 2011)
   Work: wavelet transform in contourlet domain; the image is decomposed into a low-pass sub-band and several band-pass sub-bands at multiple scales, and each sub-band is transformed by a 2-D orthonormal wavelet basis; solve the optimization problem + thresholding + smoothing (contourlet + orthonormal wavelet transform + optimization + thresholding + Wiener smoothing).
   Pros/cons: reduced measurement matrix; optimal approximation; improved PSNR; lower computation.
   Parameters: PSNR.

6. "Image Decoding Optimization Based on Compressive Sensing" (Elsevier, vol. 236, 2011)
   Work: modified TV operator + CS; horizontal and vertical gradients are considered; minimize TV.
   Pros/cons: better visual quality and PSNR.
   Parameters: PSNR; quantization noise parameter.

7. "Improved Total Variation Minimization Method for Compressive Sensing by Intra Prediction" (Elsevier, vol. 92, 2012)
   Work: modified TV operator + CS + intra-prediction modes (horizontal, vertical, DC, plane) + (xi − xp); in-loop deblocking filter according to boundary strength.
   Pros/cons: reduces the staircase effect and the blocking artifacts, which are removed by the in-loop deblocking filter.
   Parameters: comparison to previous methods; measurement rate; PSNR.

8. "Improved Image Reconstruction Based on Block Compressed Sensing" (IEEE, 2012)
   Work: CS + WT + OMP; divide the image into blocks, apply the wavelet transform, choose the measurement matrix, make a sparse representation, measure only the high-frequency coefficients, reconstruct using OMP, apply the inverse wavelet transform.
   Pros/cons: reduced sampling complexity and calculation; less time consuming; less storage; PSNR increases with sampling rate.
   Parameters: PSNR; sampling rate (M/N) for different block sizes.

9. "An Adaptive Compressive Sensing with Side Information" (IEEE, 2013)
   Work: CS + OMP (TV) + wavelet transform (Daubechies); local spatial variance as an additional measurement; measurement is reduced by extracting local features; a fidelity term is added.
   Pros/cons: good at edges; better PSNR with the adaptive strategy; TV reduces ringing artifacts.
   Parameters: PSNR and SSIM with respect to compression ratio.

10. "Image Representation Using Block Compressive Sensing for Compression Applications" (Elsevier, vol. 24, 2013)
    Work: BCS + DCT (CRP or AS); CRP balances the sparsity of the sampled vectors; AS gives adaptive measurements (RWS or RO).
    Pros/cons: better PSNR; adaptive measurements; data sampling and compression at the same time; less cost; robust coding.
    Parameters: PSNR; block size; visual quality; energy ratios.

11. "Sampling Adaptive Block Compressed Sensing Reconstruction Algorithm for Images Based on Edge Detection" (Elsevier, vol. 20, 2013)
    Work: BCS + SA + SPL + ED + DDWT; BCS + SA + SPL + ED + CT; smoothing with a Wiener filter.
    Pros/cons: fast calculation speed; adaptive sampling; better reconstruction quality.
    Parameters: PSNR.

12. "A New Algorithm for Compressive Sensing Based on TV Norm" (IEEE, 2013)
    Work: CS + TV + RLS + norm-p (p < 1); Fletcher-Reeves conjugate gradient optimization.
    Pros/cons: improved PSNR; reduced computational effort.
    Parameters: PSNR; MSE; CPU time.

13. "Compressive Sensing via Reweighted TV and Non-Local Sparsity" (IEEE, vol. 49, 2013)
    Work: CS + reweighted TV; spatially adaptive weights are computed towards a maximum a posteriori estimation of gradients + a non-local self-similarity constraint.
    Pros/cons: fails to preserve edges; affects large-magnitude values.
    Parameters: number of measurements.

14. "Iterative Gradient Projection Algorithm for Two-Dimensional Compressive Sensing Sparse Image Reconstruction" (Elsevier, vol. 104, 2014)
    Work: CS + TV + DDWT + gradient descent (GD) + bivariate shrinkage (BS) + projection; directly reconstructs the 2-D image iteratively; GD decreases TV, BS ensures sparsity, projection gives the result in the solution space.
    Pros/cons: high recovery quality; high sparsity; preserves sharp edges; suppresses aliasing.
    Parameters: PSNR; CPU time; measurement rate; iteration numbers; noise level.

15. "Generalized Orthogonal Matching Pursuit" (IEEE, vol. 60, 2012)
    Work: generalizes OMP; multiple (N) indices are identified per iteration; the gOMP algorithm finishes in a much smaller number of iterations.
    Pros/cons: reduced computational time; reduced complexity.
    Parameters: number of iterations and total time taken.

16. "Orthogonal Matching Pursuit for Sparse Signal Recovery with Noise" (IEEE, vol. 57, 2011)
    Work: recovers the signal accurately from a small number of noisy linear measurements under the mutual incoherence property and a minimum magnitude of the nonzero components of the signal; with the modified stopping rules, the OMP algorithm can ensure that no zero components are selected.
    Pros/cons: exact recovery; bounded stopping rules; reduced computational time.
    Parameters: incoherence parameter (μ); sparsity level (K).

17. "Optimized Orthogonal Matching Pursuit Approach" (IEEE, vol. 9, 2002)
    Work: an adaptive OMP is proposed, with the representation built up from functions selected from a redundant family; at each iteration the algorithm produces an approximation of the given signal that is the orthogonal projection of the signal onto the subspace generated by the selected functions.
    Pros/cons: exact recovery; optimal selection criterion.
    Parameters: sensing matrix; recovered signal.
From the literature survey in the field of image reconstruction using compressive sensing
and orthogonal matching pursuit, the following research gaps have been identified:
- The recovery technique can be observed under the effect of noise, and the merging of the recovery and noise-removal techniques can be considered.
- For OMP, various selection criteria, such as energy, variance and least mean square, can be considered for choosing the significant columns that correlate most with the residual.
- The image can be recovered from inaccurate and under-sampled data via ROMP under the influence of explicit stopping rules.
- Generalization of OMP can be applied with the modified stopping rules in the presence of bounded noise.
- The stopping rules can involve the mutual incoherence property and the restricted isometry property with the desired sparsity level.
CHAPTER 4
IMAGE RECONSTRUCTION USING COMPRESSIVE SENSING AND MODIFIED OMP
In the previous section, the conventional technique of OMP with compressive sensing was
discussed, which provides the means to recover the desired image. In this chapter, a
modified technique is used that operates under certain rules in order to reduce the number of
computations required and the overall time elapsed.
4.1 MODIFIED OMP
Orthogonal matching pursuit (OMP) is a greedy search algorithm popularly used for the
recovery of compressively sensed sparse signals. In this report the discrete wavelet transform
is used to obtain the sparse form of the test images. In OMP, the greedy algorithm selects at
each step the column of ϕ most correlated with the current residual. This column is
then added to the set of selected columns. The algorithm updates the residual by projecting
the observation y onto the linear subspace spanned by the columns that have already been
selected, and then iterates. The major advantage is its simplicity. The
measurement vector is given as:
y = ϕx (4.1)
where ϕ is the M x N measurement matrix. The algorithm searches for the maximum value of
|⟨rt, ϕj⟩| and augments the initialized index set Λ. The estimate can then be computed and the
complete signal recovered over the total iterations performed by the algorithm, where the
pseudoinverse ϕ′ is obtained from
ϕ′ = (ϕ*ϕ)⁻¹ϕ* (4.2)
OMP Algorithm for Signal Recovery
Input: an M x N measurement matrix ϕ; an M-dimensional data vector v; the sparsity
level K of the ideal signal.
Output: an estimate x̂ in R^N for the ideal signal; an M-dimensional approximation am of the
data and the residual
rm = v − am (4.3)
Reconstruction using OMP is an inverse problem. Initially the residual of y is calculated and
its correlation with the measurement matrix is found. For each iteration, an approximation of
the given image signal is generated, which is the orthogonal projection of the signal onto the
subspace generated by the selected entries of the signal and which minimizes the norm of the
corresponding residual error. In the second step the minimum of the residual is calculated:
after the orthogonal projection onto the values, the entry with minimum residual error, i.e.
rn = s − sn, is selected. The continuous update then yields the overall recovered image. The
recovery result with conventional OMP can be improved one step further by sparsifying the
low-frequency coefficients rather than employing the recovery algorithm on the overall image,
in order to conserve the memory storage required. The OMP algorithm can perform even
better with explicit stopping rules and properties: it is shown that, under mutual incoherence
and with a specified number of iterations tied to the minimum magnitude of the nonzero
components of the signal, the OMP algorithm still selects all significant components before
possibly selecting incorrect ones [40]. In this report the stopping rules are also discussed and
the properties of OMP are investigated. The mutual incoherence property can be included in
the stopping rule to modify the algorithm.
Incoherence says that, unlike the signal of interest, the sampling/sensing waveforms have an
extremely dense representation in the sparsifying basis. Coherence measures the maximum
correlation between any two elements of two different matrices; these two matrices might
represent two different basis/representation domains. If ψ is an N x N matrix with ψ1, ..., ψN
as columns and ϕ is an M x N matrix with ϕ1, ..., ϕM as rows, then the coherence μ is
defined as
μ(ϕ, ψ) = √N max_{k,j} |⟨ϕk, ψj⟩| (4.4)
for 1 ≤ j ≤ N and 1 ≤ k ≤ M. It follows from linear algebra that 1 ≤ μ(ϕ, ψ) ≤ √N. In CS,
we are concerned with the incoherence of the matrix used to sample/sense the signal of
interest (hereafter referred to as the measurement matrix ϕ) and the matrix representing a
basis in which the signal of interest is sparse (hereafter referred to as the representation
matrix ψ). Within the CS framework, low coherence between ϕ and ψ translates to fewer
samples required for reconstruction of the signal. The MIP requires the mutual incoherence μ
to be small; the value of μ can be bounded as μ < 1/(2K − 1).
A necessary and sufficient condition to ensure that the M x K system is well conditioned, and
hence admits a stable inverse, is that for any vector p sharing the same K nonzero entries as s
we have
1 − δ ≤ ‖Θp‖2 / ‖p‖2 ≤ 1 + δ (4.5)
for some δ > 0. In words, the matrix Θ must preserve the lengths of these particular K-sparse
vectors. This is the so-called restricted isometry property (RIP). OMP can reconstruct all
K-sparse vectors if δK+1 < 1/(√K + 1) [d]. To ensure stability, the measurement matrix ϕ
should be incoherent with the sparsifying basis ψ in the sense that the vectors {ϕj} cannot
sparsely represent the vectors {ψi} and vice versa. The parameter for the exact recovery
condition (ERC) can be given as:
M = max_{r ∉ ϕ(t)} ‖(ϕ(t)′ϕ(t))⁻¹ϕ(t)′r‖1 (4.6)
where the maximum is taken over the columns r of ϕ that have not yet been selected. This
condition is called the exact recovery condition (ERC) [41]; it is a sufficient condition for the
exact recovery of the signal in the noiseless case when M < 1. The bounded stopping
condition allows only a specified number of iterations by selecting the significant correlated
columns before the process reaches the non-significant zero columns. The modified stopping
rules can ensure that no zero components are selected.
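Equation (4.4) and the MIP bound μ < 1/(2K − 1) can be checked numerically. The sketch below normalizes the rows of ϕ and the columns of ψ before taking inner products; the matrix sizes in any usage are illustrative:

```python
import numpy as np

def mutual_coherence(phi, psi):
    """mu(phi, psi) = sqrt(N) * max_{k,j} |<phi_k, psi_j>| over the
    unit-normalized rows phi_k of the measurement matrix and columns
    psi_j of the representation matrix, as in equation (4.4)."""
    rows = phi / np.linalg.norm(phi, axis=1, keepdims=True)
    cols = psi / np.linalg.norm(psi, axis=0, keepdims=True)
    N = psi.shape[0]
    return float(np.sqrt(N) * np.max(np.abs(rows @ cols)))

def mip_holds(mu, K):
    """The MIP recovery condition used in the text: mu < 1/(2K - 1)."""
    return mu < 1.0 / (2 * K - 1)
```

For example, a normalized Hadamard-type matrix attains the lower bound μ = 1 against the identity basis, the maximally incoherent case.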
4.1.1 Algorithm for Image Recovery
The modified orthogonal matching pursuit algorithm selects at each step the column of ϕ
most correlated with the current residual. This column is then added to the set of selected
columns. The algorithm updates the residual by projecting the observation y onto the linear
subspace spanned by the columns that have already been selected, and then iterates. The
algorithm only selects those significant components which satisfy the modified stopping rule
given by equation (4.6), and thus ensures that no zero components are selected. The MIP
ensures the proper selection of the significant columns that correlate most with the residual
and significantly reduces the computational time of the overall algorithm. The modified
algorithm commences in the same way as the conventional OMP and, under the stopping
condition, iterates fewer times while producing a better-quality recovered image.
The algorithm is stated as follows:
Step 1: Consider an N x N image x. Choose an appropriate M and construct the measurement
matrix ϕ (M x N).
Step 2: Make a sparse representation of the image and obtain the low-frequency coefficients
Li (i = 1, 2, ..., N) and the high-frequency coefficients Hi, Vi, Di (i = 1, 2, ..., N). Then
measure only the low-frequency coefficients using the compressive sensing technique:
y = ϕLi (4.7)
Step 3: Reconstruct the low-frequency coefficients using the modified OMP algorithm under
the stopping condition, in the presence of Gaussian noise.
Step 4: Initialize the residual r0 = y and the set of selected variables Λ = ∅. Let the iteration
count be k = 1. The other parameters are ϕ, the measurement matrix (M x N); x, the input
image (N x N); and ψ (N x N), the transform matrix.
Step 5: If the incoherence satisfies μ < 1/(2K − 1), the algorithm progresses; foremost, the
condition on k is checked: while (norm(r) > threshold and k < min{K, M/N}) do: increment
the iteration count and select the indices {ϕ(i)}, i = 1, 2, 3, ..., N, corresponding to the N
largest entries in ϕ′rk−1.
Step 6: Augment the set of selected variables: Λk = Λk−1 ∪ {ϕ(1), ..., ϕ(N)}. Then solve a
least squares problem to obtain a new signal estimate:
xk = argmin_u ‖y − ϕku‖2
Step 7: Update the residual to recover the image: rk = y − ϕkxk. Check whether the ERC
parameter satisfies M < 1; if so, retrieve the recovered image: x = argmin_u ‖y − ϕu‖2
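Steps 4–7 can be sketched as an OMP loop with the bounded stopping rule. This is a simplified illustration, not the report's implementation: the residual threshold and the iteration cap min(K, M) stand in for the exact condition above:

```python
import numpy as np

def modified_omp(phi, y, K, threshold=1e-8):
    """OMP with a bounded stopping rule: iterate only while the residual
    norm exceeds a threshold and the iteration count stays below the
    cap, so that no zero components are ever selected."""
    M, N = phi.shape
    residual = y.astype(float).copy()
    support, coeffs, k = [], np.zeros(0), 0
    while np.linalg.norm(residual) > threshold and k < min(K, M):
        k += 1
        lam = int(np.argmax(np.abs(phi.T @ residual)))
        if lam in support:       # no new atom available: stop early
            break
        support.append(lam)
        coeffs, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coeffs
    x = np.zeros(N)
    x[support] = coeffs
    return x
```

With a 1-sparse signal the loop stops after a single iteration even when K is set larger, since the residual norm falls below the threshold.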
4.2 FLOW CHART OF MODIFIED ALGORITHM
This algorithm provides a better recovered image with reduced computational time. Here,
corresponding to the test image, M is selected, the measurement matrix is computed and the
sparsity is specified. The algorithm selects the significant column having the best correlation
under the specified MIP condition.
Figure 4.1: Flow chart of the modified algorithm
This flow chart shows the step-wise implementation of the modified algorithm along with
the conditions and their properties.
CHAPTER 5
RESULT AND CONCLUSION
In this section, the analysis and execution of the modified OMP technique under the stated
conditions is observed. The results can be observed on the given test images.
5.1 TEST IMAGES
The following are standard test images of size 256 x 256. The technique is implemented
on these test images, i.e. Lena, Cameraman and Barbara. The results are in the form of
reconstructed images.
(a) Lena (b) Cameraman (c) Barbara
Figure 5.1: Test images
5.2 SIMULATION RESULTS
The OMP technique has the advantage of easy and simple computation; if reduced
computational time is added to this feature, the modified OMP can be conveniently used for
the exact recovery of the desired image. The modified stopping rules and the exact recovery
condition ensure that the OMP algorithm selects the significant columns before the zero
columns. The test images in figures 5.1(a) and 5.1(b) are used, both of size 256 x 256. These
images are processed to obtain the low-frequency coefficients. Sparsification of the image is
done by the discrete wavelet transform, and a Gaussian measurement matrix is used. There
are two basic tasks in CS: sampling and recovery. The images in figures 5.2(b) and 5.2(c)
show that an optimum number of measurements is required for the exact recovery of the
desired image.
Figure 5.2: Reconstruction results with modified OMP: (a) original image; (b) reconstructed
image with 128 measurements (M = 128); (c) reconstructed image with M = 190
Figure 5.3: Reconstructed image with modified OMP: (a) original image; (b) reconstructed image with M = 220
The image reconstructed in figure 5.3(b) represents the optimum recovery of the desired
image under the specified exact recovery condition and mutual incoherence property rule,
with a smaller number of measurements for the low-frequency coefficients and hence, in turn,
less storage space. The results are also displayed in tabular form, comparing the PSNR values
for the various techniques. The modified technique displays relatively improved PSNR.
Table 5.1: Comparison of PSNR
PSNR (dB)
Technique                  M=128   M=150   M=170   M=190
OMP                        26.44   28.23   30.72   32.63
OMP (DCT)                    -     25.33   26.45   28.04
Modified OMP               32.09   31.87   32.09   33.67
Modified OMP with noise    28.01   28.06   28.17   28.19
Table 5.2: Comparison of Time elapsed
Elapsed time (sec)
Technique                  M=128   M=150   M=170   M=190
OMP                         4.64    5.02    5.26    5.29
Modified OMP                3.91    4.24    4.30    4.43
Modified OMP with noise     8.48    9.08   11.72   11.45
The tabular results show that fewer measurements are sufficient to reconstruct the image.
The table indicates that 150 samples of the cameraman image are sufficient to reconstruct
it, instead of the total 256, if the modified stopping and exact recovery rules are
applied. Depending on the sparsity level, it can be reconstructed from 190 samples instead
of 256. PSNR values after reconstruction are shown in the table for the different
techniques. A similar result follows for the Lena image. The elapsed time for the three
implemented algorithms is computed and the comparison is shown in Table 5.2; the time is
given in seconds. The tabular results show that the implemented OMP algorithm is better
than the existing techniques: the reconstruction process is faster and gives a stable
result. The elapsed time is calculated for recovery of the image from its sparse-domain
form.
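The PSNR values compared in Table 5.1 follow the standard definition. The following is a minimal sketch of that computation (not the evaluation code used in this work), assuming numpy and 8-bit images with peak value 255; the demonstration images are synthetic:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same size."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Demonstration on a synthetic 256 x 256 image with additive Gaussian noise.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
noisy = np.clip(img + rng.normal(0.0, 5.0, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```

A noise standard deviation of 5 gives an MSE of roughly 25, so the printed PSNR lands around 34 dB, comparable in scale to the values reported in Table 5.1.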
5.3 CONCLUSION AND DISCUSSIONS
The theoretical and empirical work in this thesis demonstrates that OMP is an effective
alternative for signal recovery from random measurements. Here, compressive-sensing-based
image reconstruction is performed by implementing orthogonal matching pursuit with
Gaussian measurements under modified conditions. The simulation results demonstrate that
the implemented OMP gives faster reconstruction than the existing algorithms, using fewer
dimensions than previous work on OMP. The modified OMP can be used effectively to recover
sparse images. The implemented OMP technique is performed under certain conditions and
stopping rules, and it is observed that the complexity of the algorithm is reduced. It
provides feasible results with reduced running time for a smaller amount of undersampled
data. The modified technique can be further optimized or even generalized under these
conditions to achieve better reconstruction results in less elapsed time.
5.4 WORK TO BE DONE
So far, OMP with explicit stopping rules has been implemented. The next step is the
incorporation of the restricted isometry property (RIP) condition as a stopping rule
together with MIP. Generalization of the already implemented technique can also be
considered in future work. The aim is also to optimize the design of the measurement
matrix and the procedure for identifying and selecting the significant components, in
order to improve the degree of correlation. Generalization can be obtained by improving
the criteria for identifying multiple significant indices for better correlation.
Regularization of the technique yields an algorithm with reduced computational time.