

arXiv:1405.4201v2 [cs.CE] 29 May 2014


Exploiting Prior Knowledge in Compressed Sensing Wireless ECG Systems

Luisa F. Polanía, Student Member, IEEE, Rafael E. Carrillo, Member, IEEE, Manuel Blanco-Velasco, Senior Member, IEEE, and Kenneth E. Barner, Senior Member, IEEE

Abstract—Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods.

Index Terms—Electrocardiogram (ECG), wireless body area networks (WBAN), compressed sensing (CS), wavelet transform.

I. INTRODUCTION

A WIRELESS body area network (WBAN) is a radio frequency-based wireless networking technology that connects small nodes with sensor and/or actuator capabilities in, on, or around a human body. WBANs promise to revolutionize health monitoring by allowing the transition from centralized health care services to ubiquitous and pervasive health monitoring in every-day life. One of the major challenges in the design of such systems is energy consumption, as WBANs are battery powered [1].

The WBAN energy consumption can be divided into three main processes: sensing, wireless communication and data processing. However, the process that consumes most of the energy is the wireless transmission of data [1], which indicates that some data reduction operation should be performed at the sensor node to reduce the energy cost of the network. In addition, data reduction can supplement the bandwidth constraints of the WBAN when many sensor nodes are required to measure different physiological signals. Mamaghanian et al. [2] recently proposed to use CS to lower energy consumption and complexity in WBAN-enabled ECG monitors. Compressed sensing is an emerging field that exploits the structure of signals to acquire data at a rate proportional to the information content rather than the frequency content, therefore allowing sub-Nyquist sampling [3], [4]. When this sampling strategy is introduced in WBANs, it gives rise to sensor nodes that efficiently acquire a small group of random linear measurements that are wirelessly transmitted to remote terminals. Indeed, sensor nodes can achieve high compression of ECG data with low computational cost when using a sparse binary sensing matrix [2].

L.F. Polanía and K.E. Barner are with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE, 19711 USA (e-mail: [email protected]; [email protected]). R.E. Carrillo is with the Institute of Electrical Engineering, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland (e-mail: [email protected]). M. Blanco-Velasco is with the Department of Signal Theory and Communications, Universidad de Alcalá, Madrid 28805, Spain (e-mail: [email protected]).

Most state-of-the-art algorithms for ECG compression are based on wavelet transforms because of their straightforward implementation and desirable time and frequency localization properties [5]–[7]. The main works in this area are characterized by hierarchical tree structures, such as embedded zerotree wavelet (EZW) [6] and set partitioning in hierarchical trees (SPIHT) [7], which leverage the correlations of the wavelet coefficients across scales within a hierarchically decomposed wavelet tree. Even though wavelet-based methods offer good performance in terms of compression, CS-based methods exhibit superior energy efficiency due to their lower processing complexity at the encoder [2], [8].

The application of CS in WBAN-enabled ECG monitors is still in its infancy. Dixon et al. [8] studied several design considerations for CS-based ECG telemonitoring via a WBAN, including the encoder architecture and the design of the measurement matrix. Their results show high compression ratios using a 1-bit Bernoulli measurement matrix. Zhilin et al. [9] introduced CS for wireless telemonitoring of non-invasive fetal ECG. They exploited the block-sparsity structure of the signals in the time domain and used block sparse Bayesian learning (BSBL) for the reconstruction. In [10], Mamaghanian et al. showed significant gains in compression and robustness attained by exploiting structural ECG information. In a more recent work, Mamaghanian et al. [11] proposed a new CS architecture based on random demodulation techniques for ambulatory physiological signal monitoring in embedded systems. Their results exhibit better energy efficiency than traditional acquisition systems.

Our previous works in the area include CS-based algorithms for ECG compression with a focus on algorithms achieving joint reconstruction of ECG cycles by exploiting correlation between adjacent heartbeats [12]–[14]. We also proposed a novel CS-based approach to recover ECG signals contaminated with electromyographic (EMG) noise using fractional lower-order moments [15].

Even though previous works validate the potential of CS for energy-efficient ECG compression [2], [8], [9], the performance of CS in terms of compression rate and ECG reconstruction quality is still unsatisfactory when compared to the results attained by state-of-the-art algorithms based on exploiting magnitude correlation across wavelet subbands [6], [7]. Most previous CS-based ECG compression works exploit only the sparsity of the signal, ignoring important signal structure information that can be known a priori and lead to enhanced reconstruction results.

The main contribution of this paper is the application of a CS algorithm that enables exploitation of ECG signal structure in the reconstruction process. Two key signal structure properties are incorporated into the proposed algorithm. The first property captures the wavelet domain dependencies across subbands and the second exploits the significant fraction of common wavelet coefficient support for consecutive ECG segments. The proposed algorithm falls within the framework of model-based CS [16]—a new CS framework based on unions of subspaces—that can enhance signal reconstruction while reducing the number of measurements. The motivation for using a model-based approach is that it enables the incorporation of structural dependencies between the locations of the signal coefficients caused by R wave events. However, it is worth mentioning that the proposed algorithm differs from traditional model-based recovery algorithms [10], [16] in two respects. First, it uses prior support estimate knowledge to leverage the small variation in the support set of adjacent data sequences. Second, it excludes the selection of coefficients from the lowest-energy wavelet subband.

The performance of the proposed method is evaluated using the MIT-BIH Arrhythmia Database [17]. Given that most CS ECG application works use the BPDN algorithm [2], [8], system-level comparisons are provided based on implementations of the BPDN and proposed reconstruction algorithms. Simulations are also performed with the bound-optimization-based BSBL algorithm, previously employed by Zhilin et al. [9] for non-invasive fetal ECG, in order to compare with other structured sparsity-based CS reconstruction algorithms. Results indicate that the proposed algorithms outperform both BPDN and bound-optimization-based BSBL in terms of compression rate and reconstruction quality.

The organization of the paper is as follows. Section II presents a brief review of CS and model-based CS [16]. Section III describes the connected subtree structure encountered in the wavelet representation of the ECG and studies the support variation across consecutive ECG segments. In Section IV, the proposed method is presented. Numerical results for the proposed method and comparisons with a benchmark state-of-the-art algorithm for ECG compression, SPIHT, are presented in Section V. Finally, we conclude in Section VI with closing remarks.

II. BACKGROUND

A. Compressed sensing

Let $x \in \mathbb{R}^N$ be a signal that is either $K$-sparse or compressible in an orthogonal basis $\Psi$. Thus, the signal $x$ can be approximated by a linear combination of a small number of vectors from $\Psi$, i.e., $x \approx \sum_{i=1}^{K} s_i \psi_i$, where $K \ll N$. Let $\Phi$ be an $M \times N$ sensing matrix, $M < N$. Compressed sensing [3], [4] addresses the recovery of $x$ from linear measurements of the form $y = \Phi x = \Phi \Psi s$. Defining $\Theta = \Phi \Psi$, the measurement vector becomes $y = \Theta s$. Compressed sensing results show that the signal $x$ can be reconstructed from $M = O(K \log(N/K))$ measurements if the matrix $\Theta$ satisfies some special conditions, e.g., the restricted isometry property [4], the mutual coherence [18], or the null space property [3]. In real scenarios with noise, the signals can be reconstructed from $y$ by solving the convex optimization problem

$$\min_s \ \frac{1}{2}\|y - \Theta s\|_2^2 + \lambda \|s\|_1, \qquad (1)$$

with $\lambda$ a regularization parameter that balances sparsity and reconstruction fidelity. The optimization problem in (1) is referred to as basis pursuit denoising (BPDN) [19].
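Problem (1) can be handled by any convex solver; as a minimal illustrative sketch (not the implementation used in the paper), the following NumPy code solves BPDN with the iterative soft-thresholding algorithm (ISTA). The function name and parameter choices are ours.

```python
import numpy as np

def bpdn_ista(Theta, y, lam, n_iter=2000):
    """Minimize 0.5*||y - Theta s||_2^2 + lam*||s||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(Theta, 2) ** 2              # Lipschitz constant of the gradient
    s = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        z = s - Theta.T @ (Theta @ s - y) / L      # gradient step on the quadratic term
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding prox
    return s
```

The soft-thresholding step is the proximal operator of $\lambda\|s\|_1$, which is what makes this simple gradient scheme converge to the BPDN minimizer.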

In addition to convex optimization methods, a family of iterative greedy algorithms [20]–[22] has received significant attention due to their algorithmic simplicity and low complexity. Two iterative algorithms of importance for the problem at hand are Compressive Sampling Matching Pursuit (CoSaMP) [21] and Iterative Hard Thresholding (IHT) [23].

1) CoSaMP: CoSaMP is an iterative greedy algorithm that offers the same theoretic performance guarantees as even the best convex optimization approaches [21]. At each iteration, several components of the vector $s$ are selected based on the largest correlation values between the columns of $\Theta$ and the residual vector. If they are found sufficiently reliable, their indices are added to the current support estimate of $s$. This is repeated until all the recoverable portion of the signal is found.

2) Iterative Hard Thresholding: IHT is a powerful method for sparse recovery that converges to a local minimum of the problem statement

$$\min_s \|y - \Theta s\|_2^2 \quad \text{subject to} \quad \|s\|_0 \le K, \qquad (2)$$

by using the recursion

$$s^{i+1} = H_K\!\left(s^i + \Theta^T (y - \Theta s^i)\right), \qquad (3)$$

where $H_K$ is a non-linear operator that sets to zero all elements other than the $K$ largest elements (in magnitude). IHT is a very simple algorithm that recovers sparse and compressible vectors with a minimal number of observations and with near optimal accuracy, whenever the matrix $\Theta$ has a small restricted isometry constant [23].
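Recursion (3) transcribes almost directly into NumPy. The sketch below is our own illustration; a conservative step size is added for stability when $\Theta$ does not have a small restricted isometry constant (Eq. (3) itself uses a unit step).

```python
import numpy as np

def iht(Theta, y, K, n_iter=300, mu=None):
    """IHT recursion: s <- H_K(s + mu * Theta^T (y - Theta s))."""
    if mu is None:
        mu = 0.9 / np.linalg.norm(Theta, 2) ** 2   # step size; Eq. (3) uses mu = 1
    s = np.zeros(Theta.shape[1])
    for _ in range(n_iter):
        z = s + mu * (Theta.T @ (y - Theta @ s))   # gradient step toward the data
        keep = np.argsort(np.abs(z))[-K:]          # indices of the K largest magnitudes
        s = np.zeros_like(z)
        s[keep] = z[keep]                          # H_K: zero all but the K largest
    return s
```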

B. Model-based Compressed Sensing

Model-based compressive sensing is a new paradigm that aims to capture the inter-dependency structure in the support of the large signal coefficients using a union-of-subspaces model [16]. This model decreases the degrees of freedom of the signal by allowing only some configurations of support for the largest coefficients.

Let $x$ be a $K$-sparse signal. Then $x$ lies in $\Sigma_K \subset \mathbb{R}^N$, which is a union of $\binom{N}{K}$ subspaces of dimension $K$. A union-of-subspaces model allows only some $K$-dimensional subspaces in $\Sigma_K$ and leads to representations that incorporate signal structure. Let $x|_{\Omega}$ denote the coordinates of $x$ corresponding to the set of indices $\Omega \subseteq \{1, \dots, N\}$, and let $\Omega^C$ represent the complement of $\Omega$. Then, the signal model $\mathcal{M}_K$ is defined as the union of $m_K$ canonical $K$-dimensional subspaces

$$\mathcal{M}_K = \bigcup_{m=1}^{m_K} \mathcal{X}_m, \quad \mathcal{X}_m = \left\{ x : x|_{\Omega_m} \in \mathbb{R}^K,\ x|_{\Omega_m^C} = 0 \right\}, \qquad (4)$$

where $\{\Omega_1, \dots, \Omega_{m_K}\}$ is the set of all allowed supports with $|\Omega_m| = K$ for each $m = 1, \dots, m_K$. It should be noted that $\mathcal{M}_K \subseteq \Sigma_K$ and that $\mathcal{M}_K$ contains $m_K \le \binom{N}{K}$ subspaces. Signals from $\mathcal{M}_K$ are called $K$-model sparse.

A similar treatment is applied to compressible signals. A compressible signal $x \in \mathbb{R}^N$ that is nearly $K$-model sparse can be approximated by the best model-based approximation in $\mathcal{M}_K$. The $\ell_2$ error produced by the approximation is given by

$$\sigma_{\mathcal{M}_K} = \inf_{\bar{x} \in \mathcal{M}_K} \|x - \bar{x}\|_2. \qquad (5)$$

The algorithm that provides the best $K$-term approximation of the signal $x$ under the model $\mathcal{M}_K$ is denoted as $M(x, K)$. Thus, the error $\sigma_{\mathcal{M}_K}$ can also be written as $\sigma_{\mathcal{M}_K} = \|x - M(x, K)\|_2$. A sparsity model $\mathcal{M} = \{\mathcal{M}_1, \mathcal{M}_2, \dots\}$ produces nested approximations if the support of $M(x, K^*)$ contains the support of $M(x, K)$ for all $K < K^*$. If the signal model produces nested approximations, then the support of the difference vector $M(x, jK) - M(x, (j-1)K)$ lies in a small union of subspaces. These difference vectors form sets of residual subspaces. The $j$th set of residual subspaces of size $K$ is defined as

$$\mathcal{R}_{j,K}(\mathcal{M}) = \left\{ u \in \mathbb{R}^N \ \text{s.t.}\ u = M(x, jK) - M(x, (j-1)K) \ \text{for some}\ x \in \mathbb{R}^N \right\}, \qquad (6)$$

for $j = 1, \dots, \lceil N/K \rceil$. A structured compressible signal $x$ can be robustly recovered from the compressive measurements $y = \Phi x$ if the matrix $\Phi$ satisfies the restricted amplification property (RAmP) [16]. A matrix $\Phi$ has the $(\epsilon_K, r)$-restricted amplification property for the residual subspaces $\mathcal{R}_{j,K}$ of model $\mathcal{M}$ if

$$\|\Phi u\|_2^2 \le (1 + \epsilon_K)\, j^{2r} \|u\|_2^2 \qquad (7)$$

for any $u \in \mathcal{R}_{j,K}$ and for each $1 \le j \le \lceil N/K \rceil$.

Baraniuk et al. [16] incorporated the union-of-subspaces models into two well-known CS recovery algorithms, CoSaMP and IHT, through a single modification in the algorithms. The modification, in practice, replaces the best $K$-term approximation with a best $K$-term model-based approximation.

C. Tree-structured sparsity models

One example of a structured sparsity model is that encountered in signals whose most significant wavelet coefficients are organized into a tree structure, and where the largest coefficients cluster along the branches of the tree [16].

Consider an $N$-dimensional signal $x$. Given a wavelet function $\psi$ and a scaling function $\varphi$, the wavelet representation of $x$ is defined in terms of shifted versions of $\varphi$ and shifted and dilated versions of $\psi$:

$$x = \sum_{i=0}^{N_L-1} a_{L,i}\,\varphi_{L,i} + \sum_{j=1}^{L} \sum_{i=0}^{N_j-1} d_{j,i}\,\psi_{j,i}, \qquad (8)$$

where $j$ denotes the scale of analysis and $L$ denotes the coarsest scale. $N_j = N/2^j$ indicates the number of coefficients at scale $j \in \{1, \dots, L\}$ and $i$ represents the position, $0 \le i \le N_j - 1$. The wavelet transform consists of the scaling coefficients $a_{L,i}$ and wavelet coefficients $d_{j,i}$. Using the previous notation, we write $x = \Psi s$, where $\Psi$ is the orthogonal matrix containing the wavelet and scaling functions as columns and $s = [d_{1,0} \dots d_{1,N_1-1} \dots d_{L,0} \dots d_{L,N_L-1}\ a_{L,1} \dots a_{L,N_L-1}]^T$ is the vector of scaling and wavelet coefficients. The vector $s$ can be decomposed into $L + 1$ subvectors. The first $L$ subvectors are denoted by $d_j$, $j = 1, \dots, L$, and the $j$th subvector contains all of the wavelet coefficients for scale $j$. The last subvector corresponds to the scaling coefficients and is denoted as $a_L$. Thus, $s$ can also be written as $s = [d_1\ d_2 \dots d_L\ a_L]^T$.
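The decomposition in (8) and the layout of $s$ can be made concrete with a toy example. The sketch below uses the orthonormal Haar wavelet (rather than the Daubechies-4 filters used later in the paper) so that the filter bank fits in a few lines; the function name is ours.

```python
import numpy as np

def haar_dwt(x, L):
    """L-level 1-D Haar transform; returns (a_L, [d_1, ..., d_L]) as in Eq. (8)."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(L):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # wavelet (detail) coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # scaling (approximation) coefficients
        details.append(d)
    return a, details

N, L = 32, 3
x = np.sin(2 * np.pi * np.arange(N) / N)
aL, d = haar_dwt(x, L)
# Subband sizes follow N_j = N / 2^j; the coefficient vector is s = [d_1 ... d_L a_L].
sizes = [len(dj) for dj in d] + [len(aL)]
print(sizes)   # [16, 8, 4, 4]
```

Here $d_1$ is the finest subband with $N_1 = N/2$ coefficients, matching the indexing below Eq. (8); energy preservation follows from the orthonormality of the transform.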

The wavelet atoms form a binary tree structure where the wavelet coefficient $d_{j,i}$ is the parent of its two children $d_{j+1,2i}$ and $d_{j+1,2i+1}$. This nesting property causes rapid transitions and other singularities to manifest as chains of large coefficients along the branches of the wavelet tree [24]. This gives rise to the concept of a connected subtree, which refers to a connected set of nodes $\Omega$ meeting the condition that whenever a coefficient $d_{j,i} \in \Omega$, then its parent also belongs to $\Omega$. In [16], Baraniuk et al. assumed $N_L = 1$ for simplicity and defined the structured sparsity model $\mathcal{T}_K$ as the union of all $K$-dimensional subspaces corresponding to supports $\Omega$ that form connected subtrees,

$$\mathcal{T}_K = \Big\{ x = a_{L,0}\varphi_{L,0} + \sum_{j=1}^{L} \sum_{i=0}^{N_j-1} d_{j,i}\psi_{j,i} : d|_{\Omega^C} = 0,\ |\Omega| = K,\ \Omega \text{ forms a connected subtree} \Big\}, \qquad (9)$$

where $\Omega^C$ denotes the complement of the set $\Omega$. To find the best $K$-term tree-based approximation, Baraniuk et al. used the condensing sort and select algorithm (CSSA) [25], which solves for

$$\bar{x}^* = \arg\min_{\bar{x} \in \mathcal{T}_K} \|x - \bar{x}\|_2 \qquad (10)$$

by using a two-stage process. The first stage merges the non-monotonic segments of the tree branches with an iterative sort-and-average routine. The second stage simply sorts the wavelet coefficients once they are organized in a monotonic non-increasing sequence along the branches out from the root. The principle behind the CSSA algorithm is that the standard $K$-term approximation coincides with the subtree approximation when the wavelet data is monotonically non-increasing along the tree branches.
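For illustration, the connected-subtree condition on a candidate support can be tested with a few lines. The parent convention below (the parent of coefficient $i$ at scale $j$ is coefficient $\lfloor i/2 \rfloor$ at the coarser scale $j+1$, consistent with $N_j = N/2^j$) and the function names are our own; this is a check of the subtree property only, not the CSSA itself.

```python
def parent(j, i):
    """Parent of detail coefficient (j, i): next coarser scale, index i // 2.
    (Convention: scale j has N/2^j coefficients, so scale j+1 is coarser.)"""
    return (j + 1, i // 2)

def is_connected_subtree(support, L):
    """True if every node's parent is also in the support (root scale L excluded)."""
    s = set(support)
    return all(j == L or parent(j, i) in s for (j, i) in s)

# A chain from the root down to a fine-scale coefficient is a connected subtree:
print(is_connected_subtree({(3, 0), (2, 0), (1, 1)}, L=3))  # True: (1,1)->(2,0)->(3,0)
print(is_connected_subtree({(3, 0), (1, 1)}, L=3))          # False: parent (2,0) missing
```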

[Fig. 1. Daubechies-4 coefficient subvectors $d_j$, $j = 1, \dots, 5$, and $a_5$ for an ECG time series.]

III. MOTIVATION

In this section, we analyze the structure of the wavelet representation of ECG signals to motivate the incorporation of prior information in CS-based recovery algorithms for ECG reconstruction. We concentrate on exploiting two key properties of the ECG wavelet coefficients. The first property is the connected subtree structure formed by the largest (in magnitude) wavelet coefficients, and the second property is the high fraction of common support between adjacent ECG segments.

A. Connected subtree structure of ECG wavelet coefficients

Sharp transition regions in the time domain generate large magnitude wavelet coefficients that persist along the branches of the wavelet tree, forming connected rooted subtrees [24]. This behavior is also present in the wavelet representation of ECG signals and is connected with R wave events. This idea is illustrated in Fig. 1, which shows the coefficient subvectors $d_j$, $j = 1, \dots, 5$, and $a_5$, corresponding to the Daubechies-4 wavelet transform of an ECG time series using a decomposition level $L = 5$. The subvectors are plotted as rows stacked on top of each other. The ECG time series, located at the bottom of Fig. 1, corresponds to an extract of 11.5 seconds from record 117 of the MIT-BIH Arrhythmia Database. Each coefficient vector is time-shifted so that the tree structure can be clearly identified. For visualization purposes, the magnitude of the coefficients is normalized so that the Euclidean norm of each wavelet subband is unity.

Examining the wavelet representation in Fig. 1, it is noticed that the large coefficients are aligned and propagate across scales, forming a connected tree structure. This persistence property is mainly noticed in subbands $d_4$, $d_3$, and $d_2$. These results suggest that the tree-structured sparsity model described in Section II-C is an appropriate model to represent the support of the most significant ECG wavelet coefficients. The tree structure is intrinsically related to the shape of the ECG cycles. When compared with the ECG time series, it is noted that the large coefficients are connected with the QRS complexes, which can be regarded as sharp transition regions of the signal. We propose to exploit the tree-structured sparse representation as additional prior information for the CS reconstruction of ECG signals. As stated by the model-based CS framework [16], knowledge of the signal support potentially leads to high quality reconstruction using fewer measurements, and thus higher compression performance.

[Fig. 2. Fraction of common support between consecutive ECG segments. The results are averaged over the selected data set from the MIT-BIH Arrhythmia Database.]

B. Common support between consecutive ECG segments

In this section, we study how the support of two consecutive ECG segments varies. Consider the 10-min long single leads extracted from records 100, 101, 102, 103, 107, 109, 111, 115, 117, 118 and 119 in the MIT-BIH Arrhythmia Database, which are sampled at 360 Hz, and form 100 consecutive sequences of 2048 samples for each record. Let $\Omega_t$ denote the support of the $K = 225$ largest (in magnitude) wavelet coefficients of sequence $s_t$, where $t = 1, \dots, 100$ denotes the order of the sequences. Consider the fraction of common support between two consecutive sequences, denoted as $\sigma_{t,t-1}$, which is defined as

$$\sigma_{t,t-1} = \frac{|\Omega_t \cap \Omega_{t-1}|}{|\Omega_t|}, \quad t = 2, \dots, 100. \qquad (11)$$
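Equation (11) is straightforward to compute from two coefficient vectors. The sketch below uses synthetic coefficient vectors rather than actual MIT-BIH wavelet coefficients, and all names are illustrative.

```python
import numpy as np

def support_overlap(s_prev, s_curr, K):
    """Fraction of common support (Eq. 11) between the K largest-magnitude
    coefficients of two consecutive segments."""
    omega_prev = set(np.argsort(np.abs(s_prev))[-K:])
    omega_curr = set(np.argsort(np.abs(s_curr))[-K:])
    return len(omega_curr & omega_prev) / K

# Toy example: a sparse "segment" and a slightly perturbed successor.
rng = np.random.default_rng(1)
base = rng.standard_normal(2048) * (rng.random(2048) < 0.1)
nxt = base + 0.05 * rng.standard_normal(2048)
print(support_overlap(base, nxt, K=225))
```

With nearly identical segments the overlap is close to 1; it drops as the supports drift, which is the quantity plotted in Fig. 2 for real ECG data.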

The averaged results over the entire ECG data set are illustrated in Fig. 2, where the data points are connected for visualization purposes only. The fraction of common support is high, always greater than 0.65, which indicates that we can use the support of the previous data sequence to improve the estimate of the support of the current data sequence. In designing the CS-based reconstruction algorithm, we propose to include support information of previous reconstructed data sequences to improve the performance. A related work by Wang et al. [26] shows how to iteratively enhance recovery of a signal by solving basis pursuit in the first iteration to obtain a support estimate, solving the problem of basis pursuit with partially known support [27] with this support estimate, and repeating the steps. Their work attains exact reconstruction guarantees for a single iteration even if the initial support estimate includes a small fraction of indices outside the true support.

[Fig. 3. Block diagram of the proposed ECG compression method: input signal → sampling → redundancy removal → quantization $Q(\cdot)$ → Huffman encoding; at the remote terminal, Huffman decoding → recovery of the original samples → reconstruction algorithm → recovered signal.]

IV. METHODS

The proposed ECG compression method based on compressive sensing for potential implementation in WBAN-based telemonitoring systems is described in this section. The method is summarized in Fig. 3. The process initiates with the sampling of the ECG signals through linear random measurements, followed by a redundancy removal module and a quantization stage. The quantized samples are entropy coded and transmitted to a remote terminal where the reconstruction is performed. The presented contribution focuses on the reconstruction algorithm, which relies on prior knowledge of ECG wavelet coefficient structure to reduce the required number of measurements and improve reconstruction quality. A more detailed explanation of each stage is given below.

A. Sampling and Encoding

The first step addresses the sampling of consecutive ECG segments of length $N$. Let $x$ denote an ECG segment. The information we acquire about $x$ can be described by $y = \Phi x$, where $\Phi$ is an $M \times N$ matrix. In order to recover the best $K$-term model-based approximation of the original signal, the matrix $\Phi$ needs to satisfy the restricted amplification property (RAmP). It is known that sub-Gaussian matrices meet this condition with high probability [16]. Here we build the entries of the matrix $\Phi$ by independently sampling from a symmetric Bernoulli distribution ($P(\Phi_{i,j} = \pm 1/\sqrt{M}) = 1/2$) in order to facilitate an efficient hardware implementation. The use of Bernoulli matrices, as compared to other sub-Gaussian matrices, results in simpler circuit complexity, data storage, and computation requirements [28].
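Such a sensing matrix and the measurement step can be sketched as follows. The segment length $N = 2048$ matches the experiments in Section III; the choice $M = 512$ and all names are ours, for illustration only.

```python
import numpy as np

def bernoulli_matrix(M, N, seed=0):
    """Symmetric Bernoulli sensing matrix: entries +/- 1/sqrt(M), equal probability."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

N = 2048   # ECG segment length used in the paper's experiments
M = 512    # number of measurements (illustrative choice)
Phi = bernoulli_matrix(M, N)
x = np.random.default_rng(1).standard_normal(N)   # stand-in for an ECG segment
y = Phi @ x                                       # compressive measurements
```

The $1/\sqrt{M}$ scaling gives every column unit Euclidean norm, a common normalization for sub-Gaussian sensing matrices.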

To realize the analog hardware implementation of CS matrices, Mamaghanian et al. [11] recently proposed the spread spectrum random modulator pre-integrator. This architecture starts with a pre-modulation block, followed by a random demodulation pre-integrator architecture, which is composed of parallel channels of random demodulators. Each random demodulator is, in turn, composed of three stages. The first stage refers to the multiplication of the input signal with a pseudo-random sequence that takes values $\pm 1$ with equal probability. Such sequences are generated with a pseudo-random bit sequence generator. The second stage incorporates a low-pass filter to avoid aliasing and the final stage corresponds to a standard ADC. Their implementation achieves 43% energy saving, compared to traditional Nyquist sampling, when applied to EEG and ECG signals.

The use of the described sampling procedure results in similar adjacent measurement vectors. This is caused by the quasi-periodic nature of the ECG signal and the use of a fixed sampling matrix. To further improve the compression and reduce the amount of redundant information, we implemented the same redundancy removal and entropy coding stages proposed by Mamaghanian et al. [2]. The redundancy removal stage computes the difference between two consecutive measurement vectors and only transmits this difference to the quantization module. In [2], it was shown that the variance of the difference signals between consecutive measurement vectors is lower than the variance of the original measurement vectors, which leads to a reduction in the number of bits for signal representation, from 12 to 9 bits. It is worth mentioning that the experiments in [2] were applied to non-aligned ECG segments, which implies that the QRS complexes were located at different locations and, nevertheless, the corresponding measurements exhibited significant redundancy to be removed. An 8-bit optimal scalar quantizer designed with the Lloyd-Max algorithm is utilized [29], and the entropy coding stage uses Huffman coding to further increase the compression ratio.
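The redundancy removal stage amounts to first-order difference encoding of the measurement vectors. A minimal round-trip sketch (quantization and Huffman coding omitted for clarity; all names ours):

```python
import numpy as np

def encode_differences(Y):
    """Keep the first measurement vector and the differences y_t - y_{t-1}."""
    return Y[0], np.diff(Y, axis=0)

def decode_differences(y0, D):
    """Recover the original measurement vectors by cumulative summation."""
    return np.vstack([y0, y0 + np.cumsum(D, axis=0)])

# Round trip on synthetic, slowly varying measurement vectors:
rng = np.random.default_rng(2)
Y = np.cumsum(rng.standard_normal((100, 16)), axis=0)   # rows = measurement vectors
y0, D = encode_differences(Y)
Y_hat = decode_differences(y0, D)
```

When adjacent rows of `Y` are similar, the entries of `D` have smaller variance than those of `Y`, which is what allows the bit-width reduction reported in [2].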

B. Reconstruction algorithm using prior information

The original samples must be recovered from the transmitted difference between consecutive vectors. Therefore, at the remote terminal, Huffman decoding followed by recovery of the original samples is employed. For reconstruction of the ECG signal, two iterative algorithms that are easily modified to incorporate prior ECG wavelet representation structure information are proposed. The algorithms are CoSaMP and IHT, which were previously modified by Baraniuk et al. [16] to incorporate structured sparsity models. Their modification results in replacing the nonlinear sparse approximation step with a structured sparse approximation. These algorithms are known in the literature as model-based CoSaMP and model-based IHT; both have provable robust guarantees. One of the main properties of model-based CoSaMP is that robust signal recovery requires only a number of measurements that is proportional to the sparsity level of the signal.

The sparsity model employed in this paper corresponds to a modified version of the tree-structured sparsity model described in Section II-C. The modification is based on the fact that ECG wavelet coefficients at scale $j = 1$ correspond to coefficients with nearly zero magnitude, as illustrated in Fig. 4, and should therefore not be included in the best $K$-term model-based signal approximation. Thus, we select $N_L$ scaling coefficients and define $\mathcal{T}^\star_K$ as

$$\mathcal{T}^\star_K = \left\{ x = \sum_{i=0}^{N_L-1} a_{L,i}\,\varphi_{L,i} + \sum_{j=1}^{L} \sum_{i=0}^{N_j-1} d_{j,i}\,\psi_{j,i} :\ \mathrm{supp}(d_{1,i}) \in \Omega^C \text{ for all } i,\ d|_{\Omega^C} = 0,\ |\Omega| = K,\ \Omega \text{ forms a connected subtree} \right\}. \quad (12)$$

Fig. 4. Wavelet decomposition of level $L = 5$ for an ECG time series using Daubechies-4 (subbands $a_5, d_5, d_4, d_3, d_2, d_1$).

Denote the algorithm that finds the best $K$-term tree-based approximation by $\mathbb{T}(x, K)$. That is,

$$\mathbb{T}(x, K) = \arg\min_{\bar{x} \in \mathcal{T}^\star_K} \|x - \bar{x}\|_2. \quad (13)$$

The model-based CoSaMP and the model-based IHT, using the tree-structured sparsity model $\mathcal{T}^\star_K$, are summarized in Algorithms 1 and 2, respectively, and are referred to as modified model-based CoSaMP (MMB-CoSaMP) and modified model-based IHT (MMB-IHT). The halting criterion for both algorithms can be a fixed number of iterations or a bound on the residual norm, $\|r_j\|_2 \leq \epsilon$, for some predetermined $\epsilon > 0$. Note that in the algorithm tables, the Moore-Penrose pseudo-inverse of $\Theta$ is denoted by $\Theta^\dagger$, $\Theta_T$ denotes the submatrix obtained by extracting the columns of $\Theta$ corresponding to the indexes in $T$, $b|_T$ represents the entries of $b$ corresponding to the set of indices $T$, and the support of $s$ is denoted by $\mathrm{supp}(s)$. Also, we use the condensing sort and select algorithm [25] to solve the optimization problem in (13).

As in model-based CoSaMP, MMB-CoSaMP starts the iteration by calculating the correlation values between the columns of $\Theta$ and the residual of the previous iteration $r_{j-1}$ (step 4). The correlation values are used to find the support of the best $2K$-term tree-based approximation (step 5), which is subsequently merged with the support of the previous iteration (step 6). The next step solves a least-squares problem to approximate the target signal on the updated support (step 7). To enforce a sparse solution, the algorithm finds the best $K$-term tree-based approximation of the least-squares solution (step 8) and finalizes with the update of the residual (step 9).

Similarly, MMB-IHT resembles model-based IHT. The algorithm iteratively solves (2) by moving in the opposite direction of the gradient of $\|y - \Theta s\|_2^2$ at each iteration (step 4). MMB-IHT enforces a sparse solution by selecting the best $K$-term tree-based approximation of $b$ (step 5) and finalizes with an update of the residual (step 6). It is worth emphasizing that the proposed algorithms exploit the wavelet tree structure in the steps that refer to the best $K$-term tree-based approximation defined in (13). Both MMB-IHT and MMB-CoSaMP differ from the traditional model-based approaches in the initialization step and in the $K$-term tree-based approximation of the signal.

Unlike the traditional model-based approaches, the proposed algorithms always include the scaling coefficients in the $K$-term tree-based approximation of the signal, which is of great importance given that the scaling coefficients accumulate most of the ECG signal energy. Indeed, in a previous work [30], we show that enforcing CS greedy algorithms to select the atoms corresponding to the scaling coefficients can lead to a significant reduction in the number of measurements while improving the accuracy of the ECG reconstruction.

The initialization step is a modification with respect to traditional approaches because it incorporates support information from the previously reconstructed signal. In Section III-B, it is noted that a high fraction of support is shared between two consecutive ECG segments. The two consecutive ECG segments in the wavelet domain are denoted by $s_t$ and $s_{t-1}$. With the aim of exploiting this information, the first signal estimate is determined by solving a least-squares problem using the support of the previously reconstructed ECG segment, $\Lambda = \mathrm{supp}(s_{t-1})$. The contribution of this estimate is subtracted from the measurement vector, and the algorithm iterates on the residual. These steps are implemented in the initialization of Algorithms 1 and 2. In both algorithms, MMB-IHT and MMB-CoSaMP, the support set is refined through iterations by addition of promising new atoms and deletion of unnecessary atoms. In this way, we expect the algorithms to preserve the common support ($\mathrm{supp}(s_t) \cap \mathrm{supp}(s_{t-1})$) and to replace the unnecessary atoms ($\mathrm{supp}(s_{t-1}) \setminus \mathrm{supp}(s_t)$) with the innovation support of the current ECG sequence, i.e., $\mathrm{supp}(s_t) \setminus \mathrm{supp}(s_{t-1})$.

V. EXPERIMENTAL RESULTS

To validate the proposed methods, compressed measurements are generated using a set of records from the MIT-BIH Arrhythmia Database [17] as the original signals. Every file in the database contains two lead recordings sampled at 360 Hz with 11 bits per sample of resolution. However, since body area networks adopt lower sampling frequencies than 360 Hz, each ECG recording is resampled at 250 Hz. The sampling frequency of 250 Hz is commonly used for ECG monitoring in body area networks [31]. Experiments are carried out and averaged over 10-min long single leads extracted from records 100, 101, 102, 103, 107, 109, 111, 115, 117, 118 and 119. This data set was proposed in [7]; it consists of a variety of signals with different rhythms, wave morphologies and abnormal heartbeats. Results are presented as averages of 100 repetitions of each experiment, with a different realization of the random measurement matrix each time.


Algorithm 1 MMB-CoSaMP
Require: Matrices $\Theta = \Phi\Psi$ and $\Psi$, measurements $y$, sparsity level $K$, support of previously reconstructed ECG segment $\Lambda$, structured sparse approximation algorithm $\mathbb{T}$.
1: Initialize $s_0|_\Lambda = \Theta^\dagger_\Lambda y$, $s_0|_{\Lambda^C} = 0$, $r_0 = y - \Theta s_0$, $j = 0$.
2: while halting criterion false do
3:   $j \leftarrow j + 1$
4:   $e \leftarrow \Theta^T r_{j-1}$
5:   $\Omega \leftarrow \mathrm{supp}(\Psi^T \mathbb{T}(\Psi e, 2K))$
6:   $T \leftarrow \Omega \cup \mathrm{supp}(s_{j-1})$
7:   $b|_T \leftarrow \Theta^\dagger_T y$, $b|_{T^C} \leftarrow 0$
8:   $s_j \leftarrow \Psi^T \mathbb{T}(\Psi b, K)$
9:   $r_j \leftarrow y - \Theta s_j$
10: end while
11: return $\hat{x} \leftarrow \Psi s_j$
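A minimal numerical sketch of Algorithm 1 follows. It is an assumption-laden simplification, not the authors' code: plain $K$-term hard thresholding stands in for the tree-structured approximation $\mathbb{T}$, the sparsifying transform is folded into $\Theta$, and all names are illustrative. What it does preserve is the warm start on the support $\Lambda$ of the previously reconstructed segment.

```python
import numpy as np

def top_k_support(v, k):
    """Stand-in for the structured approximation: indices of the k largest magnitudes."""
    return set(np.argsort(np.abs(v))[-k:])

def mmb_cosamp(Theta, y, K, Lambda=(), n_iter=30, tol=1e-6):
    """CoSaMP skeleton warm-started on the support Lambda of the previously
    reconstructed segment, mirroring the initialization of Algorithm 1."""
    N = Theta.shape[1]
    s = np.zeros(N)
    if len(Lambda) > 0:
        idx = sorted(Lambda)
        s[idx] = np.linalg.pinv(Theta[:, idx]) @ y            # s0|L = pinv(Theta_L) y
    r = y - Theta @ s
    for _ in range(n_iter):
        e = Theta.T @ r                                       # step 4: correlations
        T = top_k_support(e, 2 * K) | set(np.flatnonzero(s))  # steps 5-6: merge supports
        idx = sorted(T)
        b = np.zeros(N)
        b[idx] = np.linalg.pinv(Theta[:, idx]) @ y            # step 7: least squares
        keep = sorted(top_k_support(b, K))                    # step 8: prune to K terms
        s = np.zeros(N)
        s[keep] = b[keep]
        r = y - Theta @ s                                     # step 9: residual update
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return s
```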

A. Performance Evaluation

The compression ratio (CR), the percentage root-mean-square difference (PRD), the normalized version of the PRD (PRDN), the quality score (QS), and the reconstruction SNR (R-SNR) are used as performance measures. The CR is defined as the number of bits required for the original signal over the number of bits required for the compressed signal. Here the original signals refer to the ECG records resampled at 250 Hz. The reconstruction SNR is defined as

$$\text{R-SNR} = 10 \log_{10} \frac{\|x\|_2^2}{\|x - \hat{x}\|_2^2}, \quad (14)$$

where $x$ and $\hat{x}$ denote the $N$-dimensional original and reconstructed signals, respectively. The PRD is defined as $\text{PRD} = (\|x - \hat{x}\|_2 / \|x\|_2) \times 100$. Let $e$ denote an $N$-dimensional vector of ones and $\bar{x}$ be the mean value of $x$. The PRDN is defined as $\text{PRDN} = (\|x - \hat{x}\|_2 / \|x - \bar{x}e\|_2) \times 100$, and the QS as $\text{QS} = \text{CR}/\text{PRD}$ [32].
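These four quality measures translate directly into code; a small pure-Python sketch (function names are ours):

```python
import math

def r_snr(x, x_hat):
    """Reconstruction SNR in dB, Eq. (14)."""
    num = sum(v * v for v in x)
    den = sum((v - w) ** 2 for v, w in zip(x, x_hat))
    return 10.0 * math.log10(num / den)

def prd(x, x_hat):
    """Percentage root-mean-square difference."""
    err = math.sqrt(sum((v - w) ** 2 for v, w in zip(x, x_hat)))
    return 100.0 * err / math.sqrt(sum(v * v for v in x))

def prdn(x, x_hat):
    """PRD normalized by the energy of the mean-removed signal."""
    mu = sum(x) / len(x)
    err = math.sqrt(sum((v - w) ** 2 for v, w in zip(x, x_hat)))
    return 100.0 * err / math.sqrt(sum((v - mu) ** 2 for v in x))

def quality_score(cr, prd_value):
    """QS = CR / PRD [32]."""
    return cr / prd_value
```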

B. Practical considerations

The length of the ECG segments is set to $N = 256$, so that the acquisition time can be sufficiently short (approximately 1 sec at the rate of 250 Hz) for real-time monitoring. The orthogonal Daubechies-4 wavelet is chosen as the sparsifying transform. For the reconstruction algorithms, the halting criterion is either a maximum number of iterations (we selected 70 for our simulations) or a bound on the residual norm, $\|r_t\|_2 \leq \epsilon$. We selected $\epsilon = 1 \times 10^{-3}\|y\|_2$.

For the sparsity level, a sequence of residual energy is defined. The elements in the vector of wavelet coefficients $s$ are ordered according to their square magnitudes, such that

$$|s_{(1)}|^2 \geq |s_{(2)}|^2 \geq \ldots \geq |s_{(N-1)}|^2 \geq |s_{(N)}|^2. \quad (15)$$

The sequence of residual energy is defined as

$$C_K = \frac{\sum_{j=1}^{N} |s_{(j)}|^2 - \sum_{j=1}^{K} |s_{(j)}|^2}{\sum_{j=1}^{N} |s_{(j)}|^2}, \quad K = 1, \ldots, N. \quad (16)$$

Algorithm 2 MMB-IHT
Require: Matrices $\Psi$, $\Phi$, and $\Theta = \Phi\Psi$, measurements $y$, sparsity level $K$, support of previously reconstructed ECG segment $\Lambda$, structured sparse approximation algorithm $\mathbb{T}$.
1: Initialize $s_0|_\Lambda = \Theta^\dagger_\Lambda y$, $s_0|_{\Lambda^C} = 0$, $x_0 = \Psi s_0$, $r_0 = y - \Phi x_0$, $j = 0$
2: while halting criterion false do
3:   $j \leftarrow j + 1$
4:   $b \leftarrow x_{j-1} + \Phi^T r_{j-1}$
5:   $x_j \leftarrow \mathbb{T}(b, K)$
6:   $r_j \leftarrow y - \Phi x_j$
7: end while
8: return $\hat{x} \leftarrow x_j$
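A correspondingly simplified sketch of Algorithm 2 (again a sketch under our own assumptions: plain $K$-term thresholding of the wavelet coefficients replaces the tree-based approximation $\mathbb{T}$, and a conservative step size $1/\|\Phi\|_2^2$ is added for numerical stability, which the paper's pseudocode does not state):

```python
import numpy as np

def mmb_iht(Phi, Psi, y, K, Lambda=(), n_iter=100, tol=1e-6):
    """IHT skeleton warm-started on the support Lambda of the previous segment."""
    Theta = Phi @ Psi
    N = Theta.shape[1]
    s = np.zeros(N)
    if len(Lambda) > 0:
        idx = sorted(Lambda)
        s[idx] = np.linalg.pinv(Theta[:, idx]) @ y      # s0|L = pinv(Theta_L) y
    x = Psi @ s
    r = y - Phi @ x
    mu = 1.0 / np.linalg.norm(Phi, 2) ** 2              # conservative step size (ours)
    for _ in range(n_iter):
        b = x + mu * (Phi.T @ r)                        # step 4: gradient step
        s_b = Psi.T @ b                                 # analyze (Psi orthogonal)
        keep = np.argsort(np.abs(s_b))[-K:]             # step 5: K-term threshold
        s_t = np.zeros(N)
        s_t[keep] = s_b[keep]
        x = Psi @ s_t                                   # synthesize
        r = y - Phi @ x                                 # step 6: residual update
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x
```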

Fig. 5. Sequence of residual energy, averaged over the selected set of records from the MIT-BIH Arrhythmia Database, plotted as a function of $K$ on a logarithmic scale.

From each record of the data set, 300 ECG segments of length $N = 256$ are selected. The sequence of residual energy is averaged over all the selected ECG segments and over all the different records. The results are plotted in Fig. 5 in logarithmic scale. The sparsity level is selected as the value of $K$ that satisfies averaged $C_K = 0.001$, which corresponds to $K = 34$ and is indicated in Fig. 5 with dotted lines. This result indicates that the most significant 34 wavelet coefficients approximately accumulate 99.999% of the total signal energy.
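The selection rule follows directly from Eq. (16); a pure-Python illustration (function names are ours):

```python
def residual_energy_sequence(s):
    """C_K of Eq. (16): fraction of total energy left outside the K largest
    (in magnitude) coefficients, for K = 1, ..., N."""
    energies = sorted((v * v for v in s), reverse=True)
    total = sum(energies)
    seq, acc = [], 0.0
    for e2 in energies:
        acc += e2
        seq.append((total - acc) / total)
    return seq

def choose_sparsity(s, threshold=0.001):
    """Smallest K whose residual energy drops below the threshold
    (the paper reports K = 34 for its averaged sequence)."""
    for k, c_k in enumerate(residual_energy_sequence(s), start=1):
        if c_k <= threshold:
            return k
    return len(s)
```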

Given that the ECG recordings are sampled at 250 Hz, the sampling interval becomes $\Delta t = 1/250$ seconds. A decomposition level $L = 5$ is utilized so that the scaling coefficients vector $a_5$, associated with a physical scale of $2^5 \Delta t = 2^5/250 \approx 0.13$ seconds, approximately isolates the T waves, and thus the detail coefficients capture the QRS complex. In this way, the largest (in magnitude) detail coefficients are expected to exhibit a connected subtree structure caused by the QRS complexes, as shown in Fig. 1. The ECG reconstruction quality, using both MMB-CoSaMP and MMB-IHT algorithms, is evaluated as a function of the wavelet decomposition level. The results are averaged over the selected set of records for a number of measurements $m = 3K$. The results in Fig. 6 indicate that the selection of $L = 5$ is indeed a good choice for the decomposition level. The data points are connected for visualization purposes only.

TABLE I
PRD OBTAINED BY THE PROPOSED ALGORITHMS FOR DIFFERENT SENSING MATRICES

Reconstruction  Sampling     CR:  3.5   4     4.5   5     5.5   6     6.5   7     7.5    8
MMB-IHT         Bernoulli         3.31  3.32  3.32  3.33  3.35  3.55  3.81  4.05  4.43   4.91
                Matrix I          3.42  3.42  3.47  3.70  3.85  4.23  4.80  5.71  6.70   7.79
                Matrix II         3.42  3.43  3.45  3.55  3.67  3.99  4.32  4.68  5.20   5.82
MMB-CoSaMP      Bernoulli         2.98  2.99  2.99  3.11  3.24  3.49  4.02  4.80  9.74   25.58
                Matrix I          2.99  2.99  3.04  3.12  3.62  3.92  5.12  6.90  14.01  28.15
                Matrix II         3.01  3.01  3.03  3.07  3.40  3.70  4.80  6.20  12.90  27.74

Fig. 6. Reconstruction SNR as a function of the wavelet decomposition level. Number of measurements $m = 3K$.

Sparse binary matrices have recently been proposed for CS ECG since they lead to fast computations and low-memory requirements [2]. An experiment is designed to test the performance of the proposed algorithms when using sparse binary matrices. Similarly to the work in [2], two types of sparse binary matrices, matrix I and matrix II, are employed. Matrix I has only $q \ll N$ nonzero elements in each column; each nonzero element takes the value $1/\sqrt{q}$ and its location is chosen randomly. Matrix II has only $q \ll N$ nonzero elements in each column; each nonzero element takes the value $\pm 1/\sqrt{q}$ with equal probability and its location is chosen randomly. In [2], the value $q = \lfloor 0.025 \times N \rfloor$ provides a good tradeoff between execution time and reconstruction quality. The same relation between $q$ and $N$ is selected for the proposed experiment, $q = \lfloor 0.025 \times N \rfloor = 6$. The reconstruction performance of the proposed algorithms using the Bernoulli matrix, described in Section IV-A, and the sparse binary matrices is shown in Table I. The results correspond to averaged PRD values over the entire set of selected records. Even though all the matrices have similar performance for CR $\leq 6$, the performance of the sparse binary matrices deteriorates for CR $> 6$. Hence, the dense Bernoulli matrix is selected as the sampling matrix for the proposed methods. Nevertheless, the minimal detriment to performance suggests that sparse binary sensing matrices are a plausible alternative for CS ECG.

C. Evaluation of ECG reconstruction using the proposed method

This section presents an experiment to evaluate how the reconstruction of ECG signals changes as the amount of prior knowledge of the support set varies. The best performance is achieved with the oracle estimate. Assume an oracle reveals the support set $\Omega$ of the $K$ most significant wavelet coefficients of the signal of interest. The oracle estimate corresponds to the least-squares projection onto the subspace spanned by the columns of $\Theta$ with indices in $\Omega$. This experiment also considers the case where no prior information is known about the support set $\Omega$, and the signal is reconstructed using the traditional CoSaMP [21] and the traditional IHT [23]. The performance of all these methods is compared with the performance attained by the proposed algorithms, MMB-CoSaMP and MMB-IHT.
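The oracle estimate is a single least-squares solve in practice; a brief sketch under our own naming:

```python
import numpy as np

def oracle_estimate(Theta, y, Omega):
    """Least-squares projection onto the span of the columns of Theta
    indexed by the oracle support Omega (upper bound on achievable quality)."""
    idx = sorted(Omega)
    s = np.zeros(Theta.shape[1])
    s[idx] = np.linalg.pinv(Theta[:, idx]) @ y
    return s
```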

The results are averaged over the set of selected records and illustrated in Fig. 7. Given that the objective of this first experiment is only to evaluate the reconstruction of the proposed scheme, we restrict our method to the sampling and reconstruction of the signals and exclude the redundancy removal, quantization, and entropy coding stages. For this experiment, the reconstruction SNR is used to evaluate the quality of the recovered signals as a function of the oversampling ratio $M/K$.

As shown in Fig. 7, the proposed algorithms outperform the traditional CoSaMP and IHT, indicating that exploiting the connected subtree structure of the most significant wavelet coefficients, as well as the common support between consecutive ECG segments, results in a large reduction of the number of measurements required to achieve successful recovery of ECG signals. It is also noted from Fig. 7 that the algorithms based on IHT require fewer measurements than the algorithms based on CoSaMP to attain good reconstruction. The results of the proposed methods are the closest to the best achievable performance obtained by the oracle estimate.

D. Evaluation of compression performance

For the second experiment, the reconstruction algorithm in the compression scheme, Fig. 3, is varied. The proposed MMB-CoSaMP and MMB-IHT are first used as reconstruction algorithms, and their results are compared with BPDN, the bound-optimization-based BSBL algorithm, and an overcomplete dictionary-based reconstruction algorithm. The results are also compared with SPIHT, a state-of-the-art algorithm for ECG compression. The same entropy coding stage of the proposed method is added to SPIHT to ensure fairness in the comparison. The results of the PRD as a function of the compression ratio are illustrated in Fig. 8. The results are averaged over the entire set of selected records.


Fig. 7. Comparison of MMB-CoSaMP and MMB-IHT with CoSaMP, IHT, and the oracle estimate. Reconstruction SNR averaged over all the records of the selected data set for different numbers of measurements.

Basis pursuit denoising is the reconstruction algorithm selected by Mamaghanian et al. [2] and Dixon et al. [8] for the recovery of ECG signals. However, the results in Fig. 8 indicate that the proposed compression scheme works significantly better when MMB-IHT and MMB-CoSaMP are used as reconstruction algorithms than when BPDN is employed. These results are expected, as the proposed reconstruction algorithms exploit prior knowledge of the signal structure, unlike BPDN, which only leverages the sparsity of the signals.

Unlike BPDN, the bound-optimization-based BSBL algorithm, denoted as BSBL-BO, provides flexibility to exploit the block structure and intra-block correlation of the signal sparsity pattern. Even though it was previously employed to reconstruct non-invasive fetal ECG [9], it is also successfully applied to the recovery of adult ECG in the wavelet domain. The reason is the clustering property of the ECG wavelet coefficients, which suggests the use of a block-sparsity model. This property refers to the tendency of the large coefficients to cluster together into blocks, as suggested by Fig. 1. For the implementation of the BSBL-BO algorithm, the block partition was set to $h = 15$ and the maximum number of iterations to 30. Even though the BSBL-BO algorithm offers performance superior to BPDN, it is outperformed by MMB-IHT and MMB-CoSaMP. This result suggests that the connected subtree structured sparsity model may be more appropriate than the block sparsity model to represent the largest (in magnitude) ECG wavelet coefficients. In addition, the incorporation of prior support knowledge also contributes to the superior performance attained by the proposed methods.

It is of interest to compare the performance of the proposed algorithms with overcomplete dictionary-based reconstruction methods. In a previous work [33], we propose the use of a multi-scale dictionary $D \in \mathbb{R}^{N \times J}$, $J > N$, for CS ECG, with the aim of combining the advantages of multi-scale representations using wavelets with the benefits of dictionary learning.

Fig. 8. Compression performance evaluation of the compression scheme using the proposed algorithms (MMB-CoSaMP, MMB-IHT), the BSBL-BO, the MS-BPDN, and the BPDN algorithms. The results of SPIHT are included as a baseline for comparisons.

The dictionary $D$ is divided into subdictionaries according to the corresponding wavelet subband, and each subdictionary is learned separately. The idea behind this approach is to exploit correlations within each wavelet subband. The multi-scale dictionary-based algorithm, denoted as MS-BPDN, aims to solve problem (1) with $\Theta = \Phi\Psi D$, instead of $\Theta = \Phi\Psi$, by using basis pursuit denoising. According to the results in Fig. 8, the proposed algorithms outperform MS-BPDN because they exploit additional signal structure information. However, it is also noticed that MS-BPDN outperforms the traditional wavelet-based BPDN algorithm, which suggests the promising application of adaptive overcomplete dictionaries to CS ECG. The design of more efficient overcomplete dictionary-based reconstruction algorithms that exploit dependencies among dictionary atoms is left for future work.

These results validate the potential of CS-based methods in achieving high compression rates while offering important advantages, as described in previous works in the area [2], [8], such as higher energy and hardware efficiency at the encoder.

The results of SPIHT are illustrated in Fig. 8, with the aim of evaluating how the proposed compression scheme compares with current ECG compression methods. SPIHT is a benchmark state-of-the-art algorithm for ECG compression. SPIHT utilizes an optimized embedded coding of the wavelet coefficients that increases the compression rates and makes it outperform the proposed methods, at the expense of requiring more computations. The time complexity of the SPIHT encoder is $O(N \log N)$ [34], while the matrix-vector multiplication of the CS encoder, $\Phi x$, requires only $O(N)$ operations when $\Phi$ is a random symmetric Bernoulli matrix [35]. Therefore, the use of a CS encoder offers a substantial complexity reduction.

Fig. 9. Visual evaluation of the reconstruction of record 119 using MMB-CoSaMP and MMB-IHT. CR = 6.4. PRD = 2.61 (MMB-CoSaMP), PRD = 2.29 (MMB-IHT). Left: recovery results for a 5-sec sequence. Right: magnified views of the dashed boxes located on the left.

In CS-based methods, there is no access to the rich structure of the wavelet coefficients at the encoder; instead, there is access to the random measurements, which contain fewer structural properties to be exploited at the coding stage. This difference in compression performance between wavelet transform-based methods and CS-based methods was previously noted by Mamaghanian et al. [2], who emphasized the substantially lower power consumption offered by CS-based methods. Even though the proposed algorithm does not outperform SPIHT in terms of compression ratio, it offers better compression and reconstruction performance than previously proposed CS-based methods for ECG compression, such as BPDN and BSBL-BO.

There is significant variability among the set of ECG records, and therefore, it is instructive to calculate the reconstruction performance of the proposed algorithms for each ECG recording. The results are illustrated in Table II. For almost all the records, MMB-IHT provides better performance than MMB-CoSaMP.

E. Visual evaluation

Finally, a visual study of the reconstructed and error signals using the proposed compression scheme is also presented. Two records with different clinical characteristics are selected for the study: 118 and 119. Record 119 contains ventricular ectopic heartbeats, while record 118 contains right bundle branch block heartbeats. The results for a CR = 6.4 are shown in Fig. 9 for record 119 and in Fig. 10 for record 118. Magnified views of the original and reconstructed signals are included in the figures. For the two records, the recovered signals are a good estimate of the original signals, and they preserve detailed information for clinical diagnosis. It should be noted that the reconstructed signals using MMB-IHT exhibit fewer artifacts than the reconstructed signals using MMB-CoSaMP.

TABLE II
PERFORMANCE OF THE PROPOSED ALGORITHMS FOR CR = 6.4

Record    MMB-IHT               MMB-CoSaMP
          PRD   PRDN  QS        PRD   PRDN  QS
100       3.65  7.73  1.75      3.86  8.18  1.66
101       5.86  7.79  1.09      5.83  7.76  1.09
102       4.84  8.34  1.32      5.21  8.98  1.22
103       3.94  4.80  1.62      4.17  5.08  1.53
107       3.56  3.68  1.79      3.99  4.13  1.60
109       3.91  4.51  1.63      4.13  4.75  1.54
111       5.91  6.99  1.08      6.21  7.35  1.02
115       2.74  6.62  2.33      2.57  6.19  2.49
117       2.11  7.84  3.03      2.11  7.86  3.02
118       2.54  4.61  2.51      2.86  5.20  2.23
119       2.29  4.50  2.78      2.61  5.12  2.45

VI. CONCLUSIONS

The universal applicability of compressed sensing to sparse signals lies in the fact that no specific prior information about the signals is assumed, apart from the sparsity condition. However, for a particular application, some prior information about the signals is typically available. In this paper, we showed that for the specific application of compressed sensing to ECG compression, the appropriate incorporation of prior information into the reconstruction procedures leads to more accurate reconstruction and higher compression rates. More precisely, we exploit prior information on the connected subtree structure formed by the largest (in magnitude) wavelet coefficients and on the common support of the wavelet representation of consecutive ECG segments. The model-based CoSaMP and model-based IHT algorithms are modified to incorporate support knowledge from the previously reconstructed data sequence. The tree-structured sparsity model is also modified to exclude the selection of atoms from the lowest-energy wavelet subband. We justified the application of a model-based compressed sensing approach with the fact that R wave events cause a connected subtree structure of large magnitude wavelet coefficients. In addition, we also selected an appropriate wavelet decomposition level to enable the formation of such a structure.

Fig. 10. Visual evaluation of the reconstruction of record 118 using MMB-CoSaMP and MMB-IHT. CR = 6.4. PRD = 2.86 (MMB-CoSaMP), PRD = 2.54 (MMB-IHT). Left: recovery results for a 5-sec sequence. Right: magnified views of the dashed boxes located on the left.

The proposed scheme was evaluated for the compression of a set of eleven ECG records from the MIT-BIH Arrhythmia Database encompassing a variety of signals with different rhythms, wave morphologies and abnormal heartbeats. The experimental results were evaluated in terms of PRD and compression ratio. The proposed method outperformed previously proposed CS-based methods for ECG compression while still maintaining the low-complexity and energy-efficient implementation inherent to CS-based approaches.

REFERENCES

[1] H. Cao, V. Leung, C. Chow, and H. Chan, “Enabling technologiesfor wireless body area networks: A survey and outlook,”IEEECommunications Magazine, vol. 47, no. 12, pp. 84–93, Dec. 2009.

[2] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst, “Com-pressed sensing for real-time energy-efficient ECG compression on wire-less body sensor nodes,”IEEE Transactions on Biomedical Engineering,vol. 58, no. 9, pp. 2456–2466, Sept. 2011.

[3] D. Donoho, “Compressed sensing,”IEEE Transactions on InformationTheory, vol. 52, no. 4, pp. 1289–1306, Sept. 2006.

[4] E.J. Candes and M.B. Wakin, “An introduction to compressive sam-pling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30,Mar. 2008.

[5] R.S.H. Istepanian, L.J. Hadjileontiadis, and S.M. Panas, “ECG datacompression using wavelets and higher order statistics methods,” IEEETransactions on Information Technology in Biomedicine, vol. 5, no. 2,pp. 108–115, 2001.

[6] M.L. Hilton, “Wavelet and wavelet packet compression ofelectrocar-diograms,” IEEE Transactions on Biomedical Engineering, vol. 44, no.5, pp. 394–402, May 1997.

[7] Z. Lu, D. Y. Kim, and W. A. Pearlman, “Wavelet compressionof ECGsignals by the set partitioning in hierarchical trees algorithm,” IEEETransactions on Biomedical Engineering, vol. 47, no. 7, pp. 849–856,Jul. 2000.

[8] A.M.R. Dixon, E.G. Allstot, D. Gangopadhyay, and D.J. Allstot, “Com-pressed sensing system considerations for ECG and EMG wirelessbiosensors,” IEEE Transactions on Biomedical Circuits and Systems,vol. 6, no. 2, pp. 156 –166, Apr. 2012.

[9] Z. Zhang, T. Jung, S. Makeig, and B.D. Rao, “Compressed sensingfor energy-efficient wireless telemonitoring of noninvasive fetal ECGvia block sparse bayesian learning,”IEEE Transactions on BiomedicalEngineering, vol. 60, no. 2, pp. 300–309, Feb. 2013.

[10] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst, “Struc-tured sparsity models for compressively sensed electrocardiogram sig-nals: A comparative study,” in2011 IEEE Biomedical Circuits andSystems Conference, Nov. 2011, pp. 125–128.

[11] H. Mamaghanian, N. Khaled, D. Atienza, and P. Vandergheynst, “Designand exploration of low-power analog to information conversion based oncompressed sensing,”IEEE Journal on Emerging and Selected Topicsin Circuits and Systems, vol. 2, no. 3, pp. 493–501, Sept. 2012.

[12] L.F. Polania, R.E. Carrillo, M. Blanco-Velasco, and K.E. Barner, “Com-pressed sensing based method for ECG compression,” inProc., IEEEICASSP, May 2011, pp. 761 –764.

[13] L.F. Polania, R.E. Carrillo, M. Blanco-Velasco, and K.E. Barner,“Compressive sensing exploiting wavelet-domain dependencies for ECGcompression,” inSPIE Defense, Security, and Sensing, Apr. 2012, pp.83650E–83650E.

[14] L.F. Polania, R.E. Carrillo, M. Blanco-Velasco, and K.E. Barner, “Onexploiting interbeat correlation in compressive sensing-based ECG com-pression,” in SPIE Defense, Security, and Sensing, Apr. 2012, pp.83650D–83650D.

[15] L.F. Polania, R.E. Carrillo, M. Blanco-Velasco, and K.E. Barner, “Com-pressive sensing for ECG signals in the presence of electromyographicnoise,” in38th Annual Northeast Bioengineering Conference, Mar. 2012,pp. 295–296.

12

[16] R.G. Baraniuk, V. Cevher, M.F. Duarte, and C. Hegde, “Model-basedcompressive sensing,”IEEE Transactions on Information Theory, vol.56, no. 4, pp. 1982–2001, Apr. 2010.

[17] A.L. Goldberger et al., “Physiobank, Physiotoolkit, and Physionet: Com-ponents of a new research resource for complex physiologic signals,”Circulation, vol. 101, no. 23, pp. e215–e220, 2000.

[18] A.M. Bruckstein, D.L. Donoho, and M. Elad, “From sparsesolutions ofsystems of equations to sparse modeling of signals and images,” SIAMReview, vol. 51, no. 1, pp. 34–81, Feb. 2009.

[19] S.S. Chen, D.L. Donoho, and M.A. Saunders, “Atomic decompositionby basis pursuit,”SIAM Journal on Scientific Computing, vol. 20, no.1, pp. 33–61, Dec. 1998.

[20] J.A. Tropp and A.C. Gilbert, “Signal recovery from random mea-surements via orthogonal matching pursuit,”IEEE Transactions onInformation Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.

[21] D. Needell and J.A. Tropp, “CoSaMP: Iterative signal recovery fromincomplete and inaccurate samples,”Applied and Computational Har-monic Analysis, vol. 26, no. 3, pp. 301–321, May 2009.

[22] Deanna Needell and Roman Vershynin, “Uniform uncertainty principleand signal recovery via regularized orthogonal matching pursuit,” Foun-dations of Computational Mathematics, vol. 9, no. 3, pp. 317–334, Apr.2009.

[23] T. Blumensath and M Davies, “Iterative hard thresholding for com-pressed sensing,”Applied and Computational Harmonic Analysis, vol.27, no. 3, Nov. 2009.

[24] M.S. Crouse, R.D. Nowak, and R.G. Baraniuk, “Wavelet-based statisticalsignal processing using hidden markov models,”IEEE Transactions onSignal Processing, vol. 46, no. 4, pp. 886–902, Apr. 1998.

[25] R.G. Baraniuk, R.A. DeVore, G. Kyriazis, and X.M. Yu, “Near best treeapproximation,” Advances in Computational Mathematics, vol. 16, no.4, pp. 357–373, May 2002.

[26] L. Wang and W. Yin, “Sparse signal reconstruction via iterative supportdetection,” SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 462–491, Jul. 2010.

[27] N. Vaswani and Wei L., “Modified-CS: Modifying compressive sensingfor problems with partially known support,”IEEE Transactions onSignal Processing, vol. 58, no. 9, pp. 4595–4607, Sept. 2010.

[28] F. Chen, A.P. Chandrakasan, and V.M. Stojanovic, “Design and analysisof a hardware-efficient compressed sensing architecture for data com-pression in wireless sensors,”IEEE Journal of Solid-State Circuits, vol.47, no. 3, pp. 744–756, Mar. 2012.

[29] A. Gersho and R. M. Gray,Vector quantization and signal compression,vol. 159, Springer, 1992.

[30] R. E. Carrillo, L. F. Polania, and K. E. Barner, “Iterative algorithmsfor compressed sensing with partially known support,” inProc., IEEEICASSP, Dallas, TX, Mar. 2010, pp. 3654 –3657.

[31] R. Paradiso, G. Loriga, and N. Taccini, “A wearable health care systembased on knitted integrated sensors,”IEEE Transactions on InformationTechnology in Biomedicine, vol. 9, no. 3, pp. 337–344, Sept. 2005.

[32] C.M. Fira and L. Goras, “An ECG signals compression method and itsvalidation using NNs,”IEEE Transactions on Biomedical Engineering,vol. 55, no. 4, pp. 1319–1326, April 2008.

[33] L.F. Polania and K.E. Barner, “Multi-scale dictionarylearning forcompressive sensing ECG,” in2013 IEEE Digital Signal Processingand Signal Processing Education Meeting, Aug. 2013, pp. 36–41.

[34] Y Park, “Time complexity analysis of SPIHT (set partitioning inhierarchy trees) image coding algorithm,”Journal of the Institute ofSignal Processing and Systems, vol. 4, no. 1, pp. 36–40, 2003.

[35] E. Liberty and S.W. Zucker, “The Mailman algorithm: A note on matrix-vector multiplication,” Information Processing Letters, vol. 109, no. 3, pp. 179–182, 2009.

Luisa F. Polanía (S’12) received the B.S.E.E. degree (with honors) from the National University of Colombia, Bogotá, Colombia, in 2009. She is currently pursuing the Ph.D. degree in the Department of Electrical and Computer Engineering, University of Delaware, Newark. Her research interests include signal and image processing, compressive sensing, low-dimensional modeling, and biomedical signal processing. Miss Polanía was the recipient of the University of Delaware Graduate Student Fellowship in 2013.

Rafael E. Carrillo (S’07-M’12) received the B.S. and the M.S. degrees (with honors) in Electronics Engineering from the Pontificia Universidad Javeriana, Bogotá, Colombia, in 2003 and 2006, respectively, and the Ph.D. degree in Electrical Engineering from the University of Delaware, Newark, DE, in 2011. He was a lecturer from 2003 to 2006 at the Pontificia Universidad Javeriana and a research assistant at the University of Delaware from 2006 to 2011. Since 2011 he has been a postdoctoral researcher at the Institute of Electrical Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. His research interests include signal and image processing, compressive sensing, inverse problems, and robust, nonlinear and statistical signal processing. Dr. Carrillo was the recipient of the “Mejor trabajo de grado” award, given to an outstanding master’s thesis at the Pontificia Universidad Javeriana, in 2006, the University of Delaware Graduate Student Fellowship in 2007, and the Signal Processing and Communications Graduate Faculty Award from the University of Delaware in 2010.

Manuel Blanco-Velasco (S’00-M’05-SM’10) received the engineering degree from the Universidad de Alcalá, Madrid, Spain, in 1990, the M.Sc. in communications engineering from the Universidad Politécnica de Madrid, Spain, in 1999, and the Ph.D. degree from the Universidad de Alcalá in 2004. From 1992 to 2002, he was with the Circuits and Systems Department at the Universidad Politécnica de Madrid as Assistant Professor. In April 2002, he joined the Signal Theory and Communications Department of the Universidad de Alcalá, where he is now working as Associate Professor. His main research interests are biomedical signal processing and communications.

Kenneth E. Barner (S’84-M’92-SM’00) received the B.S.E.E. degree (magna cum laude) from Lehigh University, Bethlehem, Pennsylvania, in 1987. He received the M.S.E.E. and Ph.D. degrees from the University of Delaware, Newark, Delaware, in 1989 and 1992, respectively.

He was the duPont Teaching Fellow and a Visiting Lecturer with the University of Delaware in 1991 and 1992, respectively. From 1993 to 1997, he was an Assistant Research Professor with the Department of Electrical and Computer Engineering, University of Delaware, and a Research Engineer with the duPont Hospital for Children. He is currently Professor and Chairman with the Department of Electrical and Computer Engineering, University of Delaware. He is coeditor of the book Nonlinear Signal and Image Processing: Theory, Methods, and Applications (Boca Raton, FL: CRC, 2004). His research interests include signal and image processing, robust signal processing, nonlinear systems, sensor networks and consensus systems, compressive sensing, human-computer interaction, haptic and tactile methods, and universal access.

Dr. Barner is the recipient of a 1999 NSF CAREER award. He was the Co-Chair of the 2001 IEEE-EURASIP Nonlinear Signal and Image Processing (NSIP) Workshop and a Guest Editor for a Special Issue of the EURASIP Journal of Applied Signal Processing on Nonlinear Signal and Image Processing. He is a member of the Nonlinear Signal and Image Processing Board. He was the Technical Program Co-Chair for ICASSP 2005 and previously served on the IEEE Signal Processing Theory and Methods (SPTM) Technical Committee and the IEEE Bio-Imaging and Signal Processing (BISP) Technical Committee. He is currently a member of the IEEE Delaware Bay Section Executive Committee. He has served as an Associate Editor of the IEEE Transactions on Signal Processing, the IEEE Transactions on Neural Systems and Rehabilitation Engineering, the IEEE Signal Processing Magazine, the IEEE Signal Processing Letters, and the EURASIP Journal of Applied Signal Processing. He was the Founding Editor-in-Chief of the journal Advances in Human-Computer Interaction. For his dissertation “Permutation Filters: A Group Theoretic Class of Non-Linear Filters,” Dr. Barner received the Allan P. Colburn Prize in Mathematical Sciences and Engineering for the most outstanding doctoral dissertation in the engineering and mathematical disciplines. He is a member of Tau Beta Pi, Eta Kappa Nu, and Phi Sigma Kappa.