
J Shanghai Univ (Engl Ed), 2008, 12(3): 228–234

Digital Object Identifier (DOI): 10.1007/s11741-008-0308-2

A robust clustering algorithm for underdetermined blind separation of sparse sources

FANG Yong 1, ZHANG Ye 1,2

1. School of Communication and Information Engineering, Shanghai University, Shanghai 200072, P. R. China
2. Department of Electronic and Information Engineering, Nanchang University, Nanchang 330031, P. R. China

Abstract  In underdetermined blind source separation, more sources are to be estimated from fewer observed mixtures without knowing the source signals or the mixing matrix. This paper presents a robust clustering algorithm for underdetermined blind separation of sparse sources with an unknown number of sources in the presence of noise. It uses the robust competitive agglomeration (RCA) algorithm to estimate the source number and the mixing matrix, and the source signals are then recovered by using interior point linear programming. Simulation results show good performance of the proposed algorithm for underdetermined blind source separation (UBSS).

Keywords  underdetermined blind source separation (UBSS), robust competitive agglomeration (RCA), sparse signal

1 Introduction

Blind source separation (BSS) is the process of estimating unknown independent source signals from sensor measurements which are unknown mixtures of the source signals. BSS has been widely used in various fields such as biomedical signal analysis and processing, speech enhancement, image recognition and wireless communications. In many practical applications, the source signals have to be estimated from a number of observed mixtures that is smaller than the number of sources, a setting referred to as underdetermined or overcomplete BSS. In recent years, underdetermined BSS (UBSS) has received a great deal of attention. Since the mixing system is not invertible, it is impossible to obtain the source signals by simply inverting the mixing matrix. Even if the mixing matrix is known exactly, it is difficult to recover the exact values of the source signals [1].

Recently, a number of algorithms have been proposed for the underdetermined BSS problem. Lee, et al. [2] separated three speech signals from only two mixtures of the three signals using overcomplete representations. In [3–5], a two-step separation approach was proposed for UBSS in which the mixing matrix and the sources are estimated separately. The mixing matrix was estimated using clustering algorithms such as the k-means or fuzzy c-means (FCM) clustering algorithm, and the sources were then estimated by minimizing the 1-norm (shortest path separation criterion) in the absence of noise. Lewicki, et al. [6] provided a complete Bayesian approach, assuming a Laplacian source prior, to estimate both the mixing matrix and the sources in the time domain. For a two-sensor setup without additive noise, [7] proposed a Laplacian mixture model to perform separation by using an expectation-maximization (EM) type algorithm. Most of these methods assume that the number of sources is known and that noise is absent. In practice, however, the number of sources is often unknown and the mixed signals are usually contaminated by additive noise. In this paper, the UBSS problem with linear instantaneous mixtures and additive noise is explored. The input signals are assumed to be sufficiently sparse, and the actual number of sources is unknown. First, the robust competitive agglomeration (RCA) algorithm [8] is introduced to estimate the source number and the mixing matrix. Next, the source signals are recovered by using interior point linear programming.

The paper is organized as follows. Section 2 gives the problem formulation and assumptions. In Section 3, the RCA algorithm is described, the estimation of the source number and the mixing matrix is discussed, and interior point linear programming is applied to recover the source signals. Some experiments are performed in Section 4 to verify the effectiveness of the new approach. Finally, a conclusion is given in Section 5.

Received Dec. 7, 2006; Revised Nov. 11, 2007
Project supported by the Research Foundation for Doctoral Programs of Higher Education of China (Grant No. 20060280003), and the Shanghai Leading Academic Discipline Project (Grant No. T0102)
Corresponding author FANG Yong, PhD, Prof., E-mail: [email protected]


2 Problem statement

Let x(t) = [x_1(t), · · · , x_m(t)]^T be an m-dimensional mixture vector which is an instantaneous linear mixture of n unknown independent source signals s(t) = [s_1(t), · · · , s_n(t)]^T plus additive white Gaussian noise v(t) = [v_1(t), · · · , v_m(t)]^T at the discrete time instants t:

x(t) = As(t) + v(t), (1)

where A is an m × n unknown mixing matrix

A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
= [a_1, a_2, \cdots, a_n].   (2)

We assume that the number of sources n is unknownand m < n. Equation (1) can be rewritten as

x(t) = \sum_{i=1}^{n} a_i s_i(t) + v(t).   (3)

Without any loss of generality, we can assume that the columns of the mixing matrix A are normalized, i.e., ‖a_i‖_2 = 1, i = 1, · · · , n.

For a UBSS problem, sparsity of the independent sources is necessary to obtain good estimates of the input signals, even if the mixing matrix is known. For sources of practical interest, a sparse representation can be achieved by using a suitable basis expansion such as the Fourier, Gabor or wavelet basis [9]. If the sources are sparse, the mixtures have a special structure. Based on (3), if only one source is active, say s_1(t), the resulting mixture would be x(t) = a_1 s_1(t). As a result, the points on the scatter plot in m-space would lie on the line through the origin whose direction is given by the vector a_1. When the sources are sparse, so that it is unusual for more than one source to be active at the same time, the scatter plot of the coefficients constitutes a mixture of lines, broadened by noise and by occasional simultaneous activity. These line orientations are unique to each source and correspond to the columns of the mixing matrix A; the essence of the sparse approach is therefore the identification of the line orientation vectors from the observed data.
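
The following MATLAB sketch illustrates this line structure. It is not the paper's code; the sizes, the noise level and the variable names X, S and A are assumptions, chosen to resemble the simulation setup of Section 4:

    % Generate sparse sources, mix them with a column-normalized random matrix,
    % and check that samples where only source 1 is active align with column a_1.
    n = 4; m = 2; T = 5000; q = 0.79;                        % assumed sizes, cf. Section 4
    S = -log(rand(n, T)) .* max(0, sign(rand(n, T) - q));    % sparse sources, cf. (24)
    A = randn(m, n);
    A = A ./ repmat(sqrt(sum(A.^2, 1)), m, 1);               % normalize columns: ||a_i||_2 = 1
    X = A * S + 0.01 * randn(m, T);                          % noisy mixtures, cf. (1)
    plot(X(1, :), X(2, :), '.');                             % scatter plot: a "mixture of lines"
    idx = find(S(1, :) > 0 & sum(S(2:end, :), 1) == 0);      % samples with only source 1 active
    D = X(:, idx) ./ repmat(sqrt(sum(X(:, idx).^2, 1)), m, 1);
    disp(mean(abs(D' * A(:, 1))));                           % close to 1: directions align with a_1

Under these assumptions the scatter plot shows n dominant directions, one per column of A, which is exactly the structure the clustering step exploits.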

3 Robust clustering algorithm for UBSS

The observed vectors x(t) build up hyper-lines in the m-dimensional space when the sources are sufficiently sparse. However, when there are many outliers and much noise, some observed data may not belong to any hyper-line. In this case, line orientation clustering algorithms can be used to identify the line orientation vectors and then reconstruct the mixing matrix A. In our approach the RCA algorithm is used to identify the line orientation vectors by detecting clusters that resemble a line in a noisy background. Without any assumption on the number of sources, the RCA algorithm starts with a large number of clusters to reduce sensitivity to initialization, and determines the actual number of clusters by a process of competitive agglomeration. Noise immunity is achieved by incorporating concepts from robust statistics into the algorithm.

RCA assigns two different sets of weights to each data point: the first set of constrained weights represents degrees of sharing, and is used to create a competitive environment and to generate a fuzzy partition of the data set; the second set corresponds to robust weights, and is used to obtain robust estimates of the cluster prototypes. By choosing an appropriate distance measure in the objective function, RCA can be used to find an unknown number of clusters of various shapes in noisy data sets, as well as to fit an unknown number of parametric models simultaneously [8]. In this section the RCA algorithm is first introduced, and its implementation for estimating the source number and the mixing matrix in the UBSS case is then discussed. Finally, the original sources are estimated by using the interior point linear programming method [10].

3.1 RCA algorithm

Let χ = {x(t), t = 1, · · · , T} be an m × T matrix corresponding to the sensor data at t = 1, · · · , T; χ can also be considered as a set of T vectors in an m-dimensional feature space. Let B = (β_1, · · · , β_C) represent a C-tuple of prototypes, each of which characterizes one of the C clusters. In this paper each β_i = [β_{i1}, · · · , β_{im}]^T consists of a set of parameters that describe the cluster center. The RCA algorithm minimizes the following objective function [8]:

J(B, U; \chi) = \sum_{i=1}^{C} \sum_{t=1}^{T} (u_{it})^2 \rho_i(d_{it}^2) - \alpha \sum_{i=1}^{C} \left[ \sum_{t=1}^{T} w_{it} u_{it} \right]^2   (4)

subject to

\sum_{i=1}^{C} u_{it} = 1 \quad \text{for } t \in \{1, \cdots, T\}.   (5)

In (4), d_{it}^2 represents the distance of the feature vector x(t) from the prototype β_i, u_{it} represents the degree to which x(t) belongs to cluster i, U = [u_{it}] is a C × T matrix called the constrained fuzzy C-partition matrix, ρ_i(·) is a robust loss function associated with cluster i, and w_{it} = ∂ρ_i(d_{it}^2)/∂d_{it}^2 represents the "typicality" of point x(t) with respect to cluster i. The function ρ_i(·) corresponds to the loss function used in M-estimators of robust statistics, and w_{it} represents the weight function of an equivalent W-estimator. The objective function in (4) has two components. The first can be viewed as a generalization of the M-estimator to detect C (possibly overlapping) clusters simultaneously. The second component of J in (4) is used to maximize the number of "good" points in each cluster. When both components are combined with a proper choice of the agglomeration parameter α and the distance d_{it}, J can be used to find compact clusters of various types while partitioning the data set into a minimal number of clusters.
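
As a small illustration of the bookkeeping in (4) and (5) (a sketch only, with assumed variable names: U and W are C × T membership and weight matrices, and RHO holds the values ρ_i(d_{it}^2)):

    % Evaluate the RCA objective (4) and check the membership constraint (5).
    C = 5; T = 200;                              % illustrative sizes (assumed)
    RHO = rand(C, T); W = rand(C, T);            % placeholder losses and weights
    U = ones(C, T) / C;                          % memberships summing to 1 per column
    assert(all(abs(sum(U, 1) - 1) < 1e-10));     % constraint (5)
    alpha = 0.5;                                 % placeholder agglomeration parameter
    J = sum(sum((U.^2) .* RHO)) - alpha * sum(sum(W .* U, 2).^2);   % objective (4)

The first term is the robust within-cluster scatter; the second rewards clusters with large robust cardinality, which is what drives the agglomeration.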

To minimize J with respect to the parameter vectors B, we require

\sum_{t=1}^{T} (u_{it})^2 w_{it} \frac{\partial d_{it}^2}{\partial \beta_i} = 0.   (6)

Further simplification of this equation depends on the loss function ρ(·) and the distance measure used, and will be discussed later. To minimize (4) with respect to U subject to (5), we obtain the following formula based on Lagrange multipliers:

J_L(B, U; \chi) = \sum_{i=1}^{C} \sum_{t=1}^{T} (u_{it})^2 \rho_i(d_{it}^2) - \alpha \sum_{i=1}^{C} \left[ \sum_{t=1}^{T} w_{it} u_{it} \right]^2 - \sum_{t=1}^{T} \lambda_t \left( \sum_{i=1}^{C} u_{it} - 1 \right).   (7)

Set the gradient to zero:

\frac{\partial J_L}{\partial u_{sj}} = 2 u_{sj} \rho_s(d_{sj}^2) - 2\alpha \sum_{t=1}^{T} w_{st} u_{st} - \lambda_j = 0 \quad \text{for } 1 \le s \le C \text{ and } 1 \le j \le T.   (8)

The agglomeration parameter α is

\alpha(k) = e^{-k/10} \, \frac{\sum_{i=1}^{C} \sum_{t=1}^{T} (u_{it})^2 \rho_i(d_{it}^2)}{\sum_{i=1}^{C} \left[ \sum_{t=1}^{T} w_{it} u_{it} \right]^2},   (9)

where k is the iteration number. The loss function is

\rho_i(d^2) =
\begin{cases}
d^2 - \dfrac{d^6}{6 T_i^2}, & \text{if } d^2 \in [0, T_i], \\[4pt]
\dfrac{[d^2 - (T_i + c S_i)]^3}{6 c^2 S_i^2} + \dfrac{5 T_i + c S_i}{6}, & \text{if } d^2 \in (T_i, T_i + c S_i], \\[4pt]
\dfrac{5 T_i + c S_i}{6} + K_i, & \text{if } d^2 > T_i + c S_i,
\end{cases}   (10)

where c is a tuning constant [11] updated as c(k) = max{4, c(k − 1) − 1} with c(0) = 12, and k is the iteration number [8]. T_i and S_i are given by

T_i = \mathrm{Med}_i(d_{it}^2) \quad \text{and} \quad S_i = \mathrm{MAD}_i(d_{it}^2) \quad \text{for } i = 1, \cdots, C,   (11)

where Med_i is the median of the residuals of the ith cluster, and MAD_i is the median of absolute deviations of the ith cluster [11]. In (10), K_i is

K_i = \max_{1 \le t \le C} \left\{ \frac{5 T_t + c S_t}{6} \right\} - \frac{5 T_i + c S_i}{6} \quad \text{for } i = 1, \cdots, C.   (12)
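
A minimal MATLAB sketch of the loss (10) with the scale estimates (11) and offsets (12), together with its derivative w_{it} = ∂ρ_i(d_{it}^2)/∂d_{it}^2 (the weight used throughout the algorithm), is given below. It is only an illustration; D2 is an assumed variable name for the C × T matrix of squared distances, and the placeholder values exist only to make the snippet self-contained:

    D2 = rand(5, 200);                               % placeholder squared distances (C x T)
    c  = 12;                                         % c(0); later c(k) = max(4, c(k-1) - 1)
    Ti = median(D2, 2);                              % per-cluster median, (11)
    Si = median(abs(D2 - repmat(Ti, 1, size(D2, 2))), 2);   % per-cluster MAD, (11)
    cap = (5*Ti + c*Si) / 6;
    Ki  = max(cap) - cap;                            % offsets (12)
    RHO = zeros(size(D2)); W = zeros(size(D2));
    for i = 1:size(D2, 1)
        d2  = D2(i, :);
        in  = d2 <= Ti(i);                           % inlier region of (10)
        mid = d2 > Ti(i) & d2 <= Ti(i) + c*Si(i);    % transition region
        out = d2 > Ti(i) + c*Si(i);                  % outlier region
        RHO(i, in)  = d2(in) - d2(in).^3 / (6*Ti(i)^2);
        W(i, in)    = 1 - d2(in).^2 / (2*Ti(i)^2);
        RHO(i, mid) = (d2(mid) - (Ti(i) + c*Si(i))).^3 / (6*c^2*Si(i)^2) + cap(i);
        W(i, mid)   = (d2(mid) - (Ti(i) + c*Si(i))).^2 / (2*c^2*Si(i)^2);
        RHO(i, out) = cap(i) + Ki(i);                % loss saturates for outliers
        W(i, out)   = 0;                             % outliers receive zero weight
    end

Points far from a prototype thus contribute a bounded loss and zero weight, which is what makes the cluster estimates robust to outliers.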

3.2 Estimating the mixing matrix by using RCA

In order to determine the orientations along which the mixtures concentrate, we first project the sensor data vectors x(t) and the prototype parameters B onto the same unit half-sphere by [12]:

x(t) = \frac{\mathrm{sgn}(x_1(t)) \, x(t)}{\|x(t)\|}, \quad t = 1, \cdots, T,   (13)

\beta_i = \frac{\mathrm{sgn}(\beta_{i1}) \, \beta_i}{\|\beta_i\|}, \quad i = 1, \cdots, C.   (14)

Before normalization, the mixture data vectors with a very small norm should be removed, since they are likely to be dominated by noise. RCA is then used to separate the mixtures into clusters. To detect mixture clusters we use the Euclidean distance as the distance measure, since the clusters are expected to be spherical. The distance measure is defined as

d_{it}^2 = \|x(t) - \beta_i\|^2.   (15)
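
A sketch of this preprocessing is given below (illustrative only; X denotes the m × T mixture matrix of the earlier sketch, beta an m × C matrix of prototype vectors, and the norm threshold 0.05 is an assumption, since the paper does not give one):

    beta = randn(size(X, 1), 12);                    % random initial prototypes (C_max = 12 assumed)
    keep = sqrt(sum(X.^2, 1)) > 0.05;                % drop vectors with very small norm
    Xk   = X(:, keep);
    Xk   = Xk .* repmat(sign(Xk(1, :)) ./ sqrt(sum(Xk.^2, 1)), size(Xk, 1), 1);   % projection (13)
    D2   = zeros(size(beta, 2), size(Xk, 2));        % squared distances, cf. (15)
    for i = 1:size(beta, 2)
        bi = sign(beta(1, i)) * beta(:, i) / norm(beta(:, i));                    % projection (14)
        D2(i, :) = sum((Xk - repmat(bi, 1, size(Xk, 2))).^2, 1);
    end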

Using (6), it can be shown that the update equation for the cluster centers is

\beta_i = \frac{\sum_{t=1}^{T} (u_{it})^2 w_{it} \, x(t)}{\sum_{t=1}^{T} (u_{it})^2 w_{it}},   (16)

where w_{it} is obtained from (10):

w_{it} =
\begin{cases}
1 - \dfrac{d_{it}^4}{2 T_i^2}, & \text{if } d_{it}^2 \in [0, T_i], \\[4pt]
\dfrac{[d_{it}^2 - (T_i + c S_i)]^2}{2 c^2 S_i^2}, & \text{if } d_{it}^2 \in (T_i, T_i + c S_i], \\[4pt]
0, & \text{if } d_{it}^2 > T_i + c S_i.
\end{cases}   (17)

Using (5) and (8), the update equation for U is

u_{sj} = \frac{1/\rho_s(d_{sj}^2)}{\sum_{k=1}^{C} 1/\rho_k(d_{kj}^2)} + \frac{\alpha}{\rho_s(d_{sj}^2)} \left( N_s - \overline{N}_j \right),   (18)


where N_s and \overline{N}_j are respectively

N_s = \sum_{t=1}^{T} w_{st} u_{st},   (19)

\overline{N}_j = \sum_{k=1}^{C} \frac{N_k}{\rho_k(d_{kj}^2)} \Bigg/ \sum_{k=1}^{C} \frac{1}{\rho_k(d_{kj}^2)}.   (20)

N_s is the robust cardinality of cluster s, and \overline{N}_j is a weighted average of the cluster cardinalities. It can be seen from (18) that the membership values of data points in low-cardinality clusters are depreciated heavily when their distances to those clusters are small. Moreover, when a feature point x(t) is close to only one cluster (say cluster s) and far from the other clusters, N_s = \overline{N}_j and no competition is involved. Thus, the algorithm encourages agglomeration of clusters with high cardinality and decay of clusters with low cardinality. When the prototype parameters B stabilize, the prototype parameters of the clusters with larger cardinality are selected as the estimates of the mixing matrix's column vectors, and the mixing matrix can then be reconstructed. The number of columns of the estimated mixing matrix can be regarded as the estimated number of source signals. Clusters with smaller cardinality are discarded [8], i.e., if N_s is small, cluster s is removed.
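
A compact sketch of the updates (9) and (18)–(20), continuing the earlier snippets (RHO and W as above, U initialized uniformly, k the iteration counter), might look as follows; the variable names are assumptions, not the paper's code:

    U = ones(size(RHO)) / size(RHO, 1);              % initial memberships satisfying (5)
    k = 1;
    alpha = exp(-k/10) * sum(sum((U.^2) .* RHO)) / sum(sum(W .* U, 2).^2);   % (9)
    invR  = 1 ./ RHO;                                % 1 / rho_k(d_kj^2)
    Ufcm  = invR ./ repmat(sum(invR, 1), size(invR, 1), 1);                  % first term of (18)
    N     = sum(W .* U, 2);                          % robust cardinalities, (19)
    Nbar  = (N' * invR) ./ sum(invR, 1);             % weighted average cardinalities, (20)
    U     = Ufcm + alpha * (repmat(N, 1, size(U, 2)) - repmat(Nbar, size(U, 1), 1)) .* invR;   % (18)

After the prototypes stabilize, clusters whose cardinality N(i) remains small are dropped and the surviving prototypes are taken as the estimated columns of the mixing matrix.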

The RCA algorithm for UBSS is summarized below:

Step 1  Set C = C_max. Randomly select or use the FCM algorithm to initialize the prototype parameters B, set the iteration counter k = 0, and set w_it = 1 for all i, t.
Step 2  Use (15) to compute d_it^2 for 1 ≤ i ≤ C, 1 ≤ t ≤ T, and estimate T_i and S_i by using (11).
Step 3  Update w_it by using (17) and α(k) by using (9).
Step 4  Update U by using (10) and (18)–(20).
Step 5  Increment k by 1. Update the prototype parameters B by using (16) and then normalize by using (14).
Step 6  If the prototype parameters have stabilized, exit. Otherwise go to Step 2.

3.3 Recovery of the source signals

Once the mixing matrix and the number of sources have been estimated, the original sources can be recovered. If the estimated source number n ≤ m, i.e., in the non-underdetermined case, several approaches to independent component analysis have been used in the literature to numerically solve (1), assuming only statistical independence of the source components [1]. In the underdetermined case with n > m, recovery of the sources is in general not possible. However, for sparse BSS, once the mixing matrix has been estimated, problem (1) can be decomposed into T independent small problems, one for each data point s(t):

\min_{s(t)} \frac{1}{2}\|x(t) - As(t)\|_2^2 + \lambda \|s(t)\|_1, \quad t = 1, \cdots, T,   (21)

where λ = σ√(2 log p), σ² is the variance of the additive noise and p is the cardinality of A. Problem (21) can be solved using interior point linear programming [6, 8, 13].
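
The paper solves (21) with interior point linear programming [10]. Purely as an illustrative stand-in, the same objective can be minimized with a few iterations of iterative soft-thresholding (ISTA); this is not the paper's method, and it continues the earlier sketches, with Ahat, X and sigma as assumed variable names (estimated mixing matrix, observed mixtures and noise standard deviation):

    sigma  = 0.01;                                   % assumed known noise level
    Ahat   = A;                                      % e.g., the estimate from the RCA step
    n      = size(Ahat, 2);
    lambda = sigma * sqrt(2 * log(n));               % lambda = sigma*sqrt(2 log p), p read as the number of columns
    tau    = 1 / norm(Ahat)^2;                       % step size <= 1/||A||_2^2
    Shat   = zeros(n, size(X, 2));
    for iter = 1:200
        G    = Shat - tau * (Ahat' * (Ahat * Shat - X));     % gradient step on the quadratic term of (21)
        Shat = sign(G) .* max(abs(G) - tau * lambda, 0);     % soft-thresholding handles the l1 term
    end

Each column of Shat is an estimate of s(t); because (21) decouples over t, the matrix form above solves all T subproblems at once.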

4 Simulations

In this section, some examples are given to test the performance of the proposed method for UBSS. Without loss of generality, we assume that there are two mixtures, i.e., the two-dimensional case, and that the number of sources is greater than two in our experiments. In order to check how well the mixing matrix is estimated, the algebraic matrix distance index (AMDI) is introduced as a performance index [13]:

\mathrm{AMDI}(W, H) = \frac{C - \sum_{\text{rows}} \max\{|W' H|\}}{C} + \frac{C - \sum_{\text{cols}} \max\{|W' H|\}}{C},   (22)

where W and H are two column-normalized matrices of the same dimensions, and C is the number of columns. The 2 × 4 mixing matrix A is generated randomly and its columns are normalized to unit-length vectors; here

A = \begin{bmatrix}
0.3776 & 0.1484 & 0.9925 & -0.7580 \\
-0.9260 & 0.9889 & -0.1221 & -0.6522
\end{bmatrix}.   (23)

The sparse source matrix S ∈ R^{4×5000} is generated with the following MATLAB command:

S = -log(rand(n, T)) .* max(0, sign(rand(n, T) - q)),  0 ≤ q ≤ 1,   (24)

where q is a parameter; the larger q is, the sparser the source signals are. In the following experiments, we choose n = 4, T = 5000, q = 0.79, and C_max = 12.
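
A sketch of this setup in MATLAB is given below (illustrative only; the paper does not state how the noise is scaled to a target SNR, so the scaling used here is an assumption):

    n = 4; T = 5000; q = 0.79;
    A = [ 0.3776  0.1484  0.9925 -0.7580;
         -0.9260  0.9889 -0.1221 -0.6522];                       % mixing matrix (23)
    S = -log(rand(n, T)) .* max(0, sign(rand(n, T) - q));        % sparse sources (24)
    X0 = A * S;
    SNRdB = 15;
    v = randn(size(X0));
    v = v * sqrt(mean(X0(:).^2) / mean(v(:).^2) / 10^(SNRdB/10)); % scale noise power to the target SNR
    X = X0 + v;                                                   % observed mixtures, cf. (1)
    % AMDI (22) between two column-normalized matrices with the same number of columns:
    amdi = @(W, H) (size(W, 2) - sum(max(abs(W'*H), [], 2))) / size(W, 2) + ...
                   (size(W, 2) - sum(max(abs(W'*H), [], 1))) / size(W, 2);

After the RCA step produces an estimate Ahat with normalized columns, amdi(A, Ahat) can be compared with the values reported in Table 1.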

Experiment 1  In this experiment, we assume that the SNR is 15 dB and check the convergence of RCA. The prototype parameters B are initialized either randomly or with the standard FCM algorithm; when AMDI(B(k), B(k − 1)) ≤ 0.0001, the prototype parameters are considered stable, in other words, the algorithm has converged, where k is the iteration number. The results are shown in Figs. 1 and 2. We can see that convergence with FCM initialization of B is faster than with random initialization, requiring fewer than 6 iterations versus fewer than 9 iterations, respectively. The experimental results also show that FCM initialization of B performs better than random initialization, since the former's AMDI is smaller than the latter's in most cases. Thus, in the following experiments, we only use the FCM algorithm to initialize the prototype parameters.

Fig. 1  AMDI versus iteration k, with B initialized randomly (SNR = 15 dB)

Fig. 2  AMDI (×10^{-4}) versus iteration k, with B initialized by the FCM algorithm (SNR = 15 dB)

Experiment 2  In this experiment, the robustness of RCA in estimating the number of sources and the mixing matrix in the presence of noise is demonstrated. The mixture system is described by (1). We let SNR = 10 dB, 15 dB, 20 dB, 25 dB and 30 dB, respectively, to test the performance of RCA after the algorithm has converged. The detailed results are shown in Table 1. When SNR ≤ 10 dB, the number of source signals can hardly be estimated accurately and the mixing matrix cannot be reconstructed in most cases, since the larger and smaller values of N_s cannot be clearly distinguished. When SNR ≥ 15 dB, the larger and smaller values of N_s are clearly separated, so the parameters of the clusters with larger cardinality (marked with an asterisk in Table 1) can be selected as the estimates of the mixing matrix's column vectors, and the mixing matrix A can thus be reconstructed. The number of columns of the estimated mixing matrix can be regarded as the number of source signals.

Experiment 3  This experiment demonstrates that sparse sources can be separated almost perfectly when the SNR is high. In order to measure the separation accuracy, the signal-to-interference ratio (SIR) is introduced:

\mathrm{SIR}_i = 10 \log \frac{\|s_i\|_2^2}{\|s_i - \hat{s}_i\|_2^2}.   (25)

We select SNR = 25 dB. The original source signals are shown in Fig. 3 and the observed mixtures are shown in Fig. 4. Fig. 5 gives the results of the separation. The SIRs of the estimated signals are 25.2006 dB, 15.4319 dB, 18.0458 dB and 11.4320 dB, respectively.
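
For reference, the SIR of (25) for one source can be computed as follows (a one-line sketch; s_true and s_est are assumed variable names, and the logarithm is taken to base 10 since the result is in dB):

    sir_dB = 10 * log10(norm(s_true)^2 / norm(s_true - s_est)^2);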

Fig. 3  Source signals

Fig. 4  Observed signals

Fig. 5  Separated signals



Table 1  Performance of the RCA algorithm for different SNRs (asterisks mark the N_s values highlighted in the original table, i.e., the clusters with the largest robust cardinalities; for SNR ≥ 15 dB their prototypes form the columns of Â)

SNR = 10 dB:
B  = [  0.9878  0.3149  0.5938   0.9937  0.8913  0.5654   0.9366  0.1067   0.3789  0.8034  0.7584   0.1137
        0.1560  0.9491 -0.8046  -0.1118  0.4535  0.8248  -0.3505  0.9943  -0.9254 -0.5954  0.6518  -0.9935 ]
N_s = [ 365.2190  390.9870  389.0690  475.1969*  352.2906  416.2070*  390.5373  429.0569*  486.3195*  351.2389  491.7502*  392.2401 ]
Â cannot be estimated.

SNR = 15 dB:
B  = [  0.9911  0.3339  0.5949   0.9923  0.9014  0.5632   0.9246  0.1239   0.3753  0.7984  0.7546   0.1141
        0.1331  0.9426 -0.8038  -0.1236  0.4329  0.8263  -0.3808  0.9923  -0.9269 -0.6021  0.6561  -0.9935 ]
N_s = [ 342.4577  357.9348  332.7551  577.5364*  349.6428  368.3078  308.0792  516.5116*  590.1676*  305.1326  526.2213*  360.4519 ]
Â  = [  0.9923  0.1239  0.3753  0.7546
       -0.1236  0.9923 -0.9296  0.6561 ],   AMDI(A, Â) = 0.0346

SNR = 20 dB:
B  = [  0.9880  0.3389  0.5970   0.9932  0.9064  0.5839   0.9350  0.1287   0.3784  0.7948  0.7618   0.1171
        0.1544  0.9408 -0.8023  -0.1163  0.4223  0.8118  -0.3545  0.9917  -0.9256 -0.6068  0.6478  -0.9931 ]
N_s = [ 286.7257  363.2361  304.1597  619.6809*  280.7511  337.0710  327.4806  573.7245*  691.7774*  282.1083  605.2643*  265.9615 ]
Â  = [  0.9932  0.1287  0.3784  0.7618
       -0.1163  0.9917 -0.9256  0.6478 ],   AMDI(A, Â) = 0.0260

SNR = 25 dB:
B  = [  0.9909  0.3288  0.5589   0.9918  0.9026  0.7550   0.9235  0.1278   0.3727  0.7678  0.5605   0.1156
        0.1346  0.9444 -0.8292  -0.1277  0.4305  0.6557  -0.3836  0.9918  -0.9280 -0.6407  0.8282  -0.9933 ]
N_s = [ 288.2390  318.4832  275.7010  664.4956*  284.1657  699.8424*  284.1727  624.7162*  661.7654*  250.0482  322.7480  270.6075 ]
Â  = [  0.9918  0.7550  0.1278  0.3727
       -0.1277  0.6557  0.9918 -0.9280 ],   AMDI(A, Â) = 0.0282

SNR = 30 dB:
B  = [  0.9882  0.3656  0.6144   0.9931  0.9022  0.7592   0.9434  0.1377   0.3813  0.8157  0.5920   0.1215
        0.1533  0.9308 -0.7890  -0.1172  0.4312  0.6508  -0.3317  0.9905  -0.9244 -0.5785  0.8059  -0.9926 ]
N_s = [ 227.2594  289.3491  267.7546  699.2431*  268.9173  692.7406*  263.0583  721.6537*  705.8485*  271.2808  297.5440  241.9262 ]
Â  = [  0.9931  0.7592  0.1377  0.3813
       -0.1172  0.6508 -0.9905 -0.9244 ],   AMDI(A, Â) = 0.0180

5 Conclusion

In this paper, a robust clustering algorithm has been presented to solve UBSS when the input signals are sparse. The RCA algorithm, based on a process of competitive agglomeration, is used to estimate the mixing matrix, and the source signals are then obtained by using interior point linear programming in the presence of noise. The experimental results show the validity of the proposed method. The method presented in this paper can be extended to any number of dimensions.

References

[1] Hyvarinen A, Karhunen J, Oja E. Independent Component Analysis [M]. New York: Wiley-Interscience, 2001.
[2] Lee T W, Lewicki M S, Girolami M, et al. Blind source separation of more sources than mixtures using overcomplete representations [J]. IEEE Signal Processing Letters, 1999, 6(4): 87–90.
[3] Georgiev P, Theis F J, Cichocki A. Blind source separation and sparse component analysis of overcomplete mixtures [C]// IEEE ICASSP 2004, Montreal, Canada. [S.l.]: [s.n.], 2004: 493–496.
[4] Theis F J, Puntonet C G, Lang E W. Median-based clustering for underdetermined blind signal processing [J]. IEEE Signal Processing Letters, 2006, 13(2): 96–99.
[5] Bofill P, Zibulevsky M. Underdetermined blind source separation using sparse representations [J]. Signal Processing, 2001, 81(11): 2353–2362.
[6] Lewicki M, Sejnowski T J. Learning overcomplete representations [J]. Neural Computation, 2000, 12(2): 337–365.
[7] Mitianoudis N, Stathaki T. Overcomplete source separation using Laplacian mixture models [J]. IEEE Signal Processing Letters, 2005, 12(4): 277–280.
[8] Frigui H, Krishnapuram R. A robust competitive clustering algorithm with applications in computer vision [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(5): 450–465.
[9] Zibulevsky M, Pearlmutter B A, Bofill P, et al. Blind source separation by sparse decomposition in a signal dictionary [J]. Neural Computation, 2001, 13(4): 863–882.
[10] Chen S S, Donoho D L, Saunders M A. Atomic decomposition by basis pursuit [J]. SIAM Review, 2001, 43(1): 129–159.
[11] Hampel F R, Ronchetti E M, Rousseeuw P J, et al. Robust Statistics: The Approach Based on Influence Functions [M]. New York: John Wiley & Sons, 1986.
[12] Zibulevsky M, Pearlmutter B A, Bofill P, et al. Blind source separation by sparse decomposition [M]// Independent Component Analysis: Principles and Practice. Cambridge, England: Cambridge University Press, 2001.
[13] Waheed K, Salem F M. Algebraic overcomplete independent component analysis [C]// The Fourth International Symposium on Independent Component Analysis and Blind Signal Separation, Nara, Japan. [S.l.]: [s.n.], 2003: 1077–1082.

(Editor HONG Ou)