Sparseness Measure of Signal

Zhaoshui He, Shengli Xie, Yuli Fu

School of Electronics and Information Engineering, South China University of Technology, Guangzhou, 510640, China

Email: he_shui@163.com, [email protected], y.l.fu@163.com

Abstract-In this paper the generalized Gaussian distribution is employed to discuss a sparseness measure for signals. First, we establish a mathematical formula to calculate the sparseness measure of a signal. According to this formula, the sparseness measure of a Laplacian signal is 1, and that of a Gaussian signal is 2. Given a signal, its sparseness measure tells us very intuitively how sparse it is, by reference to the Laplacian and Gaussian signals. Two examples are given to illustrate that only when the source signals are sparse enough can we achieve underdetermined BSS by sparse representation.

I. INTRODUCTION

Signal processing methods relying on "sparse representation" have proven useful in signal compression, de-noising, classification, and inverse problems in imaging, acoustics/speech, and communications. Sparseness in the context of Blind Source Separation (BSS) or Independent Component Analysis (ICA) has led to powerful techniques capable of, for example, separating many speech signals from only two mixtures [5]. De-mixing more sources than sensors is of particular interest because the classical mathematical formulation of that problem is ill-posed, and only through the exploitation of sparse representations of the signals of interest have efficient solutions become possible [1]-[6], [12]. Even in the standard case, when the number of sources is equal to or smaller than the number of sensors, as in the analysis of various sorts of functional brain image data, sparse representations have also led to significant improvements in the performance of BSS/ICA techniques [2], [4].

In spite of the importance of sparseness in signal processing, there are still many fundamental theoretical problems in sparse signal processing. The most important of all is the definition and measurement of sparsity. Li et al. [6] used the l0-norm and the l1-norm to measure the sparseness of a signal. They found that although l0-norm solutions are the sparsest, l1-norm solutions are unique, more robust, and supported by many strong tools for solving the corresponding optimization problem. In particular, l1-norm solutions are sometimes equivalent to l0-norm ones with probability 1. Further work was done by Hoyer [7], who used a sparseness measure based on the relationship between the l1-norm and the l2-norm. So far, so good. But the above sparseness measures are not sufficient, or not convenient, to answer the following questions: What exactly is sparseness? How sparse is a given signal? Are the source signals sparse enough to achieve BSS? For this purpose, we investigate a quantitative sparseness measure for a signal. We intend to find a more intuitive sparseness measure in this paper.

II. GENERALIZED GAUSSIAN DISTRIBUTION AND SOME OF ITS PROPERTIES

First we introduce the generalized Gaussian distribution (GGD), whose PDF (probability density function) is:

$$p(\mu;\alpha,\beta)=\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\!\left(-\left|\frac{\mu}{\beta}\right|^{\alpha}\right),\quad \alpha>0,\ \beta>0,\qquad(1)$$

where $\Gamma(\cdot)$ is the Gamma function given by

$$\Gamma(x)=\int_{0}^{+\infty}t^{x-1}e^{-t}\,dt.\qquad(2)$$

GGD is a huge probability distribution family

characterized by two parameters $\alpha$ and $\beta$. Here $\alpha$ is inversely proportional to the decreasing rate of the peak, while $\beta$ models the width of the PDF peak (the standard deviation). $\alpha$ is usually called the shape parameter, while $\beta$ is referred to as the scale parameter. The parameter $\beta$ is tied to $\alpha$, so $\alpha$ is the only parameter that needs to be determined; for example, if we constrain $E\mu^{2}=1$, $\beta$ can be represented by

$$\beta=\sqrt{\frac{\Gamma(1/\alpha)}{\Gamma(3/\alpha)}}.$$

Fig. 1 shows some members of the GGD family with different $\alpha$'s, all with unit variance. Obviously, the smaller the parameter $\alpha$ is, the sharper the PDF is.

Assume a signal $\mu$ follows the GGD (1). It has some interesting properties, as follows:

Property 1. If $\alpha=1$, it is a Laplacian signal; if $\alpha=2$, it is a standard Gaussian signal; as $\alpha\to+\infty$, it converges in distribution to a uniformly distributed signal on $U(-\beta,\beta)$, whose PDF is

$$p(\mu;U(-\beta,\beta))=\frac{1}{2\beta},\quad |\mu|\le\beta.\qquad(3)$$

Proof: see Appendix A.

Property 2. The $k$-th order absolute moment of the signal $\mu$


is

$$E|\mu|^{k}=\beta^{k}\,\frac{\Gamma((k+1)/\alpha)}{\Gamma(1/\alpha)},\qquad \lim_{\alpha\to+\infty}E|\mu|^{k}=\frac{\beta^{k}}{k+1},\qquad(4)$$

where $k\in\mathbb{R}$ and $k>0$. The proof is in Appendix B.
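As a quick numerical check of (4), here is a sketch of ours, not from the paper; it assumes SciPy, whose gennorm distribution is exactly the GGD (1), with SciPy's shape parameter playing the role of alpha and scale the role of beta:

```python
# Monte Carlo check of formula (4): the k-th absolute moment of a GGD signal.
import numpy as np
from scipy.stats import gennorm
from scipy.special import gamma

alpha, beta, k = 1.5, 2.0, 3.0   # arbitrary test values
theory = beta**k * gamma((k + 1) / alpha) / gamma(1 / alpha)   # formula (4)
samples = gennorm.rvs(alpha, scale=beta, size=1_000_000, random_state=0)
print(theory, np.mean(np.abs(samples) ** k))   # the two numbers should agree closely
```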

Fig. 1. The PDFs of the generalized Gaussian distribution with alpha = 0.8, 1, 2, 4, under unit variance.

In practice, many zero-mean signals, such as the above-mentioned Laplacian, Gaussian and uniform signals, follow, or approximately follow, the GGD (1); they compose a very wide signal family, differing only in the parameters $\alpha$ and $\beta$. This makes the GGD very attractive in signal processing, especially for the BSS problem. For example, Choi, Wu, Zhang et al. [8][9][10] proposed effective BSS algorithms via the GGD. On the whole, those works aim to find an appropriate nonlinear function via the GGD, while we intend to find an intuitive sparseness measure index in this paper.

III. THE ISO-PD CONTOUR OF SIGNALS AND THE SPARSENESS MEASURE

As is well known, the iso-PD (iso-probability-density) contour of random variables is geometrically similar to the scatter plot of their samples, so we analyze the iso-PD contours of signals. Assume the signals $s_1,\dots,s_n$ are mutually independent and follow the same GGD (1); their joint probability density function is

$$p(s_1,\dots,s_n)=\prod_{i=1}^{n}\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\!\left(-\left|\frac{s_i}{\beta}\right|^{\alpha}\right)=\left(\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\right)^{n}\exp\!\left(-\sum_{i=1}^{n}\left|\frac{s_i}{\beta}\right|^{\alpha}\right).\qquad(5)$$

We investigate the contours of the joint PDF (5). Given a probability value $p_0$ ($0<p_0<1$), let $p(s_1,\dots,s_n)=p_0$. From (5) we have

$$\left(\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\right)^{n}\exp\!\left(-\frac{1}{\beta^{\alpha}}\sum_{i=1}^{n}|s_i|^{\alpha}\right)=p_0,\qquad(6)$$

$$\sum_{i=1}^{n}|s_i|^{\alpha}=-\beta^{\alpha}\log\!\left[p_0\left(\frac{2\beta\,\Gamma(1/\alpha)}{\alpha}\right)^{n}\right]=c\ge 0,\qquad(7)$$

where $c$ is a constant controlled by $p_0$. So the iso-PD equation (6) can be rewritten as

$$|s_1|^{\alpha}+\cdots+|s_n|^{\alpha}=c.\qquad(8)$$

Fig. 2. The contours of the joint PDF of the source signals $s_1$ and $s_2$ with alpha = 3, 2, 1.5, 1, 0.5 and 0.3, respectively (panels (a)-(f)).

Though the parameter $\beta$ is very important in the joint PDF (5),

there is no $\beta$ in the contour equation (8): the shape of the contours of the joint PDF (5) is completely controlled by the parameter $\alpha$. For the case $n=2$, the contours of the joint PDF (5), i.e., the curves of equation (8), are shown in Fig. 2. From Fig. 2 we find that the contours of the joint PDF (5) are a series of closed curves. If $\alpha<2$, and especially when $\alpha<1$, the iso-PD contours are asteroid curves; the smaller $\alpha$ is, the sharper the asteroid curves are. As $\alpha\to 0^{+}$, the iso-PD contour nearly degenerates to just two line segments (see Fig. 2(f)). In addition, if $\alpha\ge 2$, the signals are no longer sparse; thus BSS via sparsity is not available in theory unless the source signals are sparser than Gaussian signals, i.e., $\alpha<2$.
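A minimal plotting sketch of these contours (our illustration; the level c = 1 and the grid are arbitrary choices, not from the paper):

```python
# Draw the iso-PD curves |s1|^alpha + |s2|^alpha = c of equation (8)
# for the six alpha values of Fig. 2 (c = 1 is an arbitrary level).
import numpy as np
import matplotlib.pyplot as plt

s1, s2 = np.meshgrid(np.linspace(-1.5, 1.5, 401), np.linspace(-1.5, 1.5, 401))
fig, axes = plt.subplots(2, 3, figsize=(9, 6))
for ax, alpha in zip(axes.flat, [3, 2, 1.5, 1, 0.5, 0.3]):
    ax.contour(s1, s2, np.abs(s1)**alpha + np.abs(s2)**alpha, levels=[1.0])
    ax.set_title(f"alpha = {alpha}")
    ax.set_aspect("equal")
plt.tight_layout()
plt.show()
```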

Obviously, the parameter $\alpha$ determines how sparse a GGD signal is. The next question is how to estimate $\alpha$ for a given GGD signal $\mu$. From equation (4), we have

$$E|\mu|=\beta\,\frac{\Gamma(2/\alpha)}{\Gamma(1/\alpha)},\qquad E|\mu|^{2}=\beta^{2}\,\frac{\Gamma(3/\alpha)}{\Gamma(1/\alpha)},\qquad(9)$$

$$\frac{(E|\mu|)^{2}}{E|\mu|^{2}}=\frac{(\Gamma(2/\alpha))^{2}}{\Gamma(1/\alpha)\,\Gamma(3/\alpha)}.\qquad(10)$$

Denote

$$r(\alpha)=\frac{(\Gamma(2/\alpha))^{2}}{\Gamma(1/\alpha)\,\Gamma(3/\alpha)},\qquad(11)$$

where $r(\alpha)$ is a monotonic function on the interval $(0,+\infty)$ (see Fig. 3). Additionally, from Property 1 we know that as $\alpha\to+\infty$ the GGD signal $\mu$ degenerates to a uniformly distributed signal, so from formula (4) we obtain

$$\lim_{\alpha\to+\infty}\frac{(E|\mu|)^{2}}{E|\mu|^{2}}=\frac{(\beta/2)^{2}}{\beta^{2}/3}=\frac{3}{4}.\qquad(12)$$

Therefore the function $r(\alpha)$ satisfies $\lim_{\alpha\to+\infty}r(\alpha)=0.75$. Fig. 3 shows that $r(\alpha)$ increases sharply on the interval $(0,2]$.

From equations (9), (10) and (11), the parameters $\alpha$ and $\beta$

can be estimated by the following expressions:

$$\hat\alpha=r^{-1}\!\left(\frac{(E|\mu|)^{2}}{E|\mu|^{2}}\right),\qquad \hat\beta=\frac{\Gamma(1/\hat\alpha)}{\Gamma(2/\hat\alpha)}\,E|\mu|,\qquad(13)$$

where $r^{-1}(\cdot)$ represents the inverse function of $r(\cdot)$.

Fig. 3. $r(\alpha)$ is a monotonic function of $\alpha$.

Remark. Given a signal $\mu$, we can calculate its parameters $\hat\alpha$ and $\hat\beta$ by formula (13), and we can judge how sparse the signal $\mu$ is from $\hat\alpha$. Unfortunately, the function $r(\alpha)$ is very complicated. But we can at least first estimate the value $r(\alpha)$ from equation (10) and then calculate $\hat\alpha$ by solving the nonlinear equation (11) with the bisection method. Alternatively, we can roughly estimate $\hat\alpha$ from $r(\alpha)$ using Table 1, which was computed from function (11) with Matlab 6.5.

Table 1. Some function values of $r(\alpha)$

  alpha     0        0.2      0.4      0.6      0.8      1
  r(alpha)  0        0.0629   0.2316   0.3562   0.4407   0.5
  alpha     1.2      1.4      1.6      1.8      2        2.2
  r(alpha)  0.5431   0.5755   0.6007   0.6205   0.6366   0.6498

We know that, for GGD signals, the parameter $\alpha$ completely controls how sparse they are. So we define the statistic $\hat\alpha(\mu)$ in formula (13) as the sparseness measure of the signal $\mu$.

According to formula (13), the sparseness measure of a Gaussian signal is 2, and a Laplacian signal has sparseness measure 1. Intuitively, a signal is sparse if most of its sample entries are zero or nearly zero and only a few entries take significant values [11]. A reasonable sparseness measure should at least accord with this intuition, and our sparseness measure (13) illustrates this point well. A typical example is given in Fig. 4. Meanwhile, by taking the Gaussian and Laplacian signals as references, we can intuitively tell how sparse a signal $\mu$ is from its sparseness measure $\hat\alpha(\mu)$. In general, how sparse a signal is mainly depends on whether the percentage of its zero sample entries is high.
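For instance, using the sparseness_measure sketch given above (a hypothetical check of ours with synthetic samples, assuming that function is in scope), the two reference points are reproduced up to sampling noise:

```python
# Laplacian samples should score about 1 and Gaussian samples about 2.
import numpy as np

rng = np.random.default_rng(0)
print(sparseness_measure(rng.laplace(size=200_000))[0])      # approx. 1
print(sparseness_measure(rng.standard_normal(200_000))[0])   # approx. 2
```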

Fig. 4. Illustration of various degrees of sparseness. Four signals are shown, each with 20 sample points, with sparseness measures (a) 4.4530, (b) 2.0226, (c) 0.9907 and (d) 0.3426. Each bar denotes the value of one sample entry. At low levels of sparseness (a), all entries are roughly active; at high levels (d), most entries are zero and only a few take significant values. Signal (b) is approximately Gaussian, with sparseness measure 2.0226, while signal (c) is approximately Laplacian, with sparseness measure 0.9907.

According to the above discussion, for a GGD (1) signal $\mu$,


if its sparseness measure $\hat\alpha(\mu)<2$, in general we can qualitatively call it a sparse signal.

IV. NUMERICAL EXAMPLES AND ANALYSIS

As is well known, when the source signals are sufficiently sparse, one can achieve underdetermined BSS by "sparse representation" [1]-[6]; but when the source signals are insufficiently sparse, in underdetermined BSS one can possibly identify the mixing matrix by sparse representation, yet cannot recover the source signals well. To show this, we give the following two examples.

Example 1. Underdetermined BSS with sufficiently sparse source signals

This example was given by Bofill in reference [5] (experiment SixFlutes I). By sparse representation, he separated six source signals from only two observed signals in the frequency domain. The source signals are six flute sounds, each with 32768 sample points. The mixing matrix is

$$A=\begin{pmatrix}0.9659 & 0.7071 & 0.2588 & -0.2588 & -0.7071 & -0.9659\\ 0.2588 & 0.7071 & 0.9659 & 0.9659 & 0.7071 & 0.2588\end{pmatrix}.$$

Benefiting from the extreme sparseness of the source signals, the scatter plot of the two observed signals shows very clear line directions (see Fig. 5(a)), so Bofill succeeded in separating the six source signals and obtained desirable blind-separation results.
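Since the flute recordings are not reproduced here, a synthetic stand-in (our sketch: six very sparse GGD sources, with alpha = 0.4 chosen arbitrarily) exhibits the same qualitative line structure as Fig. 5(a):

```python
# Mix six very sparse synthetic sources with Bofill's 2x6 matrix and
# scatter-plot the two mixtures; clear line directions should appear.
import numpy as np
from scipy.stats import gennorm
import matplotlib.pyplot as plt

A = np.array([[0.9659, 0.7071, 0.2588, -0.2588, -0.7071, -0.9659],
              [0.2588, 0.7071, 0.9659,  0.9659,  0.7071,  0.2588]])
S = gennorm.rvs(0.4, size=(6, 20000), random_state=0)  # very sparse sources
X = A @ S
plt.plot(X[0], X[1], ".", markersize=1)
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()
```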

Fig. 5. (a) The scatter plot of the observed signals in Example 1. (b) The scatter plot of the two observed signals in Example 2, where the solid lines are the actual column vectors of the mixing matrix.

By calculation from formula (13), we find that the

sparseness measure values of the source signals in this example (taking their real and imaginary parts in the frequency domain; the detailed operations are described in reference [5]) are about 0.0830, far below 1. Obviously they are much sparser than Laplacian signals.

Example 2. Underdetermined BSS with insufficiently sparse source signals

The source signals are 3 voice signals, each with 65536 sample points. The mixing matrix is

$$A=\begin{pmatrix}\cos(\pi/12) & \cos(5\pi/12) & \cos(3\pi/4)\\ \sin(\pi/12) & \sin(5\pi/12) & \sin(3\pi/4)\end{pmatrix}=\begin{pmatrix}0.9659 & 0.2588 & -0.7071\\ 0.2588 & 0.9659 & 0.7071\end{pmatrix}.$$

Two observed signals are obtained with the above mixing matrix. Here the source signals are not as sparse as those in Example 1: their sparseness measure values are about 0.7012. The scatter plot of the two observed signals is shown in Fig. 5(b).
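As a quick sanity check of the stated angles (our snippet), the numerical entries follow directly:

```python
# Reproduce the Example 2 mixing matrix from its angles pi/12, 5*pi/12, 3*pi/4.
import numpy as np

theta = np.array([np.pi / 12, 5 * np.pi / 12, 3 * np.pi / 4])
print(np.round(np.vstack([np.cos(theta), np.sin(theta)]), 4))
# [[ 0.9659  0.2588 -0.7071]
#  [ 0.2588  0.9659  0.7071]]
```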

Here the Lee-Lewicki underdetermined BSS algorithm [3] is employed to perform the blind separation. The learning step size is $\lambda=0.5$, and the mixing matrix is initialized by the Matlab commands randn('state',0); A0 = randn(2,3) as

$$A^{(0)}=\begin{pmatrix}-0.4326 & 0.1253 & -1.1465\\ -1.6656 & 0.2877 & 1.1909\end{pmatrix}.$$

After 800 iterations the algorithm converges, and the estimate of the mixing matrix is

$$\hat A=\begin{pmatrix}0.9495 & 0.2292 & -0.7264\\ 0.3125 & 0.9735 & 0.6873\end{pmatrix}.$$

We find that the mixing matrix is estimated relatively precisely, but the source signals are not recovered well: as Table 2 shows, the SNRs are all below 15 dB. Hence, for underdetermined BSS with insufficiently sparse source signals, one can sometimes identify the mixing matrix but can hardly recover the source signals.

Table 2. The SNRs of the estimated source signals

  source      1         2         3
  SNR (dB)    12.9320   13.0355   12.8911

V. DISCUSSION

Although this paper is mainly dedicated to establishing an intuitive sparseness measure for signals, our analysis and discussion simultaneously show that, for sparse BSS, recovering the source signals is generally more difficult than identifying the mixing matrix. For underdetermined BSS, if the source signals are insufficiently sparse, it is possible to identify the mixing matrix, but one can hardly recover the source signals successfully. In many situations, source signals with Laplacian sparseness are not sparse enough to be recovered in underdetermined BSS, and somewhat sparser source signals are required (see Example 2).

Additionally, the sparsity of the source signals plays a key role in sparse BSS. If the signals are not sparse, or not sparse enough, in their working domain, we can, if necessary and possible, deal with the problem in an appropriate linear transformation domain (e.g., the frequency domain or the wavelet-packet domain) rather than in the working domain.

Next, we consider the probability distribution $P(\mu;\alpha,\beta,\gamma)$ with PDF

$$p(\mu;\alpha,\beta,\gamma)=\begin{cases}\dfrac{\gamma\,\alpha}{\beta\,\Gamma(1/\alpha)}\,e^{-(\mu/\beta)^{\alpha}}, & \mu\ge 0,\\[2mm] \dfrac{(1-\gamma)\,\alpha}{\beta\,\Gamma(1/\alpha)}\,e^{-(-\mu/\beta)^{\alpha}}, & \mu<0,\end{cases}\qquad(14)$$

where $0\le\gamma\le 1$. The distribution $P(\mu;\alpha,\beta,\gamma)$ is produced by

$$P(\mu;\alpha,\beta,\gamma)=\gamma P_{1}(\mu;\alpha,\beta)+(1-\gamma)P_{2}(\mu;\alpha,\beta),\quad 0\le\gamma\le 1,\qquad(15)$$

where the PDF of $P_{1}(\mu;\alpha,\beta)$ is

$$p_{1}(\mu;\alpha,\beta)=\begin{cases}\dfrac{\alpha}{\beta\,\Gamma(1/\alpha)}\,e^{-(\mu/\beta)^{\alpha}}, & \mu\ge 0,\\[1mm] 0, & \mu<0;\end{cases}$$

obviously, a signal following the distribution $P_{1}(\mu;\alpha,\beta)$ is non-negative. The PDF of $P_{2}(\mu;\alpha,\beta)$ is

$$p_{2}(\mu;\alpha,\beta)=\begin{cases}\dfrac{\alpha}{\beta\,\Gamma(1/\alpha)}\,e^{-(-\mu/\beta)^{\alpha}}, & \mu\le 0,\\[1mm] 0, & \mu>0;\end{cases}$$

similarly, a signal following the distribution $P_{2}(\mu;\alpha,\beta)$ is non-positive.
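A sampling sketch for the family (15) (our code and naming, again assuming SciPy's gennorm): draw a magnitude from the half-GGD and attach a positive sign with probability gamma.

```python
# Sample from P(mu; alpha, beta, gamma) of (15): gamma*P1 + (1-gamma)*P2.
# |gennorm| has exactly the half-GGD density p1, so a random sign that is
# positive with probability gamma yields the skewed mixture.
import numpy as np
from scipy.stats import gennorm

def skewed_ggd_rvs(alpha, beta, gamma_, size, seed=0):
    mag = np.abs(gennorm.rvs(alpha, scale=beta, size=size, random_state=seed))
    rng = np.random.default_rng(seed + 1)
    sign = np.where(rng.random(size) < gamma_, 1.0, -1.0)
    return sign * mag
```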

In this paper we assume the source signals follow the GGD (1), and all our analyses and discussions are based on this assumption. In fact, we can also assume the signals follow the probability distribution (14), which is a more general case; for example, non-negative signals are included in it and are described by $P(\mu;\alpha,\beta,1)$, i.e., $\gamma=1$, while the GGD (1) corresponds to $P(\mu;\alpha,\beta,0.5)$, i.e., $\gamma=0.5$. Almost all the results of this paper (the properties, the definition, etc.) extend to the probability distribution $P(\mu;\alpha,\beta,\gamma)$ in (14) without any change. By the way, the PDF of $P(\mu;\alpha,\beta,\gamma)$ is unimodal; for signals with multimodal PDFs, the results of this paper are not applicable.

CONCLUSIONS

The GGD is employed in this paper. Some properties of the GGD are very desirable and helpful for studying sparse signals. With the aid of the GGD, we give a sparseness measure formula by which we can calculate the sparseness measure of a given signal; we then know how sparse it is and can judge whether it is sparse enough for signal processing by sparsity.

APPENDIX

Appendix A: Proof of Property 1

Proof: The first two cases are obvious, so we mainly prove the case $\alpha\to+\infty$. We have

$$\lim_{\alpha\to+\infty}\exp\!\left(-\left|\frac{\mu}{\beta}\right|^{\alpha}\right)=\begin{cases}e^{-1}, & |\mu|=\beta,\\ 1, & |\mu|<\beta,\\ 0, & |\mu|>\beta.\end{cases}\qquad(\mathrm{A.1})$$

So from (1) we have

$$\lim_{\alpha\to+\infty}p(\mu;\alpha,\beta)=\lim_{\alpha\to+\infty}\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\cdot\begin{cases}e^{-1}, & |\mu|=\beta,\\ 1, & |\mu|<\beta,\\ 0, & |\mu|>\beta.\end{cases}\qquad(\mathrm{A.2})$$

On the other side, we have

$$\lim_{\alpha\to+\infty}\int_{-\infty}^{+\infty}p(\mu;\alpha,\beta)\,d\mu=\lim_{\alpha\to+\infty}\int_{-\beta}^{\beta}\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\,d\mu=1,$$

which yields

$$\lim_{\alpha\to+\infty}\frac{\alpha}{\Gamma(1/\alpha)}=1.\qquad(\mathrm{A.3})$$

Substituting (A.3) into expression (A.2), we get

$$\lim_{\alpha\to+\infty}p(\mu;\alpha,\beta)=\begin{cases}\dfrac{1}{2\beta}, & |\mu|<\beta,\\[1mm] \dfrac{e^{-1}}{2\beta}, & |\mu|=\beta,\\[1mm] 0, & |\mu|>\beta,\end{cases}\qquad(\mathrm{A.4})$$

i.e., $\lim_{\alpha\to+\infty}p(\mu;\alpha,\beta)=p(\mu;U(-\beta,\beta))$ except at $|\mu|=\beta$. Thus $p(\mu;\alpha,\beta)$ converges to $p(\mu;U(-\beta,\beta))$ (PDF (3)) in distribution as $\alpha\to+\infty$.
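A one-line numerical illustration of this limit (our check via SciPy's gennorm, with beta = 1):

```python
# For large alpha the GGD pdf approaches the uniform density 1/(2*beta).
from scipy.stats import gennorm
print(gennorm.pdf(0.5, 200.0))  # approx. 0.5 = 1/(2*beta) inside (-1, 1)
print(gennorm.pdf(1.5, 200.0))  # approx. 0 outside (-1, 1)
```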

Appendix B: Proof of Property 2

Proof: From the PDF (1), we have

$$E|\mu|^{k}=\int_{-\infty}^{+\infty}|\mu|^{k}\,p(\mu;\alpha,\beta)\,d\mu=\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\int_{-\infty}^{+\infty}|\mu|^{k}\exp\!\left(-\left|\frac{\mu}{\beta}\right|^{\alpha}\right)d\mu=\frac{\alpha}{\beta\,\Gamma(1/\alpha)}\int_{0}^{+\infty}\mu^{k}\exp\!\left(-\left(\frac{\mu}{\beta}\right)^{\alpha}\right)d\mu.\qquad(\mathrm{B.1})$$


Making the transformation $z=(\mu/\beta)^{\alpha}$, we have

$$\mu=\beta z^{1/\alpha},\qquad d\mu=\frac{\beta}{\alpha}\,z^{\frac{1}{\alpha}-1}\,dz.\qquad(\mathrm{B.2})$$

Substituting (B.2) into (B.1), we have

$$E|\mu|^{k}=\frac{\alpha}{\beta\,\Gamma(1/\alpha)}\int_{0}^{+\infty}\beta^{k}z^{k/\alpha}\,e^{-z}\,\frac{\beta}{\alpha}\,z^{\frac{1}{\alpha}-1}\,dz=\frac{\beta^{k}}{\Gamma(1/\alpha)}\int_{0}^{+\infty}z^{\frac{k+1}{\alpha}-1}e^{-z}\,dz=\beta^{k}\,\frac{\Gamma((k+1)/\alpha)}{\Gamma(1/\alpha)}.$$

That is, $E|\mu|^{k}=\beta^{k}\,\Gamma((k+1)/\alpha)/\Gamma(1/\alpha)$. Additionally,

$$\lim_{\alpha\to+\infty}E|\mu|^{k}=\int_{-\infty}^{+\infty}|\mu|^{k}\,p(\mu;U(-\beta,\beta))\,d\mu=\frac{1}{2\beta}\int_{-\beta}^{\beta}|\mu|^{k}\,d\mu=\frac{\beta^{k}}{k+1}.$$

Thus, Property 2 is proved.

ACKNOWLEDGMENT

The work is supported by the Guangdong Province Science Foundation for the Program of Research Team (Grant 04205783), the National Natural Science Foundation of China (Grant 60274006), the National Natural Science Foundation of China for Excellent Youth (Grant 60325310), the Trans-Century Training Program Foundation for the Talents by the State Education Commission of China, and the SRF for ROCS, SEM.

REFERENCES

[1] M. Zibulevsky, B.A. Pearlmutter, "Blind source separation by sparse decomposition", Technical Report No. CS99-1, University of New Mexico, Albuquerque, July 1999, http://www.cs.unm.edu/~bap/papers/sparse-ica-99a.ps.gz.

[2] T.W. Lee, M.S. Lewicki, M. Girolami, T.J. Sejnowski, "Blind source separation of more sources than mixtures using overcomplete representations", IEEE Signal Processing Letters, 6(4), pp. 87-90, 1999.

[3] M.S. Lewicki, T.J. Sejnowski, "Learning overcomplete representations", Neural Computation, vol. 12, pp. 337-365, 2000.

[4] M. Zibulevsky, B.A. Pearlmutter, "Blind source separation by sparse decomposition in a signal dictionary", Neural Computation, vol. 13, no. 4, pp. 863-882, April 2001.

[5] P. Bofill, M. Zibulevsky, "Underdetermined source separation using sparse representations", Signal Processing, 81, pp. 2353-2362, 2001.

[6] Y.Q. Li, A. Cichocki, S. Amari, "Analysis of sparse representation and blind source separation", Neural Computation, vol. 16, no. 6, pp. 1193-1234, June 2004.

[7] P.O. Hoyer, "Non-negative matrix factorization with sparseness constraints", Journal of Machine Learning Research, 5, pp. 1457-1469, 2004.

[8] S. Choi, A. Cichocki, S. Amari, "Flexible independent component analysis", Journal of VLSI Signal Processing, 26(1), pp. 25-38, 2000.

[9] H.C. Wu, J.C. Principe, "Generalized anti-Hebbian learning for source separation", in Proceedings of ICASSP'99, 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, pp. 1073-1076.

[10] J.L. Zhang, S.L. Xie, J. Wang, "Multi-input single-output neural network blind separation algorithm based on penalty function", Dynamics of Continuous, Discrete and Impulsive Systems, Series B: Applications & Algorithms, pp. 353-361, Suppl. SI, 2003.

[11] B.A. Pearlmutter, V.K. Potluru, "Sparse separation: Principles and tricks", Proc. SPIE vol. 5102, Independent Component Analyses, Wavelets, and Neural Networks, pp. 1-4, April 2003.

[12] P. Georgiev, F. Theis, A. Cichocki, "Sparse component analysis and blind source separation of underdetermined mixtures", IEEE Trans. Neural Networks, vol. 1(11), pp. 14, November 2002.
