

Signal Processing 11 (1986) 1-11, North-Holland

AN ITERATIVE RESTORATION TECHNIQUE

Satpal SINGH and S.N. TANDON Centre for Biomedical Engineering, Indian Institute of Technology, New Delhi 110016, India

H.M. GUPTA Department of Electrical Engineering, Indian Institute of Technology, New Delhi 110016, India

Received 11 June 1985 Revised 1 October 1985 and 4 February 1986

Abstract. An iterative restoration algorithm for the solution of x in y = Hx has been described. For the purpose of comparison, a general overview of the existing iterative techniques is given. For the proposed technique, the constraints for convergence and its rate, which is quadratic in comparison with the linear rate of other techniques, have been derived. Also derived is the deviation in the solution when instead of H a perturbed operator H* is used in the algorithm. The effect of additive noise on the estimates has also been described. Finally, the technique has been applied to the deconvolution of a blurred signal, the results of which validate the theory. For the sake of comparison, the algorithm has been applied to the sample of Fig. 7 of Schafer, Mersereau and Richards (1981). It is seen that only four iterations compared with twenty in the cited reference are sufficient to produce similar results.

Zusammenfassung. An iterative algorithm for recovering x from the equation y = Hx is described. For purposes of comparison, an overview of the existing iterative methods in this field is given. For the method proposed here, the constraints for convergence and the rate of convergence are derived; in contrast to the linear convergence observed with other methods, convergence here is quadratic. Also derived is the deviation of the solution when a perturbed operator H* is used in the algorithm instead of H. The effect of additive noise on the estimates is likewise described. The method is finally applied to the problem of deconvolving a blurred signal; the results of this experiment support the theoretical statements. For purposes of comparison, the algorithm was tested on the example of Fig. 7 of Schafer, Mersereau and Richards (1981). It is shown that only four iterations are needed to attain a degree of convergence for which twenty iterations are required in the cited reference.

Résumé. An iterative restoration algorithm for solving for x in y = Hx is described. For purposes of comparison, a general overview of the existing iterative techniques is given. For the proposed technique, the constraints for its convergence and its rate, which is quadratic in comparison with the linear rate of the other techniques, are established. The deviation in the solution when a perturbed operator H* is used in place of H is also established. The effect of additive noise on the estimates is described. Finally, this technique has been applied to the deconvolution of a blurred signal, with results confirming the theory. For comparison, the algorithm was applied to the sample of Fig. 7 of Schafer, Mersereau and Richards (1981). It is noted that only four iterations suffice to produce the same result, compared with twenty in the cited reference.

Keywords. Iterative restoration, deconvolution.

1. Introduction

Often, the experiments conducted to study various phenomena in the real world yield measurements of the signals that have been degraded or transformed by the characteristics of the experimental set-up used. In order to gain greater insight into the processes involved, it is necessary to restore the signal from these observed values. Some of the areas where restoration

0165-1684/86/$3.50 O 1986, Elsevier Science Publishers B.V. (North-Holland)


techniques are frequently used are aerospace imaging, geoscience and remote sensing, medical imaging, and spectroscopy.

Let the signal of interest be denoted as x. Often, this signal is distorted and, after a transformation H, the observation is given by

y = Hx. (1)

The problem of estimating x, given y and H in (1), is called restoration. This objective may be achieved by applying the inverse transformation H^{-1} to y to obtain x as follows:

x = H^{-1}y. (2)

However, the solution of (2) may not be as straightforward as the equation itself [1, 15, 17], especially when (i) the inverse of H does not exist, (ii) H has singular points, that is, H^{-1} has some points in its domain where it does not exist, and (iii) the problem of finding H^{-1} is ill-conditioned. Under such conditions, the iterative techniques [5, 12, 18] can be used. In general, iterative techniques have the following advantages:

(a) The inverse H^{-1} is not explicitly required and therefore the above-mentioned difficulties are circumvented.

(b) Restoration can be carried out for nonlinear or shift-variant degradations [5, 18].

(c) Nonlinear constraints [5, 12, 18] can be incorporated in the restoration process.

To study the solution of x in (1) by iterative methods, it is assumed that x and y belong to a linear vector space S which is a Banach space [2, 9, 13]. In such a space, the distance between any two elements x and y is denoted by d(x, y), and the norm of an element x by ||x||. Further, in such a space a Cauchy convergent sequence {x_k} converges to a limit point x in S such that d(x_k, x) → 0 as k → ∞.

In addition to the assumption on S, the operators T on S, including the distortion operator H in (1), have the following properties:

(a) Both the domain and the range of T are subsets of S.

(b) An operator T on S is bounded [2, 13] if

d(Tx, Ty) ≤ M d(x, y), (3)


where M is a constant called the bound of T and is denoted by ||T||. If 0 ≤ M < 1, the operator T is defined as a contraction operator.

(c) I is the identity operator.

(d) The zero operator Φ has the property Φx = 0, ∀x ∈ S.

(e) For an iterative operator T, T^k means that the operator T is applied k times.

2. Iterative techniques

2.1. A generalized approach

Let the application of an iterative technique for solving x in (1) generate a sequence {x_k}. An associated error sequence {e_k} is defined as follows:

e_k = x − x_k. (4)

The iterative technique will be deemed to be successful if the sequences {x_k} and {e_k} have x and 0 as their respective limit points.

If, after the kth iteration, e_k is known, then the limit point x can be exactly calculated as

x = x_k + e_k. (5)

However, in practice, e_k would not be known and at best only its estimate ê_k may somehow be computed. Using this estimate in (5), instead of the exact limit point x, its estimate, denoted x_{k+1}, is obtained. Therefore, the general recursive equation for iterative techniques is derived from (5) as follows:

x_{k+1} = x_k + ê_k. (6)

For the purpose of evaluating the sequence of estimates {ê_k}, an associated residual error sequence {e_{y,k}} is calculated from the observations y in (1) and the iteration sequence {x_k} as follows:

e_{y,k} = y − Hx_k. (7)

If, for k → ∞, x_k → x, then Hx_k → Hx, and equations (1) and (7) imply that e_{y,k} → 0 as k → ∞. Therefore, the sequence {e_{y,k}} can be used as a control to test the desired convergence of {x_k}. The existing iterative techniques thus derive {ê_k} as a transformation F of {e_{y,k}} as follows:

ê_k = F e_{y,k}. (8)

The general iteration (6) can now be written as

x_{k+1} = x_k + F e_{y,k}. (9)

Various iterative techniques can be written in the above form, as shown in Appendix A.

2.2. Convergence

The various iterative techniques that follow (9) can be alternatively expressed in the following form of the Banach fixed point theorem [2, 9, 13]:

x_{k+1} = Rx_k + Fy = Tx_k, (10)

where the operators R and T are respectively given as R = I − FH and Tx = Rx + Fy. If T is a contraction operator as defined in (3), then the sequence {x_k} generated by (10) converges to a unique fixed point, the limit point x. The error at the kth iteration, or the distance between the solution x_k and the true limiting point solution x, is

d(x_k, x) = d(Tx_{k−1}, Tx)
≤ M d(x_{k−1}, x) (11a)
≤ M^k d(x_0, x), (11b)

where use of the definition of the fixed point, that is, Tx = x, and of equations (10) and (3) has been made. The convergence is thus linear from one step to the next iteration (11a), and follows a geometric progression with reference to the initial starting point x_0 (inequality (11b)).
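As a concrete illustration of (9)-(11), the general iteration can be run on a small made-up system; the matrix H, the choice F = I (the Van Cittert form), and the iteration count below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical 2x2 example (not from the paper): solve y = Hx by the
# general iteration (9), x_{k+1} = x_k + F(y - H x_k), with F = I.
H = np.array([[1.0, 0.3],
              [0.2, 0.8]])
x_true = np.array([1.0, 2.0])
y = H @ x_true

# Contraction check: the iteration converges when M = ||R|| = ||I - FH|| < 1.
R = np.eye(2) - H
M = np.linalg.norm(R, 2)
assert M < 1.0

x = y.copy()                      # initial guess x_0 = y
errors = []
for k in range(25):
    x = x + (y - H @ x)           # F = I
    errors.append(np.linalg.norm(x - x_true))

# Linear convergence (11a): each error is at most M times the previous one.
ratios = [errors[k + 1] / errors[k] for k in range(5)]
print(ratios)                     # roughly constant, each bounded by M
```

The ratio of successive errors settles near a constant below M, which is the linear behaviour of (11a); the contrast with the proposed technique of Section 3 is the point of the comparison.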

2.3. Perturbation in the operator T

The various iterative techniques, expressed in the generalized form of (10), will generate a sequence {x_k} which converges to a true solution x only if the operator T is correctly chosen. If instead a perturbed operator T* is used, then a sequence {x_k^*} converging to x* is obtained. If the perturbation in T* is defined as

d(T*x, Tx) ≤ ξ, ∀x ∈ S, (12)


then the deviations in the sequence and the limit point [2] are derived as follows:

d(x_{k+1}^*, x_{k+1}) = d(T*x_k^*, Tx_k)
≤ d(Tx_k, Tx_k^*) + d(Tx_k^*, T*x_k^*)
≤ M d(x_k, x_k^*) + ξ. (13a)

Making repeated use of (13a) for k = 0, 1, …,

d(x_{k+1}^*, x_{k+1}) ≤ M^{k+1} d(x_0^*, x_0) + (M^k + M^{k−1} + ⋯ + 1)ξ
≤ ((1 − M^{k+1})/(1 − M)) ξ, (13b)

in the derivation of which it has been assumed that the initial guesses are the same, i.e., x_0^* = x_0. Since M < 1, the deviation of the limit point x* from the true limit point x is obtained from (13b) for k → ∞ as

d(x*, x) ≤ ξ/(1 − M), (13c)

where M is the bound of the contraction operator T as defined in (3).

2.4. Noise effects

In the presence of additive noise n, instead of y in (1), a corrupted signal z is now available, which is given as

z = y + n = Hx + n. (14)

Let the application of the general iterative equation (10) now generate a corrupted sequence {x_k^*} with limit point x*, instead of the sequence {x_k} with limit point x obtained for the noiseless case. Use of (10) for the noisy case results in

x_{k+1}^* = Rx_k^* + F(y + n)
= R^{k+1}x_0 + (R^k + R^{k−1} + ⋯ + R + I)F(y + n)
= x_{k+1} + n_{k+1}, (15)


where x_{k+1}, the solution for the noiseless case, and n_{k+1}, the equivalent noise term at the (k+1)st iteration for a linear F, are respectively given as

x_{k+1} = R^{k+1}x_0 + (R^k + R^{k−1} + ⋯ + I)Fy, (16a)

n_{k+1} = (R^k + R^{k−1} + ⋯ + I)Fn. (16b)

Since T in (10) is a contraction operator for the iterations to converge, then, by the use of (3), R in (10) is also a contraction operator with bound ||R|| < 1. Therefore, from (16), the signal and noise strengths are, respectively,

||x_{k+1}|| ≤ ||R||^{k+1}||x_0|| + ||F|| ||y|| (1 − ||R||^{k+1})/(1 − ||R||), (17a)

||n_{k+1}|| ≤ ||F|| ||n|| (1 − ||R||^{k+1})/(1 − ||R||). (17b)

Though the signal strength in (17a) depends on the initial starting point x_0, for large k the first term on the right-hand side of (17a) may be neglected. Assuming the signal-to-noise ratio (SNR) at the kth step to be

(SNR)_k = inf ||x_k|| / ||n_k||, (18)

we have, from (17),

(SNR)_k ≤ ||y|| / ||n|| = original SNR. (19)

However, there exist iterative techniques [10, 11, 22] which increase the SNR in restored images. These techniques use a priori knowledge about signal and noise statistics, and cannot be modelled with a linear F, an assumption made in deriving (16) from (15).

3. The proposed technique

3.1. The technique

For the solution of x in (1), equation (6) forms the basis of the iterative technique. However, in the proposed technique, ê_k is not derived according to equations (7) and (8) as in the other iterative techniques. Let a transformation H_k be defined as

x_k = H_k x, ∀k = 0, 1, …. (20)

Then, (4) can be written as

e_k = (I − H_k)x (21a)

= B_k x, (21b)

where

B_k = I − H_k, ∀k = 0, 1, …. (22)

Now, given the estimate x_k of x, the estimate ê_k is obtained from (21) as

ê_k = B_k x_k. (23)

The basic iteration equation (6) is now given as

x_{k+1} = x_k + B_k x_k

= (I + B_k)x_k (24a)

= (2I − H_k)x_k, (24b)

where use of (22) has been made in the derivation of (24b). Substituting (20) in (24b), we have

x_{k+1} = (2I − H_k)H_k x. (25)

Comparison of (20) and (25) indicates

H_{k+1} = (2I − H_k)H_k. (26)

Equations (24b) and (26) constitute the proposed restoration technique.

From (26),

I − H_{k+1} = I − 2H_k + H_k^2

= (I − H_k)^2. (27)

Equations (22) and (27) imply

B_{k+1} = B_k^2. (28)

Equations (24a) and (28) constitute the alternative form of the proposed technique. If the iterations are started with x_0 = y, then (1) and (20) imply that H_0 = H. The case of y corrupted by noise is considered in Section 3.4. The proposed algorithm is summarised by equations (22), (24a), and (28), respectively:

B_0 = I − H_0,

x_{k+1} = x_k + B_k x_k,

B_{k+1} = B_k^2,

or, alternatively, by equations (24b) and (26), respectively:

x_{k+1} = (2I − H_k)x_k,

H_{k+1} = (2I − H_k)H_k.
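As a hypothetical numerical check (the matrix and signal below are invented, not from the paper), the proposed iteration can be run on a small linear system:

```python
import numpy as np

# Hypothetical sketch of the proposed technique: x_{k+1} = (I + B_k) x_k
# with B_{k+1} = B_k^2 and B_0 = I - H, started from x_0 = y.
H = np.array([[1.0, 0.3],
              [0.2, 0.8]])
x_true = np.array([1.0, 2.0])
y = H @ x_true

B = np.eye(2) - H                 # B_0 = I - H; convergence needs ||B_0|| < 1
assert np.linalg.norm(B, 2) < 1.0

x = y.copy()                      # x_0 = y, so H_0 = H
errors = []
for k in range(5):
    x = x + B @ x                 # (24a): x_{k+1} = (I + B_k) x_k
    B = B @ B                     # (28):  B_{k+1} = B_k^2
    errors.append(np.linalg.norm(x - x_true))

print(errors)                     # the error roughly squares each iteration
```

A handful of iterations drives the error to machine precision, which is the quadratic behaviour claimed in (33); note that the operator sequence B_k does not depend on the data and could be precomputed.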


3.2. Convergence

For the proposed technique to converge, the distance d(x_k, x) should have limit point 0 as k → ∞. From (24a),

d(x_{k+1}, x) = d((I + B_k)x_k, x). (29)

Making use of equations (20) and (22) in (29), we have

d(x_{k+1}, x) = d((I − B_k^2)x, x)
≤ d(x, x) + d(B_k^2 x, 0)
≤ ||B_k||^2 ||x||,

or

d(x_k, x) ≤ ||B_{k−1}||^2 ||x||. (30)

Making repeated use of (28) for k = 0, 1, … in (30), we arrive at

d(x_k, x) ≤ ||B_0||^{2^k} ||x||. (31)

Therefore, d(x_k, x) → 0 for k → ∞ if 0 ≤ ||B_0|| < 1. This implies that B_0 should be a contraction operator with bound less than unity. For the initial guess x_0 = y, according to (1) and (20), H_0 = H, and (22) implies B_0 = I − H. Therefore, for convergence, the operator H is constrained so that B_0 = I − H is a contraction.

Equation (31) can be written as

d(x_k, x) ≤ (||B_0||^{2^{k−1}})^2 ||x||. (32)

Writing (31) for the (k−1)st step and substituting in (32),

d(x_k, x) ≤ ||B_0||^{2^{k−1}} d(x_{k−1}, x) (33a)

≤ (d(x_{k−1}, x))^2 / ||x||. (33b)

Equation (33) therefore indicates a very fast convergence in comparison with the linear form of (11a) for the other iterative techniques. While (11b) is a geometric progression, the corresponding inequality (31) is a squared geometric progression.

Alternatively, the proposed technique will converge if H_k → I as k → ∞, for then, in (20), x_k → x. This implies that, in (22), B_k → Φ as k → ∞, where Φ is the null operator. From (28), it follows that

||B_{k+1}|| ≤ ||B_k||^2. (34)

This implies quadratic contraction of the deviation operator B_k with every iteration. Repeated substitution of (28) in (34) leads to

||B_k|| ≤ ||B_0||^{2^k}. (35)

Therefore, if 0 ≤ ||B_0|| < 1, the convergence B_k → Φ and H_k → I as k → ∞ takes place.

3.3. Perturbation in the operator H

If, for the initial guess x_0 = y, the operator H* is used instead of H_0 = H, then the iterations lead to a perturbed sequence {x_k^*}. The perturbation in the deviation operator B_0, equation (22), is given by

B_0^* = I − H*. (36)

The perturbation in the operator Ho will be assumed to be bounded as follows:

d(H_0 x, H*x) ≤ α for all x. (37)

Equations (22) and (37) imply

d(B_0 x, B_0^* x) ≤ α for all x. (38)

The iterative equations (24a) and (28) for the perturbed case are respectively

x_{k+1}^* = (I + B_k^*)x_k^* (39)


and

B_{k+1}^* = (B_k^*)^2. (40)

Further, it is assumed that the perturbation in the operator B_0 is not excessive and the bound condition in (31) is still satisfied, so that equations (39) and (40) can generate a convergent sequence. Thus, from (3),

d(B_0^* x, B_0^* y) ≤ ||B_0^*|| d(x, y), where 0 ≤ ||B_0^*|| < 1. (41)

The error due to perturbation at the (k+1)st step can be defined as the distance d(x_{k+1}^*, x_{k+1}). Making use of (39) and (24a),

d(x_{k+1}^*, x_{k+1}) ≤ d(x_k^*, x_k) + d(B_k^* x_k^*, B_k x_k)
≤ d(x_k^*, x_k) + d(B_k x_k, B_k x_k^*) + d(B_k x_k^*, B_k^* x_k^*). (42)

The second term on the right-hand side of (42) is

d(B_k x_k, B_k x_k^*) ≤ ||B_k|| d(x_k, x_k^*)
≤ ||B_0||^{2^k} d(x_k, x_k^*), (43)

where use of (35) has been made.

Since the bound condition on the perturbed operator B_0^* in (41) is the same as required by (31), equations (39) and (40) also generate a convergent sequence, as equations (24a) and (28) do. Further, in the context of (35), B_k^* in (40) also converges to Φ as k → ∞. Therefore, for large k, the third term on the right-hand side of (42) is

d(B_k x_k^*, B_k^* x_k^*) → 0 as k → ∞. (44)

Under this approximation, equations (42), (43), and (44) imply

d(x_{k+1}^*, x_{k+1}) ≤ d(x_k^*, x_k)(1 + ||B_0||^{2^k}). (45)

Repeated use of (45) for k = 1, 2, …, yields

d(x_{k+1}^*, x_{k+1}) ≤ d(x_1^*, x_1)(1 + ||B_0||^2) ⋯ (1 + ||B_0||^{2^k})
≤ d(x_1^*, x_1)(1 + ||B_0||^2 + ||B_0||^4 + ||B_0||^6 + ⋯)
≤ d(x_1^*, x_1)/(1 − ||B_0||^2). (46)

For the same initial starting point x_0^* = x_0, equations (24a) and (39) imply

d(x_1^*, x_1) = d(B_0^* x_0, B_0 x_0). (47)

Since x_k^* → x* and x_k → x as k → ∞, equations (38), (46), and (47) imply

0 ≤ d(x*, x) ≤ α/(1 − ||B_0||^2). (48)

Equation (48) is the analogue of (13c), which is valid for the general iterative techniques. Both indicate that the distance between the limit points x* and x is proportional to the perturbation, α as defined in (37) or ξ as in (12).

3.4. Noise effects

If y in (1) is corrupted by additive noise n, then let

z = y + n = Hx + n. (49)

Then, for the initial guess x_0 = z, H_0 = H, and B_0 = I − H, repeated use of (24a) for k = 0, 1, …, k−1 yields

x_k^* = (I + B_0)(I + B_1) ⋯ (I + B_{k−1})(y + n)
= A(y + n), (50)

where

A = (I + B_0)(I + B_1) ⋯ (I + B_{k−1})
= (I + B_0)(I + B_0^2) ⋯ (I + B_0^{2^{k−1}}), (51)

where use of (28) has been made. Further, since (31) constrains B_0 to be a bounded operator, the operator A in (51) is also bounded. Let the noise strength ||n_k|| be the distance d(x_k^*, x_k), where x_k is the solution


for the noiseless case, i.e., n = 0 in (50). From (50),

||n_k|| = d(x_k^*, x_k)
= d(A(y + n), Ay)
≤ ||A|| d(y + n, y)
≤ ||A|| d(n, 0). (52)

For the noiseless case, the signal x_k has strength

d(x_k, 0) = d(Ay, 0) ≤ ||A|| d(y, 0). (53)

Assuming the signal-to-noise ratio (SNR) to be given as

(SNR)_k = inf d(x_k, 0)/||n_k||, (54)

equations (52), (53), and (54) imply

(SNR)_k ≤ d(y, 0)/d(n, 0) = original SNR. (55)

This implies that the noise bound (55) for the proposed technique is the same as (19), which is valid for the other iterative techniques.

4. Application to deconvolution

The case of convolution in (1) is equivalent to

y(n) = h(n) * x(n), (56)

where * denotes convolution and the identity operator I is the delta function,

δ(n) = 1 if n = 0, and δ(n) = 0 otherwise. (57)

The iterations of equations (24a) and (28) are now written as

x_{k+1}(n) = (δ(n) + b_k(n)) * x_k(n) (58)

and

b_{k+1}(n) = b_k(n) * b_k(n), (59)

where, from (22), bk(n) is defined as

b_k(n) = δ(n) − h_k(n). (60)

The iterations may be started with x_0 = y, h_0 = h.


Let X_k(ω), B_k(ω), H_k(ω), Y(ω), H(ω), and X(ω) be the respective Fourier transforms of x_k(n), b_k(n), h_k(n), y(n), h(n), and x(n). Then, the iterations in the Fourier domain are obtained from equations (58) and (59) as

X_{k+1}(ω) = (1 + B_k(ω))X_k(ω) (61)

and

B_{k+1}(ω) = (B_k(ω))^2, (62)

and (60) is equivalently written as

B_k(ω) = 1 − H_k(ω). (63)

For convergence, the condition 0 ≤ ||B_0|| < 1 in (31) and (35), when applied to (63), implies

0 ≤ |B_0(ω)| = |1 − H_0(ω)| < 1. (64)

This condition is depicted graphically as a circle with centre at (1, 0) in [18, Fig. 1].
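The frequency-domain iterations (61)-(63), with the positivity constraint imposed between iterations, can be sketched on synthetic data; the pulse signal, blur width, and signal length below are illustrative choices, not the paper's gamma-ray example:

```python
import numpy as np

# Hypothetical sketch of equations (61)-(63). The blur is a normalised
# Gaussian, so H(w) is real with 0 < H(w) <= 1; hence |B_0(w)| =
# |1 - H(w)| < 1 wherever H(w) > 0, satisfying condition (64).
n = np.arange(64)
x_true = np.zeros(64)
x_true[20:28] = 1.0                        # a simple pulse to restore

h = np.exp(-0.5 * ((n - 32) / 2.0) ** 2)   # Gaussian blur, std = 2 samples
h /= h.sum()                               # unit gain at w = 0
H = np.fft.fft(np.roll(h, -32))            # zero-phase blur spectrum
y = np.real(np.fft.ifft(H * np.fft.fft(x_true)))

Xk = np.fft.fft(y)                         # X_0(w) = Y(w), so H_0 = H
Bk = 1.0 - H                               # (63): B_0(w) = 1 - H_0(w)
for k in range(4):                         # four iterations, as in the text
    Xk = (1.0 + Bk) * Xk                   # (61)
    Bk = Bk ** 2                           # (62)
    xk = np.maximum(np.real(np.fft.ifft(Xk)), 0.0)  # positivity constraint
    Xk = np.fft.fft(xk)

err0 = np.linalg.norm(y - x_true)          # error of the blurred signal
err4 = np.linalg.norm(xk - x_true)         # error after four iterations
print(err4 < err0)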

The algorithm of (58) and (59) was tested on a signal which was degraded by a Gaussian blur function with standard deviation equal to six sampling intervals. Positivity [5, 8, 18] was incorporated in the algorithm by clipping off the negative values obtained in the estimates. The original signal x(n), the blurred signal y(n), and the restored signal are shown in Fig. 1. For the sake of comparison with the iterative restoration algorithms, the blurred signal of [18, Fig. 7] was deconvolved with the proposed algorithm, and the results are shown in Figs. 2 and 3. Also shown in Fig. 3(b) are the results obtained with the proposed algorithm but with no positivity constraint applied. Similar results for the other method [18] are shown in Fig. 2(d), (e). The results in Figs. 2 and 3 could not be compared with the original signal, as the sample in Fig. 2(a) is the observation of a gamma-ray spectrum, and the true signal is not known. It is seen that, for similar results, the proposed algorithm converges very rapidly, within four iterations compared with the twenty required by the other method [18].

5. Conclusions

A new iterative algorithm for the restoration of x in equation (1) has been suggested. The iterative


Fig. 1. (a) x(n) is the original signal of interest and y(n) is the blurred signal obtained on convolving x(n) with a Gaussian blur function of standard deviation equal to six sampling intervals. (b), (c), (d): Restored signal x_k(n) for k = 1, 2, 3. (b) SNR = ∞. (c) SNR = 40 dB. (d) SNR = 28 dB.

technique is defined by (24a) and (28) or, alternatively, by (24b) and (26). The theory of these equations is not based on the assumption that the operator H is linear, and therefore the algorithm presented is applicable to nonlinear distortions too. The technique described thus involves two equations in each step, in contrast to the other iterative techniques described by the single equation (10) or (9). However, as the iterations in (26) or (28) do not depend on the signal or its estimates, these equations may be evaluated beforehand and the values of B_k or H_k permanently stored for later use. This additional computational effort is more than offset by the very rapid quadratic convergence (33b) of the proposed technique compared with the linear convergence (11a) of the other iterative techniques. Comparison of Fig. 2 for the iterative algorithm in [18] with Fig. 3 for the proposed technique shows that only four iterations are sufficient compared with the twenty required by the other method. This clearly demonstrates the efficacy of the proposed technique. It has also been shown that if, for the initial guess, H* is wrongly chosen instead of H_0 = H, or when

Fig. 2. (a) Observed signal y(n) of a gamma-ray spectrum [18, Fig. 7(a)]. (b) Blur function h(n) of Fig. 2(a) [18, Fig. 7(b)]. (c) Restored signal after the 20th iteration using the equation x_{k+1} = λy + (δ − λh) * Cx_k, with λ = 2 and C imposing both finite support and positivity constraints [18, Fig. 7(f)]. (d) Results obtained with λ = 1 and no constraints applied (Van Cittert's algorithm) [18, Fig. 7(d)]. (e) Results obtained with λ = 1 and a finite support constraint. The region of support is 45 ≤ n ≤ 290 [18, Fig. 7(e)].

there is additive noise, then the iterations converge to the perturbed solution x*, the deviations of which from the true solution x are given by (48) and (55), respectively. These equations are, similarly to equations (13c) and (19), valid for other iteration techniques. Finally, the application to the deconvolution of blurred signals in Figs. 1 and 3 validates the proposed iterative algorithm.



Fig. 3. (a) Restored signal after the 4th iteration (the peak has been normalised to a maximum value of 1). Results were obtained by applying the proposed restoration technique, along with the positivity constraint, to the signal y(n) in Fig. 2(a). Comparison with Fig. 2(c) reveals the fast convergence of the proposed technique. (b) Restored signal after the 4th iteration, obtained on applying the proposed algorithm with no constraints invoked. Comparison with Fig. 2(d), (e) reveals the efficacy of the proposed algorithm.

Appendix A

Substituting (7) in (9), the general iteration equation is

x_{k+1} = x_k + F(y − Hx_k), (A.1)

where the observation y and the degradation H are related by (1), x_k is the estimate at the kth iteration, and F is some transformation. Some of the iterative techniques listed below are of the above form, with only F defined differently.

(a) Jacobi method [2, 4, 13, 14]: For the solution of the system of linear equations (1), iteration (A.1) is

x_{k+1} = x_k + D^{-1}(y − Hx_k), (A.2)

where D is the diagonal matrix of H.

(b) Gauss-Seidel method [2, 13]: For the solution of the system of linear equations (1), iteration (A.1) is

x_{k+1} = x_k + (D + L)^{-1}(y − Hx_k), (A.3)

where D and L are respectively the diagonal and lower triangular matrices of H.
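The Jacobi and Gauss-Seidel updates (A.2)-(A.3) can be sketched side by side; the diagonally dominant system below is a made-up example, not one from the paper:

```python
import numpy as np

# Hypothetical sketch of the Jacobi update (A.2) and the Gauss-Seidel
# update (A.3) on a small diagonally dominant system, for which both
# iterations converge.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
x_true = np.array([1.0, -1.0, 2.0])
y = H @ x_true

D = np.diag(np.diag(H))                     # diagonal part of H
L = np.tril(H, k=-1)                        # strictly lower triangular part

x_j = np.zeros(3)
x_gs = np.zeros(3)
for k in range(50):
    x_j = x_j + np.linalg.solve(D, y - H @ x_j)         # (A.2)
    x_gs = x_gs + np.linalg.solve(D + L, y - H @ x_gs)  # (A.3)

print(np.allclose(x_j, x_true), np.allclose(x_gs, x_true))
```

Both fit the general form (A.1) with F = D^{-1} and F = (D + L)^{-1} respectively; neither requires H^{-1} explicitly.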

(c) The method of steepest descent [3, 7, 9]: In this method, the operator F is chosen so as to minimise the residual error e_{y,k} by the method of steepest descent. Iteration (A.1) is of the form

x_{k+1} = x_k + α_k p_k, (A.4)

where α_k is the descent value chosen according to some criterion [9] and p_k is the direction of steepest descent or the gradient of e_{y,k}. For the solution of (1) in Hilbert space, the parameters α_k and p_k in (A.4) are [9]

p_k = e_{y,k} = y − Hx_k (A.5)

and

α_k = ⟨p_k, p_k⟩ / ⟨Hp_k, p_k⟩, (A.6)

where ⟨a, b⟩ denotes the scalar product of a and b.
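A minimal sketch of (A.4)-(A.6), using an invented symmetric positive definite H (for which the step size (A.6) is well defined); the matrix and tolerances are hypothetical:

```python
import numpy as np

# Hypothetical sketch of steepest descent (A.4)-(A.6):
# p_k = y - H x_k and alpha_k = <p_k, p_k> / <H p_k, p_k>.
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x_true = np.array([2.0, -1.0])
y = H @ x_true

x = np.zeros(2)
for k in range(60):
    p = y - H @ x                            # (A.5): residual direction
    if np.dot(p, p) < 1e-30:                 # already converged; avoid 0/0
        break
    alpha = np.dot(p, p) / np.dot(H @ p, p)  # (A.6): optimal step size
    x = x + alpha * p                        # (A.4)

print(np.allclose(x, x_true))
```

Unlike the fixed-F schemes above, the step size here adapts at every iteration to minimise the residual along the gradient direction.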

(d) Method of least squares: This is the modified Jacobi method, in which the norm of the residual error e_{y,k} is minimised; it is given by [7]

x_{k+1} = x_k + a D^{-1} H^T (y − Hx_k), (A.7)

where a is the acceleration parameter, H^T is the transpose of H, and D is the diagonal matrix of H.

(e) Constrained iterative restoration methods [3, 7, 12, 16, 18]: In these methods, the variable of interest x is assumed to be constrained by an operator C as follows:

x = Cx. (A.8)


The operator C could be such as to constrain x to be positive, to lie within certain limits, or to be bandlimited, etc. Iteration (A.1) is now of the form

x_{k+1} = Cx_k + F(y − HCx_k). (A.9)

(f) Projection methods: The projection operator [2, 6, 13] is applied on the current estimate to generate a new one. A brief review of projection methods applied to the restoration of signals is given in [14]. A general equation for these methods is of the type

x_{k+1} = x_k + (V^T e_{y,k} / ||H^T V||^2) H^T V, (A.10)

where V is some vector. A solution of (1) by the method of alternating projections is given by

x_{k+1} = y + Q_a P_b x_k, (A.11)

where Q_a and P_b are projection operators. The method of convex projections [19, 21] is of the form

x_{k+1} = P_m P_{m−1} ⋯ P_1 x_k, (A.12)

where P_i, i = 1, 2, …, m, are projectors onto the convex sets C_i, i = 1, 2, …, m.

References

[1] H.C. Andrews and B.R. Hunt, Digital Image Restoration, Prentice-Hall, Englewood Cliffs, NJ, 1977, Chaps. 6, 8.
[2] L. Collatz, Functional Analysis and Numerical Mathematics (translated by H. Oser), Academic Press, New York, 1966.
[3] J.W. Daniel, The Approximate Minimization of Functionals, Prentice-Hall, Englewood Cliffs, NJ, 1971, p. 70.
[4] G. Ferrano and H. Maitre, "TV-optical iterative picture restoration: Experimental results", Optics Commun., Vol. 38, Nos. 5-6, September 1981, pp. 336-339.
[5] B.R. Frieden, "Image enhancement and restoration", in: T.S. Huang, ed., Picture Processing and Digital Filtering, Springer, Berlin, 1975, pp. 177-248.
[6] A.S. Householder, The Theory of Matrices in Numerical Analysis, Blaisdell, New York, 1964, Chap. 4, pp. 98-103.
[7] Y. Ichioka, Y. Takubo, K. Matsuoka and T. Suzuki, "Iterative image restoration by the method of steepest descent", J. Opt., Vol. 12, No. 1, January/February 1981, pp. 35-41.
[8] B.A. Jansson, R.H. Hunt and E.K. Plyler, "Resolution enhancement of spectra", J. Opt. Soc. Amer., Vol. 60, No. 5, May 1970, pp. 596-599.
[9] L.V. Kantorovich and G.P. Akilov, Functional Analysis (translated by H.L. Silcock), Pergamon Press, Oxford/New York, 1982.
[10] A. Katsaggelos, J. Biemond, R.M. Mersereau and R.W. Schafer, "Nonstationary iterative image restoration", presented at 1985 Internat. Conf. on Acoustics, Speech and Signal Processing, Tampa, FL, March 26-29, 1985.
[11] A. Katsaggelos, J. Biemond, R.M. Mersereau and R.W. Schafer, "A general formulation of constrained iterative restoration algorithms", presented at 1985 Internat. Conf. on Acoustics, Speech and Signal Processing, Tampa, FL, March 26-29, 1985.
[12] S. Kawata and Y. Ichioka, "Iterative image restoration for linearly degraded images, I. Basis", J. Opt. Soc. Amer., Vol. 70, No. 7, July 1980, pp. 762-768.
[13] E. Kreyszig, Introductory Functional Analysis with Applications, Wiley, New York, 1978.
[14] H. Maitre, "Iterative picture restoration using video optical feedback", Comput. Graphics and Image Process., Vol. 16, June 1981, pp. 95-115.
[15] W.K. Pratt, Digital Image Processing, Wiley, New York, 1978, Chap. 14, pp. 378-425.
[16] R. Prost and R. Goutte, "Discrete constrained iterative deconvolution algorithms with optimized rate of convergence", Signal Processing, Vol. 7, No. 3, December 1984, pp. 209-230.
[17] A. Rosenfeld and A.C. Kak, Digital Picture Processing, Academic Press, New York, 1976, Chap. 7, pp. 203-255.
[18] R.W. Schafer, R.M. Mersereau and M.A. Richards, "Constrained iterative restoration algorithms", Proc. IEEE, Vol. 69, No. 4, April 1981, pp. 432-450.
[19] M.I. Sezan and H. Stark, "Image restoration by the method of convex projections: Part 2, Applications and numerical results", IEEE Trans. Med. Imaging, Vol. MI-1, No. 2, October 1982, pp. 95-101.
[20] D.C. Youla, "Generalized image restoration by the method of alternating orthogonal projections", IEEE Trans. Circuits and Systems, Vol. CAS-25, No. 9, September 1978, pp. 694-702.
[21] D.C. Youla and H. Webb, "Image restoration by the method of convex projections: Part 1, Theory", IEEE Trans. Med. Imaging, Vol. MI-1, No. 2, October 1982, pp. 81-94.
[22] Y.H. Yum and S.B. Park, "Optimum recursive filtering of noisy two-dimensional data with sequential parameter identification", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No. 3, May 1983, pp. 337-344.
95-101. D.C. Youla, "Generalized image restoration by the method of alternating orthogonal projections", IEEE Trans. on Circuits and Systems, Vol. CAS-25, No. 9, Sep- tember 1978, pp. 694-702. D.C. Youla and H. Wehb, "Image restoration by the method of convex projections: Part 1--theory", IEEE Trans. on Med. Image, Vol. MI-1, No. 2, October 1982, pp. 81-94. Y.H. Yum ahnd S.B. Park, "Optimum recursive filtering of noisy two-dimensional data with sequential parameter identification", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No. 3, May 1983, pp. 337-344.