

Image Reconstruction and Restoration: Overview of Common Estimation Structures and Problems

GUY DEMOMENT

Abstract—Developments in the theory of image reconstruction and restoration in the past 20 or 30 years are outlined. Particular attention is paid to common estimation structures in the field and to practical problems not properly solved yet.

I. INTRODUCTION AND OUTLINE

RESTORATION and reconstruction of images can be defined as the general problem of estimating a two-dimensional (2-D) object from a degraded version of this object. The mathematical form of the degradation process depends on the problem at hand. In image restoration, the unknown object and its degraded observed version, referred to as the image, are both scalar functions defined

on R² or Z². In image reconstruction, observations result from the interaction between the unknown object and some scattering wave. The 2-D object must be reconstructed from a finite set of projections, i.e., scalar functions defined on R² or Z².

This problem has been extensively studied for its obvious practical importance as well as its theoretical interest. Literature on the subject is abundant and highly varied since the problem arises in almost every branch of engineering and applied physics. It is, for instance, frequently encountered in various fields of application such as optics, X-ray or diffraction tomography, radioastronomy, biomedical engineering, machine vision, nondestructive evaluation, geophysics, etc. The variety of reference sources quoted in this paper is evidence for this fact.

Hence, the aim of the paper is not to give an exhaustive overview of the literature, which would be too ambitious, but to stress two major points.

First, most existing image reconstruction and restoration methods have a common estimation structure in spite of their apparent variety. This can be assumed from the literature [8], [12], [19], [24] and was clarified a few years ago in a series of seminal papers by Titterington [22], [23]. Everything was summed up in a single word: regularization. Since then, changes have occurred in the field. The common theoretical limitations presented by these methods have led to a broadening of their theoretical

Manuscript received March 17, 1989.

The author was with the Laboratoire des Signaux et Systèmes (CNRS/ESE/UPS), École Supérieure d'Électricité, 91192 Gif-sur-Yvette Cédex, France. He is now with the Department of Physics, Université de Paris-Sud, Orsay, France.

IEEE Log Number 8931343.

foundations, comparable to other recent changes occurring in the whole signal processing field. The same phenomenon occurred recently in the area of image processing, particularly with the introduction of random Markov fields and related estimation methods [80]-[89]. However, the same common structures are still present and can be used as a unifying framework for understanding this rather complex field, which is rapidly evolving through the influence of statisticians, electrical engineers, and machine vision experts.

Second, in addition to a common estimation structure, most image reconstruction and restoration methods present some common practical limitations. The most sophisticated methods require the use of several tuning parameters (referred to as hyperparameters) which should be estimated from the data. Methods are available to perform this task. Some of them are reputed to give correct results, others not, and a great effort should be made to clarify this aspect.

In order to set the framework and make these points, the paper is organized as follows.

In Section II, the problem of image reconstruction and restoration is formulated. Although it reduces in most practical situations to solving a system of linear equations, this special case is important because it provides an easy way of studying the ill-posed or ill-conditioned nature of these inversion problems.

Section III describes some of the current regularization approaches used to solve this problem. The important concepts of a priori information and compound criterion are introduced.

In Section IV, we give a Bayesian interpretation of these regularization techniques which clarifies the role of the tuning parameters and gives indications on how they could be estimated.

Sections V and VI are devoted to the practical aspects of computing the solution, first when the hyperparameters are known, and second when they must be estimated.

In Section VII, conclusions are drawn and points that still need to be investigated are outlined.

The statistical approach taken in the paper allows us to cover most probabilistic methods, and a number of deterministic ones as well. However, due to the wide range of the subject and space limitations, some of them, although useful in some situations, have been left out. Justice is done to them in the references, which are organized under nine subheadings roughly corresponding to the sections of




the paper. Although the division is somewhat arbitrary, we hope it will enable the reader to acquaint himself with results that could not be covered in this paper.

II. THE ILL-POSED NATURE OF IMAGE RECONSTRUCTION AND RESTORATION

An image is generally defined as a real or complex-valued function of two space variables belonging to some support region. Although this support may be continuous, it is commonly sampled on a rectangular grid. This defines a set of pixels, and the image is represented by a vector x of N pixel intensity values, N being typically a power of two and also a huge number.

In image reconstruction and restoration problems, the original object cannot be directly measured and must be either reconstructed or restored from the observed data in order to remove the effects of the observation mechanism.

In image restoration, the cause of degradation is some distortion which can be modeled as

y = A(x) ∘ b   (2.1)

with the following interpretation: A(·) is a degradation mechanism, b denotes a corruptive noise process, and the symbol ∘ represents a pixel-by-pixel interaction. The distortion mechanism often involves convolution or blurring of x by some point spread function and the addition of an independent Gaussian white noise, with zero mean and variance σ_b², thus leading to the following simpler model:

y = Ax + b.   (2.2)
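As a concrete illustration, the following is a minimal numerical sketch of the discrete observation model (2.2). Everything in it (the 1-D object, the Gaussian point spread function and its width, the sizes, and the noise level) is an illustrative assumption, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D "object": a few point sources on a zero background.
N = 64
x = np.zeros(N)
x[[12, 30, 31, 50]] = [4.0, 2.0, 2.0, 3.0]

# Build the distortion matrix A from a hypothetical Gaussian point spread function.
def gaussian_psf_matrix(n, width):
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / width) ** 2)
    return A / A.sum(axis=1, keepdims=True)   # each row sums to one

A = gaussian_psf_matrix(N, width=2.0)

# Observation model (2.2): blur plus zero-mean white Gaussian noise of variance sigma_b^2.
sigma_b = 0.02
y = A @ x + rng.normal(0.0, sigma_b, size=N)
```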

Sometimes, however, the distortion also involves a nonlinear pixel transformation and a multiplicative noise. The problem of restoring x is then much more difficult [1], [6], [99], [114], even though some methods developed in the previous case can still be formally applied. In model (2.2), matrix A and the statistical characteristics of the noise are implicitly assumed to be known. But this assumption is not always fulfilled, and these quantities may also have to be estimated [72], [95], [98], [102]. This is, for instance, a common situation in geophysics. In this paper, we will deal only with the linear, additive, and Gaussian case with a known distortion model. Although this is somewhat restrictive, the corresponding problem is generic in the sense that its solution is the basis of many other ones.

The problem of image reconstruction is a little more complicated since the true object x is no longer a measure of light intensity over some scene, but a mapping of some physical property, e.g., a density of matter in a physical object, an acoustical impedance, a complex permittivity [3], [5], etc. The true object must be reconstructed, i.e., decoded from wave interaction results that are not images. These data are commonly called projections, even when, because of diffraction effects, for instance, they are not projections of the object in a geometrical sense, i.e., line integrals, or when they are simply object samples in the space or frequency domains. In most existing methods, the image reconstruction problem is also

modeled as that of solving a system of linear equations. When the object is illuminated by a radiating source with a very short wavelength, as in X-ray tomography, the data y are explicit and linear functions of the object x. But when diffraction effects cannot be neglected, this is only a rough approximation which corresponds to a first-order expansion of the solution to the corresponding wave propagation equations.

The whole set of data is obtained by rotating the object in the incident radiated field and by measuring its projections at various incidence angles belonging to a discrete finite set. All the data are then concatenated in a single vector y, thus leading to a model similar to (2.2):

y = Ax + b.   (2.2)

In both cases, our image processing problem is to obtain an estimate x̂ of the true object x from the data vector y. Under our assumptions, a solution which emerges immediately is the least squares one with minimum norm, which is

x̂₀ = (A'A)⁻¹ A' y   (2.3)

when the normal matrix A'A is regular, or else the generalized inverse solution

x̂₀ = A⁺ y.   (2.4)

This seems to be a reasonable choice, from a statistical standpoint at least, since under our assumptions, x̂₀ is an unbiased and minimum variance solution.

Although the effective computation of x̂₀ can be made computationally tractable in actual problems with huge dimensions, this solution is usually unacceptable since A is most often ill conditioned. The noise component amplification exceeds any acceptable level.

The ill conditioning is a direct consequence of the ill-posedness of the initial continuous-data problem which is approximated by (2.2). In the restoration problem, the image is a convolution integral

y(s) = ∫ h(s - r) x(r) dr   (2.5)

which is a particular case of a Fredholm equation of the first kind. The kernel of this integral equation, h, is the 2-D impulse response or point spread function (PSF) of the imaging system. Since the data are erroneous or noisy, we cannot expect to solve this equation exactly, and the true solution must be approximated in some sense. The concept of distance between images is then a natural way of evaluating the quality of an approximation, which explains why x and y are often assumed to belong to Hilbert spaces. This continuous-data problem can then be rewritten as

y = Ax   (2.6)

where y and x are now elements of infinite dimension function spaces Y and X, respectively, and A is a linear


operator corresponding to an integral equation of the first kind. To be acceptable, the solution must satisfy three conditions: the usual mathematical conditions of existence and uniqueness, and the less usual physical condition of stability or continuity with respect to the data since these data are inevitably noisy. These are the so-called Hadamard conditions for a problem to be well posed [9]-[18].

Let y and x belong to the same Hilbert space and let the PSF be square integrable. This assumption is fulfilled by most physical PSF's. The direct problem, i.e., the calculation of the imaging system response y to the true object x, is then well posed in the Hadamard sense: a small error δx on the data leads to a small error δy on the solution. However, this condition is not satisfied in the corresponding inverse problem where the true object x is to be computed from the response y:

x = A⁻¹ y.   (2.7)

In fact, the necessary and sufficient conditions of existence, uniqueness, and stability of the solution are, respectively,

y ∈ Im(A)   (2.8)

Ker(A) = {0}   (2.9)

Im(A) = cl(Im(A))   (2.10)

where Im(A) and Ker(A) are the range and the null space of A, respectively, and where cl(Im(A)) is the closure of Im(A). When the PSF h is square integrable, the Riesz-Fréchet theorem indicates that the operator A is bounded and compact [12]. But the range of a compact operator is not closed (except in the degenerated case where its dimension is finite, which corresponds to a separable PSF). This means that the inverse operator A⁻¹ is unbounded, the range Im(A) is not closed, and the third Hadamard condition is not met for the inverse problem.

These difficulties remain unchanged in the symmetric problem

A*y = A*Ax   (2.11)

where A* is the adjoint operator of A, i.e., ⟨Ax, y⟩ = ⟨x, A*y⟩ for all x ∈ X and y ∈ Y; ⟨·,·⟩ represents the scalar products used in the image and object spaces.

Additional light can be shed on these problems using a spectral approach. Equation (2.11) is used since A*A is a self-adjoint, nonnegative definite operator, and hence has an eigensystem.

Let {λₙ} denote the eigenvalues of A*A (and of AA*), ordered and counted with their multiplicity: λ₁ ≥ λ₂ ≥ ⋯ ≥ λₙ ≥ ⋯ ≥ 0. According to the Hilbert-Schmidt theorem [12], λₙ goes to zero as n → ∞, and the alternative is that either the limit is reached for some finite n = n₀ or the limit is never reached for any finite value of n. Let E be the set of indexes n such that λₙ ≠ 0, and let {uₙ, vₙ; σₙ} be a singular system of operator A. The singular vectors uₙ, vₙ and the singular values σₙ satisfy the following conditions:

for n ∈ E:

σₙ² = λₙ,   AA*uₙ = λₙuₙ,   A*Avₙ = λₙvₙ,
Avₙ = σₙuₙ,   A*uₙ = σₙvₙ.   (2.12)

Equation (2.11) has a solution for a given image y ∈ Y if and only if

Σ_{n∈E} |⟨y, uₙ⟩|² / σₙ² < ∞.   (2.13)

Condition (2.13) is derived from Picard's criterion for solvability of an integral equation of the first kind [12]. In order for (2.13) to be fulfilled, the components ⟨y, uₙ⟩ of the image expansion on the set of eigenfunctions {uₙ} have to decrease faster than the singular values σₙ when n → ∞. Since, in practice, these functions are rapidly oscillating when n increases, this condition can be intuitively interpreted as a limitation on the high-frequency content of the perfect image y.

In addition, when the existence and the uniqueness are assumed, a perturbation or noise δy on the data gives the perturbed or noisy solution [12]

x̂ + δx̂ = Σ_{n∈E} σₙ⁻¹ ⟨y + δy, uₙ⟩ vₙ.   (2.14)

This shows that the error in each component of this expansion is proportional to the inverse signal-to-noise ratio ⟨δy, uₙ⟩/⟨y, uₙ⟩. This is a first source of degradation since this ratio may often be much greater than unity when the noise δy and the perfect image y have distinct spectral properties. However, even when the signal-to-noise ratio remains high over the whole spectrum {uₙ}, the error components ⟨δy, uₙ⟩ are divided by the singular values which go to zero as n → ∞. Hence, small errors in the image induce large errors in the solution. This explains why usual trial-and-error methods cannot be applied to solve these integral equations of the first kind, since a small observation error would not ensure a small error on the solution

(2.15)

This situation frequently occurs in inverse problems involving different function spaces for the object and the observed data, and careless application of inverse filters may lead to unacceptable results. In reconstruction, the situation is even more complicated since the uniqueness of the solution may also become a difficult problem [3].

In the discrete case, x and y belong to finite dimensional spaces and the linear operator A is a matrix A. Equation (2.2) has a unique solution of minimal norm x̂₀ = A⁺y which depends continuously on y since the generalized

inverse A⁺ is always bounded [12]. The problem is then well posed in the least squares sense. The advantage of adopting this generalized inverse solution is that an inconsistent finite system of linear algebraic equations will not be ill posed in this sense, while it is ill posed in the sense of Hadamard.
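The instability described above is easy to reproduce numerically. The following sketch (same toy Gaussian-blur model as in the earlier snippet, all values illustrative) computes the generalized inverse solution x̂₀ = A⁺y of (2.4) with and without observation noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Gaussian-blur model, as before (assumed sizes and PSF width).
N = 64
i = np.arange(N)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

x = np.zeros(N); x[[12, 30, 31, 50]] = [4.0, 2.0, 2.0, 3.0]
y_clean = A @ x
y_noisy = y_clean + rng.normal(0.0, 0.02, size=N)

# Generalized-inverse (minimum-norm least-squares) solution x_hat0 = A^+ y, cf. (2.4).
x0_clean = np.linalg.pinv(A) @ y_clean
x0_noisy = np.linalg.pinv(A) @ y_noisy

print("condition number of A:", np.linalg.cond(A))
print("reconstruction error, noiseless data:", np.linalg.norm(x0_clean - x))
print("reconstruction error, noisy data:    ", np.linalg.norm(x0_noisy - x))
# The tiny observation noise is amplified by the near-zero singular values of A,
# so the second error is typically orders of magnitude larger than the first.
```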


But even in this setting, the inversion problem has, from a numerical viewpoint, instability properties akin to those of ill-posed problems. The spectral decomposition (2.14) is still valid, with the only difference being that the number of singular values of the normal matrix A'A is now finite and equal to N. These singular values cannot be expressed in closed form, even when the normal matrix is Toeplitz. However, in the latter case, A'A can be approximated by a circulant matrix whose eigenvectors are the columns of the discrete Fourier transform matrix [108]. The spectral decomposition (2.14) is then a Fourier spectrum with the usual spatial frequency interpretation. In the circulant case, the eigenvalues of A'A are the PSF Fourier transform values computed at regularly spaced frequencies. When the degradation is a low-pass filter, the σₙ take on very small values when n → N. Even if we exclude the zero singular values, as in computing A⁺, there will still be singular values close to zero. The matrix A is ill conditioned. The weight σₙ⁻¹⟨δy, uₙ⟩ in (2.14) can thus become very large for those σₙ close to zero, even when δy is small. We are in a somewhat paradoxical situation: the finer the pixel grid (or equivalently, the better the discrete approximation), the better the approximation of the singular values by the PSF, and the worse the conditioning of matrix A!

Problems of this kind arise not only in image processing, but in the whole applied physics field. A general principle for dealing with the instability of the problem is that of regularization [9]-[18]. It consists mainly of changing our idea of what a solution is. Obtaining the true solution from imperfect data is impossible. When regularizing the problem, this fact is acknowledged and the initial equation is considered to define, in fact, only a class of admissible solutions {x̂}:

{x̂ : ‖y - Ax̂‖² ≤ ε}   (2.16)

among which an acceptable solution must be sought. But to do this, the problem must be stated more completely, which implies that some extra or a priori information be included.

As mentioned in the Introduction, some methods do not fall into the regularization framework (e.g., homomorphic filtering [1]). However, most of them do, and since regularization is conceptually very simple and intuitively natural, and since it can be given a conventional as well as a Bayesian statistical interpretation, it can be used as a unifying framework for this rather complex field of image processing.
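A minimal sketch of the circulant approximation discussed above: for a periodic blur, the eigenvalues of the blur matrix are the DFT values of the PSF, so the conditioning can be read directly off the PSF spectrum. The Gaussian PSF and the rule "PSF width fixed as a fraction of the field of view" are illustrative assumptions:

```python
import numpy as np

# Circulant (periodic) convolution with a Gaussian point spread function:
# the eigenvalues of the circulant blur matrix are the DFT values of the PSF.
def psf_spectrum(n, width):
    i = np.arange(n)
    d = np.minimum(i, n - i)                  # circular distance to pixel 0
    h = np.exp(-0.5 * (d / width) ** 2)
    h /= h.sum()
    return np.fft.fft(h)                      # eigenvalues of the circulant blur matrix

# Refine the grid while keeping the physical PSF width fixed (3% of the field of view).
for n in (32, 64, 128):
    lam = np.abs(psf_spectrum(n, width=0.03 * n))
    print(f"n = {n:4d}   smallest/largest eigenvalue magnitude = {lam.min() / lam.max():.3e}")
# The ratio typically shrinks as the grid is refined: the better the discrete
# approximation, the worse the conditioning, exactly the paradox noted above.
```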

III. REGULARIZATION OF AN ILL-POSED PROBLEM

Numerous methods have been proposed for solving and regularizing various types of ill-posed problems [1]-[8], [12]. They can be separated into two categories: regularization in functional spaces, and control of dimensionality. The framework proposed here belongs to the first category. The basic feature is the introduction of a compromise between fidelity to the data and fidelity to some prior information about the solution. This

compromise is measured with a single optimality criterion and can be physically justified as follows.

The least squares solution x̂₀ is the minimizer of the total energy of the residual error between the model Ax and the data y. In this sense, it provides maximum fidelity to the data. But for a wide-band observation noise, the energy of the restored or reconstructed object at high spatial frequencies is high, mainly due to the noise. In general, this solution x̂₀, although unbiased, is rejected since the true image is expected to be significantly smoother. Thus, some infidelity to the data must be introduced in order to obtain a smoother solution. The question which immediately arises is, how can we get a smoother solution? In order to succeed in suppressing the noise without distorting the original image too much, we need information on the spectral content of the image. A simple and very natural way of performing such smoothing or regularization is to define two measures of distance J₁(x, x̂₀) and J₂(x, x̂∞) between x and two extreme pictures x̂₀ and x̂∞. x̂₀ is the ultrarough least squares solution, and x̂∞ corresponds to an a priori ultrasmooth object. We then balance the fidelity of the solution to the data, which is measured by J₁, and the fidelity to some prior information, which is measured by J₂. Usually, x̂∞ turns out to be a picture of uniform intensity, but it may also be some reference map based on prior knowledge [1], [72]. A regularized solution x̂(μ, y) is simply defined as the solution of the following problem:

x̂(μ, y) = arg min_{x∈X} { J₁(x, x̂₀) + μ J₂(x, x̂∞) }.   (3.1)

Note that x̂ depends on y through x̂₀ only. The choice of the measures J₁ and J₂ defines a particular path between x̂₀ and x̂∞ in the set X of all possible objects [22]. This is a qualitative choice that determines the manner in which regularization is done. On the other hand, the choice of μ, which is the regularization parameter, is quantitative and allows the user to decide how far to go along this path to achieve an appropriate degree of smoothing. Perfect fidelity to the data is achieved if μ = 0, whereas perfect fidelity to the priors is achieved if μ = ∞.

Mathematical conditions on J₁ and J₂ are weak. Actually, J₁ and J₂ do not even need to be true distances in the usual mathematical sense. They simply need to be positive measures which vanish only if their arguments are identical. They may not be symmetric. Since J₁ and J₂ measure different properties of the solution, they need not be the same and, in fact, they are usually different. But the surface mapped out by the compound criterion J₁(x, x̂₀) + μ J₂(x, x̂∞) has to be a straightforward convex bowl with a unique lowest point to yield a unique solution x̂(μ, y).

Although the scheme presented above allows us to cover most stochastic and deterministic restoration and reconstruction methods, two popular approaches do not fit well into this framework. They are briefly outlined below.

The first one belongs to the same functional regularization category and uses compactness and a priori bounds.


For some problems, the prior information available on the object can be expressed as a set of constraints such as a priori bounds, each defining a convex set Cₘ in the solution space X. An example of such a convex set is given by (2.16). It has long been known that restriction to a compact set ensures well-posedness [12]. Problems of this type are solved using the method of projections onto convex sets (POCS) [117]-[119], [121]. However, this method does not yield a unique solution since all elements of the intersection of all the constraint sets, when nonempty, are equally acceptable. An iterative search for the solution depends on initial conditions, which is hardly satisfactory. To remedy this drawback, and also to exploit the statistical properties of the noise, an alternative was recently introduced [115] as a minimization of some distance to the data, such as J₁, under the set of previous constraints Cₘ:

x̂(y) = arg min_{x∈C} J₁(x, x̂₀),   C = ∩ₘ Cₘ.   (3.2)

This does yield a unique solution, but clearly differs from our scheme.

The second family of methods is the well-known truncated singular value decomposition (TSVD) [1] where regularization is performed through control of the dimensionality of the solution space. The summation in (2.14) is taken on a limited number of singular values λₙ, n ≤ p, which significantly differ from zero, in order to avoid excessive noise contamination. This is equivalent to projecting the solution onto a "significant" subspace spanned

by the remaining singular vectors vₙ, n = 1, 2, ⋯, p. This method yields solutions that are numerically well conditioned. In most practical situations, the singular values decrease slowly when n → N and the truncation order p must be chosen by comparing the norms of the noise and of the residual errors [cf. (2.16)]. But the fundamental difference, and in some sense limitation, with respect to other regularizing methods lies in the fact that the a priori information depends on the distorting system and not on the solution itself. TSVD regularization means that the solution is limited exclusively by the properties of the observation device, more precisely, by the width of the singular value spectrum. Such a conclusion clearly contradicts the principles of superresolution techniques. To remedy this, weighted singular value decompositions have been introduced [12], [18]. However, these have no simple functional interpretation and are far less popular than TSVD.

Despite the generality of (3.1), comparatively few distance measures have been used in image reconstruction and restoration. To our knowledge, J₁ and J₂ have always been chosen from the same limited set of candidates, which is listed below.
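Before turning to these candidate distances, here is a minimal sketch of the TSVD regularizer just described, on the same illustrative toy blur model; the truncation orders tried are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy Gaussian-blur model (assumed), as in the earlier sketches.
N = 64
i = np.arange(N)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x = np.zeros(N); x[[12, 30, 31, 50]] = [4.0, 2.0, 2.0, 3.0]
y = A @ x + rng.normal(0.0, 0.02, size=N)

def tsvd_solve(A, y, p):
    """Truncated SVD solution: keep only the p largest singular values."""
    U, s, Vt = np.linalg.svd(A)
    coeff = (U[:, :p].T @ y) / s[:p]          # divide only by the retained singular values
    return Vt[:p].T @ coeff

for p in (5, 15, 30, 64):                      # illustrative truncation orders
    xp = tsvd_solve(A, y, p)
    print(f"p = {p:2d}   ||x_p - x|| = {np.linalg.norm(xp - x):.3f}")
# Too small a p oversmooths, too large a p lets the noise back in;
# p is chosen, e.g., by comparing the residual norm to the expected noise norm.
```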


A. Quadratic (or Euclidean) and Weighted Quadratic Distances

A Euclidean distance between two pictures x₁ and x₂ is, of course,

J_Q(x₁, x₂) = Σ_{i=1}^{N} (x₁ᵢ - x₂ᵢ)² = (x₁ - x₂)'(x₁ - x₂) = ‖x₁ - x₂‖².   (3.3)

This is a particular case of the weighted quadratic distances

J_W(x₁, x₂) = Σ_{i=1}^{N} Σ_{j=1}^{N} wᵢⱼ (x₁ᵢ - x₂ᵢ)(x₁ⱼ - x₂ⱼ) = (x₁ - x₂)' W (x₁ - x₂) = ‖x₁ - x₂‖²_W   (3.4)

where W = {wᵢⱼ} is a symmetric, positive definite matrix which is designed to outline certain special features of the desired proximity.

A distance of this type is the common choice for J₁ in most regularization schemes when the noise b is assumed to be white, Gaussian, zero mean, and independent from x:

J₁(x, x̂₀) = (x - x̂₀)' A'C⁻¹A (x - x̂₀)   (3.5)

where C = diag{σᵢ²} and W = A'C⁻¹A.

B. Kullback Distances

A very important feature of X, the class of real-world objects, is that in many picture processing problems, the pixel intensity is mainly nonnegative. It is highly desirable that this important physical property be preserved in the restored or reconstructed object. There is a variety of ways for achieving this, one of them being to consider that the object may be identified, after proper normalization, with a probability distribution, and then to use distance measures between such distributions. The Kullback distance is often used [64]-[79]:

J_K(x₁, x₂) = Σ_{i=1}^{N} x₁ᵢ log(x₁ᵢ / x₂ᵢ).   (3.6)

Note that this distance measure is nonsymmetric with respect to its arguments and, in this sense, pathological. This distance is the negative of the relative entropy of distribution x₁ with respect to the prior distribution x₂.

C. Roughness Measures

A simple way of measuring the roughness of an image is to apply some finite difference operator to it and then take the Euclidean norm of the resulting differentiated image. Since such a differentiation operation is linear with respect to the original image, the resulting measure of roughness is quadratic:

Q(x) = ‖∇ₖ(x)‖² = ‖Dₖx‖² = x'Dₖ'Dₖx   (3.7)

where k is the order of the difference operator ∇ₖ(·). Usually, k = 1 or 2 [26]. Q(x) is minimized if all the components of x are equal. Clearly, this is a special case


of weighted quadratic distance where the second argument x₂ is any uniform image and the weighting matrix is W = Dₖ'Dₖ.

D. Local Energy Functions

All previous distances measure some global property of an image which is implicitly considered as a homogeneous field. But real images are often inhomogeneous, with nearly uniform regions separated by relatively sharp edges. To break off the homogeneity, local energy functions have been recently introduced [80]-[86]. In this approach, the original image x is regarded as a pair x = (z, t) where z is the vector of observable pixel intensities and t denotes a dual vector of unobservable additional features such as edges, textures, etc. To express local properties of the image, a neighborhood system {dᵢ} is defined, where dᵢ is a collection of pixels that are assumed to interact directly with pixel i. As an elementary example, let us suppose that the intensity values of x are discrete. A basic characteristic of most images is that intensity values at nearby locations are likely to be nearly the same. To express this property, we define

V(zᵢ, zⱼ) = -1   if zᵢ = zⱼ, j ∈ dᵢ
          = +1   if zᵢ ≠ zⱼ, j ∈ dᵢ
          =  0   otherwise.   (3.8)

These bond energies, computed among neighboring pixels, are added to define the energy of the image:

J(z) = Σᵢ Σⱼ V(zᵢ, zⱼ) = Σᵢ Vᵢ(x)   (3.9)

where Vᵢ is the potential of the ith set of neighbor pixels and where i ranges over the set of all sites which are pairs of neighbors. The idea is that J(z) is small for pictures consistent with the properties we wish to use to define our priors; in this example, pictures for which neighboring pixels tend to have the same intensity value.

Of course, the property of locally constant intensities is not sufficient, and we have to introduce additional properties of other features such as edges. For this, we define a global energy of the form

J_G(x) = J₁(z) + J₂(t) + J₃(z, t)   (3.10)

composed of three terms: an intensity energy J₁(z), an edge energy J₂(t), and a third term J₃(z, t) that describes the interaction between edges and pixels [81]-[83], [86]. These local energy functions are also of the form Σᵢ Vᵢ. They have close connections to statistical physics and Gibbs distributions. This point will be discussed in the next section. The design of these potentials is generally arduous and still empirical.

Having chosen the appropriate distances J₁ and J₂, the regularized solution x̂(μ, y) is obtained for a given value of μ by minimizing the corresponding criterion. However, this brings up some practical questions about the complexity of the calculations, the complication which arises from the positive character of the solution, and the possible existence of local minima of the criterion.
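A minimal sketch of the bond energies (3.8)-(3.9) on a 4-neighbor grid, for a small binary image; the image sizes and the use of only the intensity term (no edge process) are illustrative simplifications:

```python
import numpy as np

# Pairwise potential of (3.8): -1 if two neighboring pixels share the same
# (discrete) intensity, +1 otherwise; the energy (3.9) sums it over all neighbor pairs.
def bond_energy(z):
    right = np.where(z[:, 1:] == z[:, :-1], -1, 1).sum()   # horizontal neighbor pairs
    down  = np.where(z[1:, :] == z[:-1, :], -1, 1).sum()   # vertical neighbor pairs
    return right + down

rng = np.random.default_rng(3)
smooth = np.zeros((16, 16), dtype=int)
smooth[:, 8:] = 1                                # two flat regions separated by one edge
noisy = rng.integers(0, 2, size=(16, 16))        # i.i.d. binary noise

print("J(smooth) =", bond_energy(smooth))        # low energy: consistent with the prior
print("J(noisy)  =", bond_energy(noisy))         # high energy: penalized by the prior
```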

The literature demonstrates that there is no satisfactory answer to these three questions simultaneously. As a rule, the calculations are easiest when J₁ and J₂ are quadratic:

x̂(μ, y) = arg min_x { (y - Ax)'C⁻¹(y - Ax) + μ x'Kx }.   (3.11)

This corresponds to the classical regularization methods of Phillips, Twomey, and Tikhonov [13], [15]-[16], which have dominated the image restoration literature for the past 15 years [24]-[29], [44]-[63]. The solution to (3.11) is unique, linear with respect to the data, and can be calculated explicitly:

x̂ = (A'C⁻¹A + μK)⁻¹ A'C⁻¹ y.   (3.12)
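A minimal sketch of the closed-form solution (3.12), taking J₂ as the first-order roughness measure (3.7), so K = D'D, and C = σ_b²I; the toy model and the values of μ swept are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy Gaussian-blur model (assumed), as in the earlier sketches.
N = 64
i = np.arange(N)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x = np.zeros(N); x[[12, 30, 31, 50]] = [4.0, 2.0, 2.0, 3.0]
sigma_b = 0.02
y = A @ x + rng.normal(0.0, sigma_b, size=N)

# First-order difference operator D, so K = D'D penalizes roughness, cf. (3.7).
D = np.eye(N - 1, N, k=1) - np.eye(N - 1, N)
K = D.T @ D
Cinv = np.eye(N) / sigma_b**2                    # C = sigma_b^2 I

def tikhonov(mu):
    """Closed-form regularized solution (3.12)."""
    return np.linalg.solve(A.T @ Cinv @ A + mu * K, A.T @ Cinv @ y)

for mu in (1e-8, 0.1, 10.0, 1000.0):             # illustrative regularization weights
    x_hat = tikhonov(mu)
    print(f"mu = {mu:10.2e}   ||x_hat - x|| = {np.linalg.norm(x_hat - x):.3f}")
# mu -> 0 approaches the unstable least-squares solution; large mu oversmooths.
```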

But apart from the computational problems, which will be considered in Section V, this solution is not guaranteed to be positive, a point which is unacceptable in many applications. To remedy this deficiency, we can either directly impose positivity on the solution by nonlinear programming or we can use another distance J₂, e.g., the Kullback distance. As the minimization of { J₁(x, x̂₀) + μ J₂(x, x̂∞) } is equivalent to that of { J₂(x, x̂∞) + (1/μ) J₁(x, x̂₀) }, this yields the methods of reconstruction or restoration which are called maximum entropy methods [64]-[79]. Unfortunately, it is no longer possible to obtain a closed-form solution, and minimization of the criterion must be carried out iteratively.

When we wish to improve the description of the image by using contours and textures as well as the pixel intensity values, the difficulties in obtaining the solution are considerably increased. The criterion no longer presents a unique minimum and the global minimum must be sought by rather unwieldy techniques [80]-[86].

In addition to these computational difficulties, other difficulties arise such as the choice of the value of the regularization coefficient μ and possibly of other parameters which characterize the distances J₁ and J₂. These are the hyperparameters of the problem. There is a large variety of methods for determining them, but before considering them in Section VI, it will be useful to study the Bayesian interpretation of regularization methods.
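To illustrate the loss of the closed form, here is a deliberately naive sketch that minimizes a quadratic data term plus the Kullback term (3.6) by projected gradient descent with a positivity clip. It is not one of the specialized maximum entropy algorithms of [64]-[79]; the prior picture, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy Gaussian-blur model (assumed); a positive object on a positive background.
N = 64
i = np.arange(N)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x = np.full(N, 0.1); x[[12, 30, 31, 50]] += [4.0, 2.0, 2.0, 3.0]
sigma_b = 0.02
y = A @ x + rng.normal(0.0, sigma_b, size=N)

m = np.full(N, x.sum() / N)          # flat prior picture x_inf (illustrative)
mu, step, eps = 0.01, 2e-4, 1e-8     # illustrative hyperparameters

x_hat = m.copy()                     # start from the prior picture
for _ in range(10000):
    grad_data = -A.T @ (y - A @ x_hat) / sigma_b**2          # gradient of the quadratic fit term
    grad_ent  = mu * (np.log(x_hat / m) + 1.0)               # gradient of the Kullback term (3.6)
    x_hat = np.clip(x_hat - step * (grad_data + grad_ent), eps, None)  # keep the iterate positive

print("smallest entry of x_hat:", x_hat.min())   # stays >= eps by construction
print("||x_hat - x|| =", np.linalg.norm(x_hat - x))
```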

IV. BAYESIAN INTERPRETATION OF REGULARIZATION PROBLEMS

An important part of statistical inference methods is based on the use of a priori information on the parameters to be estimated, which is added to the information provided by the data. It is thus not surprising, taking into account the profound nature of the regularization principle described above, that there is a close relationship

between regularization and Bayesian estimation [30]-[43]. In a statistical context, the a priori information on the object x is expressed in the form of an a priori probability distribution p(x). Bayes' rule allows us to combine it with the information contained in the data to obtain the a posteriori distribution

p(x|y) = p(y|x) p(x) / p(y).   (4.1)


In this relationship, p(y|x) denotes the probability distribution of the data conditioned on the real solution x. It is completely determined by the knowledge of models (2.1) or (2.2) and the noise distribution. The last term

p(y) = ∫ p(y|x) p(x) dx   (4.2)

ensures the normalization of the a posteriori distribution. In a strict Bayesian sense, (4.1) is the solution to the inversion since it summarizes all the information on x. However, manipulation of probability density functions is cumbersome and very often intractable, and a decision must be made to attribute a value to each pixel. A popular choice is that of attributing to x the value which maximizes the a posteriori (MAP) distribution [30]-[43]

x̂ = arg max_x p(x|y).   (4.3)

But this is only a special case since MAP estimation corresponds to minimizing an average cost with a zero-one loss function. Recently, other cost functionals have gained interest in the context of Markov random field modeling and lead to maximization of marginal probabilities [80]-[86].

In the case of interest here, it is clear that regularizing according to the general scheme given in the previous section is equivalent to choosing the optimal image maximizing the a posteriori distribution under the condition that this distribution is expressed in the following way:

p(x|y) ∝ exp{ -½ [ J₁(x, x̂₀) + μ J₂(x, x̂∞) ] }.   (4.4)

The probability distribution expressed in (4.4) is only one possible choice. Any strictly monotone function would be suitable. However, the choice of an exponential function is adequate here since, with a linear model (2.2) and with the usual hypotheses of normality and independence on the noise, the conditional distribution p(y|x) is truly

p(y|x) ∝ exp{ -½ J₁(x, x̂₀) }   (4.5)

as long as J₁ = J_W, with W = A'C⁻¹A, is chosen. To complete the analogy, the a priori distribution must also be written as follows:

p(x) ∝ exp{ -½ μ J₂(x, x̂∞) }.   (4.6)

Again, this is only one Bayesian interpretation of regularization methods. It is also often considered as a justification of regularization methods, although this is a controversial point which we shall not discuss here. Suffice it to say that, on the one hand, the Bayesian approach is, in fact, the framework in which the most recent restoration methods have been introduced, and on the other hand, that it allows an appreciable extension of the range of hyperparameter estimation methods.

In the case of regularization methods of the Phillips, Twomey, and Tikhonov type, the distance J₂ is a

quadratic, positive definite form, and (4.6) shows that this reverts to modeling the real object as a homogeneous Gaussian random field with mean x̂∞ and with covariance matrix W⁻¹. The matrix W⁻¹ can be chosen in the form (D'D)⁻¹ if the distance J₂ is a roughness measure of the type (3.7). More generally, it can be any positive definite matrix, but its structure must be specified beforehand. If this matrix is itself parameterized, these hyperparameters must be determined in the same manner as the regularization coefficient μ, which is analogous to the inverse of a signal-to-noise ratio.

In maximum entropy methods, the proximity measure J₂ is of the Kullback distance type. Therefore, the real object is modeled as a random field with a priori distribution

p(x) ∝ exp{ μ S(x) }   (4.7)

where S is the negative of the Kullback distance, or the relative entropy of the image. This choice is often given as the only one which is statistically consistent with a positive object [71]-[74]. We should nonetheless note that in a fully Bayesian approach, the constant μ should be specified by a "hyperprior" density. But in practice, this dimensional constant is not set a priori. Its choice is mainly empirical and depends on the quality of the data.

However, apart from these more or less philosophical problems, the main drawback of these methods arises from the proximity measure which is used. The entropy of the whole image as defined in (3.6) is the sum of the entropies of each pixel, which implicitly assumes an a priori independence between the pixels since their joint a priori distribution (4.7) is the product of their marginal distributions:

p(x) = Πᵢ p(xᵢ).   (4.8)

This no doubt explains why they give very good results in problems where the objects have supports which are of very limited dimensions and almost pinpoint, and why they differ only slightly from quadratic methods (with or without a positivity constraint) in the other cases.

In the case of local energy functions, the a priori distribution can be expressed in the following form:

p(x) ∝ exp{ -J_G(x) }   (4.9)

which is a Gibbs distribution. A physical system with such an associated distribution is such that the lower energy configurations have higher probability. However, it must be noted that because of the strong nonlinearity of the Gibbs potentials and because of the large number of possible configurations for an image (Mᴺ, M being the possible number of states, levels of gray, for example), it is impossible to compute all the distributions which are defined by (4.9) [80]-[86]. The marginal distributions p(xᵢ), for example, cannot be calculated. Only the conditional distributions p(xᵢ | xⱼ, j ∈ dᵢ), where dᵢ indicates a neighborhood of pixel i, can be simply and easily


computed from the local description leading to the choice of local energies. This difficulty has a strong influence on the computation of the solution, as we shall see in the following section. The assignment of these energy functions J_G(x) also defines a Markov random field (MRF) image model since an MRF has the property that the probability distribution of the configuration of the field can always be expressed in the form of a Gibbs distribution [82]. The MRF-Gibbs equivalence is exploited for computing the solution. In order to partly overcome the computational problems, the fact that the a posteriori distribution is again Gibbsian with approximately the same neighborhood system as that of the original object is exploited, and a sampling method called the Gibbs sampler is used. However, the computational complexity increases rapidly with the neighborhood dimension, which is a major drawback in restoration problems with large PSF's.
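Returning to the Gaussian case discussed at the beginning of this section, the correspondence between the regularized solution (3.12) and the Bayesian estimate under a Gaussian prior can be checked numerically. The sketch below assumes a zero-mean prior with a full-rank precision matrix (an illustrative choice, with μ absorbed into W) and verifies that the two algebraic forms of the estimate coincide:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy Gaussian-blur model (assumed), as in the earlier sketches.
N = 64
i = np.arange(N)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x = np.zeros(N); x[[12, 30, 31, 50]] = [4.0, 2.0, 2.0, 3.0]
sigma_b = 0.02
y = A @ x + rng.normal(0.0, sigma_b, size=N)

C = sigma_b**2 * np.eye(N)                       # noise covariance
D = np.eye(N - 1, N, k=1) - np.eye(N - 1, N)
W = D.T @ D + 0.01 * np.eye(N)                   # prior precision, made full rank (illustrative)

# Regularized solution written as in (3.12), with the weight absorbed into W:
x_reg = np.linalg.solve(A.T @ np.linalg.solve(C, A) + W,
                        A.T @ np.linalg.solve(C, y))

# Same estimate written as the mean of the Gaussian posterior p(x|y)
# for the prior x ~ N(0, W^{-1}) (matrix-inversion-lemma form):
P = np.linalg.inv(W)                             # prior covariance
x_map = P @ A.T @ np.linalg.solve(A @ P @ A.T + C, y)

print("max |x_reg - x_map| =", np.max(np.abs(x_reg - x_map)))  # numerically zero: they coincide
```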

V. PRACTICAL PROBLEMS IN COMPUTING THE SOLUTION

Practical problems in the computation of the reconstructed or restored image play an important role in the final choice of a particular method because of the very high dimensions of real images. These problems arise even when the two distances J₁ and J₂ are quadratic and when a closed-form solution can be obtained, since the use of relation (3.12) requires the inversion of a very large matrix.

When the object has a limited support and when the corresponding hypotheses of uniformity of the image edges and circulant structure of the covariance or weighting matrix can be made, the matrix to be inverted is circulant in the case of a space-invariant degradation. It can then be rapidly diagonalized by FFT, which considerably simplifies its inversion [26]-[27].

Inversion can also be performed recursively using Kalman filtering techniques inspired by optimal statistical signal processing. A significant part of the literature on image restoration in recent years has been devoted to the development of these techniques [44]-[63]. Computational savings are generally obtained by introducing a state vector of reduced dimension compared to that of the real image. Although Kalman filtering techniques have been successful in 1-D problems, their extension to 2-D is not trivial. Derivation of suitable 2-D recursive models, compatible with a reasonable computational load, is a major difficulty. Image modeling with stationary 2-D random fields is an active research topic. But the study of these fields is not a simple extension of time series properties [8]. The lack of any Archimedean ordering in the plane raises new problems: existence of favored directions in lexicographically ordered ARMA processes, instability of finite memory inverse prediction filters, and impossibility to extend the 1-D stochastic realization theory to 2-D problems. The only state-space models for which a complete realization theory exists are those of Attasi [45] and Roesser [54]. Usually, additional assumptions, such as

separability of the covariance, are made in order to facilitate the identification of the image model. It should be noted that these ad hoc assumptions hardly reflect the true nature of the object.

When the distance J₂ is an entropy measure, things become more complicated since we no longer have a closed-form solution and we must proceed by iteration. The literature on this subject most often deals with iterative methods of the first order, of the conjugate-gradient type, but they must be modified in order to ensure the positivity of the solution at each iteration [64]-[79]. Experience shows that the computational cost is 10-20 times greater than with the former linear methods using quadratic distances.

When Markovian representations associated with Gibbs potentials are used, the main difficulty in computing the solution arises from the impossibility of calculating the a posteriori distribution. This difficulty was resolved by using stochastic techniques of the Monte Carlo type. The first idea consists of introducing an auxiliary distribution

p_T(x|y) ∝ [ p(y|x) p(x) ]^(1/T)   (5.1)

where T > 0 is an additional parameter called the temperature of the system. Note that if T → ∞, (5.1) tends to a uniform distribution, while if T → 0, the distribution (5.1) is concentrated on the maximum a posteriori estimate. The second idea consists of creating, in a stochastic way, a set of images by making random runs following a Gibbs distribution p(x). Starting from an initial image x⁽⁰⁾, the change from x⁽ⁿ⁾ to x⁽ⁿ⁺¹⁾ only concerns one pixel, at the most, and is done randomly according to the local conditional distributions p(xᵢ | xⱼ, j ∈ dᵢ) which can easily be calculated from p(x). When n → ∞, x⁽ⁿ⁾ resembles a random configuration with distribution p(x) whose empirical mean is the requested expectation. This is the Gibbs sampler [82]. When this technique is used with a distribution like the one defined in (5.1), and when the temperature T is brought very slowly to zero, convergence takes place towards the minimum a posteriori energy. This is simulated annealing [85]-[87]. The conditions to be fulfilled by the random set of images x⁽¹⁾, x⁽²⁾, ⋯, in order to achieve convergence are very flexible and do not depend on how the object pixels are explored. The only requirement is that all pixels be explored. This flexibility and the particular form of the probability distribution p_T allow parallelization of the processing by using independent, and even asynchronous, processors, and thus should compensate for the impressive computational complexity of the method. Note that, in practice, the solution may not be unique and depends on the cooling schedule.
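A minimal sketch of the Gibbs sampler with a crude cooling schedule, on a tiny binary restoration problem; the energies follow (3.8) and (5.1), while the sizes, the noise level, the prior strength, and the temperature schedule are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny binary image restoration with a Gibbs sampler and a crude annealing schedule.
n = 24
truth = np.zeros((n, n), dtype=int)
truth[:, n // 2:] = 1                                   # two flat regions
y = truth + rng.normal(0.0, 0.6, size=(n, n))           # noisy grey-level data

beta, sigma = 1.2, 0.6                                   # prior strength, noise std (assumed known)
x = (y > 0.5).astype(int)                                # crude initial labelling

def local_energy(x, i, j, v):
    """Posterior energy contribution of pixel (i, j) taking value v (4-neighbors only)."""
    e = (y[i, j] - v) ** 2 / (2 * sigma**2)              # data attachment
    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):    # bond energies as in (3.8)
        ni, nj = i + di, j + dj
        if 0 <= ni < n and 0 <= nj < n:
            e += beta * (-1 if x[ni, nj] == v else 1)
    return e

for sweep in range(30):
    T = max(0.1, 3.0 * 0.85**sweep)                      # slowly decreasing temperature, cf. (5.1)
    for i in range(n):
        for j in range(n):
            e0, e1 = local_energy(x, i, j, 0), local_energy(x, i, j, 1)
            p1 = 1.0 / (1.0 + np.exp(np.clip((e1 - e0) / T, -50, 50)))  # local conditional prob.
            x[i, j] = int(rng.random() < p1)             # resample one pixel at a time

print("pixel error rate:", np.mean(x != truth))
```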

VI. PRACTICAL PROBLEMS IN COMPUTING HYPERPARAMETERS

All the methods described above require knowledge of the value of the regularization parameter μ and, more generally, of all the hyperparameters defining the distance measures J₁ and J₂: noise variances, correlation


parameters of the object, and parameters of local energy functions. Let θ denote the set of all hyperparameters. Estimation of θ is the most delicate part of image reconstruction and restoration methods, and it must be admitted that this problem is not properly solved yet.

It is obvious that the vector θ, like the object x, should be calculated from the observed data y. When θ contains only the parameter μ, the most intuitive and oldest idea [13], [16], [26], [28] is to consider μ as a Lagrange multiplier in the equivalent problem

x̂(μ, y) = arg min_{x∈X} J₂(x, x̂∞),   subject to J₁(x, x̂₀) ≤ c.   (6.1)

The degree of regularization is set by the value of c, which can be considered as a statistic whose probability distribution can be deduced from p(y|x). Since c now follows a known distribution, its value can be easily chosen, and a common choice is that of its expected value. For example, in the frequent case when J₁(x, x̂₀) is a quadratic form, c follows a χ² distribution with N degrees of freedom and c = N is often chosen.

Such a choice has been reported to lead to overregularization of the solution. The residuals are given by [yᵢ - (Ax̂)ᵢ]/σᵢ, i = 1, 2, ⋯, N. With x̂ equal to the true x, these residuals would be a sample of N standard normal variables. But as x̂(μ, y) is inevitably biased, the residual errors used in the computation of J₁(x, x̂₀) do not exactly follow any known distribution.

To overcome this difficulty, we can consider that distances such as J₁ and J₂ are, in fact, measures of loss of the form J{x̂(μ, y), x}. The expectation conditioned on the data

E{ J[x̂(μ, y), x] | y }   (6.2)

defines an average risk which depends on μ, and the choice of the value of μ which minimizes this criterion is surely acceptable. Unfortunately, this average risk also depends on the actual object, which is unknown! As this risk is not calculable, we can try to estimate it, and this is done by cross-validation methods in the case of quadratic J distances [91], [96]-[97]. The basic principle is very simple, and consists of removing a datum yᵢ and of predicting it with the help of a regularized solution computed from the remaining data. The value of μ which ensures the best average prediction over all the removed data is retained. Although this criterion presents good asymptotic properties and is currently used in one-dimensional problems, it seems to have been seldom used in image processing [90].

The two methods presented above are deterministic in nature, but it is obvious that if an a priori distribution could be attributed to x, a criterion depending on μ alone could be obtained by averaging (6.2), which corresponds to a Bayes risk. The main interest of a Bayesian interpretation of regularization probably lies at this level. In fact, we have seen in the previous section that, save in the very special case of an a priori Gaussian distribution, it is impossible to calculate explicitly the maximum a

posteriori solution and that the solution has to be computed iteratively. The main benefit to be drawn from a Bayesian interpretation is essentially that of methodology.

Another approach could be to estimate the hyperparameters and the object in the same way, by maximizing a joint a posteriori distribution. Since some of these quantities are random while others are deterministic, we may define a generalized likelihood function

p(x, y | θ) = p(y | x, θ) p(x | θ)   (6.3)

and maximize it over X and Θ:

(x̂, θ̂) = arg max_{x∈X, θ∈Θ} p(x, y | θ).   (6.4)

Other functionals can be chosen (e.g., p(x | y, θ), which is the a posteriori distribution of the object), but their evaluation generally involves computation of a normalization factor p(y | θ) which is also a function of θ. This makes the estimation of θ impracticable. On the other hand, p(x, y | θ) is easier to compute. No convergence result can be established for x̂ since the dimensions of the object and the image grow accordingly. However, even though no such limitation applies to θ, the properties of this estimator have not been established yet. This is due to the complexity of the mathematical derivations involved in the general case. Some results [92] indicate that the MAP estimator does not always exhibit the desirable property of convergence, which makes its use questionable. Alternative approaches have to be sought.

The most commonly employed method consists of maximizing a marginal likelihood which is obtained by integrating the object out of the problem:

p(y | θ) = ∫ p(y | x, θ) p(x | θ) dx.   (6.5)

This can be considered as a special case of the EM algorithm, which is a broadly applicable method for computing maximum likelihood estimates when the observations y are viewed as incomplete data [107]. The term "incomplete data" implies the existence of two spaces X and Y and a mapping from X to Y. The observed data y are a realization from Y, and the corresponding x in X is not observed directly, but only indirectly through y = y(x). The corresponding distributions p(x | θ) and p(y | θ) depend on parameters θ that have to be estimated. The EM algorithm is an iterative process which consists of an expectation (E) step followed by a maximization (M) step at each iteration. Assume that θ⁽ⁿ⁾ denotes the current value of θ after n iterations of the algorithm. The next iteration can be described in two steps, as follows.

E Step: Compute Q(θ | θ⁽ⁿ⁾) = E{ log p(x | θ) | y, θ⁽ⁿ⁾ }.

M Step: Choose θ⁽ⁿ⁺¹⁾ = arg max_{θ∈Θ} Q(θ | θ⁽ⁿ⁾).

Equation (6.5) is tied to the E step: a conditional expectation given y and θ = θ⁽ⁿ⁾. For the M step, we maximize this expectation, or marginal log likelihood, over θ. The calculation of the likelihood itself is simplified when the restoration is made by a Kalman filter which performs a decomposition of the observed data into uncorrelated


variables, the innovations. Nevertheless, the maximization of (6.5) with respect to θ cannot be made explicitly and must be sought iteratively.

In the case of Markovian models, the problem becomes more complicated since, as we saw before, it is inconceivable to calculate the exact likelihood. We then use a pseudolikelihood drawn from local conditional distributions which are computable [80]-[83]:

q(y | θ) = p(y | x, θ) q(x | θ)   (6.6)

where q(x | θ) is a product of local conditional distributions:

q(x | θ) = Πᵢ p(xᵢ | xⱼ, j ∈ dᵢ; θ).   (6.7)

The maximization of the pseudolikelihood is performed by using an EM iterative algorithm, for example [81],

θ⁽ⁿ⁺¹⁾ = arg max_{θ∈Θ} E{ log q(y | θ) | y, θ⁽ⁿ⁾ }   (6.8)

and calculation of the expectation in this expression is carried out using a Gibbs sampler.

Finally, we could also, in a fully Bayesian framework, attribute an a priori distribution to the hyperparameters θ and estimate them, as the image x itself, using a MAP technique. This solution was suggested recently for maximum entropy methods, but it is still too soon to estimate its usefulness in these image restoration and reconstruction problems.

In general, regardless of the particular expressions of J₁ and J₂, and whatever interpretation is adopted, the important practical problem of the choice of the regularization parameter μ, and more generally of the hyperparameters θ, remains open.
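As one concrete (and deliberately simple) example of hyperparameter selection, the sketch below applies the χ² matching idea of (6.1), choosing μ so that the weighted residual J₁(x̂(μ), x̂₀) is close to its expected value c = N. The toy model and the grid of candidate μ values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy Gaussian-blur model (assumed), as in the earlier sketches.
N = 64
i = np.arange(N)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x = np.zeros(N); x[[12, 30, 31, 50]] = [4.0, 2.0, 2.0, 3.0]
sigma_b = 0.02
y = A @ x + rng.normal(0.0, sigma_b, size=N)

D = np.eye(N - 1, N, k=1) - np.eye(N - 1, N)
K = D.T @ D

def x_hat(mu):
    """Quadratic regularized solution (3.12) with C = sigma_b^2 I."""
    return np.linalg.solve(A.T @ A / sigma_b**2 + mu * K, A.T @ y / sigma_b**2)

def weighted_residual(mu):
    """Chi-square statistic of the residuals, i.e., J1(x_hat, x0)."""
    r = (y - A @ x_hat(mu)) / sigma_b
    return r @ r

# Discrepancy-principle choice of mu: make the chi-square statistic close to
# its expected value c = N, as in (6.1).
mus = np.logspace(-4, 4, 33)
chi2 = np.array([weighted_residual(mu) for mu in mus])
best = mus[np.argmin(np.abs(chi2 - N))]
print("selected mu:", best)
print("||x_hat(best) - x|| =", np.linalg.norm(x_hat(best) - x))
```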

VII. CONCLUSIONS

In the usual vision model, image restoration and reconstruction are low-level information processing tasks. Under the pressure of recent spectacular technological developments that allow more and more complex data processing to be done, their traditional mathematical and statistical bases are being widened to rise toward higher information processing levels.

In this rapid and inevitably superficial survey, we have tried to show that, in spite of their great diversity, image restoration and reconstruction methods have a common estimation structure which is not basically undermined by the recent appearance of methods based on Markov modeling. These methods are interesting, as they allow an improvement in the description of an image, but they also present an impressive computational complexity. It thus

becomes more necessary than ever to have at one's disposal results of comparative studies of the performances of these different methods by using different models and different types of image.

Computational and methodological complexity has to be balanced against the quality of results. Of course, this may depend on the later use of the processed image, but many papers report different restorations of the same

standard images with, indeed, few visual differences. It will be invaluable to carry out meaningful comparative studies of the performances of a variety of reconstruction and restoration methods, using different regularization techniques on different types of images, to provide the user with objective elements of choice in a given practical situation.

ACKNOWLEDGMENT

The author wishes to express his gratitude to Dr. Y. Goussard for his expert editing assistance.

REFERENCES

A. Textbooks on Image Reconstruction and Restoration

[1] H. C. Andrews and B. R. Hunt, Digital Image Restoration. Englewood Cliffs, NJ: Prentice-Hall, 1977.
[2] B. R. Frieden, "Image enhancement and restoration," in Picture Processing and Digital Filtering, T. S. Huang, Ed. Berlin: Springer-Verlag, 1975, pp. 177-248.
[3] G. T. Herman, H. K. Tuy, H. Langenberg, and P. C. Sabatier, Basic Methods of Tomography and Inverse Problems. Adam Hilger, 1987.
[4] J. H. Justice, Maximum-Entropy and Bayesian Methods in Applied Statistics. Cambridge, England: Cambridge University Press, 1986.
[5] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. New York: IEEE Press, 1988.
[6] A. Rosenfeld and A. C. Kak, Digital Picture Processing. New York: Academic, 1982.
[7] C. R. Smith and W. T. Grandy, Jr., Maximum-Entropy and Bayesian Methods in Inverse Problems. Dordrecht, The Netherlands: Reidel, 1985.
[8] A. S. Willsky, Digital Signal Processing and Control and Estimation Theory. Cambridge, MA: M.I.T. Press, 1979.

B. Basic Regularization Techniques

[9] M. Bertero, C. de Mol, and G. A. Viano, "The stability of inverse problems," in Inverse Scattering Problems in Optics, H. P. Baltes, Ed. Berlin: Springer-Verlag, 1980.
[10] J. W. Hilgers, "On the equivalence of regularization and certain reproducing kernel Hilbert space approaches for solving first kind problems," SIAM J. Numer. Anal., vol. 13, pp. 172-184, 1976. (Erratum, vol. 15, p. 1301, 1978.)
[11] M. Z. Nashed and G. Wahba, "Generalized inverses in reproducing kernel spaces: An approach to regularization of linear operator equations," SIAM J. Math. Anal., vol. 5, pp. 974-987, 1974.
[12] M. Z. Nashed, "Operator-theoretic and computational approaches to ill-posed problems with applications to antenna theory," IEEE Trans. Antennas Propagat., vol. AP-29, pp. 220-231, 1981.
[13] D. L. Phillips, "A technique for the numerical solution of certain integral equations of the first kind," J. Ass. Comput. Mach., vol. 9, pp. 84-97, 1962.
[14] J. L. C. Sanz and T. S. Huang, "Unified Hilbert space approach to iterative least-squares linear signal restoration," J. Opt. Soc. Amer., vol. 73, pp. 1455-1465, 1983.
[15] A. Tikhonov and V. Arsenin, Solutions of Ill-Posed Problems. Washington, DC: Winston, 1977.
[16] S. Twomey, "On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear system produced by quadrature," J. Ass. Comput. Mach., vol. 10, pp. 97-101, 1963.
[17] G. Wahba, "Ill-posed problems: Numerical and statistical methods for mildly, moderately and severely ill-posed problems with noisy data," in Proc. Int. Conf. Ill-Posed Problems, Newark, NJ, Oct. 1979.
[18] -, "Constrained regularization for ill-posed linear operator equations, with applications in meteorology and medicine," in Statistical Decision Theory and Related Topics III, S. S. Gupta and J. O. Berger, Eds. New York: Academic, 1982, pp. 383-417.

C. Choice of a Regularization Prescription

[19] M. Bertero, T. Poggio, and V. Torre, "Ill-posed problems in early vision," M.I.T., Cambridge, Art. Intell. Lab. Memo 924, 1986.


[20] J. Cullum, "The effective choice of the smoothing norm in regularization," Math. Comput., vol. 33, pp. 149-170, 1979.
[21] P. Hall and D. M. Titterington, "Common structure of techniques for choosing smoothing parameters in regression problems," J. Roy. Statist. Soc. B, vol. 49, pp. 184-198, 1987.
[22] D. M. Titterington, "General structure of regularization procedures in image restoration," Astron. Astrophys., vol. 144, pp. 381-387, 1985.
[23] -, "Common structure of smoothing techniques in statistics," Int. Statist. Rev., vol. 53, pp. 141-170, 1985.

D. Standard Regularization Procedures

[24] M. Bertero, C. de Mol, and G. A. Viano, "On the problems of object restoration and image extrapolation in optics," J. Math. Phys., vol. 20, pp. 509-521, 1979.
[25] C. W. Helstrom, "Image restoration by the method of least squares," J. Opt. Soc. Amer., vol. 57, pp. 297-303, 1967.
[26] B. R. Hunt, "The inverse problem of radiography," Math. Biosci., vol. 8, pp. 161-179, 1970.
[27] -, "Biased estimation for nonparametric identification of linear systems," Math. Biosci., vol. 10, pp. 215-237, 1971.
[28] -, "The application of constrained least squares estimation to image restoration by digital computer," IEEE Trans. Comput., vol. C-22, pp. 805-812, 1973.
[29] A. K. Katsaggelos, J. Biemond, R. M. Mersereau, and R. W. Schafer, "A general formulation of constrained iterative restoration algorithms," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, 1985, pp. 700-703.

E. Bayesian Approach

[30] J. F. Abramatic and L. M. Silverman, "Non-stationary linear restoration of noisy images," in Proc. IEEE Conf. Decision Contr., Fort Lauderdale, FL, Dec. 1979, pp. 92-99.
[31] B. P. Agrawal and V. K. Jain, "Digital restoration of images in the presence of correlated noise," Comput. Elec. Eng., vol. 3, pp. 65-74, 1976.
[32] M. H. Buonocore, W. R. Brody, and A. Macovski, "Fast minimum variance estimator for limited angle CT image reconstruction," Med. Phys., vol. 8, pp. 695-702, 1981.
[33] A. M. Darling, T. J. Hall, and M. A. Fiddy, "Stable noniterative object reconstruction from incomplete data using a priori knowledge," J. Opt. Soc. Amer., vol. 73, pp. 1466-1469, 1983.
[34] J. N. Franklin, "Well-posed stochastic extensions of ill-posed linear problems," J. Math. Anal. Appl., vol. 31, pp. 682-716, 1970.
[35] B. R. Frieden, "Statistical estimates of bounded optical scenes by the method of prior probabilities," IEEE Trans. Inform. Theory, vol. IT-19, pp. 118-119, 1973.
[36] K. M. Hanson, "Bayesian and related methods in image reconstruction from incomplete data," in Image Recovery: Theory and Application, H. Stark, Ed. New York: Academic, 1987, pp. 79-125.
[37] G. T. Herman and A. Lent, "A computer implementation of a Bayesian analysis of image reconstruction," Inform. Contr., vol. 31, pp. 364-384, 1976.
[38] G. T. Herman, H. Hurwitz, A. Lent, and H. P. Lung, "On the Bayesian approach to image reconstruction," Inform. Contr., vol. 42, pp. 60-71, 1979.
[39] B. R. Hunt and T. M. Cannon, "Nonstationary assumptions for Gaussian models of images," IEEE Trans. Syst., Man, Cybern., vol. SMC-6, pp. 876-882, 1976.
[40] B. R. Hunt, "Bayesian methods in nonlinear digital image restoration," IEEE Trans. Comput., vol. C-26, pp. 219-229, 1977.
[41] H. J. Trussell, "Improved methods of maximum a posteriori restoration," IEEE Trans. Comput., vol. C-27, pp. 57-62, 1979.
[42] -, "The relationship between image restoration by the maximum a posteriori method and a maximum entropy method," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-28, pp. 114-117, 1980.
[43] S. L. Wood and M. Morf, "A fast implementation of a minimum variance estimator for computerized tomography image reconstruction," IEEE Trans. Biomed. Eng., vol. BME-28, pp. 56-68, 1981.

F. Kalman Filtering Techniques

[44] A. O. Aboutalib, M. S. Murphy, and L. M. Silverman, "Digital restoration of images degraded by general motion blurs," IEEE Trans. Automat. Contr., vol. AC-22, pp. 294-302, 1977.

[45] S. Attasi, "Modelling and recursive estimation for double indexed sequences," in System Identification: Advances and Case Studies, R. K. Mehra and D. G. Lainiotis, Eds. New York: Academic, 1976, pp. 289-348.
[46] M. R. Azimi-Sadjadi and P. W. Wong, "Two-dimensional block Kalman filtering for image restoration," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 1736-1749, 1987.
[47] J. Biemond, J. Rieske, and J. J. Gerbrands, "A fast Kalman filter for images degraded by both blur and noise," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 1248-1256, 1983.
[48] S. S. Dikshit, "A recursive Kalman window approach to image restoration," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 125-140, 1982.
[49] D. X. Cheng, D. Saint-Felix, and G. Demoment, "Comparison between a factorization method and a partitioning method to derive invariant Kalman filters for fast image restoration," in Mathematics in Signal Processing, T. S. Durrani et al., Eds. Oxford, England: Clarendon, 1987, pp. 349-362.
[50] A. Habibi, "Two-dimensional Bayesian estimate of images," Proc. IEEE, vol. 60, pp. 878-883, 1972.
[51] M. S. Murphy and L. M. Silverman, "Image model representation and line-by-line recursive restoration," IEEE Trans. Automat. Contr., vol. AC-23, pp. 809-816, 1978.
[52] N. E. Nahi, "Role of recursive estimation in statistical image enhancement," Proc. IEEE, vol. 60, pp. 872-877, 1972.
[53] S. A. Rajala and R. J. P. de Figueiredo, "Adaptive nonlinear image restoration by a modified Kalman filtering approach," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-29, pp. 1033-1042, 1981.
[54] R. Roesser, "A discrete state-space model for linear image processing," IEEE Trans. Automat. Contr., vol. AC-20, pp. 1-10, 1975.
[55] D. Rohler and P. S. Krishnaprasad, "Radon inversion and Kalman reconstructions: A comparison," IEEE Trans. Automat. Contr., vol. AC-26, pp. 483-487, 1981.
[56] D. Saint-Felix, D. X. Cheng, and G. Demoment, "Image restoration using a non-causal state space model and a fast 2D Kalman filter," in Mathematics in Signal Processing, T. S. Durrani et al., Eds. Oxford, England: Clarendon, 1987, pp. 363-378.
[57] B. R. Suresh and B. A. Shenoi, "New results in two-dimensional Kalman filtering with applications to image restoration," IEEE Trans. Circuits Syst., vol. CAS-28, pp. 307-319, 1981.
[58] S. L. Wood, A. Macovski, and M. Morf, "Reconstructions with limited data using estimation theory," in Computer Aided Tomography and Ultrasonics in Medicine, J. Raviv et al., Eds. Amsterdam: North-Holland, 1979, pp. 219-231.
[59] J. W. Woods and C. H. Radewan, "Kalman filtering in two dimensions," IEEE Trans. Inform. Theory, vol. IT-23, pp. 473-482, 1977 (and Correction, IEEE Trans. Inform. Theory, vol. IT-25, pp. 628-629, 1979).
[60] J. W. Woods and V. K. Ingle, "Kalman filtering in two dimensions: Further results," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-29, pp. 188-197, 1981.
[61] J. W. Woods, J. Biemond, and A. M. Tekalp, "Boundary value problem in image restoration," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, 1985, pp. 18.11.1-18.11.4.
[62] J. W. Woods, S. Dravida, and R. Mediavilla, "Image estimation using doubly stochastic Gaussian random field models," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 245-253, 1987.
[63] W. Zhe, "Multidimensional state-space model Kalman filtering with application to image restoration," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 1576-1592, 1986.

G. Maximum Entropy

[64] R. K. Bryan and J. Skilling, "Deconvolution by maximum entropy, as illustrated by application to the jet of M87," Mon. Not. R. Astr. Soc., vol. 191, pp. 69-79, 1980.
[65] S. F. Burch, S. F. Gull, and J. Skilling, "Image restoration by a powerful maximum entropy method," Comput. Vision, Graphics, Image Processing, vol. 23, pp. 113-128, 1983.
[66] G. J. Daniell and S. F. Gull, "Maximum entropy algorithm applied to image enhancement," Proc. IEE, vol. 127-E, pp. 170-172, 1980.
[67] B. R. Frieden, "Restoring with maximum likelihood and maximum entropy," J. Opt. Soc. Amer., vol. 62, pp. 511-518, 1972.
[68] -, "Restoring with maximum entropy, II: Superresolution of photographs of diffraction-blurred impulses," J. Opt. Soc. Amer., vol. 62, pp. 1202-1210, 1972.


[69] -, "Restoring with maximum entropy. III. Poisson sources and backgrounds," J. Opt. Soc. Amer., vol. 68, pp. 93-103, 1978.
[70] -, "Statistical models for the image restoration problem," Comput. Graphics Image Processing, vol. 12, pp. 40-59, 1980.
[71] S. F. Gull and G. J. Daniell, "Image reconstruction from incomplete and noisy data," Nature, vol. 272, pp. 686-690, 1978.
[72] S. F. Gull and J. Skilling, "Maximum entropy method in image processing," Proc. IEE, vol. 131-F, pp. 646-659, 1984.
[73] E. T. Jaynes, "Prior probabilities," IEEE Trans. Syst. Sci. Cybern., vol. SSC-4, pp. 227-241, 1968.
[74] -, "On the rationale of maximum-entropy methods," Proc. IEEE, vol. 70, pp. 939-952, 1982.
[75] R. Kikuchi and B. H. Soffer, "Maximum entropy image restoration. I. The entropy expression," J. Opt. Soc. Amer., vol. 67, pp. 1656-1665, 1977.
[76] A. Mohammad-Djafari and G. Demoment, "Maximum-entropy diffraction tomography," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Tokyo, Japan, Apr. 1986, pp. 1749-1752.
[77] -, "Maximum-entropy Fourier synthesis with application to diffraction tomography," Appl. Opt., vol. 26, pp. 1745-1754, 1987.
[78] J. E. Shore and R. W. Johnson, "Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy," IEEE Trans. Inform. Theory, vol. IT-26, pp. 26-37, 1980. (Comments and correction, IEEE Trans. Inform. Theory, vol. IT-29, pp. 942-943, 1983.)
[79] S. Sibisi, "Two-dimensional reconstructions from one-dimensional data by maximum entropy," Nature, vol. 301, pp. 134-136, 1983.

H. Markov Random Fields and Related Estimation Techniques

[80] J. Besag, "On the statistical analysis of dirty pictures," J. Roy. Statist. Soc. B, vol. 48, pp. 259-302, 1986.
[81] B. Chalmond, "Image restoration using an estimated Markov model," Signal Processing, vol. 15, pp. 115-129, 1988.
[82] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-6, pp. 721-741, 1984.
[83] S. Geman, "Stochastic relaxation methods for image restoration and expert systems," in Maximum-Entropy and Bayesian Methods in Science and Engineering, G. J. Erickson and C. R. Smith, Eds. Norwell, MA: Kluwer Academic, 1988, pp. 265-311.
[84] H. Jinchi, T. Simchony, and R. Chellappa, "Stochastic relaxation for MAP restoration of gray level images with multiplicative noise," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Dallas, TX, Apr. 1987, pp. 1236-1239.
[85] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671-680, 1983.
[86] J. Marroquin, S. Mitter, and T. Poggio, "Probabilistic solution of ill-posed problems in computational vision," J. Amer. Statist. Ass., vol. 82, pp. 76-89, 1987.
[87] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller, "Equation of state calculations by fast computing machines," J. Chem. Phys., vol. 21, pp. 1087-1091, 1953.
[88] J. W. Woods, "Two-dimensional discrete Markovian fields," IEEE Trans. Inform. Theory, vol. IT-18, pp. 232-240, 1972.
[89] -, "Markov image modeling," IEEE Trans. Automat. Contr., vol. AC-23, pp. 846-850, 1978.

I. Hyperparameter Estimation

[90] D. Girard, "Les méthodes de régularisation optimale et leurs applications en tomographie," thèse de Docteur-Ingénieur, Institut National Polytechnique de Grenoble, France, 1984.
[91] G. H. Golub, M. Heath, and G. Wahba, "Generalized cross-validation as a method for choosing a good ridge parameter," Technometrics, vol. 21, pp. 215-223, 1979.
[92] Y. Goussard, G. Demoment, and F. Monfront, "Maximum a posteriori detection-estimation of Bernoulli-Gaussian processes," in Proc. 2nd IMA Conf. Math. Signal Processing, Warwick, England, Dec. 1988, 19 pp.
[93] J. W. Hilgers, "A note on estimating the optimal regularization parameter," SIAM J. Numer. Anal., vol. 17, pp. 472-473, 1980.
[94] H. Kaufman, J. W. Woods, S. Dravida, and A. M. Tekalp, "Estimation and identification of two-dimensional images," IEEE Trans. Automat. Contr., vol. AC-28, pp. 745-756, 1983.
[95] F. van der Putten, J. Biemond, and J. W. Woods, "A hybrid identification scheme for image and blur parameters," in Signal Processing III, I. T. Young et al., Eds. New York: Elsevier, 1986, pp. 765-768.
[96] F. O'Sullivan, B. S. Yandell, and W. J. Raynor, Jr., "Automatic smoothing of regression functions in generalized linear models," J. Amer. Statist. Ass., vol. 81, pp. 96-103, 1986.
[97] B. W. Silverman, "A fast and efficient cross-validation method for smoothing parameter choice in spline regression," J. Amer. Statist. Ass., vol. 79, pp. 584-589, 1984.
[98] A. M. Tekalp, H. Kaufman, and J. W. Woods, "Identification of image and blur parameters for the restoration of noncausal blurs," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 963-972, 1986.

J. Miscellaneous

[99] J. F. Abramatic and L. M. Silverman, "Nonlinear restoration of noisy images," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-4, pp. 141-149, 1982.
[100] G. L. Anderson and A. N. Netravali, "Image restoration based on a subjective criterion," IEEE Trans. Syst., Man, Cybern., vol. SMC-6, pp. 845-853, 1976.
[101] C. L. Byrne, R. M. Fitzgerald, M. A. Fiddy, T. J. Hall, and A. M. Darling, "Image restoration and resolution enhancement," J. Opt. Soc. Amer., vol. 73, pp. 1481-1487, 1983.
[102] M. Cannon, "Blind deconvolution of spatially invariant image blurs with phase," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-24, pp. 58-63, 1976.
[103] M. Cannon, H. J. Trussell, and B. R. Hunt, "Comparison of image restoration methods," Appl. Opt., vol. 17, pp. 3384-3390, 1978.
[104] P. Chan and J. S. Lim, "One-dimensional processing for adaptive image restoration," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 117-126, 1985.
[105] R. Chellappa and R. L. Kashyap, "Digital image restoration using spatial interaction models," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 461-472, 1982.
[106] M. E. Davison, "The ill-conditioned nature of the limited angle tomography problem," SIAM J. Appl. Math., vol. 43, pp. 428-448, 1983.
[107] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Roy. Statist. Soc. B, vol. 39, pp. 1-38, 1977.
[108] M. P. Ekstrom and R. L. Rhoads, "On the application of eigenvector expansions to numerical deconvolution," J. Comput. Phys., vol. 14, pp. 319-340, 1974.
[109] G. T. Herman, A. Lent, and H. Hurwitz, "A storage-efficient algorithm for finding the regularized solution of a large, inconsistent system of equations," J. Inst. Math. Appl., vol. 25, pp. 361-366, 1980.
[110] A. K. Jain and E. Angel, "Image restoration, modeling, and reduction of dimensionality," IEEE Trans. Comput., vol. C-23, pp. 470-476, 1974.
[111] A. K. Jain, "A semicausal model for recursive filtering of two-dimensional images," IEEE Trans. Comput., vol. C-26, pp. 343-350, 1977.
[112] A. K. Jain and J. R. Jain, "Partial differential equations and finite difference methods for image processing. Part II: Image restoration," IEEE Trans. Automat. Contr., vol. AC-23, pp. 817-834, 1978.
[113] H. E. Knutsson, R. Wilson, and G. H. Granlund, "Anisotropic nonstationary image estimation and its application. Part I: Restoration of noisy images," IEEE Trans. Commun., vol. COM-31, pp. 388-397, 1983.
[114] D. T. Kuan, A. A. Sawchuk, T. C. Strand, and P. Chavel, "Adaptive noise smoothing filter for images with signal-dependent noise," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-7, pp. 165-177, 1985.
[115] R. M. Leahy and C. E. Goutis, "An optimal technique for constraint-based image restoration and reconstruction," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 1629-1642, 1986.
[116] N. D. A. Mascarenhas and W. K. Pratt, "Digital image restoration under a regression model," IEEE Trans. Circuits Syst., vol. CAS-22, pp. 252-266, 1975.
[117] M. I. Sezan and H. Stark, "Image restoration by the method of convex projections," IEEE Trans. Med. Imaging, vol. MI-1, pp. 95-101, 1982.
[118] -, "Tomographic image reconstruction from incomplete view data by convex projections and direct Fourier inversion," IEEE Trans. Med. Imaging, vol. MI-3, pp. 91-98, 1984.


[119] A. M. Tekalp and H. J. Trussell, "Comparative study of some recent statistical and set-theoretic methods for image restoration," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, New York, NY, Apr. 1988, pp. 988-991.
[120] T. J. Uhl, "Linear and nonlinear image restoration methods in comparison," in Signal Processing III, I. T. Young, Ed. New York: Elsevier Science, 1986, pp. 739-742.
[121] D. C. Youla and H. Webb, "Image restoration by the method of convex projections: Part I-Theory," IEEE Trans. Med. Imaging, vol. MI-1, pp. 81-94, 1982.

Guy Demoment was born in France in 1948. He graduated from the École Supérieure d'Électricité in 1970, and received the Doctorat d'État degree in physics from the Université de Paris-Sud, Orsay, in 1977.
From 1971 to 1977 he was with the Service des Mesures de l'École Supérieure d'Électricité. From 1977 to 1988 he was with the Centre National de la Recherche Scientifique, assigned to the Laboratoire des Signaux et Systèmes. He is presently a Professor in the Department of Physics, Université de Paris-Sud, Orsay. After some work on biological systems modeling, his interests moved toward inverse problems in signal and image processing.
Dr. Demoment is a member of TCO and SRV.