
Acta Astronautica 64 (2009) 1050–1065, www.elsevier.com/locate/actaastro

Comparison of motion de-blur algorithms and real world deployment

Sebastian Schuon∗, Klaus Diepold

Institute for Data Processing, Technische Universität München, Munich, Germany

Received 22 December 2006; accepted 5 January 2009; available online 20 February 2009

Abstract

If a camera moves fast while taking a picture, motion blur is induced. Techniques exist to prevent this effect from occurring, such as moving the lens system or the CCD chip electro-mechanically. Another approach is to remove the motion blur after the images have been taken, using signal processing algorithms as post-processing techniques. For more than 30 years, numerous researchers have developed theories and algorithms for this purpose, which work quite well when applied to artificially blurred images. If one attempts to apply those techniques to real world scenarios, they mostly fail miserably. In order to study why the known algorithms have problems de-blurring naturally blurred images, we have built an experimental setup which produces real blurred images with defined parameters in a controlled environment. For this article we have studied the most important algorithms used for de-blurring, and we have analyzed their properties when applied to artificially blurred images and to real images. We propose solutions to make the algorithms fit for purpose.
© 2009 Elsevier Ltd. All rights reserved.

1. Purpose

Often security cameras are mounted on posts to overview a parking lot or similar venues. If the wind blows, those posts may start to shake and the attached cameras often produce blurry images. Due to this motion blur, details in the captured images such as faces are unrecognizable or unintelligible [1]. A similar problem is addressed in [22], where prerecorded images of a truck, moving at a speed of 60 mph, are de-blurred in order to recognize the truck's side printing.

In [16] another version of the same problem is described, but in a totally different setting. The Hubble space telescope (HST) has two different capture modes: fine lock and gyro-hold. At times observations are done in gyro-hold mode, which does not offer any position

∗Corresponding author. E-mail addresses: [email protected] (S. Schuon), [email protected] (K. Diepold).

0094-5765/$ - see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.actaastro.2009.01.012

control to keep the HST at a fixed location during capture. This mode is prone to motion during the capturing process, such that significant motion blur finds its way into the final image. To get images which are as sharp as possible, advanced motion de-blur algorithms have been applied to the images, achieving good results.

Apart from these examples, motion blur is a phenomenon common to all photographers. There are either moving objects in the scene or the camera is moving during the capture process, both situations leading to blurred images. In order to address this issue, various techniques in hardware [4,17] and software [1,2,5,6,18] have already been developed. All of them have technical constraints of some kind, such as requiring additional hardware or producing sub-optimal results.

The corresponding image restoration process, referred to as motion de-blur, can be broken up into two parts: motion estimation and deconvolution. The first part deals with the challenge of identifying the path the camera has followed during the image capture process.


The second part uses this information to reverse the convolution during the image formation process in order to restore the original picture. Lately much effort has been put into the first part and some remarkable results have been published [1–3,18,19]. All the previously proposed de-blurring methods rely on a small number of algorithms to perform the second part, the deconvolution. Therefore we perform a quantitative comparison of the algorithms already in use and some which have been recently proposed in the literature. We point out promising candidates and give clues for further improvement. Notably, de-blur algorithms are demonstrated using synthetically blurred images, which exhibit quite different characteristics from motion blur encountered in the real world. We have built an experimental setup allowing us to generate real motion blur with predefined parameters, allowing us to measure and compare the performance of algorithms under real world conditions.

2. Modeling motion blur

We use a linear, non-recursive (FIR) model to represent the degradation of digital (sampled) images caused by motion blur. We consider the original, blur-free M × N image f to be convolved with a convolution kernel h, referred to as the point spread function (PSF). Additionally, some noise is introduced during the capturing process, which is modeled with the additive noise term n. Hence, the blurred M × N image b, as it is captured by the moving camera, is modeled as

b = h ∗ f + n,  (1)

where the symbol ∗ represents the convolution operator. De-blurring amounts to the application of the de-blurring operator D, which produces a de-blurred image f̂ when applied to the blurred image b, that is, D(b) = f̂.
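The degradation model of Eq. (1) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: the function name `blur`, the parameter `noise_sigma` and the circular boundary handling are our choices.

```python
import numpy as np

def blur(f, h, noise_sigma=0.0, rng=None):
    """Degradation model b = h * f + n (Eq. (1)).

    The convolution is computed via the 2-D FFT, which implicitly treats
    the image as periodic ("circular" border handling, see Section 2.2).
    """
    H = np.fft.fft2(h, s=f.shape)                  # PSF spectrum, zero-padded
    b = np.real(np.fft.ifft2(H * np.fft.fft2(f)))  # h * f
    if noise_sigma > 0.0:                          # additive noise term n
        rng = np.random.default_rng(0) if rng is None else rng
        b += rng.normal(0.0, noise_sigma, f.shape)
    return b
```

Blurring a unit impulse reproduces the PSF itself, which is a convenient sanity check for any implementation of the model.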

2.1. Synthetic motion blur

For test purposes we create images which are synthetically blurred according to the blur model. This yields the advantage that we have access to the reference image f, which is not known in real environments, for comparison against the restored image, which we will denote by f̂.

Motion blur is described by means of a PSF, which provides information about the underlying motion during the capture process. In the simplest case, that is, for a uniform linear motion along the x-axis with a speed of k pixels during the capturing period, the PSF is given

Fig. 1. Complex PSF (left) and resulting image (right).

by a one-dimensional vector of length k + 1:

hlin = 1/(k + 1) · [1 1 1 ⋯ 1].  (2)

However, in a real environment with shaking cameras, neither the path of the moving camera nor the PSF is known a priori, and it needs to be estimated from the measured data.

In [2], Ben-Ezra and Nayar have proposed a method to determine the motion paths during the capturing process. Their analysis shows that the model for the PSF has to be extended to represent motion in a two-dimensional plane. The PSF is a matrix h of size U × V, where each entry h_{i,j}, i = 1, 2, …, U, j = 1, 2, …, V, represents the percentage the camera has been displaced by i − U/2, j − V/2 from the center during the capture

h = 1/K ·
⎡ h_{1,1}  h_{1,2}  ⋯  h_{1,V} ⎤
⎢ h_{2,1}    ⋱                 ⎥
⎢   ⋮            ⋱             ⎥
⎣ h_{U,1}    ⋯       h_{U,V}   ⎦ ,  (3)

where the parameter K is a normalizing constant which ensures that the sum over the entries of h equals 1:

K = Σ_{i=1}^{U} Σ_{j=1}^{V} h(i, j).  (4)

An example of such a PSF, which represents a triangular motion path, is given in Eq. (5). Fig. 1 shows the graphical representation of this PSF and its impact on a checkerboard structure test picture

hcomp = 1/20 ·
⎡ 1  2 ⎤
⎢ 1  2 ⎥
⎢ 2  3 ⎥
⎣ 4  5 ⎦ .  (5)

2.2. Border area

When generating synthetic blur as described above, problems occur in the border region where no


information on pixel values beyond the border of the image is available, which is necessary to compute the convolution properly. Several approaches exist to solve this issue:

• circular: the image is considered to be periodic, and values are therefore taken from the opposite border;
• repetitive: the very last pixel next to the border is repeated;
• mirroring: the image is mirrored at the border, therefore providing values for the region beyond the border;
• constant: the values beyond the border are considered to be constants (often black or white).

Another approach is to perform clipping around the border area, therefore reducing the size of the output image but computing more realistic results. The border issue has been addressed further in [23].
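The four border strategies listed above map directly onto numpy's padding modes; a small sketch (the variable names are ours):

```python
import numpy as np

img = np.arange(9.0).reshape(3, 3)
pad = 1  # pad by the PSF radius before convolving

circular   = np.pad(img, pad, mode="wrap")     # periodic image
repetitive = np.pad(img, pad, mode="edge")     # repeat the last border pixel
mirroring  = np.pad(img, pad, mode="reflect")  # mirror the image at the border
constant   = np.pad(img, pad, mode="constant", constant_values=0)  # black
```

Padding before convolving and then cropping the result back corresponds to the "clipping" variant mentioned above.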

2.3. Real motion blur

In the literature a number of examples are shown where synthetically motion-blurred images are de-blurred using various algorithms and where the results look quite promising. However, when applying de-blurring algorithms to real world pictures which contain motion blur, it is revealed that those algorithms perform quite unsatisfactorily. This may be due to poorly estimated PSFs or due to failures in the underlying motion blur model itself. To investigate this issue in more detail and to locate the root of the problem, an experimental setup has been built which allows us to capture blurred images in a controlled setting and gives us access to well-defined and parameterized PSFs.

The unit shown in Fig. 2 comprises a camera unit, a guiding rail and a stepper motor. The camera carriage is accelerated to a constant speed and takes a photo with a medium exposure time (around 100 ms) to allow significant motion blur to appear in the picture. As a motif a checkerboard structure has been chosen, since this allows an easy method for estimating the associated PSF.

An image captured with our experimental setup is shown in Fig. 3. From the way we have set up the capturing process we assume the motion path of the camera to be linear and uniform in the horizontal direction. The first assumption is verified by looking at the plot shown in Fig. 4. The plot depicts the luminance of the picture, taken across the blurred zone (marked with (a) in Fig. 3). The luminance curve is very close to linearly decreasing.

Fig. 2. Experimental setup.

Fig. 3. Estimating the PSF with real motion blur.

The second assumption, that the motion is only in the horizontal direction, is verified with a second luminance graph shown in region (b) of Fig. 5. The sharp decay of luminance at the border between the two boxes proves that there is almost no motion in the vertical direction (otherwise the graph would have to look like that shown in Fig. 4, where the decay is linear and spread over a significant number of pixels).

Measuring the blur area shown in Fig. 4 allows us to directly infer the length of the PSF (the length has been visualized in Fig. 3). With the two assumptions verified above and the length given, the PSF can now be determined according to Eq. (2) as

hreal = 1/50 · [1 1 1 ⋯ 1] ∈ R^{1×50}.  (6)
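The measurement above can be sketched in code: count the samples of the edge profile that lie strictly between the two plateaus, then build the uniform PSF of Eq. (6). `ramp_length` and `uniform_psf` are hypothetical helper names, and the strict thresholds assume a noise-free profile; real, noisy profiles would require looser thresholds.

```python
import numpy as np

def ramp_length(luminance):
    """Estimate the blur length (in pixels) from a 1-D luminance profile
    across a blurred black/white edge: the number of samples strictly
    between the dark and bright plateaus (noise-free assumption)."""
    l = np.asarray(luminance, dtype=float)
    l = (l - l.min()) / (l.max() - l.min())   # normalize to [0, 1]
    return int(np.count_nonzero((l > 0.0) & (l < 1.0)))

def uniform_psf(k):
    """Uniform horizontal PSF of length k, as in Eq. (6)."""
    return np.ones((1, k)) / k
```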


3. Description of algorithms

3.1. Direct approach (lin)

Starting from Eq. (1), we transform the equation into the frequency domain, which yields

F(b) = F(h) · F(f) + F(n).  (7)

As the additive noise is unknown, we assume it to be zero (F(n) = 0). Rewriting Eq. (7) and performing the

Fig. 4. Verification of linearity.

Fig. 5. Verification of horizontal motion.

re-transformation into the spatial domain, we arrive at the restoration filter

f̂ = F⁻¹( F(b) / F(h) ).  (8)

This gives us a direct filter requiring only out-of-the-box mathematical methods. As zero noise has been assumed, it is expected that problems will occur with noisy images.
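A sketch of the direct filter of Eq. (8) in numpy; the guard against near-zero PSF frequencies (`eps`) is our addition, precisely because those frequencies are where this filter fails on noisy images.

```python
import numpy as np

def direct_filter(b, h, eps=1e-12):
    """Direct (inverse) filter of Eq. (8): divide the blurred image's
    spectrum by the PSF's spectrum, skipping near-zero frequencies."""
    H = np.fft.fft2(h, s=b.shape)
    B = np.fft.fft2(b)
    F_hat = np.zeros_like(B)
    mask = np.abs(H) > eps
    F_hat[mask] = B[mask] / H[mask]
    return np.real(np.fft.ifft2(F_hat))
```

For a noise-free, circularly blurred image whose PSF spectrum has no zeros, this recovers the original exactly (up to floating-point error).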

3.2. Wiener filter (wnr)

The Wiener filter seeks to minimize the following error function:

e² = E[(f − f̂)²],  (9)

where E denotes the expected value operator, f is the undegraded image and f̂ its estimate. The solution of the resulting optimization task can be written as follows in the frequency domain (according to [8]):

F̂(u, v) = [ (1 / H(u, v)) · |H(u, v)|² / ( |H(u, v)|² + Sη(u, v)/Sf(u, v) ) ] · B(u, v)  (10)

with H(u, v) being the PSF in the frequency domain, Sη(u, v) the power spectrum of the noise and Sf(u, v) the power spectrum of the undegraded image F. The ratio NSR = Sη(u, v)/Sf(u, v) is normally referred to as the noise-to-signal ratio. If no noise is present (Sη(u, v) = 0)


Fig. 6. Test images reference (left: checkerboard, right: natural).

Eq. (10) reduces to

F̂(u, v) = B(u, v) / H(u, v).  (11)

Therefore we see that the Wiener filter is a generalization of the direct filter. If the ratio NSR is unknown, it can be approximated with the ratio r of average noise power and average image power (parametric Wiener filter):

NSR ≈ r = η_average / f_average.  (12)

Even better results can be achieved using the autocorrelation functions of the noise and the undegraded image [8]. A derivation of the formulas mentioned can be found in [7].
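With a scalar NSR as in Eq. (12), the Wiener filter of Eq. (10) collapses to a single frequency-domain expression, since (1/H) · |H|²/(|H|² + NSR) = H*/(|H|² + NSR). A sketch under that scalar-NSR assumption; `wiener_filter` is our name:

```python
import numpy as np

def wiener_filter(b, h, nsr):
    """Parametric Wiener filter (Eq. (10)) with a scalar noise-to-signal
    ratio nsr; nsr = 0 reduces to the direct filter of Eq. (11)."""
    H = np.fft.fft2(h, s=b.shape)
    B = np.fft.fft2(b)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * B
    return np.real(np.fft.ifft2(F_hat))
```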

3.3. Regularized filter (reg)

This algorithm [8] is based on finding a direct filter solution using a criterion C, which ensures optimal smoothness of the restored image. Therefore the filter construction task is to find the minimum of

C = Σ_{u=1}^{M} Σ_{v=1}^{N} [∇² f(u, v)]²  (13)

under the constraint of the rewritten Eq. (1):

‖b − h ∗ f̂‖² = ‖n‖².  (14)

In the frequency domain the solution to this problem can be written as follows:

F̂(u, v) = [ H*(u, v) / ( |H(u, v)|² + γ |P(u, v)|² ) ] · B(u, v),  (15)

where γ is the parameter which has to be adjusted to fulfill the constraint C and P(u, v) is the Laplacian operator in the frequency domain.
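A sketch of Eq. (15) using the standard 3 × 3 discrete Laplacian for P(u, v); the kernel choice and the name `regularized_filter` are ours, not prescribed by the source:

```python
import numpy as np

def regularized_filter(b, h, gamma):
    """Constrained least-squares filter of Eq. (15); gamma trades data
    fidelity against smoothness of the restored image."""
    laplacian = np.array([[ 0.0, -1.0,  0.0],
                          [-1.0,  4.0, -1.0],
                          [ 0.0, -1.0,  0.0]])
    H = np.fft.fft2(h, s=b.shape)
    P = np.fft.fft2(laplacian, s=b.shape)
    B = np.fft.fft2(b)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * B
    return np.real(np.fft.ifft2(F_hat))
```

Setting gamma = 0 removes the smoothness penalty and recovers the direct filter.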

3.4. Richardson–Lucy deconvolution (lucy)

This algorithm was invented independently by Richardson [20] and Lucy [13]. Its usage (especially concerning MATLAB) is further outlined in [8]. The Richardson–Lucy (RL) algorithm is an iterative restoration algorithm that maximizes the likelihood function of a Poisson statistics image model. As summed up in [11], the RL algorithm consists of one initial and three iterative steps:

1. A first approximation of the restored image f̂_0 must be made, typically the constant average of all pixel values in the blurred image b.

2. The current approximation is convolved with the PSF:

u_n = h ∗ f̂_n.  (16)

3. A correction factor is computed based on the ratio of the blurred image and the result of the last step:

ϕ_n = h̃ ∗ (b / u_n),  (17)

where h̃ denotes the PSF in reverse order and b/u_n a "pixel-by-pixel" division.

4. A new approximation is composed out of the current one and the correction factor:

f̂_{n+1} = f̂_n · ϕ_n,  (18)

where · denotes a "pixel-by-pixel" multiplication. The algorithm then continues with step 2.

As with all iterative techniques the question arises when to stop the computation; this will be addressed later on.
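The four steps above translate almost literally into code. A minimal numpy sketch with circular boundary handling; the convolution with the reversed PSF in step 3 is implemented as a frequency-domain correlation, and the small `eps` guarding the pixel-by-pixel division is our addition.

```python
import numpy as np

def conv2_circ(x, k):
    """Circular 2-D convolution of image x with kernel k via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, s=x.shape)))

def corr2_circ(x, k):
    """Circular correlation: convolution with the PSF in reverse order."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(k, s=x.shape))))

def richardson_lucy(b, h, n_iter=30, eps=1e-12):
    f_hat = np.full_like(b, b.mean())       # step 1: constant initial guess
    for _ in range(n_iter):
        u = conv2_circ(f_hat, h)            # step 2: u_n = h * f_n
        phi = corr2_circ(b / (u + eps), h)  # step 3: correction factor
        f_hat = f_hat * phi                 # step 4: pixel-wise update
    return f_hat
```

Because every factor is non-negative, the iterates stay non-negative, which is one of the practical advantages of RL over the direct frequency-domain filters.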


Fig. 7. Test images checkerboard (left: real, right: synthetic).

Fig. 8. Test image natural scene (real blur).

Fig. 9. Computation speed vs. image size.

3.5. Maximum likelihood estimation (mem)

A complete description of this algorithm would be beyond the scope of this paper, but good descriptions can be found in [9,10]. In brief, the algorithm has the ability to alter the PSF used for deconvolution

Fig. 10. Computation speed vs. PSF length.

Fig. 11. Number of iterations affecting PSNR.

according to some constraints towards an improved solution. The deconvolution itself is performed in a comparable fashion to the RL algorithm.


Fig. 12. Restored synthetic blur images (repetitive wrap around).


Fig. 13. Restored synthetic blur images (circular wrap around).


Fig. 14. Restored synthetic blur images with prior edgetaper (repetitive wrap around).


3.6. TU Berlin (tub)

Mery and Filbert proposed an algorithm in [15] which seeks to minimize the expression

‖f̃ − b̃‖  (19)

under the constraint of Eq. (1). f̃ is a vector of the first N pixels of a line of the restored image and b̃ a vector of N pixels of a line of the blurred image. Using only N pixels instead of the whole vector allows the algorithm to be fast compared to other techniques.

The optimization problem is solved using Lagrange multipliers, resulting in a direct restoration algorithm. It should be noted that the algorithm in its current form can handle only uniform motion blur.

3.7. Sondhi (sondhi)

Sondhi [7,21] addressed the problem of motion de-blurring very early. He assumes the blur process to integrate over a certain number a of pixels during the capture process. A blurred image line (of length L) can therefore be written as

b(x) = ∫_{x−a}^{x} f(ξ) dξ.  (20)

Calculating the derivative of this equation, defining φ(x) = f(x − a) and K = ⌈L/a⌉, and rewriting, we arrive at

f̂(x) = Σ_{k=0}^{⌊x/a⌋} b′(x − ka) + φ(x − ⌊x/a⌋ a).  (21)

Taking some assumptions about φ into account and defining b̃(x) = Σ_{k=0}^{⌊x/a⌋} b′(x − ka), the restored image line is given by

f̂(x) = b̃(x) − (1/K) Σ_{k=0}^{K−1} b̃(x + ka) + b̄,  (22)

where b̄ is the average value of a pixel line. It should be mentioned that the version presented here only applies to linear, uniform motion blur.

3.8. Advanced landweber (alm)

This algorithm [12] is mentioned here for completeness, but it has been excluded from the comparison as our implementation was ten times slower than the rest of the field. Furthermore, it was originally proposed for the removal of blur induced by defocusing.

4. Comparison

4.1. Data material

Two different motifs have been selected for the algorithms to work on (Fig. 6). The checkerboard structure (Fig. 7) as the first test pattern represents a more synthetic motif, but the motion blur is clearly visible. Due to the regular and well-known pattern, the results of the algorithms can easily be analyzed. Furthermore, the sharp borders of the squares allow the estimation of the parameters of the PSF (see Section 2.3).

The second motif (Fig. 8), although still having the checkerboard structure as a background for reference and PSF estimation purposes, comprises items which are not only 2D (i.e. flat) but also 3D (e.g. a bottle) or have characters on them, which are normally hard to restore.

4.2. Computation speed

One criterion for the evaluation of the de-blurring algorithms is the computation speed required to restore an image. Computation power may be limited in some environments, for example in mobile applications, or the sheer execution time may be far beyond the expectation of users. For example, the RL algorithm needs more than 1 h to restore a consumer class digital camera image on modern computer hardware.

Table 1
Metric results for synthetic blur (circular wrap around).

Algorithm  PSNR (dB)
wnr        30.6
reg        30.6
tub        31.7
lucy       42.4
mem        39.0
sondhi     22.2
lin        35.4

Table 2
Metric results for synthetic blur (repetitive wrap around).

Algorithm  PSNR (dB)
wnr        13.1
reg        13.1
tub        34.2
lucy       22.3
mem        13.5
sondhi     21.8
lin        8.8


Fig. 15. Restored synthetic blur images with Gaussian noise (repetitive wrap around, prior edgetaper).


The time required to perform the deconvolution depends on a number of parameters. The most obvious one is the size of the input image. Fig. 9 shows the computation time as a function of the image size. The time scale has been normalized to the result of the fastest algorithm, wnr, at the smallest image size computed (2500 pixels). It is clearly visible that the computation time of most algorithms rises linearly with increasing image size. A big difference shows up in the total computation time required. The time required by iterative algorithms is much greater, but can be influenced by the number of iterations (discussed further below), whereas the direct algorithms perform much faster. The difference among them is possibly due to the implementation: for wnr and reg, existing optimized MATLAB implementations have been used, whereas tub, lin and sondhi were implemented without special optimization. Interestingly, the tub algorithm performs slightly above linear.

The second factor determining the algorithms' performance in terms of computation speed is the length, respectively the size, of the PSF. This dependency is outlined in Fig. 10. As before, the computation time is normalized to the calculation performed with wnr. It can be seen that most algorithms are invariant toward different sizes of the PSF, with the exception of sondhi, which seems to deliver low performance with small sizes of PSFs.

With iterative algorithms, the number of iterations has a strong influence on the computation time and the restoration performance. It can safely be assumed that the computation time depends linearly on the number of iterations. Therefore the dependency of the PSNR is shown (Fig. 11) at selected iteration numbers. It can clearly be seen that with the lucy algorithm an increase in iterations leads to an improved PSNR for the restoration result. But at a certain point the incremental improvement is negligible compared to the computation time required (in this case at about 20–30 iterations). For the mem algorithm, the PSNR does not improve any more beyond a certain number of iterations. This is possibly due to the algorithm reaching its optimum for the newly estimated PSF (the mem algorithm tries to estimate the PSF even more precisely). Computing more iterations than necessary can even have a negative effect (lucy beyond 50 iterations).

4.3. Restoration quality

In order to evaluate the quality (similarity to the unavailable, perfectly captured image) of the restored image, two different techniques have been applied. For pictures where no reference image was available (real blurred images), a "no-reference perceptual blur metric"

Table 3
Metric results for synthetic blur with prior edgetaper (repetitive wrap around).

Algorithm  PSNR (dB)
wnr        24.6
reg        24.6
tub        34.9
lucy       33.6
mem        19.3
sondhi     22.1
lin        14.6

as proposed in [14] has been tested. Unfortunately it failed, as the lines introduced due to the ringing confused the algorithm. Therefore the results of this metric have been omitted. Furthermore the PSNR, defined as

PSNR = 20 log₁₀ ( 1 / √( (1/(M · N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (f̂(i, j) − f(i, j))² ) )  (23)

between the synthetically restored image f̂ and the reference image f is computed.
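Assuming pixel intensities normalized to [0, 1], so that the peak value in the numerator of Eq. (23) is 1, the metric is a one-liner; the function name `psnr` is ours.

```python
import numpy as np

def psnr(f_hat, f):
    """PSNR in dB between restored image f_hat and reference f (Eq. (23)),
    assuming pixel values normalized to [0, 1]."""
    rmse = np.sqrt(np.mean((np.asarray(f_hat) - np.asarray(f)) ** 2))
    return 20.0 * np.log10(1.0 / rmse)
```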

4.3.1. Synthetic blur

In the first test case the algorithms were to restore synthetically blurred images. As the border areas are problematic for all algorithms (as discussed in Section 2.2), the image was assumed to be periodic when adding the synthetic motion blur ("circular"). The restored images can be seen in Fig. 13, where all algorithms achieve nearly perfect restoration results and the images contain only minor ringing in some cases. In the second run, de-blurring of synthetically blurred images has been tested with pixel repetition at the border (Fig. 12). It can be seen that obviously wrong pixels in the border region lead to massive ringing in the restored picture. Only the tub and the sondhi algorithms seem to be able to handle this issue adequately and present quite acceptable results. MATLAB offers a function edgetaper, which is recommended to be applied to images which show a lot of ringing [8]. The edgetaper function blurs the edges of the image with the PSF later used for deconvolution. To evaluate this function, it has been applied to the images with repetitive pixel wrap around at the border region, as these images proved to be challenging for the algorithms. Fig. 14 presents the result of this case, where it can be seen that edgetaper does help to decrease ringing, but is unable to suppress it completely or assure results equivalent to the circular blur situation (Tables 1 and 2).


Fig. 16. Restored real blur (checkerboard structure).


All the test cases presented so far did not comprise noise. Therefore, they are just theoretical cases to study some effects of motion de-blurring. In order to get a better understanding of the schemes under real world conditions, zero-mean Gaussian noise with variance 0.01 is added to the test image before restoration. The restored images for this case can be seen in Fig. 15.

The wnr result still looks blurry, whereas the reg restoration was able to restore the squares, but still comprises noise and ringing. The tub algorithm delivers the sharpest image without any ringing, but with some noise visible. Both lucy and mem restore the image without any visible noise, but the edges are still a little bit blurred. The noise in the restoration results computed by tub and sondhi shows a very similar behavior, but the latter induces some ringing and some blur at the edges of the boxes. The lin filter is not able to remove the noise in the image and introduces a lot of ringing (Table 3).

For the lucy and mem algorithms a modified implementation has been used, which allows one to specify a weight for certain pixels corresponding to the reliability of the associated values. On account of this, the border pixels have received a much lower weight, thereby suppressing the ringing induced by incorrect/missing pixel information in the border area. Pixels on all four borders have reduced weight, which explains the black borders.

4.3.2. Real blur

Fig. 16 shows the restored images of the checkerboard motif, which has been captured by a real camera. The simplest algorithm, lin, fails completely, because the noise is amplified. Both the wnr and the reg algorithms show comparable results, which comprise a lot of low frequency ringing, but restore the contours more or less satisfactorily. The problem with both algorithms is that the noise power can only be guessed (or determined by trial and error, as was the case here). The tub algorithm produces a lot of noise in the restored image, but the contours of the squares are the sharpest among the competitors. Furthermore, high frequency ringing is clearly visible in the restored image. A similar result, but with much less noise, is produced by the sondhi algorithm. The lucy and the mem algorithms produce more or less equivalent results, which are very close to the original checkerboard motif. Again, the pixels at the borders have received lower weight to inhibit ringing (Table 4).

In a scene which comprises more than just a checkerboard structure, the algorithms seem to have huge problems concerning ringing (Fig. 17). The first challenge in restoration is noise, which is mastered properly only by

Table 4
Metric results for synthetic blur with Gaussian noise (repetitive wrap around, prior edgetaper).

Algorithm  PSNR (dB)
wnr        19.8
reg        21.0
tub        21.9
lucy       23.5
mem        23.3
sondhi     19.3
lin        13.4

lucy and mem. The algorithms wnr and reg still show acceptable results. The restored images of the tub, sondhi and lin algorithms are so noisy that it is difficult to recognize anything. The remaining candidates all have difficulties with ringing, but only lucy and mem deliver acceptable results, as their ringing artifacts are smoother and therefore more pleasant for the human eye. The restored image of the mem algorithm is slightly better than the result produced by lucy, due to the ability of the mem algorithm to adapt itself to the estimated PSF, thereby correcting inaccuracies in the PSF estimation.

5. Conclusion

We have seen that the de-blurring algorithms discussed perform differently on synthetic and real motion blur. Two groups of algorithms performed best under both circumstances.

The first group, comprising the lucy and the mem algorithms, produces equivalent results in most cases. The lucy algorithm is preferable as it requires less computation time. In cases where the PSF could only be estimated roughly, the mem algorithm is superior, as it can adapt itself to the PSF and therefore correct inaccuracies in the estimated PSF. As both algorithms produce images quite pleasant to the human eye, they should be employed when restoring photographs (e.g. from a digital camera). The key to good results on real images with these two algorithms is the ability to weight the pixels and therefore mask pixels at the borders to suppress ringing. Still, their problems remain the tremendous computation time required and the question of the optimal number of iterations.

The second group, comprising the tub and the sondhi algorithms, delivers sharp restored images, but also introduces a lot of noise. Therefore, these algorithms can be used in applications where sharpness is crucial, e.g. pattern or text recognition. An important advantage over the first group is that they are remarkably faster and do not need


Fig. 17. Restored real blur images (natural scene).


an estimate of the number of iterations (as they are direct algorithms). Furthermore, their implementation is quite straightforward.

One could consider implementing the weighting of pixels (as seen in the lucy and mem implementations) for the second group of algorithms, as this considerably improved the results of the first group.

The wnr, reg and lin algorithms do not produce acceptable results under real world conditions.

References

[1] M. Ben-Ezra, S.K. Nayar, Motion deblurring using hybrid imaging, in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003.

[2] M. Ben-Ezra, S.K. Nayar, Motion-based motion deblurring, IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (6) (2004) 689–699.

[3] D. Capel, A. Zisserman, Super-resolution enhancement of text image sequences, in: Proceedings of the International Conference on Pattern Recognition, 2000, pp. 600–605.

[4] Nikon Corporation, Shake Reduction Technology.

[5] DynaPel, Dynapel Steadyhand 2.2.

[6] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, W.T. Freeman, Removing camera shake from a single photograph, ACM Transactions on Graphics (TOG) 25 (3) (2006) 787–794.

[7] R.C. Gonzalez, R.E. Woods, Digital Image Processing, Addison-Wesley, Reading, MA, 1987.

[8] R.C. Gonzalez, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, Pearson Prentice-Hall, Upper Saddle River, NJ, 2004.

[9] R.J. Hanisch, R.L. White, R.L. Gilliland, Deconvolution of Images and Spectra, Academic Press, California, 1997.

[10] T.J. Holmes, S. Bhattacharyya, J.A. Cooper, D. Hanzel, V. Krishnamurthi, W. Lin, B. Roysam, D.H. Szarowski, J.N. Turner, Light microscopic images reconstructed by maximum likelihood deconvolution, in: Handbook of Biological Confocal Microscopy, 1995, pp. 389–402.

[11] X. Jiang, D.C. Cheng, S. Wachenfeld, K. Rothaus, Motion Deblurring, University of Muenster, Department of Mathematics and Computer Science, 2005 〈http://cvpr.unimuenster.de/teaching/ws04/seminarWS04/downloads/MotionDeblurring-Ausarbeitung.pdf〉.

[12] L. Liang, Y. Xu, Adaptive landweber method to deblur images, IEEE Signal Processing Letters 10 (5) (2003) 129.

[13] L.B. Lucy, An iterative technique for the rectification of observed distributions, The Astronomical Journal 79 (6) (1974) 745–754.

[14] P. Marziliano, F. Dufaux, S. Winkler, T. Ebrahimi, A no-reference perceptual blur metric, in: Proceedings of the International Conference on Image Processing, 2002.

[15] D. Mery, D. Filbert, A fast non-iterative algorithm for the removal of blur caused by uniform linear motion in X-ray images, in: 15th World Conference on Non-Destructive Testing, vol. 8, 2000, pp. 15–21.

[16] J. Mo, R.J. Hanisch, Restoration of HST WFPC2 images in gyro-hold mode, Astronomical Data Analysis Software and Systems IV 77 (1995).

[17] Pentax, Shake Reduction Technology.

[18] R. Raskar, A. Agrawal, J. Tumblin, Coded exposure photography: motion deblurring using fluttered shutter, ACM Transactions on Graphics (TOG) 25 (3) (2006) 795–804.

[19] A. Rav-Acha, S. Peleg, Restoration of multiple images with motion blur in different directions, in: 5th IEEE Workshop on Applications of Computer Vision, 2000, pp. 22–28.

[20] W.H. Richardson, Bayesian-based iterative method of image restoration, Journal of the Optical Society of America 62 (1) (1972) 55–59.

[21] M.M. Sondhi, Image restoration: the removal of spatially invariant degradations, Proceedings of the IEEE 60 (7) (1972) 842–853.

[22] C. Williams, It's degrading; it's not delovely, Security 42 (5) (2005) 50–52.

[23] J. Woods, J. Biemond, A. Tekalp, Boundary value problem in image restoration, in: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '85), vol. 10, 1985.