BAYESIAN PARTIAL OUT-OF-FOCUS BLUR REMOVAL WITH PARAMETER ESTIMATION

Bruno Amizic 1, Rafael Molina 2, Aggelos K. Katsaggelos 1

1 Department of Electrical Engineering and Computer Science, Northwestern University, Evanston, IL 60208, USA. Email: [email protected], [email protected]

2 Departamento de Ciencias de la Computación e I.A., Universidad de Granada, 18071 Granada, Spain. Email: [email protected]

ABSTRACT

In this paper we propose a novel partial out-of-focus blur removal method developed within the Bayesian framework. We concentrate on the removal of background out-of-focus blur present in images in which there is a strong interest in keeping the foreground in sharp focus, yet there is a desire to recover background details from such a partially blurred image. In this work, a non-convex l_p-norm prior with 0 < p < 1 is used as the background and foreground image prior, and a total variation (TV) based prior is utilized for both the background blur and the occlusion mask, that is, the mask determining the pixels belonging to the foreground. In order to model transparent foregrounds, the values in the occlusion mask are assumed to belong to the closed interval [0, 1]. The proposed method is derived by bounding the priors on the background and foreground images, the background blur and the occlusion mask using the majorization-minimization principle. Maximum a posteriori Bayesian inference is performed and, as a result, the background and foreground images, the background blur, the occlusion mask and the model parameters are simultaneously estimated. Experimental results are presented to demonstrate the advantage of the proposed method over existing ones.

1. INTRODUCTION

It is not uncommon for different objects or regions within the original scene to move independently, each with its own direction and velocity. Therefore, the blurring function often varies spatially throughout the image. Another example of a spatially varying blur is the out-of-focus blur that is widely used to accentuate the depth of field during image acquisition. In general, this type of blurring does not fit well into the single-layer image formation model (e.g., [1]), and consequently more advanced image formation models are needed. The most common way to remove spatially varying blur is to acquire multiple images of the original scene [2, 3]. Some recent methods propose a redesign of the imaging system. For example, in [4] a fluttered-shutter camera is proposed to achieve a broader frequency response compared to a traditional imaging system. Note that the method in [4] still requires knowledge of the moving object boundaries and object velocities to successfully restore a partially blurred image. In addition, a parabolic camera is proposed in [5, 6] in order to minimize the information loss for 1D and 2D constant-velocity motion, respectively. The restoration of partially blurred images acquired by traditional imaging systems is still of great importance, given that the majority of imaging devices are traditional ones. For the remainder of the section we concentrate on a two-layer image formation model. The restoration method proposed in this paper is based on a single-image observation, and therefore multiple captures of the original scene are not required.

The standard formulation of the degradation model for partially blurred images in which only one layer is degraded is given in matrix-vector form by (see [7] for more details)

y = H_f (o ⊙ x_f) + (H_b x_b) ⊙ (1 − H_f o) + n,   (1)

where the N × 1 vectors x_f, x_b, y, o and n represent, respectively, the foreground image, the background image, the available noisy and partially blurred image, the occlusion mask, and the noise with independent elements of variance σ_n² = β^{−1}, while the matrices H_f and H_b represent the blurring matrices created from the blurring point spread functions (h_f and h_b) of the foreground and background, respectively. The images are assumed to be of size m × n = N, and they are lexicographically ordered into N × 1 vectors. The operator ⊙ denotes the Hadamard product and 1 denotes the N × 1 column vector with all components equal to one. Given y, the partially blurred image restoration problem calls for finding estimates of x_f, x_b, H_f, H_b and o using prior knowledge about them.
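As a concrete illustration of (1), the following NumPy sketch composites a sharp foreground over a blurred background. All names and sizes here are hypothetical; a simple box filter stands in for H_b, and H_f is taken as the identity (the unit-impulse foreground blur assumed later in the paper):

```python
import numpy as np

def box_blur(img, k=5):
    """Simple k x k box blur standing in for the background blur H_b."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
m, n = 32, 32                                # image size, N = m * n
x_f = rng.random((m, n))                     # foreground image (sharp layer)
x_b = rng.random((m, n))                     # background image
o = np.zeros((m, n)); o[8:24, 8:24] = 1.0    # occlusion mask, values in [0, 1]

# With H_f = I (unit-impulse foreground blur), H_f o = o and H_f (o * x_f) = o * x_f.
noise = 0.01 * rng.standard_normal((m, n))   # n, with precision beta = 1 / 0.01**2
y = o * x_f + box_blur(x_b) * (1.0 - o) + noise   # equation (1), Hadamard products
```

Inside the fully opaque region (o = 1) the observation reduces to the sharp foreground plus noise, which is what makes the background layer the only blurred unknown.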

This paper is organized as follows. In Section 2 we provide the proposed Bayesian modeling of the partially blurred image restoration problem. The Bayesian inference is presented in Section 3. Experimental results are provided in Section 4 and conclusions are drawn in Section 5.

2. BAYESIAN MODELING

The observation noise is modeled as a zero-mean white Gaussian random vector. Therefore, the observation model is defined as

p(y | β, x_f, h_f, x_b, h_b, o) ∝ β^{N/2} exp[ −(β/2) ‖y − H_f (o ⊙ x_f) − (H_b x_b) ⊙ (1 − H_f o)‖² ],   (2)

where β is the precision of the multivariate Gaussian distribution.

In this work we assume that the foreground blur is available to us. We make this assumption since partially blurred out-of-focus images are often encountered in portrait photography (i.e., the foreground blur is a unit impulse), in which the foreground is of very high visual quality. In general this assumption works well in practice, as can be seen in Section 4.

19th European Signal Processing Conference (EUSIPCO 2011) Barcelona, Spain, August 29 - September 2, 2011

© EURASIP, 2011 - ISSN 2076-1465


In summary, the foreground blur h_f is assumed to be known (i.e., a unit impulse), while the background image, the foreground image, the background blur, the occlusion mask and the model parameters are assumed to be unknown. Consequently, we can simplify the observation model defined in (2) to

p(y | β, x_f, h_f, x_b, h_b, o) ∝ β^{N/2} exp[ −(β/2) ‖y − D_o x_f − (I − D_o) H_b x_b‖² ],   (3)

where I denotes the identity matrix and D_o denotes a diagonal matrix defined as D_o = diag(o).
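The equivalence between (1) with H_f = I and the simplified form (3) can be checked numerically; a minimal sketch with hypothetical random data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
x_f, x_b, o = rng.random(N), rng.random(N), rng.random(N)   # o in [0, 1]
H_b = rng.random((N, N)) / N        # stand-in background blur matrix

# Equation (1) with H_f = I (unit-impulse foreground blur), noiseless:
y1 = o * x_f + (H_b @ x_b) * (1.0 - o)

# Equation (3): D_o = diag(o), y = D_o x_f + (I - D_o) H_b x_b
D_o = np.diag(o)
y3 = D_o @ x_f + (np.eye(N) - D_o) @ (H_b @ x_b)
```

The diagonal matrix D_o simply turns the Hadamard products of (1) into ordinary matrix-vector products.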

For the background image prior we utilize a variant of the generalized Gaussian distribution, given by

p(x_b | α) = (1 / Z_GG(α)) exp[ −∑_{d∈D} α_d ∑_i |Δ_i^d(x_b)|^p ],   (4)

where Z_GG(α) is the partition function, 0 < p < 1, α denotes the set {α_d} and d ∈ D = {h, v, hh, vv, hv}. Δ_i^h(u) and Δ_i^v(u) correspond to, respectively, the horizontal and vertical first-order differences at pixel i, that is, Δ_i^h(u) = u_i − u_{l(i)} and Δ_i^v(u) = u_i − u_{a(i)}, where l(i) and a(i) denote the nearest neighbors of i, to the left and above, respectively. The operators Δ_i^{hh}(u), Δ_i^{vv}(u) and Δ_i^{hv}(u) correspond to, respectively, the horizontal, vertical and horizontal-vertical second-order differences at pixel i.
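The difference operators above are simple to realize with array shifts; the sketch below (assuming circular boundary handling, which the paper does not specify) evaluates the exponent of the prior in (4):

```python
import numpy as np

def diff_h(u):
    """First-order horizontal difference: u_i - u_{l(i)} (left neighbor)."""
    return u - np.roll(u, 1, axis=1)

def diff_v(u):
    """First-order vertical difference: u_i - u_{a(i)} (neighbor above)."""
    return u - np.roll(u, 1, axis=0)

# Second-order differences, built as compositions of the first-order ones
def diff_hh(u): return diff_h(diff_h(u))
def diff_vv(u): return diff_v(diff_v(u))
def diff_hv(u): return diff_h(diff_v(u))

OPS = {"h": diff_h, "v": diff_v, "hh": diff_hh, "vv": diff_vv, "hv": diff_hv}

def prior_exponent(u, alpha, p=0.8):
    """sum_d alpha_d * sum_i |Delta_i^d(u)|^p, the exponent in (4)."""
    return sum(alpha[d] * (np.abs(op(u)) ** p).sum() for d, op in OPS.items())
```

A constant image has zero energy under every difference operator, so the prior favors piecewise-smooth images.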

In order to eliminate the need to estimate each α_d, we assume that α_h = α_v = α and α_hh = α_vv = α_hv = α/2. Additionally, similarly to [8], the partition function will be approximated as Z_GG(α) ∝ α^{−λ1N/p}, where λ1 is a positive real number. We then simplify (4) accordingly to obtain the following background image prior

p(x_b | α_b) ∝ α_b^{λ1N/p} exp[ −α_b ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_b)|^p ],   (5)

where o(d) ∈ {1, 2} denotes the order of the difference operator Δ_i^d(x_b). Similarly, for the foreground image we assume the following prior

p(x_f | α_f) ∝ α_f^{λ2N/p} exp[ −α_f ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_f)|^p ],   (6)

where λ2 is a positive real number. For the background-image blur we utilize a total-variation prior given by

p(h_b | γ_b) ∝ γ_b^{λ3N} exp[−γ_b TV(h_b)],   (7)

where λ3 is a positive real number and TV(h_b) is defined as

TV(h_b) = ∑_i √((Δ_i^h(h_b))² + (Δ_i^v(h_b))²).   (8)

Similarly, for the occlusion mask we utilize a total-variation prior,

p(o | γ_o) ∝ γ_o^{λ4N} exp[−γ_o TV(o)],   (9)

where λ4 is a positive real number and TV(o) is defined as

TV(o) = ∑_i √((Δ_i^h(o))² + (Δ_i^v(o))²).   (10)
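The isotropic TV functional in (8) and (10) can be sketched as follows (again assuming circular boundary handling for the differences):

```python
import numpy as np

def tv(u):
    """Isotropic total variation, eq. (8)/(10): sum_i sqrt(dh_i^2 + dv_i^2)."""
    dh = u - np.roll(u, 1, axis=1)   # horizontal first-order difference
    dv = u - np.roll(u, 1, axis=0)   # vertical first-order difference
    return np.sqrt(dh ** 2 + dv ** 2).sum()

flat = np.full((8, 8), 0.5)                  # constant image: TV = 0
step = np.zeros((8, 8)); step[:, 4:] = 1.0   # vertical edge: TV > 0
```

TV penalizes total edge content while still allowing sharp discontinuities, which is why it suits both the compactly supported blur and the piecewise-constant occlusion mask.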

In this work we use flat improper hyperpriors for each unknown hyperparameter, that is, we utilize

p(ω) ∝ const,   (11)

where ω ∈ {β, α_b, α_f, γ_b, γ_o}.

3. BAYESIAN INFERENCE

Bayesian inference on the unknown components of the partially blurred image restoration problem is based on the estimation of the unknown posterior distribution p(α_b, α_f, β, γ_b, γ_o, x_b, h_b, o, x_f | y, h_f), given by

p(α_b, α_f, β, γ_b, γ_o, x_b, h_b, o, x_f | y, h_f) = p(α_b, α_f, β, γ_b, γ_o, x_b, h_b, o, y, x_f, h_f) / p(y, h_f).   (12)

In this work, we adopt the maximum a posteriori (MAP) approach to obtain a single point estimate (α̂_b, α̂_f, β̂, γ̂_b, γ̂_o, x̂_b, x̂_f, ĥ_b, ô), denoted as Θ̂, that maximizes p(α_b, α_f, β, γ_b, γ_o, x_b, x_f, h_b, o | y, h_f), as follows,

Θ̂ = argmax_Θ p(α_b, α_f, β, γ_b, γ_o, x_b, x_f, h_b, o | y, h_f)
  = argmin_Θ { (β/2) ‖y − D_o x_f − (I − D_o) H_b x_b‖²
  + α_b ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_b)|^p + α_f ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_f)|^p
  + γ_b TV(h_b) + γ_o TV(o) − (λ1N/p) log α_b − (λ2N/p) log α_f
  − (N/2) log β − λ3N log γ_b − λ4N log γ_o }.   (13)

As can be seen from (13), obtaining the point estimate that maximizes the posterior distribution p(α_b, α_f, β, γ_b, γ_o, x_b, x_f, h_b, o | y, h_f) is not straightforward, since it requires the minimization of a non-convex functional. Note that maximizing this posterior distribution with the maximum a posteriori approach is equivalent to variational Bayesian based maximization (see [9]) for the case when all the posterior distributions are assumed to be degenerate.

In this paper, we resort to a majorization-minimization approach to bound the non-convex image prior p(x_b | α_b) by the functional M_1(α_b, x_b, V), that is,

p(x_b | α_b) ≥ const · M_1(α_b, x_b, V).   (14)

The majorization-minimization approach has been utilized in several approaches for image restoration [9, 10].

The functional M_1(α_b, x_b, V) is derived by considering the relationship between the weighted geometric and arithmetic means, which is given by

z^{p/2} v^{1−p/2} ≤ (p/2) z + (1 − p/2) v,   (15)

where z, v and p are positive real numbers. We first rewrite (15) as

z^{p/2} ≤ (p/2) (z + ((2−p)/p) v) / v^{1−p/2}.   (16)

Using (16) we obtain

|Δ_i^d(x_b)|^p ≤ (p/2) ([Δ_i^d(x_b)]² + ((2−p)/p) v_{d,i}) / v_{d,i}^{1−p/2}.   (17)
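The majorizing bound (17) is easy to verify numerically, including its tightness at v_{d,i} = [Δ_i^d(x_b)]²; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.8
z = rng.random(1000) * 10 + 1e-3    # z plays the role of [Delta_i^d(x_b)]^2
v = rng.random(1000) * 10 + 1e-3    # auxiliary variables v_{d,i} > 0

lhs = z ** (p / 2)                                          # |Delta|^p
rhs = (p / 2) * (z + (2 - p) / p * v) / v ** (1 - p / 2)    # majorizer from (17)

tight = (p / 2) * (z + (2 - p) / p * z) / z ** (1 - p / 2)  # choose v = z
```

The bound holds for every pair (lhs ≤ rhs), and the majorizer collapses to the original term when v = z; this tightness is exactly what makes the bound usable in a majorization-minimization scheme.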

Therefore we have

p(x_b | α_b) = const · α_b^{λ1N/p} exp[ −α_b ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_b)|^p ]
  ≥ const · α_b^{λ1N/p} exp[ −(α_b p/2) ∑_{d∈D} 2^{1−o(d)} ∑_i ([Δ_i^d(x_b)]² + ((2−p)/p) v_{d,i}) / v_{d,i}^{1−p/2} ],   (18)

and so M_1(α_b, x_b, V) is defined as

M_1(α_b, x_b, V) = α_b^{λ1N/p} exp[ −(α_b p/2) ∑_{d∈D} 2^{1−o(d)} ∑_i ([Δ_i^d(x_b)]² + ((2−p)/p) v_{d,i}) / v_{d,i}^{1−p/2} ],   (19)

where V is a matrix with elements v_{d,i}. Following the same approach, the foreground image prior can be bounded by

p(x_f | α_f) ≥ const · M_2(α_f, x_f, B),   (20)

where B is a matrix with elements b_{d,i} > 0 and M_2(α_f, x_f, B) is defined as

M_2(α_f, x_f, B) = α_f^{λ2N/p} exp[ −(α_f p/2) ∑_{d∈D} 2^{1−o(d)} ∑_i ([Δ_i^d(x_f)]² + ((2−p)/p) b_{d,i}) / b_{d,i}^{1−p/2} ].   (21)

Similarly, the majorization-minimization criterion is used to bound the blur prior p(h_b | γ_b) utilizing the functional M_3(γ_b, h_b, u). Let us define, for γ_b and any N-dimensional vector u ∈ (ℝ⁺)^N with components u_i, i = 1, …, N, the following functional

M_3(γ_b, h_b, u) = γ_b^{λ3N} exp[ −(γ_b/2) ∑_i ((Δ_i^h(h_b))² + (Δ_i^v(h_b))² + u_i) / √u_i ].   (22)

Using the inequality in (16) with p = 1, that is, for z ≥ 0 and v > 0,

√z ≤ √v + (1/(2√v)) (z − v),   (23)

we obtain

p(h_b | γ_b) ≥ const · M_3(γ_b, h_b, u).   (24)

Once again, the majorization-minimization criterion is used to bound the occlusion mask total-variation prior p(o | γ_o) utilizing the functional M_4(γ_o, o, z), such that

p(o | γ_o) ≥ const · M_4(γ_o, o, z),   (25)

where

M_4(γ_o, o, z) = γ_o^{λ4N} exp[ −(γ_o/2) ∑_i ((Δ_i^h(o))² + (Δ_i^v(o))² + z_i) / √z_i ],   (26)

and z ∈ (ℝ⁺)^N is an N-dimensional vector.

The lower bounds of p(x_b | α_b), p(x_f | α_f), p(h_b | γ_b) and p(o | γ_o) defined above lead to the following lower bound of the joint distribution:

p(α_b, α_f, β, γ_b, γ_o, x_b, h_b, o, y, x_f, h_f) = p(α_b) p(α_f) p(β) p(γ_b) p(γ_o) p(x_b | α_b) p(x_f | α_f) p(h_b | γ_b) p(o | γ_o) p(y | β, x_b, h_b, x_f, h_f, o)
  ≥ const · M_1(α_b, x_b, V) M_2(α_f, x_f, B) M_3(γ_b, h_b, u) M_4(γ_o, o, z) p(y | β, x_b, h_b, x_f, h_f, o).

Therefore, a single point estimate that maximizes a lower bound of the posterior distribution p(α_b, α_f, β, γ_b, γ_o, x_b, x_f, h_b, o | y, h_f) is found as follows,

Θ̂ = argmin_Θ { (β/2) ‖y − D_o x_f − (I − D_o) H_b x_b‖²
  + (α_b p/2) ∑_{d∈D} 2^{1−o(d)} ∑_i ([Δ_i^d(x_b)]² + ((2−p)/p) v_{d,i}) / v_{d,i}^{1−p/2}
  + (α_f p/2) ∑_{d∈D} 2^{1−o(d)} ∑_i ([Δ_i^d(x_f)]² + ((2−p)/p) b_{d,i}) / b_{d,i}^{1−p/2}
  + (γ_b/2) ∑_i ((Δ_i^h(h_b))² + (Δ_i^v(h_b))² + u_i) / √u_i
  + (γ_o/2) ∑_i ((Δ_i^h(o))² + (Δ_i^v(o))² + z_i) / √z_i
  − (λ1N/p) log α_b − (λ2N/p) log α_f − (N/2) log β − λ3N log γ_b − λ4N log γ_o }.   (27)

Minimizing (27) with respect to each unknown in an alternating fashion, we obtain the final algorithm shown below.

Algorithm. Given α_b^1, α_f^1, β^1, γ_b^1, γ_o^1, o^1, u^1, z^1, x_f^1, B^1 and V^1, where the rows of V^k and B^k are denoted by v_d^k ∈ (ℝ⁺)^N and b_d^k ∈ (ℝ⁺)^N respectively, with d ∈ {h, v, hh, vv, hv}, and an initial estimate h_b^1 of the blurring filter.

For k = 1, 2, … until a stopping criterion is met:

1. Calculate

x_b^k = [ β^k (D_b^k H_b^k)^t (D_b^k H_b^k) + α_b^k p ∑_d 2^{1−o(d)} (Δ^d)^t W_d^k Δ^d ]^{−1} β^k (D_b^k H_b^k)^t (y − D_o^k x_f^k),   (28)

where W_d^k is a diagonal matrix with entries W_d^k(i, i) = (v_{d,i}^k)^{p/2−1}, D_o^k is a diagonal matrix defined as D_o^k = diag(o^k), and D_b^k = I − D_o^k.


2. Calculate

h_b^{k+1} = [ β^k (D_b^k X_b^k)^t (D_b^k X_b^k) + γ_b^k ∑_{d∈{h,v}} (Δ^d)^t U^k Δ^d ]^{−1} β^k (D_b^k X_b^k)^t (y − D_o^k x_f^k),   (29)

where U^k is a diagonal matrix with entries U^k(i, i) = (u_i^k)^{−1/2} and X_b^k denotes a convolution matrix created from the background image estimate x_b^k.

3. Calculate

o^{k+1} = [ β^k (H_o^{k+1})^t (H_o^{k+1}) + γ_o^k ∑_{d∈{h,v}} (Δ^d)^t Z^k Δ^d ]^{−1} β^k (H_o^{k+1})^t (y − H_b^{k+1} x_b^k),   (30)

where Z^k and H_o^{k+1} are diagonal matrices defined as Z^k(i, i) = (z_i^k)^{−1/2} and H_o^{k+1} = diag(x_f^k) − diag(H_b^{k+1} x_b^k).

4. Calculate

x_f^{k+1} = [ β^k (D_o^{k+1})^t (D_o^{k+1}) + α_f^k p ∑_d 2^{1−o(d)} (Δ^d)^t W_d^k Δ^d ]^{−1} β^k (D_o^{k+1})^t (y − D_b^{k+1} H_b^{k+1} x_b^k),   (31)–(32)

where W_d^k is a diagonal matrix with entries W_d^k(i, i) = (b_{d,i}^k)^{p/2−1}.

5. For each d ∈ {h, v, hh, vv, hv} calculate

v_{d,i}^{k+1} = [Δ_i^d(x_b^k)]²,   (33)

b_{d,i}^{k+1} = [Δ_i^d(x_f^k)]².   (34)

6. Calculate

u_i^{k+1} = [Δ_i^h(h_b^{k+1})]² + [Δ_i^v(h_b^{k+1})]²,   (35)

z_i^{k+1} = [Δ_i^h(o^{k+1})]² + [Δ_i^v(o^{k+1})]².   (36)

7. Calculate

α_b^{k+1} = (λ1N/p) / ( ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_b^k)|^p ),   (37)

α_f^{k+1} = (λ2N/p) / ( ∑_{d∈D} 2^{1−o(d)} ∑_i |Δ_i^d(x_f^{k+1})|^p ),   (38)

β^{k+1} = N / ‖y − D_o^{k+1} x_f^{k+1} − D_b^{k+1} H_b^{k+1} x_b^k‖²,   (39)

γ_b^{k+1} = λ3N / TV(h_b^{k+1}),   (40)

γ_o^{k+1} = λ4N / TV(o^{k+1}).   (41)

Set the restored image, x,

x = lim_{k→∞} { D_o^{k+1} x_f^{k+1} + (I − D_o^{k+1}) x_b^k }.   (42)
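Most of the algorithm's inner updates are in closed form. As a minimal sketch (hypothetical data, circular-boundary differences, and the λ values used in this paper), steps 5–7 for the background image and blur look like:

```python
import numpy as np

def dh(u): return u - np.roll(u, 1, axis=1)   # horizontal difference
def dv(u): return u - np.roll(u, 1, axis=0)   # vertical difference

def tv(u):
    return np.sqrt(dh(u) ** 2 + dv(u) ** 2).sum()

p, lam1, lam3 = 0.8, 0.5, 0.5
rng = np.random.default_rng(3)
x_b = rng.random((16, 16))                              # current estimate x_b^k
h_b = np.zeros((16, 16)); h_b[6:11, 6:11] = 1 / 25.0    # current blur estimate h_b^k
N = x_b.size

diffs = {"h": dh(x_b), "v": dv(x_b),
         "hh": dh(dh(x_b)), "vv": dv(dv(x_b)), "hv": dh(dv(x_b))}
order = {"h": 1, "v": 1, "hh": 2, "vv": 2, "hv": 2}

# Step 5, eq. (33): v_{d,i}^{k+1} = [Delta_i^d(x_b^k)]^2
v = {d: delta ** 2 for d, delta in diffs.items()}

# Step 6, eq. (35): auxiliary variables for the blur TV prior
u = dh(h_b) ** 2 + dv(h_b) ** 2

# Step 7, eqs. (37) and (40): hyperparameter updates
alpha_b = (lam1 * N / p) / sum(2.0 ** (1 - order[d]) * (np.abs(diffs[d]) ** p).sum()
                               for d in diffs)
gamma_b = lam3 * N / tv(h_b)
```

Steps 1–4 are linear solves with the diagonal reweighting matrices built from these auxiliary variables; in practice they would be computed with a conjugate-gradient solver rather than an explicit matrix inverse.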

In this work we set the values of the parameters p, λ1, λ2, λ3, and λ4 equal to 0.8, 0.5, 1.0, 0.5 and 0.5, respectively. The robustness of the proposed method will be tested and evaluated by restoring partially blurred images photographed by a commercial camera. Additionally, since the proposed algorithm is initialized with the unit impulse as the initial background blur estimate, it is particularly important in the first few iterations to keep the parameters α_b and γ_b relatively high compared to the parameter β. This prevents the proposed algorithm from converging to the undesirable background blur estimate of a unit impulse.

4. EXPERIMENTAL RESULTS

In this section we present experimental results obtained with the proposed algorithm. For all experiments, the initial background blur was set to the unit impulse, and the algorithm is terminated if the criterion ‖x_b^k − x_b^{k−1}‖ / ‖x_b^{k−1}‖ < 10^{−4} is satisfied or if the maximum number of iterations reaches 100. After updating the unknowns, as described in Section 3, we apply known domain constraints and truncate the background blur estimate, the background and foreground image estimates (both images are normalized) and the occlusion mask to the interval [0, 1]. In addition, the occlusion mask is initialized with a user-defined trimap (available at alphamatting.com) and the blur estimate is normalized (after truncation) so that the sum of its elements equals one. A trimap is a popular way [11, 12] to initialize alpha matting algorithms, in which black defines a clear background, white defines a clear foreground, and gray defines an unknown region of the occlusion mask that has to be estimated. Some examples of user-defined trimaps are shown in Figure 1.
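The domain constraints and the stopping rule described above amount to a simple projection and a relative-change test; a sketch (all names hypothetical):

```python
import numpy as np

def project(x_b, x_f, h_b, o):
    """Truncate images, blur and occlusion mask to [0, 1], then renormalize
    the blur so that its elements sum to one, as described in the text."""
    x_b = np.clip(x_b, 0.0, 1.0)
    x_f = np.clip(x_f, 0.0, 1.0)
    o = np.clip(o, 0.0, 1.0)
    h_b = np.clip(h_b, 0.0, 1.0)
    h_b = h_b / h_b.sum()            # unit-sum constraint on the blur
    return x_b, x_f, h_b, o

def converged(xb_new, xb_old, tol=1e-4):
    """Stopping criterion ||x_b^k - x_b^{k-1}|| / ||x_b^{k-1}|| < tol."""
    return np.linalg.norm(xb_new - xb_old) / np.linalg.norm(xb_old) < tol

rng = np.random.default_rng(4)
xb, xf, hb, o = (rng.standard_normal((8, 8)) + 0.5 for _ in range(4))
xb, xf, hb, o = project(xb, xf, hb, o)
```

Applying the projection after every iteration keeps the iterates in the feasible set without changing the closed-form updates themselves.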

Figure 1: Examples of initial estimates of the occlusion mask (trimaps from alphamatting.com).

We evaluate the performance of the proposed method on images taken by a commercial camera. The test images used in this section are available online (www.alphamatting.com). In [13], the authors compared different partial blur removal methods [14–16]. In this work we consider for comparison the best performing method reported in [13]. Example restorations obtained by the proposed algorithm are shown in Figure 2. It is clear from Figure 2 that the proposed algorithm provides restorations of high visual quality that are very competitive with the existing state-of-the-art method proposed in [13].

5. CONCLUSIONS

In this paper a novel partially blurred image restoration algorithm is presented. More specifically, the focus of this paper was to remove the partial out-of-focus background blur that is often present in images with a sharp, in-focus foreground. The proposed algorithm was developed within a Bayesian framework utilizing an l_p-norm based sparse prior for the background and foreground images, and a total-variation prior for both the background blur and the occlusion mask. Restorations of partially blurred images taken by a commercial camera demonstrate that using sparse priors and the proposed parameter estimation can substantially improve the quality of the observed partially blurred image. Finally, it was shown that the performance of the proposed algorithm is higher than that of existing state-of-the-art blind partially blurred image restoration algorithms. Future work includes extending the proposed method to the restoration of partially blurred images in which the foreground part of the observed image is not in sharp focus (e.g., when the foreground is in motion).

Figure 2: Example restorations from alphamatting.com: the 1st column shows nine different partially blurred observations, the 2nd column shows restorations obtained by the proposed algorithm, and the 3rd column shows restorations obtained by the method proposed in [13].

ACKNOWLEDGEMENTS

This work was supported in part by the Comisión Nacional de Ciencia y Tecnología under contract TIC2010-15137.

REFERENCES

[1] B. Amizic, S. D. Babacan, R. Molina, and A. K. Katsaggelos, "Sparse Bayesian blind image deconvolution with parameter estimation," in European Signal Processing Conference, EUSIPCO 2010, Aalborg, Denmark, August 2010, pp. 626–630.

[2] L. Bar, B. Berkels, M. Rumpf, and G. Sapiro, "A variational framework for simultaneous motion estimation and restoration of motion-blurred video," in Proc. IEEE 11th Int. Conf. Computer Vision, ICCV 2007, 2007, pp. 1–8.

[3] S. Cho, Y. Matsushita, and S. Lee, "Removing non-uniform motion blur from images," in Proc. IEEE 11th Int. Conf. Computer Vision, ICCV 2007, 2007, pp. 1–8.

[4] R. Raskar, A. Agrawal, and J. Tumblin, "Coded exposure photography: motion deblurring using fluttered shutter," ACM Trans. Graph., vol. 25, pp. 795–804, July 2006.

[5] A. Levin, P. Sand, T. S. Cho, F. Durand, and W. T. Freeman, "Motion-invariant photography," in Proc. ACM SIGGRAPH, 2008.

[6] T. S. Cho, A. Levin, F. Durand, and W. T. Freeman, "Motion blur removal with orthogonal parabolic exposures," in Proc. IEEE Int. Conf. Computational Photography (ICCP), 2010, pp. 1–8.

[7] S. Dai and Y. Wu, "Removing partial blur in a single image," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2009, pp. 2544–2551.

[8] A. Mohammad-Djafari, "A full Bayesian approach for inverse problems," in Maximum Entropy and Bayesian Methods, Kluwer Academic Publishers, 1996, pp. 135–143.

[9] S. D. Babacan, R. Molina, and A. K. Katsaggelos, "Parameter estimation in TV image restoration using variational distribution approximation," IEEE Transactions on Image Processing, vol. 17, no. 3, pp. 326–339, 2008.

[10] J. Bioucas-Dias, M. Figueiredo, and J. Oliveira, "Total-variation image deconvolution: A majorization-minimization approach," in ICASSP 2006, 2006.

[11] A. Levin, D. Lischinski, and Y. Weiss, "A closed-form solution to natural image matting," IEEE Trans. Pattern Anal. Machine Intell., vol. 30, no. 2, pp. 228–242, 2008.

[12] E. Gastal and M. Oliveira, "Shared sampling for real-time alpha matting," Computer Graphics Forum, vol. 29, no. 2, pp. 575–584, 2010.

[13] H. C. Stanley and T. Q. Nguyen, "Single image spatial variant out-of-focus blur removal," in IEEE International Conference on Image Processing, ICIP 2011, 2011.

[14] J. G. Nagy and D. P. O'Leary, "Restoring images degraded by spatially variant blur," SIAM J. Sci. Comput., vol. 19, pp. 1063–1082, July 1998.

[15] J. Jia, "Single image motion deblurring using transparency," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, CVPR '07, 2007, pp. 1–8.

[16] S. Dai and Y. Wu, "Motion from blur," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, CVPR 2008, 2008, pp. 1–8.
