
Depth Estimation from Blur Estimation

Tim Zaman

Delft University of Technology

1 Introduction

When images are captured with a small depth of field, objects that are away from the focal plane are out of focus and are perceived as blurry. This effect usually occurs at relatively large apertures or when the focal plane is close to the lens. An image that includes objects in focus and out of focus could therefore be segmented in terms of depth. If we are able to measure the size of this blur, we can also produce a three-dimensional depth map of the same image. In this report, blur estimation will be performed using methods proposed by Elder and Zucker (1998) and Hu and Haan (2006). The algorithms of both methods will be described, and tests from the papers will be repeated using a synthetic image (Fig. 6) and a real home-brew photograph (Fig. 1).

2 Blur estimation using the Elder and Zucker method

A method of edge detection and blur estimation has been proposed by Elder and Zucker (1998). Their method performs edge detection using local scale control, and uses reliable scales to detect valid edges suited for blur estimation. To illustrate their algorithm, we will consider the 1D signal from Fig. 2 that we call b(x). In Fig. 3 the different steps of the algorithm for our sample signal are displayed. They define the center of an edge as the point where the gradient is largest. A first-derivative steerable Gaussian filter is applied, leading to our 1st-derivative signal. This signal reaches its maximum in the gradient direction θM, where its own derivative in turn has a zero crossing. This is indicated in Fig. 3 as point c. To find the size d of the edge, we can take the distance between the locations of the largest and smallest gradient of the 1st derivative, or equivalently the zero crossings of the 3rd-derivative function. The response of the gradient could, however, be caused by noise alone. Therefore, they apply thresholds c1 and c2 on the amplitudes of the 1st and 2nd derivative, respectively. Any value smaller than these thresholds will be discarded and considered undetectable. These thresholds are obtained using parameters such as the probability of a Type I error in the image, the image noise sn, and the standard deviation of the Gaussian derivative filter. This does mean that a priori knowledge is necessary for this algorithm. The image is iterated over multiple times with different values for the Gaussian derivative filters, because our images often contain multiple scales of blur.
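To see where such thresholds come from, consider a minimal sketch (not the paper's exact derivation, and in Python rather than Matlab): white Gaussian sensor noise of standard deviation sn, filtered by a kernel g, remains Gaussian with standard deviation sn·||g||2, so thresholding the response at the corresponding quantile bounds the Type I error rate. The function name and the α parameter below are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import norm

def reliable_threshold(sigma_filter, s_n, order=1, alpha=0.05):
    """Smallest derivative response that is unlikely (at level alpha) to be
    produced by white Gaussian sensor noise of std s_n alone (sketch)."""
    half = int(6 * sigma_filter)
    impulse = np.zeros(2 * half + 1)
    impulse[half] = 1.0
    # Filtering an impulse recovers the Gaussian derivative kernel itself
    kernel = gaussian_filter1d(impulse, sigma_filter, order=order)
    noise_std = s_n * np.sqrt(np.sum(kernel ** 2))  # std of filtered noise
    return norm.ppf(1.0 - alpha) * noise_std

# Thresholds c1 (1st derivative) and c2 (2nd derivative) for noise sn = 3
c1 = reliable_threshold(2.0, 3.0, order=1)
c2 = reliable_threshold(2.0, 3.0, order=2)
```

Responses below c1 or c2 are then considered undetectable, exactly as described above.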


Fig. 1. The sample image. A Nikon FTn with its lens in focus and its body out of focus. Behind it a Nikon F4 that is even more out of focus.

Fig. 2. The signal we will use to describe blur estimation using various algorithms.

The value of this derivative filter limits the values of blur it can detect, and by using multiple filters a wider blur scale range can be measured. If the thresholds are mapped onto the 1st- and 2nd-derivative maps, we obtain minimum reliable scale maps. From the zero crossings on the 2nd-derivative map, we can now obtain the edges. Similarly, we can compute the blur size d if we move from the edge in the direction of the gradient θM until we find a zero crossing on the 3rd-derivative map. In this way, we can compute the blur size for each edge.
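On a 1D signal, the zero-crossing logic above can be sketched as follows. This is a minimal illustration in Python (the report uses Matlab), not Elder and Zucker's full algorithm: it omits steerable filters, local scale control and noise thresholds, and handles a single Gaussian-blurred step edge. For such an edge, half the blur distance d equals sqrt(σ² + σf²), where σf is the filter scale, so the blur can be recovered from d.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

sigma_blur, sigma_f = 3.0, 1.0                  # true edge blur, filter scale
b = gaussian_filter1d((np.arange(400) >= 200).astype(float), sigma_blur)

d1 = gaussian_filter1d(b, sigma_f, order=1)     # 1st-derivative signal
d3 = gaussian_filter1d(b, sigma_f, order=3)     # 3rd-derivative signal

edge = int(np.argmax(d1))                       # gradient maximum = edge center
z3 = np.where(np.diff(np.sign(d3)) != 0)[0]     # zero crossings of 3rd derivative
left = z3[z3 < edge].max()                      # nearest crossing left of edge
right = z3[z3 > edge].min()                     # nearest crossing right of edge
d = right - left                                # blur distance d

# For a Gaussian-blurred step, (d/2)^2 = sigma_blur^2 + sigma_f^2
sigma_est = np.sqrt((d / 2.0) ** 2 - sigma_f ** 2)
```

The recovered sigma_est lands close to the true blur of 3 px; the residual error comes from the discrete positions of the zero crossings, a point returned to in the discussion.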

3 Blur estimation using the Hu and Haan method

Fig. 3. The edge signal and its consecutive derivatives for the Elder and Zucker method. Indicated are the zero crossing of the 2nd derivative, the center of the blur at point c, and the zero crossings of the 3rd derivative, which define the blur distance d. The bottom subplots show the local scale control thresholds.

A straightforward method for blur estimation has been proposed by Hu and Haan (2006). In their approach, they re-blur the signal twice, with Gaussian kernels σa and σb, to determine the local blur σ of the signal. Similarly to the previous section, we illustrate their algorithm using the 1D signal from Fig. 2 that we call b(x). For clarity, the steps taken are displayed in a block diagram in Fig. 4. The signal is convolved with Gaussian kernels of two different standard deviations σa and σb, leading to two signals ba(x) and bb(x). To make the blur estimate independent of amplitude and offset, the ratio r(x) is computed.

r(x) = (b(x) − ba(x)) / (ba(x) − bb(x))    (1)

The difference ratio will now peak where the difference between b(x) and the re-blurred versions is large. This happens at points where the signal changes significantly in amplitude, exactly the points where the blurring has had the most impact. In this local signal, only the point where r(x) is largest is of interest, because it defines σ for the entire area; we assume that the blur is locally the same within that area. Therefore we apply a maximum filter with a certain window, which results in rmax(x). This maximum ratio can be solved for mathematically using σ, σa and σb. If we assume:

σa, σb ≫ σ    (2)

then we can substitute the Gaussian functions into (1) at the peak location and rewrite the equation to solve for our blur estimate:

σ ≈ σa · σb / ((σb − σa) · rmax(x) + σb)    (3)


The intermediate steps and signals computed from our 1D sample signal are displayed in the graphs in Fig. 5. We can see that it computes the smallest blur size locally around the edge as σ ≈ 1.1. The rest of the signal contains little information, so the ratio there is low. This leads to a blur size close to that obtained when rmax(x) → 0, which is the largest distinguishable blur of the signal. However, as the blur approaches this value, (2) no longer holds and the blur estimate becomes increasingly invalid. Because only a small number of computations is needed, this is a fast and effective way of estimating blur. It does not need any preprocessing (e.g. edge detection or multiple iterations) and is therefore fairly simple. This method was approximately 6 times faster than the Elder and Zucker approach, for similar results.
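The whole pipeline of Fig. 4 fits in a few lines. Below is a hedged sketch in Python (the report uses Matlab); the window size and the eps guard against division by zero are our own choices, not from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def estimate_blur(b, sigma_a=4.0, sigma_b=6.0, window=15, eps=1e-6):
    """Sketch of eqs. (1)-(3): re-blur b twice, take the difference ratio,
    max-filter it locally, and map the maximum ratio to a blur estimate."""
    b = np.asarray(b, dtype=float)
    b_a = gaussian_filter(b, sigma_a)              # ba(x)
    b_b = gaussian_filter(b, sigma_b)              # bb(x)
    r = (b - b_a) / (b_a - b_b + eps)              # eq. (1)
    r_max = maximum_filter(r, size=window)         # local maximum ratio
    r_max = np.clip(r_max, eps, None)              # keep estimate in (0, sigma_a]
    return sigma_a * sigma_b / ((sigma_b - sigma_a) * r_max + sigma_b)  # eq. (3)

# Example: a step edge blurred with sigma = 1.5
signal = gaussian_filter((np.arange(400) >= 200).astype(float), 1.5)
sigma_map = estimate_blur(signal)
```

Around the edge the estimate lands close to the true σ = 1.5 (slightly low, as assumption (2) is only approximately satisfied); in flat regions the clipped ratio pushes the estimate toward the largest distinguishable blur, as described above.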

Fig. 4. The block diagram of the Hu and Haan method.

Fig. 5. All steps of the Hu and Haan algorithm in plots.


4 Depth estimation using blur estimation

The main origins of blur are objects being out of focus, shadows cast by objects, or objects having a physical surface texture that is perceived as blur. An object out of focus produces blur because it is too far away from the focal plane; this already hints at distance or depth. Although a shadow cast by an object from a light source that is not a point source could also hint at distance and physical size, in this report we will stick to blur caused by defocus. The amount of blur in a part of such an image increases with depth. Therefore, if we can estimate the amount of blur, we can estimate the relative depth. If we knew all the camera's parameters, it would even be possible to measure absolute depth using the equation stated by Pentland (1987) in his paper on depth-from-defocus:

D ≈ F · v0 / (v0 − F − σ · f)    (4)

In this equation, D is the distance from the lens to the point of interest, v0 the distance between lens and focal plane, F the focal length, f the aperture number of the lens, and σ the Gaussian standard deviation or blur size. Equation (4) thus directly relates the estimated blur to an absolute depth estimate.
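Plugging equation (4) straight into code gives a small depth calculator; the parameter values below are illustrative, not taken from the report. As a sanity check, with σ = 0 the expression reduces to the thin-lens in-focus distance F·v0/(v0 − F).

```python
def depth_from_defocus(sigma, F, v0, f_number):
    """Eq. (4): distance D from lens to scene point. All lengths (F, v0,
    sigma and the returned D) must share one unit, e.g. millimetres."""
    return F * v0 / (v0 - F - sigma * f_number)

# Illustrative values: a 50 mm lens, lens-to-focal-plane distance v0 = 51 mm,
# aperture f/2.8
in_focus = depth_from_defocus(0.0, F=50.0, v0=51.0, f_number=2.8)   # 2550 mm
blurred = depth_from_defocus(0.02, F=50.0, v0=51.0, f_number=2.8)   # farther away
```

Note that a larger blur σ shrinks the denominator and thus yields a larger D, i.e. this form of the equation only models points on one side of the focal plane, a limitation returned to in the discussion.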

5 Experiment on synthetic image

The experiments in the paper of Elder and Zucker (1998) (p. 709, Fig. 7) will be repeated for different standard deviation levels sn of Gaussian noise using Matlab. The synthetic image used is displayed in Fig. 6. White Gaussian noise with standard deviations sn = {0, 1, 3} has been imposed on this image, leading to the three images for our experiment. When we run the image through Elder and Zucker's algorithm, we obtain the edge location and the distance d, as depicted in Fig. 7. In this figure we can see the maps of the 2nd and 3rd derivatives in the direction of the 1st derivative. Indicated are the zero crossings for both derivatives. We can see that the zero crossings on the 2nd-derivative map correctly indicate the edge line, and the zero crossings on the 3rd-derivative map show the increase of the blur size. From the blur distance d the blur scale σ is calculated. For the three images, the computed blur scale is shown in Fig. 8. We can see that the blur is estimated very accurately for a noiseless image. For the noisy images, the performance of the estimation becomes worse, especially when the blur size and distance d increase.
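Synthetic inputs of this kind can be generated along the following lines. This is a sketch (in Python rather than Matlab) of an image in the spirit of Fig. 6: a step edge whose blur grows linearly down the image, with noise added at the three experiment levels. The image size and random seed are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

h, w = 256, 256
step = (np.arange(w) >= w // 2).astype(float) * 255.0   # vertical step edge

# Blur each row with a sigma increasing linearly from 1 px to 26.6 px
clean = np.array([gaussian_filter1d(step, 1.0 + 25.6 * y / (h - 1))
                  for y in range(h)])

# Impose white Gaussian noise at the three experiment levels sn = {0, 1, 3}
rng = np.random.default_rng(0)
images = {sn: clean + rng.normal(0.0, sn, clean.shape) for sn in (0, 1, 3)}
```

Since both the true blur and the noise level of each row are known exactly, plots like Fig. 8 (estimated versus actual blur scale) can be produced directly from such images.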

6 Depth segmentation on a photograph

Fig. 6. Synthetic image with zero noise and a linear blur grade, ranging from σb = 1 px to σb = 26.6 px.

Fig. 7. The 2nd- and 3rd-derivative magnitude maps in the direction of the gradient, with their zero crossings indicated by lines. On the right, the sample image with the combined zero crossings imposed.

Fig. 8. Estimated vs. actual blur scale along the edge for the sample image with three imposed Gaussian white noise levels with standard deviation sn = {0, 1, 3}.

Similarly to the experiments in the paper of Elder and Zucker (1998) (p. 713, Fig. 10), a depth segmentation will be performed using the estimated blur levels from both methods described in this report. The sample image used was taken by the author and is depicted in Fig. 1. The results using the Elder and Zucker method are shown in Fig. 9. Shown are the minimum reliable scale (mrs) maps of both derivatives, and the segments for three different blurs (or, equivalently, depths). In the mrs maps, a light color means no reliable scale was found and a darker color means a smaller reliable scale was found. We can see that no reliable scale was found for the plain background and the parts with little local variance. In the segmentations, we can see that the method has successfully found the focal plane around the segmentation of σ = 0.5; behind that, more out of focus, is the first camera, and even further behind, at a larger distance from our focal plane, the background is also found.

When we use the Hu and Haan method on this same photograph, as shown in Fig. 10, we find similar results. Mostly due to the maximum filter, the resolution of this depth map in the image plane is significantly lower, which is evident from the blur map. We can also see maps of the preceding steps necessary for the computation of this final blur map. Finally, for comparison with the previous method, the blur map has been thresholded for values around σ = 0.5 and σ = 1. If we then compare these segmented parts between the two methods, we can see that their results are fairly similar for this image. The largest difference is that the Hu and Haan method is fairly coarse, while the Elder and Zucker method keeps the blurred edges fairly well intact. There was also a large difference in computation time: 0.6 s for the Elder and Zucker method versus 0.1 s for the Hu and Haan method. As a final result, I have made a three-dimensional image, as shown in Fig. 11. Subjectively, I can state that the depth map for both methods feels correct, and it could possibly be made even more accurate with proper post-processing.
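The thresholded segmentations used for both methods boil down to slicing the estimated blur map into intervals. A minimal sketch; the band edges below mirror the figures, and the helper name is our own.

```python
import numpy as np

def segment_depth(blur_map, bands):
    """Slice an estimated blur map into binary depth layers, one per
    (low, high] blur interval, ordered near to far."""
    blur_map = np.asarray(blur_map, dtype=float)
    return [(blur_map > lo) & (blur_map <= hi) for lo, hi in bands]

# Example with the bands used for the Hu and Haan segmentation
blur = np.array([[0.2, 0.4],
                 [0.8, 1.4]])
layers = segment_depth(blur, [(0.0, 0.5), (0.5, 1.5)])
```

Each returned layer is a binary mask; stacking the masks at heights proportional to their blur interval gives a three-dimensional rendering like Fig. 11.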

Fig. 9. Depth segmentation using blur estimation by using the Elder and Zucker method. Shown are the minimum reliable scale maps for the 1st and 2nd derivative. The three images on the bottom row indicate segmented, thresholded depths in the image for σb = {8, 1, 0.5}.


Fig. 10. Depth segmentation using blur estimation by using the Hu and Haan method. imA and imB are the two re-blurred images with σa = 4 and σb = 6. Also shown are the maps of the numerator imorig − imA and denominator imA − imB in the computation of r, which is also shown. In the blur map, a lighter color is a larger sigma, with true white being the maximum. Beside the blur map, two images are segmented for σ = (0.5, 1.5) and σ = (0, 0.5).

7 Discussion

Fig. 11. Three-dimensional side-view depth maps of the test image using both of the indicated algorithms.

Blur estimation as used for the described purposes is only possible when there is a small depth of field, with objects that lie outside that depth of field. Objects closer to the lens than the focal plane will blur similarly to objects further away from it. Hence we are not able to detect whether an object lies in front of or behind the focal plane. All of this assumes that we use a camera that is indeed able to produce a small depth of field. Many digital photographic sensors are small, with small lenses, and will therefore produce images with a very large depth of field, on which blur estimation will not have much effect. As stated before, we cannot distinguish defocus blur from blur caused by the penumbra of a shadow or from physical blur (a texture, for example). Therefore, misregistration can occur in even the best blur estimation algorithm. Elder and Zucker do not specifically state how to determine the distance from the center of a blurred signal to its edge. Although this is evident for a 1D signal, doing this in a 2D image in the direction of the gradient is more difficult if we consider the fact that the positions of the zero crossings are in fact discrete. Finding the exact distance between these points would be complex and would probably be computationally expensive.
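One common remedy for this discreteness, not taken from either paper, is to localize each zero crossing with sub-sample accuracy by linear interpolation between the two samples that bracket the sign change:

```python
import numpy as np

def subpixel_zero_crossings(y):
    """Zero-crossing positions of a sampled signal, refined by linear
    interpolation between the two samples bracketing each sign change."""
    y = np.asarray(y, dtype=float)
    i = np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]  # bracketing samples
    return i + y[i] / (y[i] - y[i + 1])                    # interpolated offsets

# Example: the line y = x - 2.25 sampled at integers crosses zero at 2.25
print(subpixel_zero_crossings(np.arange(5) - 2.25))        # [2.25]
```

This refines each crossing at negligible cost; measuring the distance along an arbitrary gradient direction in a 2D image remains the harder part of the problem.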

References

J.H. Elder and S.W. Zucker: Local Scale Control for Edge Detection and Blur Estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 7 (1998)

H. Hu and G. de Haan: Low Cost Robust Blur Estimator, Proceedings ICIP 2006,Atlanta

A.P. Pentland: A New Sense for Depth of Field, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 4 (1987)