1986 OPTICS LETTERS / Vol. 29, No. 17 / September 1, 2004

Rapid supersampling of multiframe sequences by use of blind deconvolution

James N. Caron

Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706

Received February 5, 2004

Under certain conditions, multiframe image sequences can be processed to produce images that achieve greater resolution through image registration and increased sampling. This technique, known as supersampling, takes advantage of the spatiotemporal data available in an undersampled imaging sequence. In this effort the image registration is replaced by application of a fast blind-deconvolution technique to remove the motion blur in the upsampled average of the image sequence. This method produces a supersampled image with significantly decreased computational requirements compared with common methods. The method and simulated test results are presented. © 2004 Optical Society of America

OCIS codes: 100.1830, 100.3010, 100.6640.

Obtaining increased spatial information from the temporal characteristics of a multiframe image sequence, known as supersampling or superresolution, has received significant attention in published literature.1–4 The objective has been to gain additional information from an image sequence by recovering higher spatial characteristics than can be obtained from a single image. Images produced at one sampling frequency are sampled at a higher frequency and combined. If the conditions permit, image resolution can be improved up to the optical cutoff frequency of the imaging system.

Much research has been focused on the special case in which frame-to-frame differences are purely translational. In these cases the supersampling processes follow the same general approach. The sampling frequency of each frame is increased to the desired sampling size. The translation errors of each frame are estimated through comparison with a chosen reference frame. The frames are realigned to the supersampled grid and averaged to create the supersampled image. The process of determining the translation errors is a major challenge and can be computationally intensive.

Here I present an alternative approach. As before, each frame is upsampled. Then without realignment the frames are averaged together. This produces a single image that has complicated motion blur embedded in it. This motion blur is removed with a blind-deconvolution algorithm. Blind-deconvolution techniques can also be computationally intensive and require significant user input. However, a recent innovation, the self-deconvolving data reconstruction algorithm (SeDDaRA), allows fast and effective identification of an unknown blur function, allowing restoration of the image.5–7 Thus it is possible to produce a supersampled image without measuring frame-to-frame translation errors and in a timely fashion.
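
The averaging step can be illustrated with a short sketch (not the author's code), assuming the frames are supplied as a list of two-dimensional NumPy arrays and that cubic interpolation via scipy.ndimage.zoom stands in for the upsampling; the function name is illustrative.

```python
# Minimal sketch of the upsample-and-average step (illustrative, not the
# author's implementation): each frame is interpolated to the finer grid
# and the frames are averaged with no registration.
import numpy as np
from scipy.ndimage import zoom

def upsample_and_average(frames, factor=2):
    """Upsample each frame by `factor` (cubic interpolation) and average."""
    upsampled = [zoom(f.astype(float), factor, order=3) for f in frames]
    return np.mean(upsampled, axis=0)
```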

To demonstrate the supersampling and blind-deconvolution technique, an imaging simulation was produced that generates not only frame-to-frame motion but also motion blur within each frame. The in-frame motion blur is important since it occurs in most imaging sequences and is not removed with image registration algorithms. However, blind-deconvolution techniques can remove at least some of this type of blur in addition to the frame-to-frame motion blur.

A multiframe sequence was created by applying 256 random translations, calculated with a first-order autoregressive expression, to a still image. Each segment of 16 images was averaged to form a 16-frame video sequence with motion blur and random noise.
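
One possible re-creation of this simulation is sketched below; the autoregressive coefficient, noise scale, and use of scipy.ndimage.shift for subpixel translation are assumptions, since the Letter does not specify them.

```python
# Hypothetical re-creation of the simulated sequence: 256 translations drawn
# from a first-order autoregressive model, with every 16 consecutive
# positions averaged into one motion-blurred video frame.
import numpy as np
from scipy.ndimage import shift

def simulate_sequence(truth, n_positions=256, group=16, rho=0.9, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    dx = dy = 0.0
    frames, stack = [], []
    for _ in range(n_positions):
        dx = rho * dx + rng.normal(scale=sigma)   # AR(1) drift in x
        dy = rho * dy + rng.normal(scale=sigma)   # AR(1) drift in y
        stack.append(shift(truth.astype(float), (dy, dx), order=3))
        if len(stack) == group:                   # 16 positions -> 1 frame
            frames.append(np.mean(stack, axis=0)) # in-frame motion blur
            stack = []
    return frames
```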

As described, the frames were upsampled and averaged together. The averaged image contains a complicated motion blur that can be removed with SeDDaRA. A mathematical representation of the averaged image in frequency space, G(u, v), is

G(u, v) = F(u, v) D(u, v) + W(u, v),  (1)

where (u, v) are the coordinates in frequency space, F(u, v) is the real scene, D(u, v) represents the blur function, and W(u, v) is a noise term.

For this study a pseudoinverse filter was used to filter the influence of the blur from the image. The deconvolution is given by

F(u, v) = G(u, v) D*(u, v) / [|D(u, v)|^2 + C^2],  (2)

where parameter C^2 is typically chosen by trial and error. This filter is fast, easy to apply, and provides a good approximation of the Wiener filter.8–11
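
A minimal sketch of such a filter, assuming the transfer function D(u, v) is supplied on the same FFT grid as the blurred image, is

```python
# Sketch of the pseudoinverse (Wiener-like) filter of Eq. (2).
import numpy as np

def pseudoinverse_filter(blurred, D, c2=1e-3):
    """Deconvolve `blurred` given the transfer function D(u, v) (frequency domain).
    `c2` plays the role of C^2 and is chosen by trial and error."""
    G = np.fft.fft2(blurred)
    F_est = G * np.conj(D) / (np.abs(D) ** 2 + c2)
    return np.real(np.fft.ifft2(F_est))
```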

Without explicit knowledge of D(u, v) the function must be estimated through blind deconvolution. Research on blind deconvolution extends back several decades, but studies by Ayers and Dainty12 spurred an increase in activity in the astronomical community.13 All methods require some prior knowledge of either the scene,14 the scene statistics,15,16 or the shape of the blur function.17 Most of these techniques are iterative and can be applied only within certain restraints. In contrast, the SeDDaRA approach can be applied to all images, provided a suitable representation of the desired spatial frequency can be found.5

The SeDDaRA process assumes the transfer function has the form

D(u, v) = [K_G S{|G(u, v) - W(u, v)|}]^α(u, v),  (3)

where α(u, v) is a tuning parameter and K_G is a real, positive scalar chosen to ensure that |D(u, v)| ≤ 1. Application of the smoothing filter S{ } assumes that D(u, v) is a slowly varying function. Assumptions for this calculation are explained in a previous paper.5

After some derivation, α(u, v) is found to be

α(u, v) = {ln[K_G S{|G(u, v) - W(u, v)|}] - ln[K_F′ S{|F′(u, v)|}]} / ln[K_G S{|G(u, v) - W(u, v)|}],  (4)

where F′(u, v) is an alternative truth image that satisfies

K_F′ S{|F′(u, v)|} = K_F S{|F(u, v)|}.  (5)

The presence of a smoothing filter greatly relaxes this condition. In relation (4), K_G and K_F′ must be determined such that |D(u, v)| ≤ 1. This condition is satisfied if we set K_G = 1/max[S{|G(u, v)|}] and K_F′ = 1/max[S{|F′(u, v)|}].

After D(u, v) has been extracted from the averaged image, both functions are inserted into relation (2) to remove the blur.
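
A sketch of Eqs. (3)–(5) in this form follows; the Gaussian kernel standing in for the smoothing filter S{ }, the constant noise floor standing in for W(u, v), and the choice of alternative truth image are assumptions made only for illustration.

```python
# Sketch of the SeDDaRA transfer-function estimate of Eqs. (3)-(5); the
# smoothing filter S{ }, noise floor, and alternative truth image f_prime
# are illustrative choices, not prescriptions from the Letter.
import numpy as np
from scipy.ndimage import gaussian_filter

def seddara_transfer_function(averaged, f_prime, noise_floor=0.0, smooth=3.0, eps=1e-12):
    G = np.abs(np.fft.fft2(averaged)) - noise_floor      # |G(u, v) - W(u, v)|
    Fp = np.abs(np.fft.fft2(f_prime))                    # |F'(u, v)|
    SG = gaussian_filter(np.clip(G, eps, None), smooth)  # S{|G - W|}
    SF = gaussian_filter(np.clip(Fp, eps, None), smooth) # S{|F'|}
    KG = 1.0 / SG.max()                                  # K_G so that |D| <= 1
    KF = 1.0 / SF.max()                                  # K_F'
    denom = np.minimum(np.log(KG * SG), -eps)            # avoid division by zero at the peak
    alpha = (np.log(KG * SG) - np.log(KF * SF)) / denom  # Eq. (4)
    return (KG * SG) ** alpha                            # Eq. (3)
```

The array returned here can be supplied as D to the pseudoinverse filter sketched earlier to complete the restoration.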

Figure 1 (left) shows a portion of the 600 × 600 truth image used for the simulation. The maximum allowable translation of 3.0 pixels produced an average frame-to-frame motion of 1.2. The frames were downsampled to 256 × 256 by use of bicubic interpolation and were rotated 90°. The frames were then upsampled to 512 × 512. The frames were averaged, as shown in Fig. 1 (right), and processed with blind deconvolution (Fig. 2).

There is significant evidence of supersampling in the image. The image is sharper, and small objects have greater contrast against their backgrounds. Examination of the images at a finer scale supports this claim. Figure 3 displays a portion of the image alongside the canal for each step in the process. The original, nondegraded image is in Fig. 3(a). Figure 3(b) is a single frame at the lower sampling frequency, i.e., the image produced by the simulated camera. The familiar stair-step pattern is evident. Figure 3(c) is the average of all frames at the final sampling frequency. Figure 3(d) is the fully processed result showing a sharper edge than the averaged image and less of a stair-step pattern along the edge. The other features closely match their respective shapes in the truth image.

The final image has artifacts produced by the interpolation process that manifest themselves as ringing from edges and lines not parallel to image edges. Since these anomalies are not spatially invariant, the deconvolution amplifies these artifacts. These artifacts are not expected to occur in a real, nonsimulated sequence.

For comparison, the image sequence was supersampled by use of a phase correlation method to measure the translation errors. The realigned images were then averaged together. A portion of the result is shown in Fig. 4. Figure 4(b) shows obvious blur when compared with the supersampled and blind-deconvolution method shown in Fig. 4(a). Application of an edge filter [Fig. 4(c)] removes some blur but shows less evidence of supersampling. Application of blind deconvolution to Fig. 4(b) shows a result [Fig. 4(d)] that is similar to Fig. 4(a) but requires significantly more computations.
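
For reference, a registration-based comparison of this kind can be sketched as follows; whole-pixel shifts on the upsampled grid are assumed, and this is not the specific phase-correlation implementation used here.

```python
# Rough sketch of the comparison method: phase-correlation registration
# followed by shift-and-add averaging (whole-pixel shifts only).
import numpy as np

def phase_correlation_shift(ref, frame):
    """Return the integer (row, col) shift that realigns `frame` with `ref`."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative (wrapped) shifts.
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

def register_and_average(frames):
    ref = frames[0]
    aligned = [np.roll(f, phase_correlation_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.mean(aligned, axis=0)
```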

Fig. 1. (Left) Left side of the original or truth image used in the simulation. This is an aerial view of Norfolk, Virginia, taken by the Ikonos satellite. (Right) Right side of the average of 16 translated frames upsampled to the final sampling frequency.

Fig. 2. Fully processed image produced by applying a blind deconvolution to the averaged image.

Fig. 3. Portion of the image for each step of the supersampling process. (a) Truth image. (b) Single frame of the multiframe sequence at the lower sampling frequency. (c) Average of the translated frames at the final sampling frequency. (d) Result of the blind deconvolution process.

Fig. 4. Comparison of two supersampling techniques. (a) Supersampling with blind deconvolution. (b) Supersampling with a phase correlation method. (c) Application of an edge filter to (b). (d) Application of blind deconvolution to (b).

This method of supersampling presents a distinct advantage over other techniques when pixel-sized translations are expected. Being able to bypass the process of estimating the frame-to-frame translation errors and realignment not only saves computation time but also eliminates the risk of increased blur that would result from miscalculations. There is an expectation that the effectiveness of this approach will decrease with increased pixel motion. That transition point is dependent on the degree of motion and the signal-to-noise ratio of the images. However, the motion blur within a frame will also increase, making the image registration process equally difficult. For this case a combination of blind deconvolution and image registration would be required to regain the maximum amount of information from the image sequence.

James N. Caron’s e-mail address is [email protected].

References

1. M. Elad and Y. Hel-Or, IEEE Trans. Image Process. 10, 1187 (2001).
2. S. Borman and R. L. Stevenson, in Proceedings of the 1998 Midwest Symposium on Circuits and Systems (Institute of Electrical and Electronics Engineers, New York, 1998), pp. 374–378.
3. D. R. Gerwe and D. J. Lee, Opt. Eng. 41, 2238 (2002).
4. J. M. Schuler and D. A. Scribner, Opt. Eng. 38, 801 (1999).
5. J. N. Caron, N. M. Namazi, and C. J. Rollins, Appl. Opt. 41, 6884 (2002).
6. J. N. Caron, N. M. Namazi, R. L. Lucke, C. J. Rollins, and P. R. Lynn, Jr., Opt. Lett. 26, 1164 (2001).
7. J. N. Caron, "Signal processing using the self-deconvolving data reconstruction algorithm," U.S. patent (February 15, 2001).
8. A. K. Jain, Fundamentals of Digital Image Processing (Prentice-Hall, Englewood Cliffs, N.J., 1989).
9. N. Wiener, The Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications (Wiley, New York, 1949).
10. C. W. Helstrom, J. Opt. Soc. Am. 57, 297 (1967).
11. D. Slepian, J. Opt. Soc. Am. 57, 918 (1967).
12. G. R. Ayers and J. C. Dainty, Opt. Lett. 13, 547 (1988).
13. D. G. Sheppard, B. R. Hunt, and M. W. Marcellin, J. Opt. Soc. Am. A 15, 978 (1998).
14. D. Kundur and D. Hatzinakos, IEEE Trans. Signal Process. 46, 375 (1998).
15. T. J. Holmes, J. Opt. Soc. Am. A 9, 1052 (1992).
16. T. J. Schulz, J. Opt. Soc. Am. A 10, 1064 (1993).
17. A. S. Carasso, SIAM (Soc. Ind. Appl. Math.) J. Appl. Math. 61, 1980 (2001).