
Mutual Information Refinement for Flash-no-Flash Image Alignment

Sami Varjo 1, Jari Hannuksela 1, Olli Silvén 1, and Sakari Alenius 2

1 Machine Vision Group, Infotech Oulu and Department of Electrical and Information Engineering, P.O. Box 4500, FI-90014 University of Oulu, Finland
[email protected], [email protected], [email protected]

2 Nokia Research Center, Tampere, Finland
[email protected]

Abstract. Flash-no-flash imaging aims to combine ambient light images with details available in flash images. Flash can alter color intensities radically, leading to changes in gradient directions and strengths, as well as natural shadows possibly being removed and new ones created. This makes flash-no-flash image pair alignment a challenging problem. In this paper, we present a new image registration method utilizing mutual information driven point matching accuracy refinement. For a phase correlation based method, the accuracy improvement through the suggested point refinement was over 40 %. The new method also performed better than the reference methods SIFT and SURF by 3.0 and 9.1 % respectively in alignment accuracy. Visual inspection also confirmed that in several cases the proposed method succeeded in registering flash-no-flash image pairs where the tested reference methods failed.

    Keywords: registration, illumination, flash

    1 Introduction

Computational imaging is used not only to create, but also to improve existing digital images. Among the goals of image fusion research are a higher dynamic range for more natural coloring of the scene [12], a larger field of view via image mosaicking [30], and increased information content for super resolution images [18]. All methods combining data from several sources require the input images to be spatially aligned, and failing to do so usually results in anomalies, like ghosting effects, in the final result.

Typically, capturing images in low ambient light requires long exposure times. This easily leads to image blurring from small movements of the camera if a tripod is not used. Other options are to increase the camera sensor sensitivity, to increase the aperture size, or to use external lighting. Increasing the sensor sensitivity easily leads to noisy images, and aperture size adjustment is often limited by the optics.

Flash is often used to provide extra illumination on the scene to reduce the required exposure time. While flash images appear sharp, flash renders colors that are often unnatural compared to ambient light. Dark areas may appear too light and light areas too dark. This also often alters the directions and strengths of gradients in the two frames. Directional flash also has limited power, often yielding uneven lighting on the scene: the foreground and center of the scene are illuminated more than the background and edges. Flash also reflects from shiny surfaces like glass or metal, producing artifacts. These problems can lead to a situation where the ambient light image has more information in certain areas than the flash image, and vice versa. Automatic camera focus may also differ depending on whether the foreground or background of the scene is illuminated, resulting in out-of-focus blurring in different areas of the images.

The differences between ambient light images and flash images make flash-no-flash image pair alignment a challenging problem. However, these same differences make the fusion of flash-no-flash images desirable. While fusion methods [19, 2] and the removal of artifacts introduced by flash have already been addressed [1], the flash images have typically been obtained using a tripod, or alignment is assumed to be solved without being studied. For example, Eisemann and Durand utilize a gradient extraction-based method for image alignment when describing their flash image fusion, noting that more advanced approaches could be used [5].

Since flash-no-flash image alignment is a new research area, we discuss in the following chapters several popular alignment approaches and their suitability for the problem. We propose the use of mutual information for refining the point pair matching, and a new alignment method based on that concept. In the experiments, we show that the refinement step can improve the alignment accuracy considerably, and that the new method performs better than the selected reference methods in alignment accuracy with flash-no-flash image pairs.

    2 Alignment Methods

The goal in image alignment is to solve the transformation between two or more captured images. Alignment methods can be roughly divided into feature based and image based approaches. In the former, interest points are extracted from the images and matched based on selected features. The matching point sets are then used to solve the transformation. Image based methods use the image data directly for solving the global transformations.

    2.1 Feature Based Methods

In feature based approaches, interest points are extracted from the input images and suitable descriptors are calculated for each of these points. The interest points and the descriptors should be invariant to the possible transformations so that matching points can be found. The interest points are typically visually distinct in the images, such as corners or centroids of continuous areas. The mapping between images can be solved using least squares type minimization approaches if the point correspondences are well established. Usually, nonlinear optimization methods are used to solve the transformations in overdetermined point sets to achieve more accurate results.

The Harris corner detector has been widely utilized as an interest point detector [10]. Other well known methods for interest points and descriptors are the Smallest Univalue Segment Assimilating Nuclei (SUSAN) [23], the Scale Invariant Feature Transform (SIFT) [15], and the Speeded Up Robust Features (SURF) [3]. The above methods rely on spatial image information; frequency domain approaches such as phase congruency also exist [13]. Maximally Stable Extremal Regions (MSER) [16] is an example of an interest area detector.

Figure 1 presents examples illustrating the problems associated with aligning flash-no-flash images. The Harris corner value for a point is calculated using the gradients to estimate the eigenvalues of the point's autocorrelation matrix in a given neighborhood. Flash clearly alters the gradients in the scenes. Foreground features are highlighted with flash, while the ambient light image picks up features in the background or behind transparent objects like windows. Interest point selection based on gradients can therefore lead to point sets where no or only a few matches exist.

SUSAN is based on assigning a gray value similarity measure to a pixel observed in a circular neighborhood [23]. This non-linear filtering method gives a high response for points where the neighborhood intensities are similar to the center pixel value. Interest points like corners and edges have low values, which can be found by thresholding. With flash-no-flash image pairs, prominent edges can differ, leading to false matching. Even when there are some corresponding edges, the feature points match poorly due to changes in image intensities, and the approach fails in practice.

Fig. 1. Left: Harris feature response for a flash image (top) and no-flash image (bottom); Center: SUSAN responses and interest points (yellow) for a flash image (top) and no-flash image (bottom); Right: MSER features in a flash (top) and no-flash (bottom) image. In the MSER images, the yellow areas are the found stable regions and the red marks are seed points.

MSER can be considered to be based on the well known watershed segmentation algorithm [17]. Compared to watershed, instead of finding segmentation boundaries, MSER seeks regions that are stable over different watershed levels. The local minimum and the thresholding level define a maximally stable extremal region. Here too, flash alters the scene too much for this approach to perform reliably. The example image pair in Fig. 1 shows that the extracted areas not only vary in shape and size, but the stable areas also vary in location.

SIFT key points are located at the maxima and minima of difference-of-Gaussian filtered images in scale space. The descriptors, based on gradient histograms, are normalized using the local maximum orientation. SURF also relies on gradients when locating and calculating descriptors for the key points. While flash may relocate shadow edges, the gradient directions may also change, since light areas can appear darker or vice versa. The heavy utilization of gradient strengths and directions affects the robustness of these methods with flash-no-flash image pair alignment.

    2.2 Image Based Methods

There are several methods where no key points are extracted for image alignment, but the whole or a large part of the available image is used. Normalized cross correlation and phase correlation based approaches are the most well known image based alignment methods. The Fourier-Mellin transformation has been widely utilized for solving image translations and rotations [22].

Mutual information (MI) has proven useful for registering multi-modal data in medical applications [20]. MI is an entropy based measure, originating from Shannon's information theory, describing the shared information content between two signals. The typical approach is to approximate the derivative of the images' mutual information with respect to all transformation parameters and apply a stochastic search to find the optimal parameters [28].

Hybrid methods combining image based and interest point approaches can also be made. Coarse alignment using an image based method, with a feature based approach for refining the result, has been used for enhancing low-light images on mobile devices [25] and for panorama stitching [21].

    3 Proposed Method

We propose a method where interest point matches found using block phase correlation are refined using mutual information as the improvement criterion, to overcome the problems discussed earlier. MI considers the joint distribution of gray values in the inputs instead of comparing the gray values directly. Mutual information can therefore describe the similarity of two image patches, enabling alignment even in multi-modal imaging cases [20]. Pulli et al. have presented a similar approach, but without the MI driven point refinement step [21].

The alignment is initialized by dividing one of the input images into sub-windows. A single point is selected in each sub-window as the reference point that is matched in the other image using phase correlation. This gives the initial set of matching points with good control over the number of interest points, and distributes them uniformly over the images. Iterative mutual information based refinement is used to find more accurate point matching in each of the sub-windows prior to solving the projective mapping between the point sets. The method is described and discussed in more detail below:

1. Solve a rough prealignment from low scale images
2. Divide the input images into sub-windows
3. Find a matching point for each sub-window using phase correlation
4. Refine the point pair matching using mutual information
5. Apply RANSAC to remove outliers
6. Estimate the global transformation

Prealignment is required since the phase correlation applied for point matching is done in fairly small windows. These windows must overlap by at least half for the phase correlation to work reliably. For example, when point matching in the sub-windows is done using 128 pixel correlation windows, the overlap must be at least 64 pixels. Because the tolerance is fairly large, the prealignment can be done at a rough scale. Here, down sampling by a factor of 4 was used.

Prealignment rotation is solved with the method introduced by Vandewalle et al. [26]. Their method calculates the frequency content of the images as a function of the rotation angle by integrating the signal over radial lines in the Fourier domain. The rotation can then be efficiently solved with a one-dimensional phase correlation, while the translation is solved thereafter with conventional 2-D phase correlation.

Sub-Windowing of the input images for point matching is done in two phases. First, one image is divided over an evenly spaced grid and a single interest point is selected for each sub-window defined by the grid. In the second step, for point matching, the phase correlation windows are established around the found point coordinates in both inputs. This approach distributes the points evenly over the image surface. This way, no strong assumptions about the underlying transformation are made, yet it is likely that the point neighborhood contains some useful information that can be utilized later in the mutual information based refinement step.

Here, strong interest points in the initial grid of the flash image are located using the Harris corner detector. Other point selection techniques, like a strong Sobel response, might be utilized as well. Note also that the point is selected in only one input image using this method; the point matching is done in the other image in the vicinity of the same coordinates in the next step.
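The grid-based point selection can be sketched as follows. This is a minimal illustration assuming NumPy and SciPy; `harris_response` and `grid_interest_points` are illustrative names, and the window size and the constant k are common Harris defaults rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, window=3, k=0.04):
    """Harris corner measure det(M) - k*trace(M)^2, where M is the
    windowed autocorrelation matrix of the image gradients."""
    gy, gx = np.gradient(img.astype(float))
    sxx = uniform_filter(gx * gx, window)   # windowed sums of
    syy = uniform_filter(gy * gy, window)   # gradient products:
    sxy = uniform_filter(gx * gy, window)   # the entries of M
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

def grid_interest_points(img, grid=4):
    """One interest point per sub-window: the strongest Harris
    response inside each cell of an evenly spaced grid."""
    h, w = img.shape
    points = []
    for i in range(grid):
        for j in range(grid):
            r0, r1 = i * h // grid, (i + 1) * h // grid
            c0, c1 = j * w // grid, (j + 1) * w // grid
            cell = harris_response(img[r0:r1, c0:c1])
            r, c = np.unravel_index(np.argmax(cell), cell.shape)
            points.append((r0 + r, c0 + c))
    return points
```

Selecting one maximum per cell, rather than globally strong corners, is what keeps the points spread over the whole image surface.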

Matching Points with phase correlation is based on the Fourier shift theorem. Let image f2(x, y) = f1(x + Δx, y + Δy), and let F1 and F2 denote the Fourier transforms of f1 and f2. The shift in the spatial domain is then represented as a phase shift in the frequency domain (1). This relationship can be used to solve the translation between two image blocks [14]. The cross power spectrum of F1 and F2 contains the needed phase correlation information, and the translation can be solved either in the spatial or the Fourier domain. Taking the inverse transformation F^-1 of the normalized cross power spectrum yields the correlation C (2), and finding its maximum gives the sought translation in the spatial domain (3). F2* denotes the complex conjugate of F2. Phase correlation is capable of producing sub-pixel translation accuracy [9].

F2(wx, wy) = F1(wx, wy) e^{j(wx Δx + wy Δy)} ,    (1)

C(x, y) = F^-1( F1 F2* / |F1 F2*| ) ,    (2)

(Δx, Δy) = argmax_(x,y) C(x, y) .    (3)
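Equations (1)-(3) can be sketched with NumPy's FFT. This is a minimal illustration under the convention f2(x, y) = f1(x + Δx, y + Δy) used above; `phase_correlate` is a hypothetical name, and the small constant added to the spectrum magnitude is only a numerical guard, not part of the method.

```python
import numpy as np

def phase_correlate(f1, f2):
    """Integer translation (dy, dx) with f2(x, y) ~ f1(x + dx, y + dy):
    the argmax of the inverse FFT of the normalized cross power
    spectrum F1 * conj(F2) / |F1 * conj(F2)|  (eqs. 1-3)."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = F1 * np.conj(F2)
    C = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(C), C.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, C.shape))
```

With an exactly circularly shifted block this recovers the shift exactly; on real image blocks, the window size and the correlation threshold discussed below determine the reliability.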

Point correspondences where the correlation is below a threshold value are rejected from the point set. Low correlation can originate from flat image areas, as in the well known aperture problem. Since phase correlation utilizes the phase difference between the images instead of gray values for finding the translation, it is less affected by the intensity changes induced by the flash than approaches using pixel data directly. Further, zero mean, unit variance scaling can be used to reduce the lighting variations in the input image blocks [29].

Point Set Refining is applied to the initial point sets to achieve more accurate point matching. MI is a measure based on the joint distribution of gray values in two inputs. Jacquet et al. discuss in detail the behavior of the probabilities defined by the joint distribution, stating that when two images are well aligned but the intensities of the same structure differ, the set of probabilities will not change; those probabilities may merely be assigned to shifted gray value combinations [11]. Hence, the Shannon entropy will not be affected, since it is invariant under permutation of elements. This makes MI an appealing distance measure for cases where the image contents are heavily distorted by lighting or other nonlinear transformations.

The MI of two discrete signals X and Y is defined in (4), where p(x) and p(y) are the marginal probabilities and p(x, y) is the joint probability mass function:

MI(X, Y) = Σ_{x∈X} Σ_{y∈Y} p(x, y) log2( p(x, y) / (p(x) p(y)) ) ,    (4)

In practice, the MI can be calculated by forming a 2-D joint histogram of the gray values of the two input images. The histogram is normalized by the interest area size to give the probability distribution p(x, y). The row and column sums of the histogram then give the marginal gray value probabilities p(x) and p(y) of the two input images. These probability estimates can be used to calculate the respective entropies, and finally the mutual information, as presented in (5)-(8). The MI value can be normalized by dividing it by the minimum of the entropies H(X) and H(Y) [6] or by their sum [24].

H(X, Y) = -Σ_x Σ_y p(x, y) log2 p(x, y) ,    (5)

H(X) = -Σ_x p(x) log2 p(x) ,    (6)

H(Y) = -Σ_y p(y) log2 p(y) ,    (7)

MI(X, Y) = H(X) + H(Y) - H(X, Y) .    (8)
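The histogram-based computation in (5)-(8) is a few lines of NumPy. A minimal sketch; `mutual_information` is an illustrative name, and the 32 gray-value bins are an assumption, not a value from the paper.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """MI(X, Y) = H(X) + H(Y) - H(X, Y) in bits (eqs. 5-8),
    from the normalized 2-D joint gray-value histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()      # joint distribution p(x, y)
    px = pxy.sum(axis=1)           # marginal p(x): row sums
    py = pxy.sum(axis=0)           # marginal p(y): column sums

    def entropy(p):
        p = p[p > 0]               # 0 * log 0 is taken as 0
        return -np.sum(p * np.log2(p))

    return entropy(px) + entropy(py) - entropy(pxy)
```

For two identical signals the joint histogram is diagonal, so MI equals H(X); for independent signals it is close to zero (up to the finite-sample bias of the histogram estimate).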

For refining the point matching, the mutual information is calculated between the window around the point in one input image and the windows around the matched point and its 8-neighborhood in the other input image. The matching point location is adjusted by one pixel if higher mutual information is found in the 8-neighborhood. Iterating this refining step several times allows improved point matching to be found within the given search radius. Here, the refining was limited to 8 iterations, while usually only a couple of iterations (2-6) were needed.
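The refinement can be sketched as a one-pixel hill climb on MI. This is an illustrative sketch, not the authors' code: `mi` and `refine_match` are assumed names, and the window size and bin count are assumptions.

```python
import numpy as np

def mi(a, b, bins=16):
    """Mutual information (bits) of two equally sized patches, eq. (4)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    outer = np.outer(pxy.sum(axis=1), pxy.sum(axis=0))
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / outer[nz])))

def refine_match(ref, mov, p, q, win=16, max_iter=8):
    """Adjust the matched point q in `mov` one pixel at a time toward
    the 8-neighbor whose window has the highest MI with the window
    around p in `ref`; stop when no neighbor improves the score."""
    def patch(img, c):
        return img[c[0] - win:c[0] + win, c[1] - win:c[1] + win]
    best = mi(patch(ref, p), patch(mov, q))
    for _ in range(max_iter):
        cands = [(q[0] + dr, q[1] + ds)
                 for dr in (-1, 0, 1) for ds in (-1, 0, 1)
                 if (dr, ds) != (0, 0)]
        scores = [mi(patch(ref, p), patch(mov, c)) for c in cands]
        k = int(np.argmax(scores))
        if scores[k] <= best:
            break                  # no neighbor improves: done
        best, q = scores[k], cands[k]
    return q
```

Each iteration moves the point by at most one pixel, so the iteration limit bounds the effective search radius of the refinement.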

Global Transformation, described by a homogeneous 3x3 matrix with 8 degrees of freedom, was solved with non-linear optimization. Despite pruning the point set by the correlation threshold and the mutual information refining, the point sets may contain outliers that must be rejected beforehand. Here, RANSAC [8] was used to produce the final point set prior to solving the projective transformation.
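The outlier rejection and mapping estimation step could be sketched as follows. Note the simplification: this fits the homography with a linear direct linear transform (DLT) rather than the non-linear optimization used in the paper, and the function names, iteration count, and inlier threshold are illustrative assumptions.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: 3x3 homography H mapping src -> dst
    in homogeneous coordinates, from >= 4 point pairs, via SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)      # null vector = stacked H
    return H / H[2, 2]

def project(H, pts):
    """Apply H to Nx2 points and dehomogenize."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, tol=2.0, seed=0):
    """RANSAC [8]: fit H to random 4-point samples, keep the model
    with the most inliers, then refit H on all of its inliers."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return homography_dlt(src[best], dst[best]), best
```

The refit on the full inlier set plays the role of the final projective transformation estimate; in the paper this step is solved with non-linear optimization instead.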

4 Results and Discussion

Test images were taken with a Nikon D80 digital camera using its internal flash. The original images, with a size of 3872x2592 pixels, were down sampled by a factor of two for the Matlab implementations. The effects of the phase correlation window size for point matching, as well as the effect of the refinement MI window size, were studied to optimize the proposed method. The proposed method's accuracy in flash-no-flash image alignment was also compared to state-of-the-art methods. Figure 2 presents an example of matched points in a flash-no-flash image pair. The resulting image after fusing the aligned images using the method described in [2] is also presented.

Fig. 2. Matched points in a flash-no-flash image pair using blockwise phase correlation with mutual information based point refining, and the fused result image.

    4.1 Effects of the Window Sizes

The mutual information window size had no clearly predictable effect on the resulting image pair registration accuracy. It appears that the MI window size does not have to be very large to achieve the refinement effect. Without MI refinement, the approach improved the global mutual information (GMI) by 20.2 % on average, while the utilization of refinement increased the GMI by 30.2 to 31.3 % when the MI window size was varied between 25 and 150 pixels (Table 1). The best result was achieved with a 75 pixel window size, which was used later on.

Table 1. Effect of the window size in mutual information refinement on GMI. The best values are shown in bold in the original.

                               Mutual information window size
image  Unregistered  No MI     25      50      75      100     150     200
1      0,1247        0,1562    0,1550  0,1546  0,1558  0,1559  0,1560  0,1560
2      0,1216        0,1514    0,1532  0,1540  0,1547  0,1542  0,1544  0,1546
3      0,1157        0,1535    0,1529  0,1538  0,1539  0,1528  0,1508  0,1508
4      0,1073        0,1513    0,1516  0,1520  0,1498  0,1466  0,1460  0,1469
5      0,1072        0,1278    0,1278  0,1278  0,1282  0,1281  0,1281  0,1281
6      0,1649        0,1804    0,1840  0,1841  0,1840  0,1850  0,1851  0,1853
7      0,1444        0,1736    0,1720  0,1721  0,1720  0,1705  fail    fail
8      0,2901        0,3600    0,4107  0,4136  0,4120  0,4142  0,4137  0,4116
9      0,1171        0,1196    0,1236  0,1231  0,1227  0,1236  0,1206  0,1190
10     0,0834        0,0875    0,0943  0,0978  0,0981  0,0949  0,0991  0,0964
11     0,1076        0,1151    0,1141  0,1163  0,1158  0,1149  0,1161  0,1149
12     0,1007        0,1221    0,1247  0,1244  0,1255  0,1275  0,1277  0,1277
13     0,1254        0,1461    0,1456  0,1453  0,1463  0,1460  0,1460  0,1467
14     0,0508        0,1246    0,1239  0,1234  0,1272  0,1267  0,1228  0,1202
15     0,1182        0,1858    0,1863  0,1866  0,1871  0,1833  0,1862  0,1848

The phase correlation window size had a substantial effect on the accuracy of the basic method. Phase correlation window sizes of 16, 32, 64, 128 and 256 were tested. The GMI improved by 1.0, 2.3 and 7.8 percent at each doubling of the window from 16 to 128 pixels. Doubling the window further did not improve the results. It is also good to notice that the computational time increases quadratically with the correlation window size. Here, a window of 128 pixels was selected for phase correlation based point matching.

    4.2 Comparison to State-of-the-Art Methods

The reference SIFT implementation was vlSIFT from Vedaldi and Fulkerson [27]. For SURF, a Matlab implementation based on OpenSURF was used [7]. The same RANSAC and homography estimation methods were used as with the proposed method. The described method without the mutual information point refinement step was also tested (BlockPhc). In addition, the method by Pulli et al. presented in [21] was applied (PA).

Table 2 contains the alignment accuracy results. The average root mean square errors (RMSE) for images registered with the proposed method without MI refinement, the proposed method, SIFT, SURF, and PA were 0.2857, 0.2816, 0.2819, 0.2825, and 0.2860, respectively. The average RMSE for unregistered image pairs was 0.2915. The addition of the mutual information refinement step improves the accuracy of the block based phase correlation method by 41.6 %. The improvement over PA was 44.4 %. The new method also yielded more accurate results than

Table 2. Comparison of the proposed method (BlockPhc+MI) to the method without the mutual information refinement step (BlockPhc), SIFT, SURF, and the method by Pulli et al. (PA). The best values are shown in bold in the original.

                     RMSE
image  Unregistered  PA       BlockPhc  BlockPhc+MI  SIFT     SURF
1      0,24499       0,24173  0,23991   0,23949      0,23981  0,24457
2      0,24433       0,24069  0,23994   0,23849      0,23924  0,24005
3      0,24980       0,24466  0,24859   0,23854      0,23925  0,24362
4      0,25386       0,24997  0,25541   0,24238      0,23919  0,24199
5      0,24582       0,24143  0,24064   0,23970      0,23994  0,24019
6      0,36013       0,35976  0,36163   0,36021      0,36041  0,36039
7      0,30986       0,30938  0,30767   0,30762      0,30798  0,30795
8      0,48984       0,48410  0,48943   0,48378      0,48389  0,48547
9      0,25545       0,25535  0,26122   0,25663      0,25541  0,25475
10     0,32164       0,31953  0,32058   0,31831      0,32050  0,31865
11     0,25342       0,25278  0,25262   0,25223      0,25289  0,25208
12     0,25861       0,24678  0,25088   0,24210      0,24336  0,24356
13     0,25789       0,25137  0,25475   0,25027      0,25023  0,25016
14     0,35242       0,35349  0,32054   0,31685      0,31965  0,31751
15     0,27389       0,23896  0,24216   0,23799      0,23729  0,23711

Fig. 3. Example of a visual comparison of alignment results for image pair 14: (left) the proposed method, (middle) SIFT, and (right) SURF.

SIFT or SURF, by 3.0 % and 9.1 % respectively. In two thirds of the test cases, the proposed method was ranked the best. In a quarter of the cases, SURF was the best, and the remaining best rankings were divided between SIFT and PA.

Visual inspection of the aligned images was also used to confirm the results. The proposed approach, SIFT and SURF all yielded visually quite similar results. The obvious misalignments were only on the order of a few pixels in most cases. There were, however, a few cases where the reference methods failed considerably more, as shown in Figure 3. There, the grayscale version of the aligned no-flash image has been subtracted from the reference flash image. The images show that there is considerable misalignment present with both SIFT and SURF. PA failed completely in this case.

The computational complexity of the proposed method can be estimated to be similar to SIFT and SURF. It may also be of interest that the image size affects the computational load only in the prealignment phase. In the subsequent steps, the number of handled points with the utilized window sizes has a more pronounced impact on the execution time than the image size. Compared to SIFT and SURF, the number of handled points is fairly small and well controlled.

  • 5 Conclusions and Discussion

The presented method, which applies mutual information to improve phase correlation based interest point matching, is a promising approach for flash-no-flash image alignment. The achieved improvement in alignment accuracy using mutual information refinement was 41.6 %. The approach also generally works better than the tested reference methods: the relative improvement with the proposed method was 3.0 % over SIFT, 9.1 % over SURF, and 44.4 % over PA.

The average RMSE values suggest that the proposed method is better than either SIFT or SURF for flash-no-flash image pair alignment. The best ranking was achieved in two thirds of the cases. Visual inspection also revealed that none of the tested methods performed ideally in every case.

Although the crafted alignment algorithm applies phase correlation for interest point extraction, there is no reason why the mutual information based refinement approach would not work with other point matching methods when registering flash-no-flash images. The main requirement is that the correspondence is readily established.

However, while mutual information based point refining improves the registration accuracy, it also requires considerable computational resources. One approach to easing the computational load could be parallelization, since with the proposed block based approach each point pair can be handled as a separate case. Modern accelerators may contain tens or even hundreds of processing units that might be utilized to process point pairs simultaneously, achieving a remarkable speedup in processing time. Since mutual information is independent of the input sources, the approach might also be suitable for rigid multi-modal image alignment, such as for infrared-visible image pairs.

    References

1. Agrawal, A., Raskar, R., Nayar, S.K., Li, Y.: Removing photography artifacts using gradient projection and flash-exposure sampling. ACM Trans. Graph. 24, 828-835 (2005)

2. Alenius, S., Bilcu, R.: Combination of multiple images for flash re-lighting. In: IEEE 3rd Int. Symp. Commun. Contr. Sig. Proc., pp. 322-327 (2008)

3. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded up robust features. In: LNCS Computer Vision - ECCV 2006, vol. 3951, pp. 404-417 (2006)

4. Chum, O., Matas, J.: Geometric hashing with local affine frames. In: IEEE Comp. Soc. Conf. on Computer Vision and Pattern Recognition 2006, vol. 1, pp. 879-884 (2006)

5. Eisemann, E., Durand, F.: Flash photography enhancement via intrinsic relighting. ACM Trans. Graph. 23, 673-678 (2004)

6. Estevez, P.A., Tesmer, M., Perez, C.A., Zurada, J.M.: Normalized mutual information feature selection. IEEE Trans. Neur. Netw. 20, 189-201 (2009)

7. Evans, C.: Notes on the OpenSURF library. Tech. Rep. CSTR-09-001, University of Bristol (2009)

8. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381-395 (1981)

9. Foroosh, H., Zerubia, J., Berthod, M.: Extension of phase correlation to subpixel registration. IEEE Trans. Image Process. 11(3), 188-200 (2002)

10. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proceedings of the 4th Alvey Vision Conference, pp. 147-151 (1988)

11. Jacquet, W., Nyssen, W., Bottenberg, P., Truyen, B., de Groen, P.: 2D image registration using focused mutual information for application in dentistry. Computers in Biology and Medicine 39, 545-553 (2009)

12. Kang, S.B., Uyttendaele, M., Winder, S., Szeliski, R.: High dynamic range video. ACM Trans. Graph. 22, 319-325 (2003)

13. Kovesi, P.: Image features from phase congruency. Journal of Computer Vision Research, pp. 226 (1999)

14. Kuglin, C., Hines, D.: The phase correlation image alignment method. In: IEEE Proc. Int. Conference on Cybernetics and Society, pp. 163-165 (1975)

15. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91-110 (2004)

16. Matas, J., Chum, O., Urban, M., Pajdla, T.: Robust wide-baseline stereo from maximally stable extremal regions. In: Proceedings of the British Machine Vision Conference, pp. 384-393 (2002)

17. Nistér, D., Stewénius, H.: Linear time maximally stable extremal regions. In: LNCS Computer Vision - ECCV 2008, vol. 5303, pp. 183-196 (2008)

18. Park, S.C., Park, M.K., Kang, M.G.: Super-resolution image reconstruction: a technical overview. IEEE Signal Process. Mag. 20, 21-36 (2003)

19. Petschnigg, G., Szeliski, R., Agrawala, M., Cohen, M., Hoppe, H., Toyama, K.: Digital photography with flash and no-flash image pairs. ACM Trans. Graph. 23, 664-672 (2004)

20. Pluim, J.P.W.: Mutual-information-based registration of medical images: a survey. IEEE Trans. Med. Imag. 22, 986-1004 (2003)

21. Pulli, K., Tico, M., Xiong, Y.: Mobile panoramic imaging system. In: ECVW2010, Sixth IEEE Workshop on Embedded Computer Vision (2010)

22. Reddy, B.S., Chatterji, B.N.: An FFT-based technique for translation, rotation and scale-invariant image registration. IEEE Trans. Image Process. 5, 1266-1271 (1996)

23. Smith, S.M., Brady, J.M.: SUSAN - a new approach to low level image processing. Int. J. Comput. Vis. 23, 47-78 (1997)

24. Studholme, C., Hill, D.L.G., Hawkes, D.J.: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32(1), 71-86 (1999)

25. Tico, M., Pulli, K.: Low-light imaging solutions for mobile devices. In: Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers, pp. 851-855 (2009)

26. Vandewalle, P., Süsstrunk, S., Vetterli, M.: A frequency domain approach to registration of aliased images with application to super-resolution. EURASIP J. Appl. Signal Process. 2006, 233233 (2006)

27. Vedaldi, A., Fulkerson, B.: VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/ (2008)

28. Viola, P., Wells III, W.M.: Alignment by maximization of mutual information. Int. J. Comput. Vis., 137-154 (1997)

29. Xie, X., Lam, K.-M.: An efficient illumination normalization method for face recognition. Pattern Recognition Letters, pp. 609-617 (2006)

30. Xiong, Y., Pulli, K.: Fast panorama stitching on mobile devices. In: IEEE Digest of Technical Papers: Int. Conference on Consumer Electronics, pp. 319-320 (2010)