
Int. J. Electron. Commun. (AEÜ) 63 (2009) 227–234
www.elsevier.de/aeue

Remote sensing image registration via active contour model

Ying Yang∗, Xin Gao

Institute of Electronics, Chinese Academy of Sciences, 7 Shi, No. 19 Beishihuan Xilu, P.O. Box 2702, Beijing 100080, PR China

Received 3 August 2007; accepted 3 January 2008

Abstract

Image registration is the process by which we determine a transformation that provides the most accurate match between two images. The search for the matching transformation can be automated with the use of a suitable metric, but it can be very time-consuming and tedious. In this paper, we introduce a registration algorithm that combines active contour segmentation with mutual information. Our approach starts with a segmentation procedure based on a novel geometric active contour that incorporates edge knowledge, namely Edgeflow, into the active contour model. Two edgemap images filled with closed contours are obtained. After ruling out mismatched curves, we use mutual information (MI) as a similarity measure to register the two edgemap images. Experimental results are provided to illustrate the performance of the proposed registration algorithm using both synthetic and multisensor images. Quantitative error analysis is also provided and several images are shown for subjective evaluation.
© 2008 Elsevier GmbH. All rights reserved.

Keywords: Image registration; Geometric active contour; Edgeflow; Edgemap; Mutual information

1. Introduction

In many image processing applications it is necessary to register multiple images of the same scene acquired by different sensors. These images may have relative displacement, such as translation, rotation, scale, etc. The aim of image registration is to find a transformation that aligns images recorded with the same or with different imaging machinery in a suitable way. Registering multisensor data enables comparison and fusion of information from different sensory modalities, which often provides complementary information about the region surveyed. However, due to the different physical characteristics of various sensors, the problem of registration is inevitably more complex than registration of images from the same type of sensor. Features present in one image might appear only partially in the other image

∗ Corresponding author. Tel.: +861058887421.
E-mail address: [email protected] (Y. Yang).

1434-8411/$ - see front matter © 2008 Elsevier GmbH. All rights reserved.
doi:10.1016/j.aeue.2008.01.003

or do not appear at all. Contrast reversal may occur in some image regions while not in others; multiple intensity values in one image may map to a single intensity value in the other image, and vice versa. Furthermore, imaging sensors may produce considerably dissimilar images of the same scene when configured with different imaging parameters.

Many image registration methods have been proposed over the years [1]. On the whole, these methods can be classified into two categories: area-based methods and feature-based methods. Area-based methods, sometimes called template matching, deal with the images without attempting to detect salient objects. Cross-correlation methods [2,3], Fourier methods [4,5], and mutual information (MI) methods [6,7] are examples in this category. However, area-based methods are not well adapted to the problem of multisensor image registration, since the gray-level characteristics of the images to be matched are quite different and the fundamental assumption that the joint intensity probability is maximal when two images are spatially aligned does not always hold for two images of different modalities. Feature-based methods,


which extract and match the common structures from two images, have been shown to be more suitable for this task [8,9]. The scheme proposed in [10] extracts objects from Landsat and SPOT images at different scales and matches them using their structural attributes such as ellipticity, thinness and inclination. In [11] a contour-based method for registering SPOT and Seasat images is proposed. A long coastal line is used as a landmark and the matching is conducted in a coarse-to-fine fashion using a scale space representation.

All algorithms that solve image registration problems seek an "optimal" deformation which deforms one image such that there is an "optimal" correlation to another image w.r.t. a suitable coherence or difference measure. The pure minimization of such difference measures typically leads to an ill-posed problem [12], either because the available data alone provide insufficient information or because noise must be suppressed. One effective way to overcome this problem is regularization. Regularization methods for image registration (typically adding a convex energy functional based on gradients), without additional knowledge, are an artificial way to make the problem well posed.

The focus of this paper is twofold: one aim is to introduce a geometric active contour framework that allows us to interleave powerful level-set based formulations with a feature-based methodology; the other is to find the maximal MI between two edgemap images given a transformation, which is faster and more accurate than finding the maximal MI between the two images directly.

The rest of the paper is organized as follows. Section 2 presents the image registration formulation and the registration approach combining the active contour model and mutual information. Experimental results are given in Section 3, followed by the concluding remarks.

2. The active contour model

In this section, we first give the definition of the image registration problem. Then we introduce a novel geodesic active contour model that uses edge information as the edge indicator function to obtain edgemap images of the reference image and the sensed image. We put forward an approach to estimate the MI between two edgemap images given a transformation. Finally, we search the parameter space to find the maximal MI so that the best registration between the two images can be obtained.

2.1. Image registration formulation

Given are two images, a reference image I and a sensed image Ĩ, both defined on a domain Ω and depicting the same object, obtained from the same or different imaging modalities. We assume that in continuous variables the images can be represented by compactly supported functions I, Ĩ : R^n → R. The goal of image registration is to determine a transformation T_p such that the transformed sensed image matches the reference image, where P is a set of transform parameters. For a functional D(I, Ĩ, T_p), which measures the disparity between the transformed image and the reference image, the image registration problem can then be identified with a minimization problem:

P* = arg min_P D(I, Ĩ, T_p).   (1)
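The minimization in (1) can be sketched in code. The sum-of-squared-differences measure and the one-parameter translation search below are illustrative stand-ins (the paper ultimately uses MI over a 6-D affine space); `ssd`, `translate`, and `register` are hypothetical helper names, not from the paper.

```python
import numpy as np

def ssd(a, b):
    """Sum-of-squared-differences stand-in for the disparity functional D."""
    return float(np.sum((a - b) ** 2))

def translate(img, tx):
    """Apply a horizontal integer translation (a trivial T_p) with zero fill."""
    out = np.zeros_like(img)
    if tx >= 0:
        out[:, tx:] = img[:, :img.shape[1] - tx]
    else:
        out[:, :tx] = img[:, -tx:]
    return out

def register(reference, sensed, search_range):
    """P* = argmin_P D(reference, T_p(sensed)) over a 1-D parameter grid."""
    best_p, best_d = None, np.inf
    for tx in search_range:
        d = ssd(reference, translate(sensed, tx))
        if d < best_d:
            best_p, best_d = tx, d
    return best_p

# A sensed image shifted left by 3 pixels is recovered by T_p with tx = 3.
ref = np.zeros((8, 16)); ref[2:6, 4:8] = 1.0
sen = translate(ref, -3)
print(register(ref, sen, range(-5, 6)))   # → 3
```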

2.2. The active contour model

The central idea behind the active contour model is to evolve a curve or surface by an energy minimization method under the influence of image-dependent forces, regularity constraints and certain user-specified constraints. The original active contour model, also known as snakes, introduced by Kass et al. [13], is a linear model, and thus an efficient and powerful tool for object segmentation and edge integration. The model has, however, an undesirable property: it depends on the parameterization, i.e. it is not geometric.

2.2.1. The geodesic active contour model

The geodesic active contour model (GAC) was introduced by Caselles et al. [14] as a geometric alternative to snakes. The model is derived from a geometric functional, where the arbitrary parameter is replaced with the Euclidean arclength ds = |∂C/∂p| dp. The energy functional is

E(C) = ∫_0^{L(C)} f(C) ds,   (2)

where L(C) is the total Euclidean length of the curve, and f is an edge map of the image, such as the gradient magnitude of a smoothed version of the image intensity I.

One may add an additional force that comes from an area minimization term, known as the balloon force [15]. This way, the contour may be directed to propagate outwards by minimization of the exterior. The energy functional with the additional area term is

E(C) = ∫_0^{L(C)} f(C) ds + α ∫_C da,   (3)

where α is a real constant making the contour shrink or expand to the object boundaries at a constant speed in the normal direction and da is an area element, e.g. ∫_C da = ∫_0^{L(C)} N × C ds. The Euler–Lagrange equation as a gradient descent process is

dC/dt = (f(C)K − ⟨∇f, N⟩ − α)N,   (4)

where K is the Euclidean curvature and N is the unit inward normal.


2.3. The level set method

Subsequently, the geometric evolution is implemented by the Osher–Sethian level set method [16]. It is a numerical method that works on a fixed coordinate system and takes care of topological changes of the evolving interface. According to level set theory, a geometric active contour can be represented by the zero level set of a real-valued function φ : Ω ⊂ R^n → R which evolves in an image I_0 according to a variational flow in order to segment the object from the image background. The corresponding geodesic active contour model written in its level set formulation is given by

∂φ/∂t = |∇φ|(fK + α) + ∇f · ∇φ.   (5)
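A minimal finite-difference sketch of the level set evolution in (5), assuming a precomputed edge function f and simple central differences for the curvature. Upwind schemes and reinitialization, which a robust implementation needs, are omitted; the sign convention (signed distance negative inside, α > 0 shrinking) is our assumption for the demo.

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """Central-difference curvature K = div(grad phi / |grad phi|)."""
    py, px = np.gradient(phi)
    norm = np.sqrt(px**2 + py**2) + eps
    ny, nx = py / norm, px / norm
    return np.gradient(ny, axis=0) + np.gradient(nx, axis=1)

def gac_step(phi, f, alpha, dt=0.1):
    """One explicit step of d(phi)/dt = |grad phi| (f K + alpha) + grad f . grad phi."""
    py, px = np.gradient(phi)
    fy, fx = np.gradient(f)
    grad_mag = np.sqrt(px**2 + py**2)
    return phi + dt * (grad_mag * (f * curvature(phi) + alpha) + fx * px + fy * py)

# With f = 1 (no image term) the flow is curvature plus a constant balloon
# force; under this sign convention alpha > 0 shrinks the circle.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 20.0   # signed distance to a circle
f = np.ones_like(phi)
area0 = np.sum(phi < 0)
for _ in range(50):
    phi = gac_step(phi, f, alpha=1.0)
print(np.sum(phi < 0) < area0)   # → True
```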

2.4. Incorporating Edgeflow into GAC

The defining characteristic of Edgeflow is that the directions of the edge vectors point towards the closest edges at a predefined spatial scale in an image. An example of an Edgeflow vector field is shown in Fig. 1. Assume that s = 4σ is the spatial scale at which we are looking for edges. Let I_σ be the image smoothed with a Gaussian of variance σ². Following [17], the prediction error along direction θ at pixel location (x, y) is defined as

Error(σ, θ) = |I_σ(x + 4σ cos θ, y + 4σ sin θ) − I_σ(x, y)|.   (6)

The Edgeflow field is calculated as the vector sum

S(θ) = ∫_{θ−π/2}^{θ+π/2} Error(σ, θ′) [cos θ′, sin θ′]^T dθ′.   (7)
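A sketch of (6) and (7) at a single pixel, with the integral discretized over sampled directions and nearest-neighbour sampling of the (assumed pre-smoothed) image I_σ; this is a simplification of the full Edgeflow construction in [17], and `edgeflow_vector` is our illustrative name.

```python
import numpy as np

def edgeflow_vector(I_sigma, x, y, sigma, n_theta=64):
    """Edgeflow vector at (x, y): prediction error (6) summed over the
    half-plane of directions around each theta as in (7); the direction
    with the largest resulting vector is returned."""
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    h, w = I_sigma.shape

    def error(theta):
        # Error(sigma, theta) = |I_sigma(x + 4s cos, y + 4s sin) - I_sigma(x, y)|
        xi = int(np.clip(round(x + 4 * sigma * np.cos(theta)), 0, w - 1))
        yi = int(np.clip(round(y + 4 * sigma * np.sin(theta)), 0, h - 1))
        return abs(float(I_sigma[yi, xi]) - float(I_sigma[y, x]))

    errs = np.array([error(t) for t in thetas])
    best = None
    for theta in thetas:
        # discretized integral over theta' in [theta - pi/2, theta + pi/2]
        mask = np.cos(thetas - theta) > 0.0
        vec = np.array([np.sum(errs[mask] * np.cos(thetas[mask])),
                        np.sum(errs[mask] * np.sin(thetas[mask]))])
        if best is None or np.linalg.norm(vec) > np.linalg.norm(best):
            best = vec
    return best

# A vertical step edge: the flow at a point left of the edge points right (+x).
img = np.zeros((32, 32)); img[:, 16:] = 1.0
v = edgeflow_vector(img, x=12, y=16, sigma=1.0)
print(v[0] > 0)   # → True
```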

After this vector field is generated, the vectors are propagated towards the edges. The propagation ends and edges are defined when two flows from opposing directions meet. For both the x and y components of the Edgeflow vector field, the transitions from positive to negative (in the x and y directions, respectively) are marked as the edges. The boundaries are found by linking the edges. In [18], the author solved a Poisson equation to find the edge function g:

∇ · S = −Δg.   (8)

After scaling to the interval [0, 1], g has values around zero along the edges and values close to 1 on flat areas of the image. This edge function slows the constant expansion around region boundaries. If the curvature term is multiplied with g, this will also reduce the smoothing effect of the curvature at the edge locations, which is desired. Therefore, the Edgeflow-based geodesic active contour written in level set formulation is given by

∂φ/∂t = |∇φ|(gK + α) + S · ∇φ.   (9)
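One simple way to obtain g from (8) is a Jacobi iteration for the Poisson equation followed by rescaling to [0, 1]; this is an illustrative solver with assumed zero boundary conditions, not necessarily the solver used in [18].

```python
import numpy as np

def edge_function(Sx, Sy, n_iter=2000):
    """Solve laplacian(g) = -div(S) by Jacobi iteration (g = 0 on the border),
    then rescale g to [0, 1] as in the text."""
    div = np.gradient(Sx, axis=1) + np.gradient(Sy, axis=0)
    g = np.zeros_like(Sx)
    for _ in range(n_iter):
        # Jacobi update of (g_N + g_S + g_E + g_W - 4 g) = -div, grid spacing 1
        g[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] +
                                g[1:-1, :-2] + g[1:-1, 2:] + div[1:-1, 1:-1])
    g -= g.min()
    if g.max() > 0:
        g /= g.max()
    return g

# Flow converging on a vertical edge near column 16: g dips toward 0 there
# and stays close to 1 on the flat areas.
x = np.arange(32)
Sx = np.tile(np.sign(16 - x).astype(float), (32, 1))
Sy = np.zeros_like(Sx)
g = edge_function(Sx, Sy, n_iter=500)
print(g[16, 16] < g[16, 4])   # → True
```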

Fig. 1. Demonstration of the Edgeflow vector field: (a) an image of a flower; the white rectangle marks the zoom area; (b) Edgeflow vectors corresponding to the zoom area.

The computations can be restricted to a limited narrow band [19] around the zero level set. To decide whether a curve has converged, we need to define a convergence criterion. Our convergence criterion is closely linked with our implementation. We take the narrow band size as six pixels. If the curve stays within this narrow band for a large number of iterations (we choose this number to be 30), we conclude that the curve has converged. As a failsafe, after 3000 iterations the curve is stopped regardless.

2.5. Similarity measure and parameter space

2.5.1. Mutual information (MI)

The concept of mutual information represents a measure of relative entropy between two sets, which can also be described as a measure of information redundancy [20]. From this definition, it can easily be shown that the MI of two images is maximal when these two images are perfectly aligned. Therefore, in the context of image registration, MI can be utilized as a similarity measure which, through its maximum, will indicate the best match between a reference image and an input image. Experiments [21] show that MI enables one to extract an optimal match with a much better precision than cross-correlation.

If A and B are the two images to register, p_A(a) and p_B(b) are defined as the marginal probability distributions, and p_{A,B}(a, b) is defined as the joint probability distribution of A and B. Then MI is defined as

I(A, B) = Σ_a Σ_b p_{A,B}(a, b) log( p_{A,B}(a, b) / (p_A(a) p_B(b)) ),   (10)

where the probabilities are estimated from a joint histogram normalized by M, the sum of all the entries in the histogram; see [22]. The histograms are computed using the original gray levels or the gray levels of pre-processed images, such as edge gradient magnitudes or wavelet coefficients.
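A direct transcription of (10), with the probabilities estimated from a normalized joint histogram as the text describes; the function name is ours.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """MI per (10): sum over bins of p(a,b) * log(p(a,b) / (p(a) * p(b)))."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                             range=[[0, 256], [0, 256]])
    p_ab = h / h.sum()                      # joint distribution (normalize by M)
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of A (column vector)
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of B (row vector)
    outer = p_a * p_b                       # product of marginals, broadcast
    nz = p_ab > 0                           # empty bins contribute zero
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / outer[nz])))

# An image shares maximal information with itself and almost none with a
# random shuffle of its own pixels.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
shuffled = rng.permutation(img.ravel()).reshape(img.shape)
print(mutual_information(img, img) > mutual_information(img, shuffled))   # → True
```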

In this work a histogram with 64 bins is used, since it produces a significantly smoother MI surface than a 256-bin histogram. The reduced number of bins dramatically improves the runtime of MI registration. The joint histogram is obtained by the following computation. The transformed reference image is obtained using cubic B-spline interpolation [23]. The gray values of the input image and the transformed reference image are linearly rescaled into the range [0, 255]. The gray values (a, b) of those pairs of pixels which lie in the same position are then used to build the histogram, using the following update law:

h_{A,B}([a/4], [b/4]) → h_{A,B}([a/4], [b/4]) + 1,   (11)

where (a, b) = (I(x, y), Ĩ(T_p(x, y))) for 0 ≤ a, b ≤ 255. Note that [z] represents the integer part of z, so a 64-bin histogram is produced.
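The update law (11) amounts to integer division of each gray value by 4; a literal (deliberately slow, loop-based) transcription:

```python
import numpy as np

def joint_histogram(a, b):
    """64-bin joint histogram via (11): h[a // 4, b // 4] += 1 for each
    co-located pair of gray values (a, b) in [0, 255]."""
    h = np.zeros((64, 64), dtype=np.int64)
    for av, bv in zip(a.ravel(), b.ravel()):
        h[int(av) // 4, int(bv) // 4] += 1
    return h

# Extreme and mid-range gray values land in the expected bins.
a = np.array([[0, 255], [128, 4]])
b = np.array([[3, 252], [130, 7]])
h = joint_histogram(a, b)
print(h[0, 0], h[63, 63], h[32, 32], h[1, 1])   # → 1 1 1 1
```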

The cost of computing the MI of two images depends both on the number of data points or pixels in each image, N, and on the number of bins used to form the histogram. If both images have the same number of pixels, N, the computational cost of computing the histogram is O(N). The computational cost relative to the number of histogram bins K used in the computation is O(K²).

2.5.2. Image registration using MI

Registration is a process through which the correct transformation is determined. Registration using MI is a method of maximization of similarity measures. It uses MI as the similarity measure and aligns images by maximizing the MI between them under different transformations. Edgemap images of the reference image and the sensed image are produced by our novel geodesic active contour model using (9). As a pre-step before estimating the MI of the two edgemap images, impossible matches should be eliminated first. We choose as features the inflection points, where the curvature of the curve is zero. It is well known that inflections are affine invariant [24]. The change of curvature sign in the neighborhood of an inflection point is also helpful to rule out impossible matches. Curvature maxima are not used, as they are not affine invariant. A robust method to find the correct inflection points under noisy conditions is described in [25]. In our method, the edgemaps of the two segmented images are regarded as two random variables. Then, MI is used to measure the similarity or correlation between these two random variables. The more similar or correlated the edgemaps are, the more MI they have.

In this paper, we use the 2-D affine transform as the transformation model. We may represent T_p by

T_p(x, y) = [ a  b  t_x ] [ x ]
            [ c  d  t_y ] [ y ].   (12)
            [ 0  0  1   ] [ 1 ]

Thus, we can write T_p(x, y) = Q_p (x y 1)^T, where we define Q_p to be the transformation matrix given above, for P = {a, b, c, d, t_x, t_y}^T. More general transformations such as the thin-plate spline (TPS) are more effective if nonlinear distortion exists between the images.
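Applying Q_p to a point in homogeneous coordinates, as in (12); the function name is ours.

```python
import numpy as np

def affine_transform(p, xy):
    """Apply T_p(x, y) = Q_p (x, y, 1)^T for parameters P = {a, b, c, d, tx, ty}."""
    a, b, c, d, tx, ty = p
    Q = np.array([[a, b, tx],
                  [c, d, ty],
                  [0, 0, 1.0]])
    xy1 = np.array([xy[0], xy[1], 1.0])    # homogeneous coordinates
    return (Q @ xy1)[:2]

# The identity plus a translation of (10, 20) moves the origin to (10, 20).
print(affine_transform([1, 0, 0, 1, 10, 20], (0, 0)))   # → [10. 20.]
```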

We have put forward an approach to estimate the MI between the two edgemap images of the reference image and the sensed image given a transformation. Now we need to search the parameter space to find the maximal MI so that the best registration between the two images can be obtained. Although the parameter space is not very large (6-D), it is time-consuming to search the whole parameter space exhaustively. To resolve the contradiction between processing time and accuracy, we use a hierarchical approach: coarse and fine registration.

In coarse registration, we search the parameter space with a relatively large step to find the maximal MI, indicating an approximate alignment. Then, in fine registration, we find the best transformation in the subspace around the parameters found in the coarse registration. The best transformation with maximal MI can usually be found using this strategy. It should be noticed, however, that if the search step is too large, the algorithm may be trapped in local maxima and the best registration cannot be found, while if the search step is too small, the computational requirements become heavy. This is a tradeoff between efficiency and accuracy.
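The two-stage search can be sketched generically. The grid sizes and the 1-D toy score below are illustrative, not the paper's settings; `hierarchical_search` is a hypothetical helper maximizing any score function (MI in the paper) over a parameter grid.

```python
from itertools import product

import numpy as np

def hierarchical_search(score, coarse_grids, refine_radius, fine_step):
    """Coarse-to-fine maximization of a similarity score over a parameter grid.
    `coarse_grids` holds one coarse value range per parameter; only the
    neighborhood of the coarse optimum is searched with the fine step."""
    # Coarse stage: exhaustive search on the coarse grid.
    best = max(product(*coarse_grids), key=lambda p: score(np.array(p)))
    # Fine stage: dense search in a small box around the coarse optimum.
    fine_grids = [np.arange(c - refine_radius, c + refine_radius + fine_step / 2,
                            fine_step) for c in best]
    return max(product(*fine_grids), key=lambda p: score(np.array(p)))

# 1-D toy score peaked at 3.7: the coarse step 1.0 lands on 4, the fine
# step 0.1 then recovers 3.7.
score = lambda p: -abs(p[0] - 3.7)
p = hierarchical_search(score, [np.arange(0.0, 10.0, 1.0)],
                        refine_radius=1.0, fine_step=0.1)
print(round(p[0], 1))   # → 3.7
```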

3. Experimental results

We first test the performance of our algorithm in controlled image registration experiments, where the true transform parameters are known a priori. This allows an objective measurement of the accuracy of our algorithm. In most of the experiments, the average geometric registration error for the final registered image is less than one pixel. We then compare our approach with one of the previous image registration methods and compare the registration accuracy of the two algorithms. Finally, our technique is tested and compared using remotely sensed imagery from different sensors, where the true transform is not known a priori.

The accuracy of image registration can be measured by the average registration root mean square error, defined as

RMS_err = sqrt( (1/N) Σ_i Σ_j ||(x_i, y_j) − (x′, y′)_{ij}||² )   (13)

for (x′, y′)_{ij} = T_err(x_i, y_j) with T_err = T_{p′}(T_p)^{−1}, where T_p represents the correct ("Ground Truth") transformation, T_{p′} is the estimated transform, ||·|| is the Euclidean distance and N is the total number of pixels in the image.
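For affine transforms represented by matrices Q_p, the error measure (13) can be evaluated directly over the pixel grid; the function name is ours.

```python
import numpy as np

def rms_error(Qp, Qp_est, shape):
    """Average registration RMS error (13): T_err = Q_p' Q_p^{-1} applied to
    every pixel coordinate, compared against the coordinate itself."""
    T_err = Qp_est @ np.linalg.inv(Qp)
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous
    moved = T_err @ pts
    d2 = (moved[0] - pts[0])**2 + (moved[1] - pts[1])**2
    return float(np.sqrt(d2.mean()))

# A perfect estimate gives (numerically) zero error; an estimate off by a
# 1-pixel x-shift errs by exactly one pixel.
Qp = np.array([[1.0, 0, 5], [0, 1.0, -2], [0, 0, 1]])
shifted = Qp.copy(); shifted[0, 2] += 1.0
print(round(rms_error(Qp, Qp, (100, 100)), 6),
      round(rms_error(Qp, shifted, (100, 100)), 6))   # → 0.0 1.0
```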

3.1. Images registration with known transforms

Fig. 2 shows the result of registering two 354 × 354 binary images. The original and the transformed images are shown in Fig. 2(a) and (b). The transform is composed of shearing by a factor of 1.0, 5° rotation, scaling of 0.9 and 1.2 in the X and Y directions, and 10 and 20 pixels translation in the X and Y directions, respectively. The transformation matrix is

Q_p = [ 0.8966  0.8181  10.0000 ]
      [ 0.1046  1.3000  20.0000 ].
      [ 0       0        1      ]

The transformed image is bi-linearly interpolated. Fig. 2(c) and (d) show the two corresponding edgemap images. The estimated transform is

Q_p′ = [ 0.8930  0.8198  10.2216 ]
       [ 0.1026  1.3012  20.2000 ].
       [ 0       0        1.0000 ]

Q_p′^{−1} is then applied to the transformed image, and the registered image is shown in Fig. 2(e). The average geometric registration RMS_err calculated using (13) is 0.0987 pixel. In this case, the active contour technique works well, which demonstrates the robustness of our approach.

Fig. 2. Binary image registration: (a) the original binary image; (b) the transformed binary image; (c) the edgemap image of (a); (d) the edgemap image of (b); and (e) the registered image.

Fig. 3 shows the result of registering two images. The 200 × 200 center of the original image is extracted and used as the "reference image." The "input image" is artificially created by rotating, scaling and translating the original image and then extracting the 200 × 200 center of the transformed image; the transform consists of 10° rotation, scaling factors of 0.8 and 1.4 in the X and Y directions, and 10 and 20 pixels translation in the X and Y directions, respectively. The resulting transformation matrix is

Q_p = [ 0.7878  −0.1389  10.0000 ]
      [ 0.2431   1.3787  20.0000 ].
      [ 0        0        1      ]

Fig. 3(d) and (e) show the two corresponding edgemap images. The estimated transform is

Q_p′ = [ 0.7876  −0.1392  10.2210 ]
       [ 0.2429   1.3787  20.2710 ].
       [ 0        0        1      ]

Fig. 3(f) shows the registered image. The average geometric registration RMS_err calculated using (13) is 0.7444 pixel. This test demonstrates the robustness of our algorithm on real remote sensing images.

3.2. Comparison with previous method

The performance of our approach is compared with the contour-based alignment method proposed in [26], which extracts contour information from each of two images, correlates salient features of the contours, and finds optimal transform parameters for aligning the images. For comparison, both methods are applied to registering the two images in Fig. 4: Fig. 4(a) is the same as Fig. 3(b), and Fig. 4(b) is artificially created by rotating and translating the original image Fig. 3(a) and then extracting the 200 × 200 center of the transformed image; the transform consists of 10° rotation, 20


Fig. 3. Binary image registration: (a) the original image; (b) the reference image extracted from (a); (c) the input image extracted from the transform of (a); (d) the edgemap image of (b); (e) the edgemap image of (c); and (f) the registered image.

Fig. 4. Comparison of the presented technique with Hui Li's registration [26]: (a) the reference image extracted from Fig. 3(a); (b) the input image extracted from the transform of Fig. 3(a).

Fig. 5. Remote sensing image registration with unknown transforms: (a) the reference image; (b) the input image; (c) the edgemap image of (a); (d) the edgemap image of (b); (e) checkerboard mosaiced image using the proposed method; and (f) registration result from Vision Lab, UCSB.


Fig. 6. Remote sensing image registration with unknown transforms, another example: (a) the reference image; (b) the input image; (c) the edgemap image of (a); (d) the edgemap image of (b); (e) checkerboard mosaiced image using the proposed method; and (f) registration result from Vision Lab, UCSB.

and 10 pixels translation in the X and Y directions, respectively. Therefore, the affine transform is given by

Q_p = [ 0.9848  −0.1736  20.0000 ]
      [ 0.1736   0.9848  10.0000 ].
      [ 0        0        1      ]

We obtain the estimated transformation matrix as

Q_p′ = [ 0.9851  −0.1738  20.1874 ]
       [ 0.1737   0.9848  10.2201 ].
       [ 0        0        1      ]

The transformation matrix estimated by using the approach described in [26] is

Q_p′ = [ 0.9833  −0.1733  19.2619 ]
       [ 0.1733   0.9833   9.2801 ].
       [ 0        0        1      ]

For our approach, the average geometric registration error is 0.7217 pixel. For the approach proposed in [26], the average geometric registration error is 0.9298 pixel.

3.3. Image registration with unknown transforms

Next we apply our algorithm to multitemporal data and remotely sensed imagery from different sensors, where the true transform is not known a priori. Fig. 5(a) and (b) show two multisensor images. Some temporal differences exist in addition to the intensity pattern differences. Fig. 5(c) and (d) show the two corresponding edgemap images. Since no good ground truth is available, we evaluate the result visually. For visual inspection, the two registered images are superimposed on each other in Fig. 5(e), with alternate patches from either image shown in a checkerboard pattern. The continuity of textures where the two images share similar features shows the accuracy of the registration. For comparison, we also provide the registration result from Vision Lab, UCSB, in Fig. 5(f).

Fig. 6 shows another example. It can be observed that the image intensity patterns are quite different. Fig. 6(c) and (d) show the two corresponding edgemap images. For visual inspection, the two registered images are superimposed on each other in Fig. 6(e). The quality of the registration is revealed by the continuity of the river. For comparison, we also provide the registration result from Vision Lab, UCSB, in Fig. 6(f).

4. Conclusions

We proposed an image registration framework based on the concepts of active contours and MI. Accurate registration results have been demonstrated with controlled experiments as well as remotely sensed imagery from different sensors. Computation time may be reduced by applying an optimization algorithm, such as SPSA [27], to the MI registration [21].

Acknowledgements

The authors would like to thank Prof. Zou Mouyan andProf. Qin Shu for their fruitful discussions.

References

[1] Zitova B, Flusser J. Image registration methods: a survey. Image Vision Comput 2003;21(11):977–1000.

[2] Roche A, Malandain G, Ayache N. Unifying maximum likelihood approaches in medical image registration. Int J Imag Syst Technol 2000;11:71–80.

[3] Roche A, Pennec X, Malandain G, Ayache N. Rigid registration of 3-D ultrasound with MR images: a new approach combining intensity and gradient information. IEEE Trans Med Imag 2002;20(10):1038–49.

[4] Castro ED, Morandi C. Registration of translated and rotated images using finite Fourier transform. IEEE Trans Pattern Anal Mach Intell 1987;9:700–3.

[5] Lehmann TM. A two-stage algorithm for model-based registration of medical images. In: Proceedings of the international conference on pattern recognition ICPR'98, Brisbane, Australia, 1998. p. 344–52.

[6] Viola P, Wells WM. Alignment by maximization of mutual information. Int J Comput Vis 1997;24:137–54.

[7] Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imag 1997;16(2):187–98.

[8] Goshtasby A, Stockman G, Page C. A region-based approach to digital image registration with subpixel accuracy. IEEE Trans Geosci Remote Sensing 1986;24:390–9.

[9] Flusser J, Suk T. Degraded image analysis: an invariant approach. IEEE Trans Pattern Anal Mach Intell 1998;20:590–603.

[10] Ventura A, Rampini A, Schettini R. Image registration by recognition of corresponding structures. IEEE Trans Geosci Remote Sensing 1990;28:305–14.

[11] Hu Y, Maitre H. A multiresolution approach for registration of a SPOT image and a SAR image. In: Proceedings of the international geoscience and remote sensing symposium, May 1990. p. 635–8.

[12] Aubert G, Deriche R, Kornprobst P. Computing optical flow via variational techniques. SIAM J Appl Math 1999;60(1):156–82.

[13] Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vision 1988;1:321–31.

[14] Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vis 1997;22(1):61–79.

[15] Cohen LD. On active contour models and balloons. CVGIP: Image Understanding 1991;53(2):211–8.

[16] Osher S, Sethian J. Fronts propagating with curvature-dependent speed: algorithms based on Hamilton–Jacobi formulations. J Comput Phys 1988;79:12–49.

[17] Ma WY, Manjunath BS. EdgeFlow: a technique for boundary detection and image segmentation. IEEE Trans Image Process 2000;9(8):1375–88.

[18] Sumengen B. Variational image segmentation and curve evolution on natural images. PhD thesis, University of California, Santa Barbara, September 2004.

[19] Adalsteinsson D, Sethian J. A fast level set method for propagating interfaces. J Comput Phys 1995;118:269–77.

[20] Thevenaz P, Unser M. Optimization of mutual information for multiresolution image registration. IEEE Trans Image Process 2000;9(12):2083–99.

[21] Cole-Rhodes A, Johnson KL, LeMoigne J, Zavorin I. Multiresolution registration of remote sensing imagery by optimization of mutual information using a stochastic gradient. IEEE Trans Image Proc 2003;12(12):1495–511.

[22] Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imag 1997;16(2):187–98.

[23] Unser M, Aldroubi A, Eden M. B-spline signal processing: part I – theory. IEEE Trans Signal Process 1993;41:821–33.

[24] Mundy JL, Zisserman A. Toward a new framework for vision. In: Mundy J, Zisserman A, editors. Geometric invariance in computer vision. Cambridge, MA: MIT Press; 1992. p. 1–38.

[25] Xia M, Liu B. Image registration by "Super-Curves". IEEE Trans Image Proc 2004;13(5):720–32.

[26] Li H, Manjunath BS, Mitra SK. A contour-based approach to multisensor image registration. IEEE Trans Image Proc 1995;4(3):320–34.

[27] Spall JC. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Trans Automat Contr 1992;37(3):332–41.

Ying Yang received the B.Sc. degree in Communication Engineering from Sichuan University, Chengdu, China, in 2005. He is currently working toward the M.Sc. degree in Communication and Information Systems at the Institute of Electronics, Chinese Academy of Sciences (IECAS), Beijing, China. His research interests are in the areas of computer vision and include image/texture segmentation, image registration, and level set techniques.

Xin Gao received the Ph.D. degree in mathematics from Beijing Normal University in 2001. He held a postdoctoral position at the Institute of Remote Sensing Applications, Chinese Academy of Sciences, Beijing, China from 2002 to 2004. He joined the Institute of Electronics, Chinese Academy of Sciences (IECAS), Beijing, China as an Assistant Professor in 2003 and has been a Professor since 2006. His research interests include numerical PDEs for image processing, image segmentation, remote sensing, and parallel computing.