
Tracking Growing Axons by Particle Filtering in 3D+t

Fluorescent Two-Photon Microscopy Images

Huei-Fang Yang, Xavier Descombes, Charles Kervrann, Caroline Medioni,

Florence Besse

To cite this version:

Huei-Fang Yang, Xavier Descombes, Charles Kervrann, Caroline Medioni, Florence Besse. Tracking Growing Axons by Particle Filtering in 3D+t Fluorescent Two-Photon Microscopy Images. ACCV - Asian Conference on Computer Vision, Nov 2012, Daejeon, South Korea. 2012. <hal-00740966>

HAL Id: hal-00740966

https://hal.inria.fr/hal-00740966

Submitted on 11 Nov 2012

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


Tracking Growing Axons by Particle Filtering in 3D+t Fluorescent Two-Photon Microscopy Images

Huei-Fang Yang1, Xavier Descombes1, Charles Kervrann2, Caroline Medioni1, and Florence Besse1

1 MORPHEME, INRIA/I3S/IBV, 06903 Sophia Antipolis Cedex, France
2 SERPICO, INRIA Rennes, 35042 Rennes Cedex, France

Abstract. Analyzing the behavior of axons in developing nervous systems is essential for biologists to understand the biological mechanisms underlying how growing axons reach their target cells. The analysis of the motion patterns of growing axons requires detecting axonal tips and tracking their trajectories within complex and large data sets. When performed manually, the tracking task is arduous and time-consuming. To this end, we propose a tracking method, based on the particle filtering technique, to follow the traces of axonal tips that appear as small bright spots in 3D+t fluorescent two-photon microscopy images exhibiting low signal-to-noise ratios (SNR) and complex background. The proposed tracking method uses multiple dynamic models in the proposal distribution to predict the positions of the growing axons. Furthermore, it incorporates object appearance, motion characteristics of the growing axons, and filament information in the computation of the observation model. The integration of these three sources prevents the tracker from being distracted by other objects that have appearances similar to the tracked objects, resulting in improved accuracy of the recovered trajectories. The experimental results obtained from the microscopy images show that the proposed method can successfully estimate the trajectories of growing axons, demonstrating its effectiveness even in the presence of noise and complex background.

1 Introduction

Analyzing how growing axons correctly reach their target neurons is essential for biologists to better understand the development of the nervous system. Advances in fluorescence imaging technology, such as two-photon microscopy [1], have generated large amounts of imaging data that enable the analysis of the dynamic behavior of developing neurons in real time and in three dimensions (3D). Analysis of the properties of axon growth requires detecting axonal tips, which can appear as small bright spots, and tracing their trajectories in image sequences with low signal-to-noise ratios (SNR). Figure 1 shows examples of parts of maximum intensity projection (MIP) images of the fluorescent two-photon microscopy volumes. As can be seen, in addition to a high level of noise, the obtained volumes contain bright spots that have appearances similar to the axonal tips, as well as other biological structures, both of which increase the difficulty of tracing the axonal tips. These factors make it difficult to manually identify the growing axonal tips and follow their trajectories. Moreover, manual tracing by human experts is tedious and time-consuming. Therefore, developing automated tracking methods that produce reliable trajectories for further analysis is highly desirable.

Numerous methods have been proposed in the literature for tracking biological spot-like particles. A great majority of the tracking methods are based on a two-step approach: an object detection/localization step and a data association step [2–4]. The detection step aims at locating the objects of interest (e.g., the bright spots) using intensity thresholding [5], the wavelet transform [6], or Gaussian fitting methods [7]. Once the detection step has been performed, the data association step solves the object correspondence problem between consecutive detection images [5]. Generally, these two steps are performed independently. Therefore, because the detection results may contain missing or incorrectly detected objects, the data association step tends to produce wrongly associated trajectories.

Recently, Bayesian tracking approaches have received much attention and have been successfully applied to spot tracking [8–10]. One advantage of these approaches is that they integrate the detection and data association steps in a unified framework, which produces more accurate trajectories. Moreover, they can deal with nonlinear and non-Gaussian tracking problems. Bayesian tracking is usually implemented by a sequential Monte Carlo (SMC) method, the so-called particle filter (PF) [11]. Particle filtering techniques use prior knowledge of the dynamics of the objects to predict the object state modes and use all the available information (noisy measurements) to optimally compute the posterior probability density function (pdf).

In this paper, we propose a tracking method, based on the particle filtering technique, to follow the trajectories of individual growing axons in fluorescent two-photon microscopy image sequences. The proposed method incorporates the important characteristics of growing axons. First, multiple dynamic models are used to better predict the locations of axonal tips. Furthermore, the proposed method incorporates information related to object appearance, motion characteristics of the growing axons, and filaments in the computation of the likelihood function. The integration of these three sources prevents the tracker from being distracted by other objects that have appearances similar to the tracked objects. It ensures that the tracker can successfully follow the traces of individual axonal tips, resulting in more accurate trajectories.

The remainder of this paper is organized as follows. Section 2 presents an overview of the particle filtering framework, and Section 3 elaborates on the proposed particle-filtering-based tracking method that incorporates the characteristics of growing axons. The experimental results obtained from synthetic data and a 3D+t fluorescent two-photon microscopy data set are shown in Section 4, followed by conclusions and future work in Section 5.


(a) MIP at time 86 (b) MIP at time 90 (c) close-up of (a) (d) close-up of (b)

Fig. 1. Examples of parts of maximum intensity projections (MIP) of the fluorescent two-photon microscopy volumes. (a) and (b) show parts of MIPs from different times and demonstrate the dynamics of the growing axons. (c) and (d) are close-ups of (a) and (b), respectively. The axonal tips are visualized as small bright spots, indicated by red circles. The 3D microscopy volumes have low SNR and contain other biological structures and bright spots with appearances similar to the axonal tips. These factors increase the difficulty of identifying and tracking the axonal tips.

2 Particle Filtering

Bayesian tracking aims at recursively estimating the posterior probability density function p(s_t | z_{1:t}), which describes the state s_t at time t given a series of (noisy) measurements z_{1:t} from time 1 to time t. The estimation consists of two steps: prediction and updating.

Prediction The prediction step relies on the Chapman-Kolmogorov equation, that is,

p(s_t | z_{1:t-1}) = \int p(s_t | s_{t-1}) \, p(s_{t-1} | z_{1:t-1}) \, ds_{t-1},   (1)

where p(s_t | s_{t-1}) is the prior dynamics (state transition model).

Updating The update step uses Bayes' rule to modify the prior density and obtain the posterior density when a measurement z_t becomes available:

p(s_t | z_{1:t}) \propto p(z_t | s_t) \, p(s_t | z_{1:t-1}),   (2)


where p(z_t | s_t) represents the likelihood function (observation model).

This recursive estimation of the posterior pdf is intractable, and particle filtering is used as a numerical approximation. The particle filter [12] uses a set of N_s samples, each associated with a weight, \{s_t^i, w_t^i\}_{i=1}^{N_s}, to represent the posterior p(s_t | z_{1:t}):

p(s_t | z_{1:t}) \approx \sum_{i=1}^{N_s} w_t^i \, \delta(s_t - s_t^i),   (3)

where \delta(\cdot) is the Dirac delta function and \sum_{i=1}^{N_s} w_t^i = 1. These samples are propagated through time to approximate the posterior pdf at subsequent steps via a proposal distribution q(s_t | s_{t-1}, z_t). The importance weight of each sample, w_t^i, is updated as follows:

w_t^i \propto w_{t-1}^i \, \frac{p(z_t | s_t^i) \, p(s_t^i | s_{t-1}^i)}{q(s_t^i | s_{t-1}^i, z_t)}.   (4)

The final state at time t can be computed as the maximum a posteriori (MAP) estimate:

\hat{s}_t = \arg\max_{s_t^i} p(s_t^i | z_{1:t}), \quad i = 1, \dots, N_s.   (5)

When applied to tracking problems, the particle filter suffers after a few iterations from the degeneracy problem, in which only a few particles carry significant weight. A resampling procedure is then required to generate a new set of equally weighted samples based on the importance weights.
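The prediction-update-resample loop above can be sketched as follows. This is a minimal, hedged illustration of one generic particle filter step (Eqs. (3)-(5)); the function names, the systematic-resampling scheme, and the effective-sample-size threshold of N_s/2 are illustrative choices, not details taken from the paper.

```python
import numpy as np

def particle_filter_step(particles, weights, propose, trans_pdf, prop_pdf, lik, z):
    """One prediction/update step of a generic particle filter.

    particles : (Ns, d) array of states s_{t-1}^i
    weights   : (Ns,) normalized importance weights w_{t-1}^i
    propose   : draws s_t^i ~ q(. | s_{t-1}^i, z_t)
    trans_pdf : evaluates p(s_t | s_{t-1})
    prop_pdf  : evaluates q(s_t | s_{t-1}, z_t)
    lik       : evaluates p(z_t | s_t)
    """
    new_particles = np.array([propose(s, z) for s in particles])
    # Weight update, Eq. (4): w_t ∝ w_{t-1} p(z|s) p(s|s_prev) / q(s|s_prev, z)
    w = weights * np.array([
        lik(z, s) * trans_pdf(s, sp) / prop_pdf(s, sp, z)
        for s, sp in zip(new_particles, particles)
    ])
    w /= w.sum()
    # Systematic resampling when the effective sample size collapses
    if 1.0 / np.sum(w ** 2) < 0.5 * len(w):
        cw = np.cumsum(w)
        cw[-1] = 1.0  # guard against floating-point round-off
        idx = np.searchsorted(cw, (np.arange(len(w)) + np.random.rand()) / len(w))
        new_particles, w = new_particles[idx], np.full(len(w), 1.0 / len(w))
    # MAP estimate in the spirit of Eq. (5): particle with the largest weight
    s_map = new_particles[np.argmax(w)]
    return new_particles, w, s_map
```

A 1D random-walk model with a Gaussian likelihood is enough to exercise the step; in the axon tracker the proposal and likelihood would be replaced by the models of Section 3.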

3 Proposed Method

This section details the proposed method, starting with a description of the multiple dynamic models used in the proposal distribution. We then present the observation model, which integrates information related to object appearance, object motion, and filaments.

3.1 Multiple Dynamic Models

Before describing the dynamic models, the state of an axonal tip is first introduced. The state of an axonal tip at time t is represented by s_t = (x_t, y_t, z_t, \dot{x}_t, \dot{y}_t, \dot{z}_t)^T, where x_t, y_t, and z_t denote the position of the tip in Cartesian coordinates and \dot{x}_t, \dot{y}_t, and \dot{z}_t represent the velocities in the x, y, and z directions. To better predict the position of the axonal tip at time t, the proposal distribution contains two dynamic models, defined as

q(s_t | s_{t-1}, z_t) = \pi_1 \, p(s_t | s_{t-1}) + \pi_2 \, p(s_t | s_{t-1}, s_{t-2}),   (6)

where p(s_t | s_{t-1}) = N(s_t | s_{t-1}, \sigma_1^2) and p(s_t | s_{t-1}, s_{t-2}) = N(s_t | s_{t-1} + v_{t-1}, \sigma_2^2) are the first-order and second-order Markov transitions, respectively, and v_{t-1} = (\dot{x}_{t-1}, \dot{y}_{t-1}, \dot{z}_{t-1}) is the velocity at time t-1. The notation N(\cdot, \sigma^2) represents a Gaussian with variance \sigma^2. The first-order Markov transition models the phenomenon that growing axons sometimes stop growing for a short period of time and then resume the growth process, while the second-order transition captures the typical motion behavior of growing axons, which tends to follow the growing direction at time t-1. The weights \pi_1 and \pi_2 = 1 - \pi_1 control the mixture of the two dynamic models.
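Sampling from the two-component proposal of Eq. (6) can be sketched as below. This is an illustration under assumptions: the function name is ours, and the convention that the velocity stored in the state shifts only the position components (supplying the second-order dependence, so s_{t-2} is not needed explicitly) is our reading of the model; the paper does not spell out the noise covariance structure.

```python
import numpy as np

def sample_proposal(s_prev, pi1=0.5, sigma1=1.0, sigma2=1.0):
    """Draw s_t from the two-component proposal of Eq. (6).

    s_prev : 6D state (x, y, z, vx, vy, vz) at time t-1.
    With probability pi1 use the first-order model N(s_{t-1}, sigma1^2);
    otherwise the second-order model N(s_{t-1} + v_{t-1}, sigma2^2),
    where the velocity shifts only the position components (assumption).
    """
    s_prev = np.asarray(s_prev, dtype=float)
    mean = s_prev.copy()
    if np.random.rand() >= pi1:          # second-order Markov transition
        mean[:3] += s_prev[3:]           # advance position by v_{t-1}
        sigma = sigma2
    else:                                # first-order: axon may pause
        sigma = sigma1
    return mean + sigma * np.random.randn(6)
```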

3.2 Observation Model

The proposed observation model combines information about object appearance, object motion, and filaments in order to better follow the axonal tips.

Appearance Likelihood The appearance likelihood models prior knowledge about the image formation process in microscopy imaging systems (the point spread function (PSF)). Let I_t^{ideal}(x, y, z) be the ideal intensity, without the effect of noise, at point (x, y, z) at time t, given by [13]

I_t^{ideal}(x, y, z) = \sum_i P_{i,t}(x, y, z) + b_t,   (7)

where P_{i,t} is the object profile and b_t is the background intensity. Here, Gaussian functions are used to approximate the PSFs of the microscope.

However, because of the noise generated during the image acquisition process, the observed intensity I_t^{obs} is the sum of the ideal intensity I_t^{ideal} and the noise N_t, which is assumed to be additive and Gaussian:

I_t^{obs} = I_t^{ideal} + N_t.   (8)

To compute the likelihood, the residual image between the observed image and the ideal image is first obtained by

R_t = I_t^{obs} - I_t^{ideal}.   (9)

The appearance likelihood is then computed as

p(R_t | s_t) = \frac{1}{Z_1} \exp(-\alpha_1 \|R_t\|^2),   (10)

where \alpha_1 is a constant and Z_1 is a normalizing constant.
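A small sketch of the appearance likelihood of Eqs. (7)-(10), in 2D for brevity, with isotropic Gaussian spots standing in for the PSF-blurred object profiles. The function name and parameters are illustrative, and the normalizing constant Z_1 is dropped since only likelihood ratios matter when weighting particles.

```python
import numpy as np

def appearance_likelihood(obs, centers, amp, psf_sigma, bg, alpha1=0.2):
    """Un-normalized appearance likelihood, Eqs. (7)-(10), in 2D.

    obs       : observed image patch
    centers   : list of (row, col) spot centers
    amp       : spot amplitude, psf_sigma : Gaussian PSF width, bg : background
    """
    ny, nx = obs.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    ideal = np.full(obs.shape, bg, dtype=float)
    for (cy, cx) in centers:                      # Eq. (7): sum of spot profiles
        ideal += amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                              / (2.0 * psf_sigma ** 2))
    residual = obs - ideal                        # Eq. (9)
    return np.exp(-alpha1 * np.sum(residual ** 2))   # Eq. (10), Z_1 dropped
```

In practice the residual would be evaluated over a local window around the predicted tip position rather than the whole image, to keep the exponent well scaled.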

Motion Model It is established in biology that axons reach their target cells in the developing nervous system under the guidance of molecular gradients [14, 15]. Accordingly, growing axons do not abruptly change their growing directions. In other words, the turning angles of the growing axons between consecutive time steps are small, leading to smooth trajectories. The motion likelihood is designed to model this characteristic of the movement patterns of growing axons. Let r_t be the motion direction vector at time t. The angle difference D_t between the motion direction vectors at time t and at time t-1 is given by

D_t = \arccos\left( \frac{r_t \cdot r_{t-1}}{\|r_t\| \, \|r_{t-1}\|} \right).   (11)

The motion likelihood is then defined as

p(D_t | s_t) = \frac{1}{Z_2} \exp(-\alpha_2 D_t),   (12)

where \alpha_2 is a constant and Z_2 is a normalizing constant.
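The turning-angle likelihood of Eqs. (11)-(12) is straightforward to compute; a minimal sketch follows (Z_2 omitted as a constant factor; the clipping guards against floating-point round-off in the normalized dot product):

```python
import numpy as np

def motion_likelihood(r_t, r_prev, alpha2=2.5):
    """Turning-angle likelihood of Eqs. (11)-(12).

    r_t, r_prev : motion direction vectors at times t and t-1.
    """
    cosang = np.dot(r_t, r_prev) / (np.linalg.norm(r_t) * np.linalg.norm(r_prev))
    d_t = np.arccos(np.clip(cosang, -1.0, 1.0))   # Eq. (11)
    return np.exp(-alpha2 * d_t)                  # Eq. (12), Z_2 dropped
```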

Filament Model Frangi's vessel enhancement filter [16] is applied to enhance the filaments in the images. The basic idea of Frangi's method is to use the eigenvalues of the Hessian (second-order information) to measure vesselness. Let \lambda_k represent the k-th smallest eigenvalue (in magnitude) of the Hessian matrix (i.e., |\lambda_1| \le |\lambda_2| \le |\lambda_3|). The eigenvalues of an ideal tubular structure in a 3D volume have the following properties: (1) |\lambda_1| \approx 0, (2) |\lambda_1| \ll |\lambda_2|, and (3) \lambda_2 \approx \lambda_3.

To quantitatively distinguish tubular-like structures from blob-like and plate-like structures, two geometric ratios R_A and R_B are defined:

R_A = \frac{|\lambda_2|}{|\lambda_3|} \quad \text{and} \quad R_B = \frac{|\lambda_1|}{\sqrt{|\lambda_2 \lambda_3|}},

where R_A distinguishes between plate-like and line-like structures, and R_B accounts for the deviation from a blob-like structure. In addition, because background pixels have eigenvalues of small magnitude, a measure S is used to distinguish tubular-like structures from background; it is defined as

S = \sqrt{\sum_j \lambda_j^2}.

Finally, a vesselness function V_\delta(p) that integrates the above three measures R_A, R_B, and S at scale \delta is given by

V_\delta(p) = \begin{cases} 0 & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0, \\ \left(1 - \exp\left(-\frac{R_A(p)^2}{2\gamma^2}\right)\right) \exp\left(-\frac{R_B(p)^2}{2\beta^2}\right) \left(1 - \exp\left(-\frac{S(p)^2}{2c^2}\right)\right) & \text{otherwise,} \end{cases}   (13)

where \gamma, \beta, and c are parameters that control the sensitivity of the line filter to the measures R_A, R_B, and S, respectively, and p denotes a voxel.

Equation (13) is analyzed at different scales to measure the vesselness in an image, and the final estimate of the vesselness is the maximum filter response over all scales, that is,

V(p) = \max_{\delta_{min} \le \delta \le \delta_{max}} V_\delta(p),   (14)

where \delta_{min} and \delta_{max} are the minimum and maximum scales at which relevant structures are expected to be found. Figure 2(b) shows the MIP image obtained by applying Frangi's vessel enhancement filter: compared to the original MIP shown in Figure 2(a), the tubular-like structures are enhanced and the other structures are suppressed.

(a) MIP of the original volume (b) MIP after applying Frangi's filter to (a)

Fig. 2. MIPs of the original volume and of the Frangi-filtered volume. The tubular-like structures are enhanced and the other structures, including background, are suppressed in (b) after Frangi's filter is applied to (a). Note that the filter is applied to the 3D volumes; the MIPs of these volumes are shown here for visualization purposes.

Let l be the trajectory (a line segment) along which an axon moves from time t-1 to time t, and let \bar{V}_{l,t} be the average vesselness measured along this trajectory. The filament likelihood is then given by

p(\bar{V}_{l,t} | s_t) = \frac{1}{Z_3} \exp(-\alpha_3 (1 - \bar{V}_{l,t})),   (15)

where \alpha_3 is a constant and Z_3 is a normalizing constant.
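Putting Eqs. (13) and (15) into code: a per-voxel vesselness from pre-computed, magnitude-sorted Hessian eigenvalues, and the filament likelihood from the average vesselness along the tip's displacement segment. The parameter defaults for γ, β, and c and the ≥ 0 guard (which also avoids division by zero for degenerate eigenvalues) are illustrative choices, not values from the paper.

```python
import numpy as np

def vesselness(l1, l2, l3, gamma=0.5, beta=0.5, c=15.0):
    """Frangi vesselness of Eq. (13) from Hessian eigenvalues sorted so
    that |l1| <= |l2| <= |l3| (bright tubular structures: l2, l3 < 0)."""
    if l2 >= 0 or l3 >= 0:                        # not a bright tube
        return 0.0
    ra = abs(l2) / abs(l3)                        # plate vs. line
    rb = abs(l1) / np.sqrt(abs(l2 * l3))          # deviation from blob
    s = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)      # second-order structureness
    return ((1 - np.exp(-ra ** 2 / (2 * gamma ** 2)))
            * np.exp(-rb ** 2 / (2 * beta ** 2))
            * (1 - np.exp(-s ** 2 / (2 * c ** 2))))

def filament_likelihood(v_avg, alpha3=0.25):
    """Filament likelihood of Eq. (15) from the average vesselness along
    the segment travelled between t-1 and t (Z_3 dropped)."""
    return np.exp(-alpha3 * (1.0 - v_avg))
```

A tube-like eigenvalue triple (one near-zero eigenvalue, two large negative ones) scores markedly higher than a blob-like triple, which is exactly what the filament likelihood rewards.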

Joint Likelihood Assuming the independence of the object appearance, object motion, and filament measurements, the joint likelihood for the observation model is defined as

p(z_t | s_t) = p(R_t | s_t) \, p(D_t | s_t) \, p(\bar{V}_{l,t} | s_t).   (16)

4 Experimental Results

The experiments were carried out on synthetic data and a fluorescent microscopy data set to evaluate the performance of the proposed method. The performance is quantitatively measured by comparing the computer-generated tracking results to the ground truth using evaluation metrics.

4.1 Experiments on Synthetic Data

Data To validate our method, a synthetic data set with a size of 100 × 100 × 35 voxels that simulates the movement of growing axons was generated. Gaussian noise with standard deviations (STD) of 5 and 7 was added to the synthetic data, resulting in a total of 2 image sequences. Figures 3(a) and 3(b) show the generated synthetic images corrupted by Gaussian noise with STD = 5 and STD = 7, respectively.


(a) STD = 5 (b) STD = 7 (c) STD = 5 (d) STD = 7

Fig. 3. Generated synthetic images with different levels of Gaussian noise, and tracking results. (a) and (b) show the synthetic images with added Gaussian noise of STD = 5 and STD = 7, respectively. (c) and (d) show the tracking results in red and the ground truth in green. The results produced by the proposed method are consistent with the ground truth. Note that the contrast of the images is enhanced for visualization.

Results To quantitatively measure the performance of the proposed tracking method, the tracked trajectories were compared to manually generated ground truth by measuring the root mean square error (RMSE) [10] of the positions between the tracked trajectories and the ground truth. The RMSE is defined as

RMSE = \sqrt{\frac{1}{K} \sum_{k=1}^{K} MSE_k^2},   (17)

where K is the number of independent runs, and

MSE_k^2 = \frac{1}{M} \sum_{m=1}^{M} \frac{1}{|T_m|} \sum_{t \in T_m} \|r_{m,t} - r_{m,t}^k\|^2,   (18)

where r_{m,t} is the true position of object m at time t, r_{m,t}^k is the estimated position in run k, M is the number of objects, and T_m is the set of frames in which object m exists.
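The RMSE of Eqs. (17)-(18) can be computed as below. This is a sketch; the nested-list layout for runs and tracks is an assumed data layout, not the paper's.

```python
import numpy as np

def rmse(true_tracks, est_tracks):
    """RMSE of Eqs. (17)-(18) over K runs of M tracks.

    true_tracks : list of M arrays of shape (T_m, 3), true positions r_{m,t}
    est_tracks  : list of K lists of M arrays (T_m, 3), estimates r^k_{m,t}
    """
    mse_sq = []
    for run in est_tracks:
        per_track = [np.mean(np.sum((r - rk) ** 2, axis=1))  # (1/|T_m|) sum_t ||.||^2
                     for r, rk in zip(true_tracks, run)]
        mse_sq.append(np.mean(per_track))                    # (1/M) sum_m -> MSE_k^2
    return np.sqrt(np.mean(mse_sq))                          # Eq. (17)
```

For example, a single run whose estimate is offset from the truth by a constant 3 pixels yields an RMSE of exactly 3.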

Figures 3(c) and 3(d) show the comparison between the tracking results generated by the proposed method, shown in red, and the ground truth, shown in green. By visual comparison, the computer-generated tracks are consistent with the ground truth.

Using RMSE as the performance metric, the localization errors for the synthetic data sets corrupted by Gaussian noise with STD = 5 and STD = 7 were 2.5565 and 2.6174 pixels, respectively, which indicates that the proposed method can correctly follow the movement of growing axons.

4.2 Experiments on Fluorescent Two-Photon Microscopy Data

Data A 3D+t fluorescent two-photon microscopy data set of intact Drosophila fly brains was used to evaluate the performance of the proposed tracking method. The volume size at each time point is 1012 × 548 × 16 (x × y × z) voxels, and 170 such volumes were generated. Each volume covers 133.75 × 72.36 × 12 µm³, with a resolution of 0.132 × 0.132 × 0.8 µm³/voxel. To analyze the dynamics of axons in the developing nervous system, the trajectories of the individual growing axons have to be tracked.

(a) Original image (b) Objects (c) Background

Fig. 4. Moving objects and partial background obtained using the HullkGround software. As can be seen, the original frame (a) contains noise and complex background, and the background subtraction step decomposes the original image into an image containing the moving objects (b) and an image containing the slightly moving background (c).

Preprocessing The fluorescent microscopy data set was obtained by imaging living tissue at different time points, so the subsequent volumes had to be aligned to recover the correct motion patterns of the growing axons. In addition, the raw images contain noise and complex background that have to be removed in order to obtain more accurate trajectories. Hence, prior to tracking, two preprocessing steps, registration and background subtraction, were performed.

Registration Motion2D [17], a software package for estimating 2D parametric motion models in an image sequence, was used to register the images in order to compensate for the drift of the living sample during the image acquisition process. To register the 3D+t image sequence, the MIP image at each time point was first obtained, and the registration process estimated an affine motion model for each pair of consecutive MIP images. The obtained motion models were then used to register each individual frame. Though a 3D registration method that registers consecutive volumes may be more appropriate in this case, we observed that this 2D registration approach gave satisfying results.

Background Subtraction As demonstrated in Figure 1, the obtained images are noisy and contain other biological structures (background) that may affect the tracking performance. To remove noise and part of the background, the HullkGround software [18], which performs background subtraction by convex hull estimation, was used to decompose the registered data set into two dynamic components: (1) an image sequence containing the moving objects and (2) an image sequence showing the slightly moving background. Figures 4(b) and 4(c) show the image containing the moving objects and the image containing the background, respectively. As can be seen, because of the high complexity of the background, only part of the background is removed from the original image shown in Figure 4(a). As a result, the image shown in Figure 4(b) still contains some parts of the background. Even so, the proposed tracking method yielded better results on the moving-object images than on the original images.

(a) (b) (c)

Fig. 5. Visual comparison between the tracking results of the proposed method and the manually created ground truth in 2D and 3D. The red trajectories are produced by the proposed method, and the green ones are the ground truth manually created by an expert; both are overlaid on the MIPs (a) and (b) and visualized in 3D (c). The computer-generated tracks are consistent with the ground truth in general, with minor differences between the estimated positions and the ground truth positions. The differences are caused by noise and by the effect of the complex background.
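The background subtraction described above is performed with the HullkGround software [18], which is based on convex hull estimation. As a much simpler stand-in that illustrates the same idea of splitting a sequence into moving objects and a quasi-static component, a temporal-median background model can be sketched (the function name and the median approach are ours, not the paper's method):

```python
import numpy as np

def split_moving_background(frames):
    """Illustrative decomposition of a sequence (T, ny, nx) into a bright
    moving-object component and a quasi-static background component.
    NOTE: a temporal median is a crude stand-in for HullkGround."""
    background = np.median(frames, axis=0)            # quasi-static component
    moving = np.clip(frames - background, 0, None)    # bright moving residue
    return moving, background
```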

Tracking Results The proposed tracking method was applied to the tracking of growing axons in the image sequence containing the moving objects, obtained by background subtraction. The initial positions of the axonal tips were manually set by the user, and the tracking algorithm followed the traces of the growing axons in the subsequent images. In the experiments, a set of 200 samples was used in the particle filter. The weights (i.e., π1 and π2 in Equation (6)) for the two dynamic models were both set to 0.5. The parameters α1, α2, and α3 in the observation model were set to 0.2, 2.5, and 0.25, respectively. Figure 5 shows the 2D and 3D visual comparisons between the tracked trajectories produced by the tracking algorithm, shown in red, and the ground truth tracks, shown in green. As can be seen, the computer-generated trajectories are consistent with the manually created ground truth in general, with minor differences between the estimated positions and the true positions, which are caused by noise and the complex background. This indicates that the proposed method was able to successfully follow the traces of the axonal tips.

Using RMSE as the performance metric, the overall localization error of the proposed method over all tracks is 4.56 pixels.

The mean computation time using a Matlab implementation of the proposed method on a laptop with an Intel Core i5 CPU (2.53 GHz) and 4 GB of memory was 1.8926 seconds for tracking one axonal tip at one time step. Note that this computation time does not include the time required for reading frames and applying Frangi's filter.


Performance Comparison Between Different Likelihood Combinations To investigate how each likelihood term in the observation model affects the tracking results, we evaluated the tracking performance with 3 different likelihood combinations: (1) appearance and motion likelihoods, (2) appearance and filament likelihoods, and (3) appearance, motion, and filament likelihoods. The appearance likelihood is commonly used in most tracking methods and hence is retained in each combination; the main focus is thus on investigating how the motion and filament likelihoods affect the performance. The localization errors for the 3 combinations, in order, were 7.51, 7.87, and 4.83 pixels. Integrating the appearance, motion, and filament likelihoods in the observation model thus increases the localization accuracy.

5 Conclusions and Future Work

This paper presented a tracking method, based on the particle filtering technique, to follow the trajectories of individual growing axons in fluorescent microscopy images. The proposed method uses multiple dynamic models in the proposal distribution to predict the positions of the axonal tips. In addition, information related to object appearance, motion, and filaments is integrated in the computation of the likelihood. The fused information prevents the tracker from being distracted by other objects that have appearances similar to the axonal tips; consequently, the proposed tracking method can successfully estimate the trajectories of the axonal tips in image sequences that are noisy and have complex background. The experimental results obtained from a 3D+t fluorescent two-photon microscopy image sequence demonstrate the effectiveness of the proposed method.

Future work includes designing a detection method to automatically initialize the tracking process and quantitatively analyzing the tracked trajectories to gather statistical parameters of the growing axons. Current detection methods usually produce many false positives when detecting axonal tips, and thus a detection algorithm suitable for this task is required. For the analysis of the axons, parameters such as curvatures, turning angles, and growth lengths will be extracted and used for mathematically modeling the growing patterns of axons. We expect that this statistical analysis and modeling will provide valuable insight into a better understanding of the characteristics of growing axons in the developing nervous system.

Acknowledgement. This work was partially supported by ARC DATA (INRIA).

References

1. Helmchen, F., Denk, W.: Deep tissue two-photon microscopy. Nature Methods 2 (2005) 932–940

2. Genovesio, A., Liedl, T., Emiliani, V., Parak, W.J., Coppey-Moisan, M., Olivo-Marin, J.C.: Multiple particle tracking in 3-D+t microscopy: method and application to the tracking of endocytosed quantum dots. IEEE Trans. Image Processing 15 (2006) 1062–1070

3. Matov, A., Applegate, K., Kumar, P., Thoma, C., Krek, W., Danuser, G., Wittmann, T.: Analysis of microtubule dynamic instability using a plus-end growth marker. Nature Methods 7 (2010) 761–768

4. Sbalzarini, I.F., Koumoutsakos, P.: Feature point tracking and trajectory analysis for video imaging in cell biology. J. Structural Biology 151 (2005) 182–195

5. Apgar, J., Tseng, Y., Fedorov, E., Herwig, M.B., Almo, S.C., Wirtz, D.: Multiple-particle tracking measurements of heterogeneities in solutions of actin filaments and actin bundles. Biophysical J. 79 (2000) 1095–1106

6. Olivo-Marin, J.C.: Extraction of spots in biological images using multiscale products. Pattern Recognition 35 (2002) 1989–1996

7. Zhang, B., Zerubia, J., Olivo-Marin, J.C.: Gaussian approximations of fluorescence microscope point-spread function models. Applied Optics 46 (2007) 1819–1829

8. Cardinale, J., Rauch, A., Barral, Y., Szekely, G., Sbalzarini, I.F.: Bayesian image analysis with on-line confidence estimates and its application to microtubule tracking. In: Proc. IEEE Int'l Symp. Biomedical Imaging. (2009) 1091–1094

9. Smal, I., Meijering, E.H.W., Draegestein, K., Galjart, N., Grigoriev, I., Akhmanova, A., van Royen, M.E., Houtsmuller, A.B., Niessen, W.J.: Multiple object tracking in molecular bioimaging by Rao-Blackwellized marginal particle filtering. Medical Image Analysis 12 (2008) 764–777

10. Smal, I., Draegestein, K., Galjart, N., Niessen, W.J., Meijering, E.H.W.: Particle filtering for multiple object tracking in dynamic fluorescence microscopy images: Application to microtubule growth analysis. IEEE Trans. Med. Imaging 27 (2008) 789–804

11. Isard, M., Blake, A.: CONDENSATION - conditional density propagation for visual tracking. Int'l J. Computer Vision 29 (1998) 5–28

12. Arulampalam, M.S., Maskell, S., Gordon, N.J., Clapp, T.: A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Processing 50 (2002) 174–188

13. Chenouard, N., de Chaumont, F., Bloch, I., Olivo-Marin, J.C.: Improving 3D tracking in microscopy by joint estimation of kinetic and image models. In: Medical Image Computing and Computer-Assisted Intervention, MIAAB workshop. (2008)

14. Goodhill, G.J., Gu, M., Urbach, J.S.: Predicting axonal response to molecular gradients with a computational model of filopodial dynamics. Neural Computation 16 (2004) 2221–2243

15. Krottje, J.K., van Ooyen, A.: A mathematical framework for modeling axon guidance. Bull. Math. Biol. 69 (2007) 3–31

16. Frangi, A.F., Niessen, W.J., Vincken, K.L., Viergever, M.A.: Multiscale vessel enhancement filtering. In: Medical Image Computing and Computer-Assisted Intervention. (1998) 130–137

17. Odobez, J.M., Bouthemy, P.: Robust multiresolution estimation of parametric motion models. J. Vis. Comm. and Image Representation 6 (1995) 348–365

18. Chessel, A., Cinquin, B., Bardin, S., Salamero, J., Kervrann, C.: Computational geometry-based scale-space and modal image decomposition: application to light video-microscopy imaging. In: Conf. Scale Space and Variational Methods. (2009) 770–781