Tracking of Radar-Detected Precipitation-Centroids using Scale-Space Methods for Automatic Focussing


    Tracking of Radar-Detected Precipitation-Centroids

    using Scale-Space Methods for Automatic Focussing

    The SARTrE Algorithm

Diploma thesis in Meteorology

Submitted to the

Faculty of Mathematics and Natural Sciences

of the Rheinische Friedrich-Wilhelms-Universität Bonn

Jürgen Lorenz Simon

    October 12, 2004

Declaration

I hereby declare that I have written this thesis independently, that I have used no aids or sources other than those indicated, and that quotations have been marked as such.

    Preface

When I started on my Diploma thesis, the idea of developing a tracking algorithm was only part of a much more ambitious plan to develop a nowcasting algorithm. During the course of working on the topic, it dawned on me that creating a working tracking algorithm was not a trifle, but a veritable task in itself. Moreover, when I read about the application of scale-space methods to tracking problems in fields other than meteorology, I became interested in scale-space theory itself. Realising that both could be interconnected in a way beneficial to meteorological applications as well, I was diverted from the original plan and began to investigate the topic more deeply. During the short time of this work, the simplicity and beauty of scale space appealed to me, and although I have taken but a first glance, the multitude of possibilities it seems to offer for all sorts of problems concerned with deriving information that can be linked to scale is overwhelming. The problem of scale has somehow always interested me: as a youth I was fascinated by fractals, especially because of the self-similarity of their structures at small and large scales. And although I learned a lot during the course of writing this work, the discovery of scale-space theory itself was among the biggest rewards for me.

    Thanks

I would like to express my gratitude towards my mother, for unbroken faith in me over the winding and often erratic course of my life. Special thanks to Prof. G. Heinemann for accepting the proposal of this thesis in the first place, showing patience or exerting pressure as needed, and providing numerous valuable hints and constructive criticism which helped to improve the quality of the work a lot. All of my friends for giving me support, lending an ear or leaving me alone when appropriate. Gordon Dove for optimisation hints, general suggestions as well as improving my English. Mark Jackson for cheering me up. Maren Timmer for helping with the pedagogic aspects and moral support. D. Meetschen and Eva Heuel for providing software and data as well as advice. Very special thanks to my girlfriend for moral support and standing back when I needed the time; much obliged.

    Bonn, 14th of March, 2004.


    Contents

1 About Radar Meteorology

2 Radar Data
2.1 Coordinate Systems
2.2 Values
2.3 Clutter Filtering

3 Digital Image Processing Basics
3.1 Definitions
3.2 Spatial Convolutions
3.2.1 Using Masks for Convolution
3.2.2 Types of Masks
3.3 Neighbourhood Averaging
3.3.1 Arithmetic Mean
3.3.2 Maximum
3.3.3 Median
3.3.4 Percentile
3.4 Thresholding
3.4.1 Absolute or Adaptive?
3.5 Other Filters Used
3.5.1 Isolated Bright Pixel Filtering
3.5.2 Speckle Filtering

4 Scale Space Theory
4.1 Basic Conception
4.2 Short Introduction to Gaussian Scale Space
4.2.1 Effective Width
4.2.2 Extension to 2D
4.2.3 Isotropic Diffusion
4.3 Blobs
4.3.1 Definition
4.3.2 Edge Detection
4.3.3 Edge Linking
4.3.4 Holes
4.3.5 Area Sampling
4.4 Scale Space Representation in 2D
4.5 Blob Detection in Scale-Space Images
4.6 Automatic Detection of Prevalent Signals

5 Tracking and Scale Space
5.1 Histogram
5.2 Centroid
5.2.1 Geometric Centre of Boundary
5.2.2 Centre of Reflectivity
5.2.3 Scale Space Centre
5.3 Correlation
5.4 Tracking Output
5.5 Visualisation of Tracking Data
5.6 Estimation of Quality, False Alarm Rates

6 Case Studies
6.1 Tracking at Fixed Scale
6.2 Tracking at Automatically Selected Scale
6.3 Tracking at Higher Velocities
6.4 Experimental Results
6.4.1 Linear Contrast Stretching
6.4.2 Percentile Thresholding

7 Discussion and Outlook

A Programming Techniques
A.1 Object Oriented Programming (OOP)
A.2 Objective-C
A.3 Libraries and Third Party Software Used
A.4 Macintosh Programming and Tools

B Software and Data


    Chapter 1

    About Radar Meteorology

RADAR is short for Radio Detection and Ranging. Like many other great inventions (the transistor, penicillin, X-rays, ...), it was discovered by a fortunate combination of sheer luck and awareness. In R. E. Rinehart's book, Radar for Meteorologists [13], the discovery is described as follows:

...In September 1922, the wooden steamer Dorchester plied up the Potomac River and passed between a transmitter and receiver being used for experimental U.S. Navy high-frequency radio communications. The two researchers conducting the tests, Albert Hoyt Taylor and Leo C. Young, had sailed on ships and knew the difficulty in guarding against enemy vessels seeking to penetrate harbours and fleet formations under darkness. Quickly putting the serendipitous finding together, the men proposed using radio waves like a burglar alarm, stringing up an electromagnetic curtain across harbour entrances and between ships. But receiving no response to the suggestion, and with many demands on their time, the investigators let the idea wither on the vine.

From that first incident to the modern radar systems in civil and military use today, a long time has passed. Radar is now an everyday tool, used to detect and guide aeroplanes or ships, to measure distances between cars in automatic control systems, or even to detect objects hidden underground. The first radars used for meteorological purposes were obtained from the military after WWII, whose by then well-developed equipment became available for civil use. Another great development for meteorological applications was the Doppler radar, which allows not only the detection of objects by their reflected radiation, but also the detection of their speed radially to the radar site through the Doppler effect.

Modern radars work by alternately emitting a bundled pulse of energy (a ray) and detecting, in short time intervals, the portion of radiation reflected from objects in its path. From the speed of light and the time interval, the range of the object from the radar can be estimated. By changing the radar's azimuth and/or elevation angle, two- or even three-dimensional images of reflectivity can be obtained. By measuring the phase shift between back-scattered and emitted radiation, a radial velocity can be measured. For a good introduction to the history, theory and technical details of radar, see Rinehart's book [13].


    Chapter 2

    Radar Data

Owing to the way radar data is obtained, its natural format is organised into rays, one for each scanned angle, and within each ray a set of range gates, one for each time interval at which the back-scattered radiation was sampled. The natural coordinate system thus is planar polar coordinates. In reality, the plane is more often than not a shallow cone, since the radar beam often has an elevation from the perfect horizontal. These coordinates can be transformed into a Cartesian coordinate system, where the origin is usually chosen to represent the radar site. This is called a Plan Position Indicator (PPI) display. The term goes back to the beginning of radar meteorology, when the PPI was indeed an oscilloscope's display, with the radar beam taking sweeps and leaving detected targets in its wake.

    2.1 Coordinate Systems

Two views are frequently used in this work: plain polar coordinates and Cartesian coordinates.

Plain Polar Coordinates:
This form of display is a simple approach for getting a first glance at the data contained in a scan. The rays are plotted in ascending order of their azimuth angles from left to right, one pixel per ray. The range gates within each ray are plotted vertically, starting with 0 at the bottom and increasing in distance upwards, again one pixel per gate. This gives a plain view of the data which is good enough for simple filtering tasks that don't have to take actual distances into account, like thresholding or cluttermap sampling/filtering. The natural resolution for this type of image is number of rays x number of range gates. An example of this view is given in Fig. 2.1.

Cartesian Coordinates:
A simple Cartesian transformation from polar coordinates is relatively easy. However, the process gets more complex when introducing interpolation, which compensates for the lack of sampled values and the increased size of the sampled volumes, as well as the difference in height as the radar beam progresses outward. Both modes were used in this thesis and are also available as options in the software developed alongside it.¹ When using simple projection, the values are written into the Cartesian display without considering whether a value was already plotted at that point (last wins). See Figure 2.2 for an example and Figure 2.3 for the same data in interpolated form.

¹Thanks to D. Meetschen, Meteorological Institute Bonn, for providing interpolation code.

    6

  • 8/14/2019 Tracking of Radar-Detected Precipitation-Centroids using Scale-Space Methods for Automatic Focussing

    8/90

Figure 2.1: Plain Polar Coordinates
Azimuth scan, 28 Sep. 1999, 9:36 GMT+2, range 50 km, elevation 2.57°.


Figure 2.2: Projection onto Cartesian Coordinates
Azimuth scan, 28 Sep. 1999, 9:36 GMT+2, range 50 km, elevation 2.57°, Cartesian projection.


Figure 2.3: Interpolation onto Cartesian Coordinates
Azimuth scan, 28 Sep. 1999, 9:36 GMT+2, range 50 km, elevation 2.57°, Cartesian interpolation.
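To make the simple projection mode concrete, the following is a minimal sketch in Python (the thesis software itself was written in Objective-C); the function name and arguments are illustrative, and interpolation is deliberately omitted:

```python
import numpy as np

def project_to_cartesian(scan, azimuths_deg, gate_length_km, size=200):
    """Project a polar scan onto a Cartesian grid, "last wins" style.

    scan: (n_rays, n_gates) array of byte values; azimuths_deg: one
    azimuth per ray, degrees clockwise from north; gate_length_km:
    radial spacing of the range gates. Later rays simply overwrite
    earlier values that land on the same pixel.
    """
    n_rays, n_gates = scan.shape
    max_range = n_gates * gate_length_km
    img = np.zeros((size, size), dtype=np.uint8)    # 0 = background
    half = size / 2.0
    scale = half / max_range                        # pixels per km
    for n in range(n_rays):
        phi = np.radians(azimuths_deg[n])
        for m in range(n_gates):
            r = (m + 0.5) * gate_length_km
            px = int(half + r * np.sin(phi) * scale)   # east
            py = int(half - r * np.cos(phi) * scale)   # north is up
            if 0 <= px < size and 0 <= py < size:
                img[py, px] = scan[n, m]               # last value wins
    return img
```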


    2.2 Values

Reflectivity data from the X-Band radar installed in Bonn, used in this work, comes as unsigned char values, i.e. a range of integers [0..255]. The reflectivity is calculated using the formula Z[dBZ] = -31.5 dBZ + 0.5 · Z[byte]. For most of the data processing this conversion is omitted, though, because the byte-valued format proves advantageous in terms of grayscale representation. The data also contains a time-stamp and angular properties for each ray and for the scan as a whole. For optically matching a given grey value back to a reflectivity value, the following legend may be referenced:

    Figure 2.4: Reflectivity Legend
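In code, the conversion is a one-liner in each direction; a small sketch, assuming the formula above:

```python
def byte_to_dbz(z_byte):
    """Z[dBZ] = -31.5 dBZ + 0.5 * Z[byte]; bytes 0..255 map to
    -31.5..96 dBZ."""
    return -31.5 + 0.5 * z_byte

def dbz_to_byte(z_dbz):
    """Inverse mapping, clipped to the valid byte range [0, 255]."""
    return int(min(255, max(0, round((z_dbz + 31.5) * 2))))
```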


    2.3 Clutter Filtering

Clutter is radiation reflected off static ground targets like trees, buildings, hills etc. This is mainly due to the fact that the geometric properties of the radar beam are far from ideal. Viewed across its axis, the radar beam has multiple local maxima (lobes) of radiation. While the absolute maximum, the main lobe, contains most of the energy, some energy is emitted in the secondary maxima, called side lobes, whose axes point away from the main axis. Thus, even when placing the radar on a raised point with a clear line of sight (for the main beam), the side lobes will produce ground clutter. In light of this, it is understandable that clutter is mostly found in the near range around the radar. Of course, there is also a dependency on the orographic circumstances of the radar site, which differ from site to site. Clutter may well be among the most intense reflectivity in the data, since the objects giving rise to clutter are often of significantly higher density and possess better reflective properties than most meteorological targets, except perhaps hail. Thus, in order to obtain a more meteorologically relevant view of the data, it is desirable to find means of filtering clutter out. One strong indicator of clutter is a target being stationary (trees, buildings, mountains, large radio antennas, etc.). Doppler radar can identify clutter with relative ease through the absence of radial movement. Although the X-Band radar in Bonn is capable of detecting Doppler velocities now, that was not always the case: the radar was modernised and enabled for Doppler detection in 1998. The data chosen for this thesis is from before that time, and thus a different approach to distinguishing clutter from real targets was required. Apart from adopting the cluttermap approach, a method of stochastic decision making and weighted interpolation was developed.

The chosen approach is based on a concept known as cluttermaps. A cluttermap is a map of the radar surroundings containing reflectivity values for days where no significant meteorological targets were detected; it is reckoned that the signal on such days will mainly be due to clutter. In the course of this thesis, the cluttermap was kept in a linked list, where positions are indicated by azimuth angle and range gate number. The collection of a cluttermap proceeds as follows: given a suitable scan, the rays in the scan are traced individually. Whenever a byte value exceeds 0 (or a suitable threshold), it is looked up in the cluttermap. Should the coordinate (angle, gate number) already exist in the cluttermap, the reflectivity value found in the scan is added to the reflectivity value of that cluttermap point (node), and a counter indicating how many scans have contributed to this specific point is increased, allowing a simple average of the found reflectivity values to be calculated later. Should the point not exist in the cluttermap, a new node is created, initialised with the scan's value for that point, and inserted into the linked list at this spot. The choice of a linked list instead of a full array is based on two thoughts. First, only positions which actually contain something are stored; clutter does not by a long shot fill a radar scan, it is relatively sparse. Second, the azimuth angles in rays of different scans are not perfectly constant. Although the azimuth angles were rounded somewhat prior to collecting the cluttermap,² the problem remains in principle. With the linked list approach, the cluttermap simply becomes more dense should additional azimuth angles appear.
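As an illustration of the collection step, the following sketch uses a Python dictionary keyed by (rounded azimuth, gate number) in place of the linked list described above; all names are illustrative:

```python
def update_cluttermap(cluttermap, scan, azimuths_deg, threshold=0):
    """Accumulate one clear-day scan into a sparse cluttermap.

    cluttermap maps (rounded azimuth, gate index) -> [sum, count];
    a dict stands in here for the thesis's linked list. Azimuths are
    rounded to 0.1 degree as described in the text.
    """
    for n, ray in enumerate(scan):
        az = round(azimuths_deg[n], 1)
        for m, z in enumerate(ray):
            if z > threshold:
                node = cluttermap.setdefault((az, m), [0, 0])
                node[0] += int(z)   # summed reflectivity (byte values)
                node[1] += 1        # number of contributing scans

def clutter_average(cluttermap, az, gate):
    """Average clutter value at a position; 0 if the position is clean."""
    node = cluttermap.get((round(az, 1), gate))
    return node[0] / node[1] if node else 0.0
```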

After adding a few scans, the cluttermap thus contains at each point the summed reflectivity and the number of scans taken into account for that position.

²To ease the processing, the azimuth angles, which originally come at a precision of 10⁻² degrees, are rounded down one digit to a precision of 10⁻¹ degrees. The maximum angular error thus made is Δφ = 0.05°. At the maximum range of 100 km for extended azimuth scans, this angular error translates into a maximum dislocation error of Δr = 100 km · rad(0.05°) ≈ 87.3 m. This was found tolerable for this process, since clutter is mostly found in a range of 0-25 km, where the error according to the same evaluation is about 21 m. The rounding error of 5% seems acceptable for the purpose.


For days with great changes in weather conditions it can be necessary to create more than one cluttermap (or at least use more scans) to account for the impact of different weather conditions on the path of the radar beam. For days with more stationary conditions, a single cluttermap suffices and fewer scans are required. For practical purposes, it has proven advantageous to obtain a new cluttermap for each day, provided sufficiently event-free intervals can be found in the data.

How can this cluttermap be leveraged to reduce clutter in scans? Remember that the cluttermap contains those positions in the scan which have been found to be cluttered in clear conditions, the number of scans indicating so, and the summed-up clutter reflectivity values.

A first approach might be simply to subtract the average clutter reflectivity at each position in the cluttermap from the reflectivity found in the scan to be corrected. This approach is based on the assumption that the overall reflectivity at a cluttered position is the sum of the reflectivity of the meteorological target plus the clutter's reflectivity (simple superposition). Consider this basic form of the radar equation for multiple targets:

P_r = \frac{P_t G^2 \lambda^2}{(4\pi)^3} \sum_i \frac{\sigma_i}{R_i^4}   (2.1)

where P_r is the average received power, P_t the transmitted power, G the gain of the radar and \lambda the radar's wavelength. The sum on the right contains \sigma_i, the i-th target's backscattering cross section, and its distance from the radar, R_i. The backscattering cross section is calculated by taking the shape (diameter facing the radar's direction), the dielectric properties and the radar's wavelength into account. According to this equation, in the absence of any meteorologically relevant targets, the clutter's back-scattered power could be measured and afterwards subtracted from the measurement, since it appears to be additive (through the sum on the right-hand side). However, in practice this path leads to big errors, ripping holes into the radar image. How so? For a start, the path of the radar beam is heavily influenced by atmospheric fields like temperature and humidity, so stationary ground targets appear to be moving in the radar's view. In addition, the radar beam is somewhat attenuated by travelling through a medium filled with backscattering targets. These effects of energetic and directional obfuscation render the simplistic superposition approach somewhat useless. In spite of the cluttermap information, the problem of determining how much radiation at a given point in a sample is owed to clutter persists.

In what other way could the information in the cluttermap aid us? Could it be possible to leverage the cluttermap to estimate at least the likelihood of a point being cluttered? And should the likelihood be high, could we apply a correction based on more information than just the cluttermap? The following paragraphs develop a method for doing just that.

    Stochastic Ray-Interpolation Filter

Suppose a sample of a meteorological target which is known to be at least partially due to ground clutter. A human observer would find it relatively easy to identify clutter by looking at a sequence of images, identifying the stationary bits and taking the structure of the clutter into account. When clutter and other targets are present in the same area, the human observer would still be able to tell clutter from other objects to some extent, drawing on collected experience. One chief aspect in this decision-making process would surely be continuity, the larger structure of the objects seen. The presented method tries to take that concept into account when distinguishing clutter from non-clutter. Knowledge about the stationary targets is collected in the aforementioned cluttermap. In order to get a view of the structure of detected objects, the scan is considered ray-wise. The main assumption is as follows:

The more the measured reflectivity at a given coordinate deviates from the average cluttermap value, the more likely the value is to be correct.

Assume a cluttermap C = {C(\varphi, m) | \varphi \in [0°, 360°), m \in [1, N_{gates}]}, where N_{gates} is the number of range gates the radar produces in a ray. Further, let a radar scan consist of N_{rays} rays at angles \varphi_n, each containing N_{gates} range gates: Z = {Z(n, m) | n \in [1, N_{rays}]; m \in [1, N_{gates}]}. The method works by traversing all points (nodes) of the cluttermap and comparing them to the corresponding points in the scan. What interests us is the likelihood of the point under consideration, Z(n, m), being obfuscated by clutter, C(\varphi = \varphi_n, m). An estimate is proposed in the following form³:

P_{clutter}(Z(n, m)) = \mathrm{erfc}\!\left(\frac{2\,|Z(n, m) - C(\varphi = \varphi_n, m)|}{255}\right)   (2.2)

Should the probability P_{clutter} exceed a pre-set threshold P_{crit}, the point in the scan is assumed to be heavily contaminated by clutter and thus in dire need of correction.⁴
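Equation (2.2) translates almost literally into code; a minimal sketch using the standard library's complementary error function, with the sample and cluttermap entries given as averaged byte values:

```python
from math import erfc

def clutter_probability(z, c):
    """Estimated likelihood that a sample is clutter-contaminated,
    Eq. (2.2): erfc(2|Z - C| / 255). Identical byte values give
    erfc(0) = 1, i.e. almost certainly clutter."""
    return erfc(2.0 * abs(z - c) / 255.0)

P_CRIT = 0.9   # threshold value used in the examples below
needs_correction = clutter_probability(120, 118) > P_CRIT   # -> True
```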

Now that a decision has been made, the sample's value needs correction. In order to take the continuity of the data along the ray into account, the data is modelled as a polynomial g of order N in the index coordinate, within a certain range upwards (further away from the radar site) and downwards (closer to it) of each range gate under consideration. Should the downward range cross the origin (the radar site), samples from the diametrically opposite ray (or the ray closest to diametrical) are taken into account. For the sake of simplicity, assume a fixed ray angle and consider only the range gate coordinate m:

g(m) = \sum_{j=0}^{N} a_j m^j   (2.3)

To obtain the coefficients a_j, a least-squares fit is done, taking a range of K values up and down the ray into account. Let f(m) denote the sampled value at the given range gate m. Then the least-squares fit through m is obtained by defining the squared-error function I:

I = \sum_{m'=m-K}^{m+K} \bigl(g(m') - w_{m'} f(m')\bigr)^2   (2.4)

where the w_{m'} are weights on the observations. Since we want to minimise the error by adjusting the coefficients, we differentiate I with respect to each a_j:

\frac{\partial I}{\partial a_j} = \frac{\partial}{\partial a_j} \sum_{m'=m-K}^{m+K} \bigl(g(m') - w_{m'} f(m')\bigr)^2 \equiv 0   (2.5)

³The factor 2 in the argument of the error function serves to extend the range of the argument a bit, thus making fuller use of the value range of the error function and yielding more distinguishable results. The division by 255 is owed to the fact that the range of possible values is [0, 255] and serves to normalise the argument.

⁴This formula was, in its basic form, derived by inspiration. The Gaussian error function was chosen simply for its mathematical properties (see Fig. 2.5): the closer the sampled value is to the cluttermap value, the smaller the argument of the complementary error function and the closer the result (the likelihood) gets to 1. Note that this approach introduces one parameter, the threshold likelihood P_{crit}.


Figure 2.5: Gaussian Error Functions
Gaussian error function erf(x) and complementary Gaussian error function erfc(x), plotted for x in [0, 2].

Carrying out the differentiation for coefficient a_j, replacing g with its definition and reordering gives:

\sum_{i=0}^{N} a_i \sum_{m'=m-K}^{m+K} m'^i\, m'^j = \sum_{m'=m-K}^{m+K} w_{m'} f(m')\, m'^j   (2.6)

By shifting the indices so that [m-K, m+K] transforms to [0, 2K], defining a transition which maps n \mapsto m(n), and considering each j, this can be written in matrix form as

G \cdot a = v   (2.7)

where G denotes the matrix with elements

G_{ij} = \sum_{n=0}^{2K} m(n)^i\, m(n)^j   (2.8)

the vector a the polynomial coefficients (a_0, ..., a_N), and the vector v the observations, with

v_i = \sum_{n=0}^{2K} w_{m(n)} f(m(n))\, m(n)^i   (2.9)

    Now the coefficient vector a can be determined by inverting the matrix G.

The observations are weighted through the w_m, according to a scheme based on their credibility with respect to clutter. For each observation point f_m, an estimate is made of how likely that point is to be influenced by clutter, given the cluttermap value C_m. The weighting scheme uses the following function:

w_m = \mathrm{erf}\bigl(2\,|f_m - C_m|/255\bigr)   (2.10)

This way, values that exhibit a higher probability of being cluttered receive less credit, expressed through w_m, than less cluttered ones (see again Fig. 2.5 for the Gaussian error function). Since the abscissa defined by the range gate indexing was chosen to have its origin on the range gate under consideration, evaluating the fit at this special point reduces to the value of the coefficient a_0.

With this procedure, a device is at hand to correct clutter in radar data: given a cluttermap C and a scan Z, each point in Z is checked against C and, if P_{clutter} exceeds a selectable threshold P_{crit}, the point in Z is replaced by the fitted value a_0.
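Putting Eqs. (2.3) to (2.10) together for a single range gate gives the following sketch; the handling of windows crossing the radar origin (the diametrically opposite ray) is left to the caller, and all names are illustrative:

```python
import numpy as np
from math import erf

def correct_gate(f, c, m, K=20, N=3):
    """Return the fit value a0 for range gate m of one ray.

    f: numpy array of byte values along the ray; c: matching average
    cluttermap values (0 where clean). The caller must ensure the
    window [m-K, m+K] lies within f (origin crossings are not handled
    here). The abscissa is shifted so x = 0 lies on gate m, hence the
    corrected value is simply the coefficient a0.
    """
    x = np.arange(-K, K + 1, dtype=float)
    fs = f[m - K : m + K + 1].astype(float)
    cs = c[m - K : m + K + 1].astype(float)
    # Weights from Eq. (2.10): credible (non-cluttered) points weigh more.
    w = np.array([erf(2.0 * abs(fv - cv) / 255.0)
                  for fv, cv in zip(fs, cs)])
    # Normal equations G a = v, Eqs. (2.8) and (2.9).
    G = np.array([[np.sum(x**i * x**j) for j in range(N + 1)]
                  for i in range(N + 1)])
    v = np.array([np.sum(w * fs * x**i) for i in range(N + 1)])
    a = np.linalg.solve(G, v)
    return a[0]          # the fit evaluated at x = 0
```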

For the following examples, the parameters were chosen as follows: P_{crit} = 0.9, K = 20, N = 3. All scans were taken from July 12th, 1999. The scans shown in Figure 2.6 were used to collect the cluttermap. Figures 2.8 to 2.12 show corrections for individual rays and range gates; note that the corrected value used in each situation is the value of the fit at x = 0 (corresponding to a_0 by construction).

Figure 2.6: Cluttermap Scans
Two scans from July 12th, 1999, constituting the cluttermap. Left: 10:01, right: 10:06.


Figure 2.8 shows clutter and the fitting procedure for a situation where no larger structure is present in the current sample (red curve) in the vicinity of the clutter (green curve). Since in that situation the difference between cluttermap and sampled values is small and no larger structure is present in the ray to indicate proper signal, the resulting fit is close to 0 overall.

The situation has changed in Figure 2.10: a large precipitation signal has wandered into the centre from the northeast and is partially covering the cluttered area. It can be seen how the presence of the larger structure in the ray pulls up the weights and sample values, thus raising the fitted value.

In Figure 2.12 the precipitation echo has wandered even further southwest and now covers the clutter completely. The large structure present in the ray pulls up the fit from both sides. Also clearly visible is how the weights react to the change from cluttered to non-cluttered areas.

This procedure is not fully mature yet; it still leaves small holes in the precipitation. Since these holes don't pose a problem for subsequent stages, the quality was deemed good enough for the course of this work. At an early stage of development the whole procedure was tried using simple linear regression, which basically boils down to setting the order of the interpolation polynomial to 1. It turns out that the linear approach is too crude: since a larger structure with a distinct curvature should be captured, and not only the next few points, the simple linear process tends to underestimate the reflectivity a lot, resulting in holes or artificial low-level plateaus.


Figure 2.7: Cluttermap Correction 1
Left: 10:06, no correction. Right: corrected.

Figure 2.8: Ray Interpolation Example: Clutter Only
Curves: samples, clutter, weights, weighted samples and fit (Z [byte value], weight [100 x weight] against range gate distance). Showing the fit for July 12th, 10:06. The fit was done for azimuth angle 0 and range gate no. 5.


Figure 2.9: Cluttermap Correction 2
Left: 12:31, no correction. Right: corrected.

Figure 2.10: Ray Interpolation Example: Clutter Partially Covered by Another Event
Curves: samples, clutter, weights, weighted samples and fit (Z [byte value], weight [100 x weight] against range gate distance). Showing the fit for July 12th, 14:31. The fit was done for azimuth angle 0 and range gate no. 6.


Figure 2.11: Cluttermap Correction 3
Left: 13:46, no correction. Right: corrected.

Figure 2.12: Ray Interpolation Example: Clutter Completely Covered by Another Event
Curves: samples, clutter, weights, weighted samples and fit (Z [byte value], weight [100 x weight] against range gate distance). Showing the fit for July 12th, 15:46. The fit was done for azimuth angle 89 and range gate no. 2.


    Chapter 3

    Digital Image Processing Basics

The data produced by the radar system in its original form is not very suitable for the subsequent stages of edge and object detection. It first needs transformation onto the Cartesian plane and a couple of filtering operations. Since the data can be viewed as a natural grayscale image, it is only natural to turn to methods for processing digital imagery as appropriate for the treatment of this data. This section introduces some basic concepts and methods used in the course of this work.

The algorithms devised for processing digital images are legion. They range from simple pixel-wise operations (like thresholding) to algorithms taking the whole image data into account, like Fourier transformations. It would be well beyond the scope of this work to give an authoritative overview, so only the techniques used will be considered. For an extensive discussion of the topic see Gonzalez/Woods, Digital Image Processing [1], from which all digital image processing techniques were taken, except for those developed by the author himself.

    3.1 Definitions

An image in the sense of image processing is a set of rectangular matrices of evenly dimensioned values, which define properties for each pixel in each cell of the corresponding matrices. The combination of all this information determines the appearance of the pixel in the resulting image. A good example are the well-known RGB images, which need three matrices containing the colour information for red, green and blue for each pixel. Since the algorithms used to process these matrices are more often than not identical for each information matrix, the image most widely used when explaining digital image processing procedures is a grayscale image, which needs only one matrix containing the pixel values from a defined range of values. Radar data from the X-Band radar in Bonn comes in a range of unsigned char [0..255] and can thus be looked upon as a natural grayscale image. All following procedures will make use of that convention. Another helpful construction for the purpose of processing is defining the image as a function f(x, y) which yields the grayscale value at pixel coordinates (x, y).


neighbourhood to straight lines. A better solution, though, is choosing the weights according to the number of values under consideration. The mask shown in Figure 3.2 calculates the arithmetic average of a 3x3 neighbourhood. As a general guideline, the sum of the weights has to be 1 for averaging. The result of averaging is demonstrated in Figures 3.3 and 3.4.

Figure 3.2: Arithmetic Mean Averaging Mask
A simple averager.

One of the biggest disadvantages of this method is the blurring, which makes edges considerably harder to locate. We will introduce a more subtle method of averaging later, the Gaussian blur filter.
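As a sketch of how such a mask is applied, the following Python fragment implements the convolution of a grayscale image with a 3x3 mask and defines the arithmetic-mean averager of Fig. 3.2; border pixels are simply left unchanged:

```python
import numpy as np

def convolve3x3(image, mask):
    """Apply a 3x3 mask to a grayscale image (as a correlation, i.e.
    without flipping the mask, which for symmetric masks is the same
    as a convolution). Border pixels are left unchanged."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = image[y - 1 : y + 2, x - 1 : x + 2]
            out[y, x] = np.sum(window * mask)
    return out

mean_mask = np.full((3, 3), 1.0 / 9.0)   # weights sum to 1 -> averaging
```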

Figure 3.3: Averaging Example, Unfiltered
Uncorrected radar data from July 12th, 1999, 12:31, transformed into Cartesian coordinates at a 200x200 resolution.


Figure 3.4: Averaging Example, Filtered
Result of convolving the image once with the averager shown in Fig. 3.2. Notice how the bright spots have been averaged out and some of the smaller gaps have been filled.

    Derivative Masks

As stated in Gonzalez/Woods, Digital Image Processing [1], p. 197: if the averaging process can be viewed as an analogue to integration, and this smoothes images, the opposite can be expected from differential masks. Since differentiation on a two-dimensional domain yields a vector, and the magnitude of the gradient is the length of that vector, calculating the gradient by using masks requires two masks, one for the x and one for the y direction:

\nabla f = \begin{pmatrix} \partial f/\partial x \\ \partial f/\partial y \end{pmatrix}, \qquad |\nabla f| = \sqrt{(\partial f/\partial x)^2 + (\partial f/\partial y)^2}

Now let a 3x3 neighbourhood around a given point be numbered as indicated in Fig. 3.1. Then the gradient can be approximated as

\nabla f \approx |(z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)| + |(z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)|

where the first term corresponds to the approximate gradient in y, G_y, and the second to its counterpart in x, G_x. This scheme gives rise to a pair of masks known in image processing as the Prewitt operators, shown in Fig. 3.5. Another family of differential operators, the Sobel operators, has the advantage of weighting the axis-oriented values over the diagonal elements, providing a smoother result than the Prewitt operator; the two Sobel operators are shown in Fig. 3.6. Generally, the coefficients of differential masks sum to 0. For in-depth information on the presented operators, see [1].
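Reusing the convolve3x3 sketch from the averaging section, the Sobel gradient magnitude can be approximated as follows, using the common |G_x| + |G_y| shortcut instead of the Euclidean norm:

```python
import numpy as np

# Standard Sobel masks for the x and y direction.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def gradient_magnitude(image):
    """Approximate |grad f| with the Sobel masks; convolve3x3 is the
    helper defined in the averaging sketch above."""
    gx = convolve3x3(image, SOBEL_X)
    gy = convolve3x3(image, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)
```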


Figure 3.5: Prewitt Operators
The Prewitt operator for the x (left) and y (right) direction respectively.

Figure 3.6: Sobel Operators
The Sobel operator for the x (left) and y (right) direction respectively.

    3.3 Neighbourhood Averaging

As the name implies, a pixel under a neighbourhood averaging process is replaced by some mean value of its surrounding pixels. Here the size of the convolution makes a big difference: bigger influence radii tend to smear values around more than small ones, and the specific types of averaging preserve or discard the fine structure of the image to varying degrees. In this work, four types of averaging have been considered.

    3.3.1 Arithmetic Mean

[1] The arithmetic mean of the values in a sample is calculated and replaces the original pixel value. This method has the same weakness as the median (see below), and often median and average are indeed identical: it tends to underestimate the brightness in sparsely populated areas of the image and blurs the data considerably. For an example of applying this method (3x3 neighbourhood, equivalent to a radius of 1 pixel) see Fig. 3.4.

    3.3.2 Maximum

The pixel value is replaced by the maximum of the values found in the sample. It is a very good filter for enhancing structural views of the data and filling gaps, but it destroys a lot of the fine-grain structure. It is the steam-hammer among the presented methods, but good for boundary finding in weak data.



    3.3.3 Median

[1] The median of a sample of values is defined as the 0.5 percentile of these values: it is the one value in the sample above which half of the values lie and below which the other half lie. An example of using a median filter on a 3x3 neighbourhood of the data presented in Fig. 3.3 is shown in Fig. 3.7.

    3.3.4 Percentile

The best-suited averaging method found in the course of this work was the percentile filter. A percentile is chosen in advance, and for each sample the original pixel value is replaced by the given percentile of the neighbourhood values. A carefully chosen percentile has all the desirable properties of the maximum filter, yet preserves the fine-grain structure of the data much better than all the other methods. It is computationally more intensive, since an interpolation is done for each sample, but in practical application this difference was found to be imperceptible and the results justify the extra effort. Note that the maximum filter is the 100% percentile and the median the 50% percentile.
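A sketch of the percentile filter follows; numpy's percentile routine performs the interpolation mentioned above, and setting the percentile to 100 or 50 reproduces the maximum and median filters respectively:

```python
import numpy as np

def percentile_filter(image, percentile=80.0, radius=1):
    """Replace each pixel by the given percentile of its
    (2*radius+1)^2 neighbourhood; border pixels are left unchanged."""
    h, w = image.shape
    out = image.copy()
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            window = image[y - radius : y + radius + 1,
                           x - radius : x + radius + 1]
            out[y, x] = np.percentile(window, percentile)
    return out
```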

Figure 3.7: Median Averaging Example
July 12th, 1999, 12:31, median averaging in a 3x3 neighbourhood.

    3.4 Thresholding

Thresholding denotes the process of limiting the range of admitted values for the purpose of differentiating between the background and foreground of a given image. Often the term thresholding is used synonymously with a highpass filter, where all values must lie above a certain value to pass the filter. Thresholding can just as well mean the reverse (lowpass) or a combination (bandpass). For the purpose of this work, only a highpass filter was implemented and used.


Figure 3.8: Percentile Averaging Example
July 12th, 1999, 12:31, percentile averaging (80%) in a 3x3 neighbourhood.


    3.4.1 Absolute or Adaptive?

Absolute thresholding uses a static value or range of values which is allowed to pass. It is most applicable in situations where the range of interest in the data is known a priori. Adaptive thresholding is a process where the threshold value is not set in advance but defined on the fly as a certain portion of a dynamic range. Which of these two variants is used depends a great deal on the problem under consideration. For radar data with a closely defined range of values (as provided by the X-Band radar in Bonn), static thresholding has proven to give good results on average. As a convention, let T_abs denote absolute threshold values in dBZ and T_rel adaptive thresholds in percent.


    3.5 Other Filters Used

3.5.1 Isolated Bright Pixel Filtering

Single points are isolated pixels which differ considerably in brightness from their immediate surroundings. Since they cause trouble in later stages of the object detection, namely in the Gaussian scale-space analysis, a procedure was devised to remove them. For each point, the differences with all points in a 4x4 neighbourhood are considered. If more than two exceed the chosen maximum gradient, the pixel is assumed to be either an isolated point of strong reflectivity or part of a line-like structure of that type, and is replaced by a simple arithmetic average of its surrounding pixels. Otherwise it passes unchanged. The following figures illustrate this using a maximum gradient of 100/pixel (100 dBZ/km).¹

Figure 3.9: Single Bright Spots on a Plain Polar Image
Uncorrected data from July 11th, 11:01. Single bright spots are clearly visible.

Figure 3.10: Single Bright Spots Removed
Corrected using a maximum gradient of 100/pixel; 68 (0.17%) of the original pixels were corrected.

¹The assumption that a gradient of 100 dBZ/km indicates fallacious measurements is based on [2].
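An illustrative implementation of the test is sketched below. Note that a 3x3 neighbourhood is used here, since the 4x4 window mentioned above would require an arbitrary offset convention for an even-sized mask:

```python
import numpy as np

def remove_bright_pixels(image, max_gradient=100):
    """If a pixel's value differs from more than two of its neighbours
    by more than max_gradient, replace it with the mean of its eight
    neighbours; otherwise it passes unchanged."""
    h, w = image.shape
    out = image.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = image[y - 1 : y + 2, x - 1 : x + 2].astype(float)
            diffs = np.abs(window - float(image[y, x]))
            diffs[1, 1] = 0.0                       # ignore the pixel itself
            if np.count_nonzero(diffs > max_gradient) > 2:
                out[y, x] = (window.sum() - float(image[y, x])) / 8.0
    return out
```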


    3.5.2 Speckle Filtering

Speckle is defined as small particles randomly distributed over the image; it can be thought of as dust, scratches or other small-scale noise in, e.g., a photograph. In the context of this work, speckle means small-scale objects which need removal in order not to disturb higher layers of processing. Especially when using Gaussian blur filtering, small-scale yet highly intense spots in the data can get spread out widely in the process, resulting in noise in the scale space. Therefore, the following procedure was devised to get rid of it.

A pixel radius is chosen along with a minimum coverage percentage. Each pixel in the image is then taken as the midpoint of a disk with said radius, and the coverage is calculated. Since the data consists solely of bright blobs on a dark background, and this background is defined as pixels of value 0, the coverage is simply the number of non-zero pixels divided by the overall number of pixels taken into consideration. If that number equals or exceeds the chosen percentage, the pixel is considered part of a large enough structure and passes the filter. If, on the other hand, the coverage around that point is smaller than the chosen percentage, it doesn't make it into the result. The parameters need to be chosen very carefully, since too high a threshold for a given radius removes too many boundary points from originally sufficiently large structures. A good combination of values was found to be a radius of 11 pixels with a required coverage of at least 10% at a resolution of 200x200 pixels. Fig. 3.11 and Fig. 3.12 illustrate the method.

Note the small remains of speckle near unfiltered areas in the top right-hand corner. If a very small-scale object lives near enough to a bigger one, enough points from the adjacent bigger structure make it into the area of influence of the smaller one, keeping it alive through the filter. Since this is limited to a fraction of the radius of influence, the errors introduced are not important. Another configuration fooling the filter are dense yet singular spots which keep each other alive; a remainder owed to this configuration can be seen right in the centre of the de-speckled image. Overall, however, the presented method delivers good enough results for the subsequent processing stages.
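A sketch of the coverage test, assuming a circular disk of the given pixel radius, clipped at the image border:

```python
import numpy as np

def despeckle(image, radius=11, min_coverage=0.10):
    """Keep a foreground (non-zero) pixel only if the disk around it
    is covered by at least min_coverage non-zero pixels."""
    h, w = image.shape
    yy, xx = np.mgrid[-radius : radius + 1, -radius : radius + 1]
    disk = (xx**2 + yy**2) <= radius**2
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            if image[y, x] == 0:
                continue                       # background stays background
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            d = disk[y0 - y + radius : y1 - y + radius,
                     x0 - x + radius : x1 - x + radius]
            coverage = np.count_nonzero(image[y0:y1, x0:x1][d]) / d.sum()
            if coverage >= min_coverage:
                out[y, x] = image[y, x]
    return out
```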


Figure 3.11: Image with Speckle
July 13th, 1999, 11:41. The image shown was produced by applying a cluttermap correction, removing bright spots, thresholding at 12.5 dBZ and projecting onto the Cartesian plane at a resolution of 200x200 pixels. In the centre, remains of the cluttermap correction can be seen.

Figure 3.12: Despeckled Image
The same data after applying a speckle filter of radius 11 pixels and a minimum coverage of 10%; 72 values were removed.


    Chapter 4

    Scale Space Theory

4.1 Basic Conception

The conceptual foundations of scale-space theory are very intuitive. Consider a single tree. On a fine scale it exhibits leaves and twigs. Looking at the tree from a little further off renders the concept of describing the tree by its twigs somewhat pointless; the observable details would rather be branches and trunk. Backing off even further reveals the tree's overall shape, which might roughly be cylindrical or spherical. At the scale of a forest, however, even the unit "tree" seems inappropriate. The level of detail used to describe something depends largely on the scale at which the object is perceived.

Although this concept is conceptually very easy to understand, it waited for a mathematical treatment in terms of signal processing for some time, despite the fact that all the necessary mathematical concepts were ready by the mid-1800s [5]. It is interesting that, although the scale-space idea in the western hemisphere is usually said to have appeared first in a 1983 paper by A. P. Witkin [6] or in an unpublished report by Stansfield (1980), Weickert points out that the first Gaussian scale-space formulation was proposed by Taizo Iijima in Japan in 1959. Two theories of scale-space developed surprisingly independently of each other in Japan and the western world. A comparison of the two theories was done by Weickert in his paper "Scale-space has been discovered in Japan" [5], which is also a good, compact introduction to the general ideas of the theory.

Within the confines of any given image¹, the concept of scale becomes somewhat relative. Lindeberg states in his book Scale-Space Theory in Computer Vision [3]: "The extent of any real world object is determined by two scales, the inner scale and the outer scale. The outer scale of an object or a feature may be said to correspond to the (minimum) size of a window that completely contains the object or the feature, while the inner scale may loosely be said to correspond to the scale at which substructures of the feature or object begin to appear."

Scale-space theory is a mathematical model which strives to give a robust and usable description of the property "scale".

¹"Image" in this work is used synonymously with 2-D signal representation.


    4.2 Short Introduction to Gaussian Scale Space

This section broadly follows Lindeberg, 1994, Chapter 2. Consider a one-dimensional image F: ℝ → ℝ. Now a scale parameter t ∈ ℝ₊ is introduced; small values of t shall represent finer, larger values coarser scales. The image F is then abstracted into coarser and coarser scales by gradually increasing t, resulting in a family F(x, t) of images parameterised by t. This family is called the scale-space representation of the image, L(x, t). It contains information about each object in F at each considered scale. This has some similarity with the wavelet approach; as opposed to wavelets, however, the scale-space representation does not shrink in size as the scale parameter increases. Scale space is useless for data compression.

How does the abstraction take place? For an illustration, a one-dimensional signal is instructive. Again, let F: ℝ → ℝ. The scale-space representation L of F starts at scale 0 (the original image), and images at coarser scales are given by convolution with a scale-space kernel g:

L(x, 0) = F(x)   (4.1)

L(x, t) = g(x, t) ∗ F   (4.2)

which is calculated in the form of a convolution of F with g:

L(x, t) = \int_{\xi=-\infty}^{\infty} g(\xi, t)\, F(x - \xi)\, d\xi   (4.3)

Although many possible scale-space kernels are conceivable², the Gaussian kernel g(·, t)³ has by far the most important standing in the field of scale-space theory:

g(x, t) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t}   (4.4)

It has a number of desirable properties (see Lindeberg, 1994). First of all, it is normalised in the sense that

\int_{x \in \mathbb{R}} g(x, t)\, dx = 1   (4.5)

It has a semi-group property, which means that the convolution of a Gaussian kernel with a Gaussian kernel is another Gaussian kernel:

g(\cdot, t_1) ∗ g(\cdot, t_2) = g(\cdot, t_1 + t_2)   (4.6)

This has a technically important implication for scale-space representations: a scale-space representation L(x, t_2) can be computed from a scale-space representation L(x, t_1) with t_1 < t_2 through convolution with a Gaussian kernel g(\cdot, t_2 - t_1):

L(x, t_2) = g(\cdot, t_2 - t_1) ∗ L(x, t_1)   (4.7)

This is the cascade smoothing property of the scale-space representation. Furthermore, the kernel is separable in N dimensions, such that an N-dimensional Gaussian kernel g: ℝᴺ → ℝ can be written as

g(x, t) = \prod_{i=1}^{N} g(x_i, t)   (4.8)

which brings the number of operations needed for computing convolution masks in the spatial domain down considerably.

²The two properties making a kernel useful: being unimodal and positive.
³g(·, t) means g(x, t) ∀ x ∈ ℝ.
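The semi-group property (4.6)/(4.7) is easy to check numerically; the sketch below uses scipy's 1-D Gaussian filter, which is parameterised by sigma = sqrt(t), so the two-step and one-step results agree up to kernel truncation and boundary handling:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Cascade property: smoothing to scale t1 and then by t2 - t1
# should equal smoothing to t2 directly (t = sigma**2).
f = np.random.rand(512)
t1, t2 = 4.0, 9.0
two_step = gaussian_filter1d(gaussian_filter1d(f, np.sqrt(t1)),
                             np.sqrt(t2 - t1))
one_step = gaussian_filter1d(f, np.sqrt(t2))
print(np.max(np.abs(two_step - one_step)))   # small residual only
```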


    4.2.1 Effective Width

In practical applications the Gaussian kernel is calculated only out to a certain distance from its origin, its effective width x_max. In this work, this distance was determined for each scale as the point x_max(t) at which the value of g(x_max(t), t) has decayed to a fraction of 0.01 of g(0, t). This fraction is called the decay ε_g and is adjustable in the software, although it was mostly left at its default of 0.01. Thus the width of the kernel operator is calculated through

\varepsilon_g = \frac{g(x_{\max}, t)}{g(0, t)} = \frac{\frac{1}{\sqrt{2\pi t}}\, e^{-x_{\max}^2/2t}}{\frac{1}{\sqrt{2\pi t}}\, e^{0}} = e^{-x_{\max}^2/2t}   (4.9)

and thus

x_{\max}(t) = \sqrt{-2\, t \ln \varepsilon_g}   (4.10)

which is also the width of the mask used to calculate the kernel.⁴

Since the width of the kernel is expressed in image coordinates, where the basic unit is one pixel, relating x_max to distances in metres requires taking the resolution of the image into account.
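A sketch of Eqs. (4.9) and (4.10) in code; renormalising the truncated mask so its weights sum to 1 preserves the normalisation property (4.5) in the discrete setting:

```python
import numpy as np

def effective_width(t, eps=0.01):
    """x_max(t) from Eq. (4.10): the distance at which the kernel has
    decayed to the fraction eps of its central value."""
    return np.sqrt(-2.0 * t * np.log(eps))

def gaussian_mask(t, eps=0.01):
    """Sampled 1-D Gaussian of scale t, truncated at x_max and
    renormalised so the mask weights sum to 1."""
    x_max = int(np.ceil(effective_width(t, eps)))
    x = np.arange(-x_max, x_max + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return g / g.sum()
```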

How does convolution with a Gaussian kernel affect the data? Figure 4.1 shows a scale-space representation of random data which has been modulated by a sine; scale increases from bottom to top:

⁴In the literature on scale-space, the effective width is often deduced from the thought that the weighted averaging introduced by the Gaussian kernel is similar to measuring the signal at point x through a circular aperture of characteristic length σ = √t, as for example in Lindeberg, 1994.


    Figure 4.1: 1-D Scale Space Representation

    Scale increases from 0 (bottom) to 0.8 (top).

Notice how the small-scale random signal becomes less and less important as scale increases. The structure that remains is the larger-scale, sinusoidal variation.

Of course, Gaussian filtering in its own right is a well-known technique for de-noising data and nothing new. However, in the context of scale space the noise is not an unwanted component to be filtered out, but simply the property of the given signal at the scale where it is visible. The scale-space representation is constituted by the whole family of curves, parameterised by t, at different levels of detail.

4.2.2 Extension to 2D

The extension to a higher dimension is straightforward. The image function is extended to F: ℝ² → ℝ and the Gaussian kernel becomes

g(r, t) = \frac{1}{2\pi t}\, e^{-|r|^2/2t}

where r ∈ ℝ². The convolution of F with g(r) is the integral over the whole domain:

L(r, t) = \int_{\mathbb{R}^2} g(\xi, t)\, F(r - \xi)\, d\xi


The scale-space representation of a 2D image is a 3D space, where the scaled versions of F stack up along the t axis in L(x, t). Outlines of structures in scale-space appear as upside-down domes or mountains.

    4.2.3 Isotropic Diffusion

It is interesting to see how the scale parameter came to be denoted by the letter t. This has historical reasons: it was observed that this smoothing process has a physical counterpart, heat diffusion. It is often mentioned in the literature about scale space that the Gaussian kernel is a solution of the heat diffusion equation (Lindeberg, 1994, pp. 43)

\partial_t L = \frac{1}{2}\, \nabla^2 L

with initial condition L(x, 0) = f(x). Indeed, one of the first ideas concerned with abstracting image details goes back to the Perona-Malik filter (P. Perona, J. Malik, 1985), whose underlying idea was to take a given signal, let it diffuse through an isotropic medium for a certain time t and then observe the result. Because the linear diffusion process has the disadvantage of dislocating edges, further attempts have been made using an anisotropic medium, which is for instance less diffusive at edges (areas of high gradients) than in areas of shallow gradients, in order to preserve the structure of the prevalent edges better. This is called inhomogeneous linear diffusion. Both types, as well as their non-linear siblings, are presented in a compact manner in [4].

    4.3 Blobs

    4.3.1 Definition

Grayscale imagery is composed of areas of different brightness. Blobs are areas in the image where a desired property remains relatively stable and which are somehow distinguished from their surroundings. In grayscale images the two candidates are the bright blob on a dark background and its evil twin, the dark blob on a bright background. In the case of radar data in the given representation this is particularly easy: we have only bright areas against a dark background, since only the bright areas are of interest.

    4.3.2 Edge Detection

A Blob in the given data is also delimited by an edge. An edge is characterised by an area of high gradient and a zero crossing in the Laplacian. This is best illuminated by a little walk across an edge, coming from an area of relatively homogeneous low intensity (low gradient) and heading for a bright area. In the region of the transition the gradient increases, up to the point where the change of intensity declines and a relatively homogeneous, albeit brighter, region of intensity is entered. Figure 4.2 illustrates this:


Figure 4.2: Intensity, gradient, Laplacian at edges
Intensity making a transition from low (left) to high (right) values; the curves show f(x), its gradient and its Laplacian.

    Figure 4.3: Mexican Hat Laplacian Mask


Notice how the gradient reaches its maximum at the middle of the slope, and observe how the Laplacian changes sign in the process. There are two basic techniques for obtaining the location of edges using derivative operators: gradient maxima and Laplacian zero crossings. For the course of this work, the Laplacian zero crossing was used, approximated by the mask shown in Fig. 4.3, which is a second-order derivative of a Gaussian smoothing operator (see [1], chapter 7), a so-called Mexican Hat operator. Only points with a negative Laplacian were considered as candidates for edge points; that way the edge is actually located inside the bright Blobs. A demonstration of this can be seen in Fig. 4.4.

Figure 4.4: Laplacian mask, detected edges
Edges detected by the Laplacian mask detector after the data has been put through a lightly smoothing Gaussian filter. Edges are already linked and coloured accordingly.

For the following procedure let F be the original image. F is first smoothed using a Gaussian kernel g(ξ, t) (the Gaussian convolution kernel at scale t), in order to prevent the very noise-sensitive Laplacian from going nuts, resulting in a smoothed image G; then the Mexican Hat edge detection, denoted by MH, is applied. The smoothing is thus effectively done twice, since the Mexican Hat operator was constructed with a smoothing property itself, but the results are nonetheless usable.

G = g(ξ, t) ∗ F
E = MH ∗ G

An edge point is every point in E with a value less than zero.
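In code, the two stages might look as follows; this is an illustrative sketch rather than the original implementation, using scipy's gaussian_laplace (the Laplacian-of-Gaussian, i.e. the Mexican Hat operator up to sign) for the mask, with t_mh an assumed small scale for the mask itself:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def edge_point_mask(F, t, t_mh=1.0):
    """Smooth F at scale t, then mark points of negative Laplacian response.

    Following the text, E < 0 selects the edge candidates, which places
    the detected edge just inside the bright Blobs.
    """
    G = gaussian_filter(F, sigma=np.sqrt(t))      # G = g(xi, t) * F
    E = gaussian_laplace(G, sigma=np.sqrt(t_mh))  # Laplacian response
    return E < 0

mask = edge_point_mask(np.random.rand(400, 400), t=8.0)
```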

    4.3.3 Edge Linking

The edge points alone are not very useful; they have to be linked into chains enclosing objects. For convenience, all points classified as edge points are taken from the edge detection output and stored in a simple list containing the locations of the edge points in image coordinates. A recursive scheme was used to detect closed chains in this list. For each of these closed chains, a Blob object was created.

Let all N points in E satisfying the edge criteria be collected into a list S = {n1, n2, ..., nN}. Every node ni is composed of the location in image coordinates and a pointer to the next entry, ni+1 (a forward-linked list).

Starting with an empty boundary node list b1, the first node n0 ∈ S is added to b1. Then the immediate 8-neighbourhood of n0 is searched in S. Every point found to be a direct neighbour is considered to be part of the boundary b1 and added, if it hasn't been added already. Then this newfound friend is subjected to the same treatment. This process continues until no more new points can be added to b1. Afterwards, b1 is removed from S and the process starts all over again, this time with b2, until S is empty. This results in K closed boundaries:

b1 = {n1, n2, ..., nN1}
b2 = {n1, n2, ..., nN2}
...
bK = {n1, n2, ..., nNK}

where the sets bj are disjoint in space and their union is S:

S = \bigcup_i b_i

For each now-closed boundary bj a Blob object Bj is created, and the boundary is stored within it for future use.
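A sketch of such a linking scheme (illustrative only; the thesis used a recursive formulation, replaced here by an explicit stack):

```python
def link_boundaries(points):
    """Group edge points into boundaries via 8-neighbourhood search.

    `points` is an iterable of (x, y) image coordinates classified as
    edge points; every connected chain becomes one boundary b_j.
    """
    remaining = set(points)
    boundaries = []
    while remaining:
        stack = [remaining.pop()]
        boundary = set(stack)
        while stack:
            x, y = stack.pop()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in remaining:   # direct neighbour not yet linked
                        remaining.remove(n)
                        boundary.add(n)
                        stack.append(n)
        boundaries.append(boundary)
    return boundaries
```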

    4.3.4 Holes

As said before, the data under consideration only contains bright Blobs on a dark background. Nevertheless, it is quite common to have areas of no signal completely enclosed by areas bearing significant signal. These spots are called holes in the context of this work, and they pose a problem: since the edge detection algorithm finds the boundary between a hole and the surrounding bright area like any other transition, spurious Blobs are generated. In order to remove these, each combination of Blobs is checked: consider two boundaries bj and bk. If

bj ⊓ bk = bk

where ⊓ denotes complete geometrical inclusion (every point in bk is checked as to whether it is completely confined within bj; if true, it is added to the result of the operator), then bk is considered to be a hole and removed from the list of Blobs. This has to be done since the following area sampling algorithm would be fooled by holes and run astray.
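One way to express this check in code (a sketch; it assumes each boundary is available as an ordered polygon and borrows matplotlib's point-in-polygon test in place of the inclusion operator):

```python
import numpy as np
from matplotlib.path import Path

def remove_holes(boundaries):
    """Drop every boundary that lies completely inside another one.

    `boundaries` is a list of (N, 2) arrays of ordered (x, y) vertices.
    """
    keep = []
    for k, bk in enumerate(boundaries):
        is_hole = any(
            j != k and Path(bj).contains_points(bk).all()
            for j, bj in enumerate(boundaries)
        )
        if not is_hole:
            keep.append(bk)
    return keep
```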

    4.3.5 Area Sampling

In order to obtain the measured values that are actually contained within the found boundaries, one must first know where inside actually is. To this end, a gradient walk is applied: consider a boundary bj. For each point ni in bj the gradient in L(., t), which was used to obtain the boundaries, is calculated, and a step is taken in that direction. This is repeated until a point p0 is encountered which is not a boundary point; it is assumed that the inside has been found. From p0 the inside is traversed horizontally, once in the direction of increasing and once in the direction of decreasing x coordinates, each until a point in bj is encountered. En route, for each coordinate in the walk, the corresponding value in F (the original data) is added to the Blob Bj's area values Aj, an image of the same dimensions as F where all values have initially been set to 0. The resulting image Aj contains only those values from F which lie inside, but not on, the boundary bj.
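The same result can be obtained more compactly with standard tools; the following sketch replaces the gradient walk described above with scipy's binary_fill_holes on a rasterised boundary mask (a swapped-in technique, not the thesis's method):

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def sample_blob_area(F, boundary):
    """Build the area image A_j: values of F inside, but not on, b_j."""
    mask = np.zeros(F.shape, dtype=bool)
    for x, y in boundary:          # rasterise the boundary points
        mask[y, x] = True
    interior = binary_fill_holes(mask) & ~mask
    return np.where(interior, F, 0)
```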

    4.4 Scale Space Representation in 2D

Since images say more than words, let's take a look at how 2D signals evolve under scale-space transformations. The sequence shown in Figures 4.5 and 4.6 shows the scale-space representation of the azimuth scan data from September 8th, 1998, 19:38. The original image was subjected to a thresholding at Tabs = 12.5 dBZ, single bright spots were removed and the result was projected onto the Cartesian plane at a resolution of 400x400 pixels. For every subsequent image, the scale parameter t was doubled.

Please observe that the brightness shown has been adjusted to represent the whole range of values of a Gaussian-blurred image. Using the fixed-value grayscale mapping would have made the Blobs almost invisible, because the Gaussian kernel not only smoothes the image but also levels the values down somewhat; the higher the scale, the lower the resulting signal. It is clearly visible how scaling up dismisses more and more of the internal details of the signal, until at large scales only a rough description of the original shape remains visible. The internal scale of the image shown could roughly be estimated to lie around 128.


    Figure 4.5: Scale Space Representation 1

Scale-space representation of the azimuth scan, 8th of September 1998, at scales 0 (original image), 2, 4, 6, 8, 16 and 32.


    Figure 4.6: Scale Space Representation 2

The scale-space representation continued for scales 64, 128, 256, 512, 1024 and 2048.


    4.5 Blob Detection in Scale-Space Images

The problem posed by images under Gaussian scale-space transformation, when it comes to detecting objects, is clearly the absence, or massive dislocation, of clean edges. Since the Gaussian blur tends to smooth the edges out, artificial edges have to be re-introduced. How can this be done? A simple approach is to subject L(x, t) to a thresholding procedure. Since the Gaussian kernel g(ξ, t) tones the values down more and more with increasing scale t, it is a good idea to use adaptive thresholding. The following series repeats the process of the previous section on the same data, but this time each slice of the scale-space representation is subjected to an adaptive thresholding at Trel = 20%, which means the lowest 20% of the data are trashed (set to 0). This value will subsequently also be referred to as the cut-off value. After thresholding, the edge detection introduced from Section 4.3.2 onwards was applied. See Figures 4.7 and 4.8 for results.
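Such a relative threshold might be implemented as below (a sketch; the text does not spell out whether the 20% refers to the value range or to a percentile, so the value range is assumed here):

```python
import numpy as np

def adaptive_threshold(L_t, cutoff=0.2):
    """Set the lowest `cutoff` fraction of the slice's value range to 0."""
    T = L_t.min() + cutoff * (L_t.max() - L_t.min())
    return np.where(L_t >= T, L_t, 0.0)
```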

It is clearly visible from the scale-space representation that the resulting boundaries settle around the prevalent structures in the original data. The number of detected Blobs K decreases with increasing scale t, as could be expected.



    Figure 4.7: Edge Detection in Scale Space Images 1

Thresholded edge detection at scales 2, 4 and 8. Left: scale-space representation. Right: resulting boundaries on the original data.


    Figure 4.8: Edge Detection in Scale Space Images 2

Thresholded edge detection at scales 16, 32 and 64. Left: scale-space representation. Right: resulting boundaries on the original data.


    4.6 Automatic Detection of Prevalent Signals

As could be seen in the previous section, an increasing scale parameter t leads to prevalence of the most significant features and dampening of the less significant ones. The scale at which the prevalent features remain while the insignificant ones disappear varies considerably from image to image; it depends a great deal on the complexity of the scenery. Prevalent, in the scale-space context, is always to be seen relative to the scale of the image features present. This means that an approach aiming at a similar level of detail (in scale-space terms) in subsequent images cannot work properly with a fixed scale. Thus, an automatic process capable of distinguishing the prevalent from the insignificant Blobs would be highly desirable. The question is, though: how can prevalent be defined in terms of scale-space?

Consider the following idea: given the fact that (in general) the number of detected Blobs decreases as the scale parameter t increases, could it be reckoned that Blobs surviving the upscale process for a given number of repetitions are the prevalent Blobs?

Figure 4.9: Automatic Scale Detection, Original Image
Azimuth scan used for demonstrating automatic scale-space analysis. The data has been thresholded at Tabs = 12.5 dBZ, contrast-stretched, single-point filtered with a gradient of 25 dBZ/330 m and interpolated onto the Cartesian plane.

This idea shall be used for the following procedure. Starting with a low scale parameter t0, the number of Blobs is detected. The scale is increased by a fixed increment Δt, and the number of Blobs found now is compared to the previous number. This process is repeated until the number of detected Blobs stabilises over Nmax iterations. The parameter Nmax determines the required scale-space persistence for any given object needed to be classified


as prevalent. The resulting automatically selected scale is chosen to be the scale parameter t of the first scale-space slice L(., t) of the stable series, in order to conserve maximum detail. The complete set of required parameters is thus: the start scale t0, the scale increment Δt and the scale-space persistence Nmax. Blobs considered persistent are hence required to remain distinguishable over an effective scale difference of Nmax·Δt.
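In code, one plausible reading of this procedure looks as follows (a sketch; detect_blobs stands for the thresholding, edge detection and linking pipeline of the previous sections and is a hypothetical helper):

```python
def select_scale(F, t0, dt, n_max, detect_blobs):
    """Increase the scale until the Blob count is stable over n_max steps.

    Returns the scale of the first slice of the stable series, so that
    a maximum of detail is conserved.
    """
    t = t0
    first_stable = t
    prev_count = len(detect_blobs(F, t))
    streak = 0
    while streak < n_max:
        t += dt
        count = len(detect_blobs(F, t))
        if count == prev_count:
            streak += 1        # count unchanged: series may be stable
        else:
            streak = 0         # count changed: the stable series restarts
            first_stable = t
        prev_count = count
    return first_stable
```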

Figure 4.9 shows an azimuth scan from July 1999. An extensive signal is present on the west side; the upper east side is populated by smaller, scattered signals. Figure 4.10 illustrates the automatic upscale process for Nmax of 1, 2, 4, 6, 8 and 10. Depending on the Nmax setting, different Blobs prevail or structures merge into larger Blobs, as expected.


    Figure 4.10: Automatic Scale Detection Results

Blob boundaries detected at Nmax of 1, 2, 4, 6, 8 and 10.


    Chapter 5

    Tracking and Scale Space

Tracking means: to extract data about movement from subsequent sets of data. The movement need not be physical movement between two points in time; other parameters changing between two images may be suitable (for example, tracking of objects under scale transformations).

The speciality of the SARTrE tracking tools (SARTrE is short for Scale Adaptive Radar Tracking Environment; the Environment refers to the reusable software libraries developed for this work) lies in the ability to automatically select features worth tracking in the context of all objects in any given snapshot, and in the correlation procedure, which takes histograms of Blob content (the signature) into account. The focus of attention is drawn to the salient image structures by applying the automatic detection procedure presented in Section 4.6.

There exist a couple of tracking algorithms based on different principles to obtain information about what happened between time t and t + Δt:

Centroid-Tracking: can be applied if the trackable data can be decomposed into distinct objects under some criteria. A centroid, a designated point, is assigned to each object. Subsequent images are analysed with the goal of finding the same object at its new position, and the displacement of the object between the two images is estimated as the displacement of its centroid. Of course, the problem of correlating objects from one image to another depends on the nature of the image or object and on the criteria used. A certain grey value, a geometrical shape or another suitable form of signature may be used. Often the search is narrowed by a priori or otherwise obtained information about the maximum possible object velocity and the size of the object, restricting the search window in a subsequent image. It was first applied in meteorology by Barclay and Wilk (1970). A recent adoption of this form of tracking is the Trace3D algorithm, developed in Karlsruhe by J. Handwerker, 2002 [9].

Statistical Cross-Correlation: is not concerned with individual objects as such, but with the extraction of flow patterns in image series. This is achieved by defining a box size and statistically correlating all possible boxes at time t with all possible boxes at time t + Δt. The boxes obtaining the highest correlation are connected. The resulting field of displacement vectors depends on



the box size as well as on the data. Statistical box correlation suffers from ambiguities inherent in the correlation process and is often highly sensitive to changes in box size. For an illustration of the ambiguity problem, see E. Heuel, 2004 [14]. An example of this type is the TREC algorithm (Rhinehard 1981) [10], which was improved by L. Li, W. Schmid and J. Joss 1994 (COTREC) [11] through directional post-processing, applying the continuity equation to the vector field delivered by TREC; the results were used for Nowcasting. This was the basis for the improved algorithm developed at the ETH Zurich by S. Mecklenburg, 2000 [12].

Tracer Tracking: a special form of semiautomatic tracking, applied when the object under observation exhibits little clue as to its motion, for instance when determining flow patterns and velocities in fluids. In this case a tracer is picked or introduced, and the motion of the tracer is tracked instead. An example is the estimation of rotational velocities in a tornado by tagging debris carried by it and following the debris through a series of high-resolution film frames. In the context of radar meteorology, this form of indirect tracking has no real significance.

For the course of this work, the natural approach to tracking precipitation seemed to be tracking Blob centroids. As signature, the histogram of reflectivity within each Blob was chosen. The correlation was performed using a weighting scheme that includes spatial displacement, histogram size and histogram shape (via Kendall's Tau correlation).

    5.1 Histogram

A histogram of reflectivity values contains the counts of each value from the range of (discrete) values possible. In our case, the range was chosen to be the natural range present in the data, where values range over [0..255]. Each Blob area A was scanned and the values found were counted up. As an example, the histograms of the Blobs detected in Figure 5.1 are shown in Figure 5.2.
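Computed over a Blob's area image, this is essentially a one-liner (sketch):

```python
import numpy as np

def blob_histogram(A):
    """Counts of the byte values [0..255] inside a Blob's area image A.

    Class 0 stays in the histogram but is ignored later when histogram
    sizes |H| are compared (see Section 5.3).
    """
    return np.bincount(A.ravel().astype(np.uint8), minlength=256)
```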


Figure 5.1: Histograms, Detected Blobs
Azimuth scan, July 12th 1999, 13:01. Four distinct Blobs detected, with identifiers #1, #2, #3 and #4.

5.2 Centroid

5.2.1 Geometric Centre of Boundary

When saying that the centroid of the object is used for determining its displacement, the question was left open as to what the centroid actually is. At first glance, the geometric centroid of the boundary points comes to mind. Assume a boundary b of a Blob B containing N points xi = (xi, yi):

x_{centre} = \frac{1}{N} \sum_{i=1}^{N} x_i   (5.1)

y_{centre} = \frac{1}{N} \sum_{i=1}^{N} y_i   (5.2)

This has a big drawback: since Blobs tend to change in shape, yet may stay relatively intact in terms of overall size and (foremost) position, the geometric centre of the boundary points might yield spurious movement. Although that option was left in the software for pedagogic purposes, it is not a good choice. Two other candidates proved to be a lot more stable:


Figure 5.2: Histograms
Histograms N(Z) of the counts per byte value Z for the four Blobs in Fig. 5.1. Top left: #1, top right: #2, bottom left: #4, bottom right: #3.

    5.2.2 Centre of Reflectivity

To give the centroid a bit more anchoring in the actual data, the reflectivity can be viewed as a distribution of mass over the area of the Blob. Letting the reflectivity (as measured by F(x, y) and also present in A for each Blob separately) play the role of density or mass, it is possible to calculate a centroid based on the well-known centre-of-mass equation for mass-point distributions. Assume N values in the area A(x) of a Blob B at locations x1 ... xN, where x = (x, y) ∈ ℝ²:

A_{sum} = \sum_{i=1}^{N} A(x_i)   (5.3)

x_{centre} = \frac{1}{A_{sum}} \sum_{i=1}^{N} x_i A(x_i)   (5.4)

y_{centre} = \frac{1}{A_{sum}} \sum_{i=1}^{N} y_i A(x_i)   (5.5)

where Asum is the sum of reflectivity in the area and A(xi) the value measured at each point xi. This way the centroid follows the distribution of reflectivity, wherever the boundary might be.
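Eqs. 5.3-5.5 translate directly into a few lines of numpy (a sketch, not the original code):

```python
import numpy as np

def centre_of_reflectivity(A):
    """Reflectivity-weighted centroid (Eqs. 5.3-5.5) of an area image A."""
    ys, xs = np.nonzero(A)              # pixel locations x_i inside the Blob
    w = A[ys, xs].astype(float)         # reflectivity values A(x_i)
    A_sum = w.sum()
    return (xs * w).sum() / A_sum, (ys * w).sum() / A_sum
```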

    5.2.3 Scale Space Centre

With the introduction of the scale-space methods, another possibility of marking the centroid appeared, which is closely linked to the centre of reflectivity. Instead of basing the weighting in Eqs. 5.3-5.5 on the originally sampled reflectivity values, the scale-space centroid picks the scale-space representation L(., t) from the edge detection stage. The equations are applied again, only this time the values are sampled from the Gaussian scale-space representation L(x, t) instead of A(x) directly:

L_{sum} = \sum_{i=1}^{N} L(x_i, t)   (5.6)

x_{centre} = \frac{1}{L_{sum}} \sum_{i=1}^{N} x_i L(x_i, t)   (5.7)

y_{centre} = \frac{1}{L_{sum}} \sum_{i=1}^{N} y_i L(x_i, t)   (5.8)


Figure 5.3: Centroids
Azimuth scan, July 7th 1999, 20:31. Different methods to obtain the centroid. Top left: data with boundaries. Top right: geometrical. Bottom left: reflectivity. Bottom right: scale-space.

    5.3 Correlation

Tracking consists of recording the displacements and histogram developments of Blobs through time. At time t0 there is of course nothing to match against, so the current Blobs are provided with unique IDs and stored in a collection. At all subsequent times, however, it is the correlation's task to transfer the IDs from old Blobs to the new Blobs identified as their successors. The Tracks are based on subsequent Blobs carrying the same ID.


Consider two images (also called snapshots) at two different points in time, t1 and t2 = t1 + Δt: F(t1) and F(t2). A critical time difference can be set, which determines the maximum time allowed between two snapshots for correlation to be attempted. If the time difference Δt between the two snapshots exceeds Δtmax, the correlation is omitted and the new Blobs simply replace the previous Blobs, being assigned entirely fresh IDs. This is useful in situations with fast-moving objects and sparse data; in such situations it is best to lift the pencil and start over, instead of producing errors in the resulting tracks.

Assume Δt is within reasonable limits and the snapshots have yielded two sets of Blobs, Bprev and Bnew. For each new Blob b_i^new ∈ Bnew a table is calculated, which contains a set of values with respect to each old Blob b_j^prev ∈ Bprev:

centroid displacement dR: This is simply the distance between the centroids of b_i^new and b_j^prev in metres.

displacement correlation value R: After all displacements have been calculated, they are normalised by the maximum displacement value found in all correlations and fed into a complementary Gaussian error function, resulting in values nearer to 1 the closer the argument gets to 0. The resulting value ranges in ]0..1] and is named R.

histogram size difference d|H|: |H| is defined as the number of values other than zero that went into the histogram, i.e. simply the sum of all counts of all classes except class 0. The difference between the histogram sizes, d|H|, is calculated for each pair b_i^new and b_j^prev.

histogram size correlation H: is obtained by normalising the differences d|H| by the highest difference present and feeding this value to the complementary Gaussian error function again. As usual, this yields a value which approaches 1 as d|H| approaches 0. This value is called H.

histogram shape correlation K: The Kendall rank correlation is a statistical correlation suitable for data which has to fulfil only one criterion: it should be rankable. The ranks are then correlated in the categories concordant or discordant alone. No assumption about the parameters of the underlying distribution is made, and none of its parameters are estimated (non-parametric correlation). Kendall's Tau is described in Numerical Recipes in C [7], Chapter 14. Basically, the correlation compares data by counting the occurrences of higher in rank (concordant, aka con), lower in rank (discordant, aka dis) or equal (a tie). If the tie occurs in x, the count goes to an extra counter (extra_x); if it occurs in y, it is an extra_y; if the tie occurs in both, it is not counted at all. The basic formula to calculate Kendall's Tau according to Numerical Recipes in C [7] is:

\tau_K = \frac{con_{all} - dis_{all}}{\sqrt{con_x + dis_x + extra_x} \, \sqrt{con_y + dis_y + extra_y}}   (5.9)

How does this apply to the histograms? Each histogram consists of value counts (yi) in the 256 classes of possible values (xi). In x every value will be a tie, since all classes are present in both histograms at all times (by construction). This leaves only the yi of the two histograms in b_i^new and b_j^prev to be compared, which are the counts for the classes xi, and these usually differ. Using Kendall's Tau yields a parameter which is not bound to the absolute numerical values of the histograms compared, but merely to their difference in shape. τ_K ranges from −1 (completely anti-correlated) to +1 (completely correlated).
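With scipy standing in for the Numerical Recipes routine, the shape correlation of two Blob histograms might be computed like this (a sketch; scipy's kendalltau handles ties in the manner described above):

```python
from scipy.stats import kendalltau

def histogram_shape_correlation(h_new, h_prev):
    """Kendall's tau between the per-class counts of two histograms.

    The class values x_i are identical in both histograms by construction,
    so effectively only the counts y_i are compared.
    """
    tau, _ = kendalltau(h_new, h_prev)
    return tau
```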

coverage, previous by new: This value is not used for correlation, but for determining merges and splits (see below). Consider two arbitrary Blobs bi and bj and their respective areas Ai and Aj. Let the coverage operator ⊏ (read: covered by) be defined as:

b_i ⊏ b_j = \frac{|\{(x, y) \in A_i : A_i(x, y) > 0 \wedge A_j(x, y) > 0\}|}{|\{(x, y) \in A_i : A_i(x, y) > 0\}|}   (5.10)

or in human-readable form: how many percent of the area covered by bi is covered by bj as well? Clearly, if that value reaches 1, bi is completely covered by bj; if the value is 0, they are completely distinct (in terms of covered ground). This coverage value for the current pair is computed as b_j^prev ⊏ b_i^new and used to check for merges.
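On the area images, Eq. 5.10 reduces to counting pixels (sketch):

```python
import numpy as np

def coverage(A_i, A_j):
    """Fraction of b_i's covered ground that b_j covers as well (Eq. 5.10)."""
    inside_i = A_i > 0
    return np.count_nonzero(inside_i & (A_j > 0)) / np.count_nonzero(inside_i)
```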

coverage, new by previous: This is just the same operator applied in reverse order, b_j^new ⊏ b_i^prev, and it is used for detecting splits.

When all correlative values of all possible pairs have been calculated, the values R, H and K are summed up with weights in order to obtain an overall correlation value for each pair:

\Theta^i_j = w_R R^i_j + w_H H^i_j + w_K K^i_j   (5.11)

where the subscript index j denotes the new, and the superscript index i the previous, Blob involved. The purpose of the weights wR, wH and wK is to have a device to put more emphasis on one aspect or another during operation. For most parts of the work they were all set to 1, but in some situations the tracking accuracy could be improved, depending on the situation in the data sets, by putting more weight on one or the other. By setting one of the weights to zero, it is even possible to eliminate the corresponding aspect completely from the tracking. Assuming all weights are at their default value 1, the overall correlation index ranges from −1 (totally anti-correlated Kendall-τ, no spatial or histogram size correlation) to +3 (perfect match). The following procedure needs no adjustment when the weights are changed, because it works on a strictly relative principle.
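Pulled together, the pairwise score of Eq. 5.11 might be computed as follows (a sketch; the normalisation by the respective maxima over all pairs is simplified into the two *_max arguments):

```python
import math

def overall_correlation(dR, d_max, dH, dH_max, tau, w=(1.0, 1.0, 1.0)):
    """Weighted sum of the three correlation values (Eq. 5.11).

    erfc maps the normalised displacement and histogram-size differences
    into ]0..1]; tau is the Kendall shape correlation in [-1..+1].
    """
    R = math.erfc(dR / d_max)    # 1 for zero displacement
    H = math.erfc(dH / dH_max)   # 1 for equal histogram sizes
    w_R, w_H, w_K = w
    return w_R * R + w_H * H + w_K * tau
```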

The actual matchmaking (from Webster's Revised Unabridged Dictionary (1913): Matchmaking, a. Busy in making or contriving marriages; I hear it is still particularly alive in some areas of Ireland, where it is considered an honest pastime for elderly folk) is made by traversing the Θ^i_j in descending order and pairing the Blobs b_j^new with b_i^prev accordingly. If b_i^prev has already been matched to a new Blob, the next lower Θ^i_j without a match is chosen. Pairing means to assign the ID of b_i^prev to b_j^new. Before a match is made official, a couple of constraints have to be obeyed:

maximum velocity v_abs^max: The value v_abs^max is one fixed parameter of the tracking process, and it is mandatory. It limits the displacement of the centroid in the time Δt between the two images. Since that time isn't always the same, a simple maximum-range constraint wouldn't work. If the velocity resulting from the displacement of the centroids of two Blobs which were matched by the correlation exceeds v_abs^max, then the match is rejected and the new Blob is given a fresh ID.



average velocity vav: When entering a new tracking sequence, vav is set to v_abs^max. Subsequently, vav is calculated as the mean value of the detected velocities greater than 0. The constraint resulting from this is determined by a factor cav such that v_av^max = cav · vav. This serves to leave room for variation of the velocity up to the factor cav away from the mean velocity of the previous snapshot. If v_av^max is exceeded, the match is rejected and the new Blob is given a fresh ID.
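Both constraints together amount to a simple check of the implied velocity (sketch):

```python
def accept_match(displacement_m, dt_s, v_abs_max, v_av, c_av):
    """Velocity constraints applied before a match becomes official.

    v_av is the mean of the non-zero velocities detected in the previous
    snapshot; c_av leaves room for variation around that mean.
    """
    v = displacement_m / dt_s
    return v <= v_abs_max and v <= c_av * v_av
```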