
Chapter 10

Displacement Measurements

10.1. Introduction

The Earth is an active planet: its surface is constantly being reshaped by mass transfers of internal or external, natural or man-made origin. The displacements of the Earth’s surface, whether of the ground, the glaciers or the water bodies, vary widely in spatial extent, amplitude and temporal evolution. Their study constitutes an essential part of geodesy, and their quantification is a major topic in geoscience. The objective is not only to understand the current shape of our planet and its evolution, but also to manage natural hazards efficiently. Until the 20th century, measurements of the Earth’s surface displacements were pointwise, in situ measurements, for example classic leveling, distance or tilt measurements. Their distribution over the Earth’s surface was not homogeneous, with a concentration in specific areas such as natural observatories (e.g. the Vesuvius volcanological observatory, created in 1845), and their temporal sampling was sparse. At the end of the 20th century, two major advances were achieved. On the one hand, the development of spatial geodetic techniques (GPS, optical imagery, InSAR, etc.) drastically improved the spatial coverage of the measurements, in particular through radar and optical imagery. On the other hand, the generalization of continuous in situ measurements brought a considerable improvement in temporal sampling.

Compared with in situ or field measurements, remote sensing measurements remain strongly limited by their temporal sampling, which, even though improved, is still sparse compared to the possibilities currently offered

Chapter written by Yajing YAN, Virginie PINEL, Flavien VERNIER and Emmanuel TROUVÉ.

Remote Sensing Imagery, Edited by Florence Tupin, Jordi Inglada and Jean-Marie Nicolas © ISTE Ltd 2014. Published by ISTE Ltd and John Wiley & Sons, Inc.


by ground instrumentation. However, remote sensing has its own advantages. It is unbeatable in terms of spatial coverage, because it provides continuous maps of surface displacement over large areas. This advantage has allowed the detection and quantification of deformation in non-instrumented areas, whether remote or lacking the financial means and human resources needed for ground instrumentation. It has also proven very useful for regional studies. Furthermore, thanks to data archiving, we can study, a posteriori, areas where a phenomenon of interest has been detected, and thus gain access to its initial phases. This kind of a posteriori study is never possible with ground instrumentation, where data are only acquired once a decision has been made, and therefore often after the outbreak of the phenomenon.

Because of these advantages, remote sensing displacement measurement has developed significantly in the past few years, for the detection and quantification of natural as well as man-made deformations. In tectonics, understanding the forces at work and the evolution of the Earth’s relief relies on quantifying the displacement of large units, blocks or tectonic plates, their degree of rigidity and the velocities at their margins. Precise measurements of the displacement field around faults allow us to understand their loading history, to determine locking areas and depths and thus to feed the models seeking to quantify seismic risk. In recent years, the understanding of the seismic cycle has come mainly from in situ measurements, owing to their good temporal resolution [LIN 96, DRA 01]. However, spatial imagery also brings significant constraints on the temporal evolution of displacement fields around faults [JOL 13]. Furthermore, since the first co-seismic study by Massonnet and Rabaute [MAS 93], radar imagery has brought unique information on the geometry of fault rupture areas. In volcanic areas, magma transport and storage in the upper layers of the crust often induce deformation. Consequently, in volcanology, the measurement of surface displacements allows us to infer the geometry and the behavior of the magma plumbing system, and also to detect the arrival of magma at shallow depth, which may precede an eruption. Since the first study by Massonnet et al. [MAS 95], remote sensing has become a major asset of modern volcanology [SPA 12, HOO 12b]. It has allowed the detection of magma storage in non-instrumented areas [PRI 02]. Its good spatial coverage has improved our knowledge of the geometry of magmatic intrusions [SIG 10], and it has enabled the detection of deep magma storage zones [DAL 04, OFE 11]. The vertical motions induced by surface load variations (e.g. variations of the water level in a lake, melting of a glacier, accumulation of eruptive deposits) are qualified as isostatic motions. The use of remote sensing to quantify vertical displacements induced by a known variation in the surface load allows us to obtain information on the rheology of the shallow layers of our planet [CAV 07, PAG 07, PIN 07]. This information is essential for modeling the ground displacement occurring in response to a given evolution of stress. In glaciology, since the study by Goldstein et al. [GOL 93], remote sensing has been used to quantify the


flow of glaciers [BER 06, FAL 11, ERT 13], which, because of its sensitivity to glacier thickness and to the hydrological conditions beneath the glacier, is a marker of climate change. In this field, the study of the displacement field is also a tool for preventing the associated glacial risks, for example those related to the accumulation of subglacial water or to serac falls. Remote sensing has also proved useful for characterizing landslides [FRU 96, COL 06]. A good knowledge of the displacement and its temporal evolution allows us to better understand the processes governing landslides and to better manage the associated risks. In the case of landslides, as for small glaciers, the application of remote sensing techniques is, however, more complicated because of the acquisition geometry (steep slopes) and the nature of the displacement, which can be very rapid and strongly localized in space and time. Not all displacements recorded at the Earth’s surface are of natural origin; some are induced by internal mass transfers resulting from human activities (mine excavation, pumping or fluid injection, etc.). The quantification of these man-made displacements is also important for preventing future material damage. In this field as well, remote sensing has proven its interest [CAR 96].

10.2. Extraction of displacement information

Displacement measurement by remote sensing is based on the comparison of images acquired at different dates so as to estimate the ground motion that potentially took place between these dates, over part or all of the imaged region. This subject differs from change detection (Chapter 8), for several reasons:

– The nature of the sought-after information: the displacement measurement is scalar or vector information (depending on the number of components of the displacement vector), whereas change detection yields symbolic binary information (or N-ary information if several change classes are sought). We therefore deal with an estimation problem in which the sources of uncertainty must be analyzed and propagated through the processing chain, whereas change detection is closer to a problem of automatic or supervised classification.

– The source of the sought-after information: when we are interested in temporal changes, the albedo or the radiometry of the pixels generally provides information that can be compared directly, whereas the displacement information cannot be obtained from a comparison between pixel values. It comes either from the difference between their positions in the image, considering the geometry of the sensor, or from the phase difference related to the difference in the path traveled by the electromagnetic wave in the case of radar imaging.

Displacement measurement therefore requires an information extraction step exploiting one or the other of these two sources:


– The measurement of the difference in position (offset tracking) by looking for a maximum of similarity can be applied both in optical and radar imagery. This approach, developed in section 10.2.1, is limited by the resolution of the sensor, and allows us to achieve a precision of the order of a tenth of a pixel. However, it uses the two axes of the image and allows us to obtain a 2D displacement measurement.

– The measurement of the phase difference using the technique called differential interferometry (D-InSAR) is mainly applied to SAR satellite images acquired on repeated orbits that are sufficiently close (i.e. repeat-pass interferometry) and when the temporal evolution of the surface is sufficiently limited to maintain coherence (see sections 9.3 and 9.3.3.3). This approach, developed in section 10.2.2, can reach a precision of the order of a fraction of the wavelength, that is of the order of centimeters or even millimeters when exploiting series of interferograms (see section 10.3.1). For a given acquisition geometry, it only measures the displacement in the direction of the radar line of sight (range).

The combination of these two sources of information, or of measurements issued from data acquired in different geometries, then allows us to look for higher-level information. In particular, this second step, illustrated in section 10.3.2, allows us to rebuild the three components (east, north, up) of the displacement vector (3D) or to invert the physical model describing the phenomenon that causes the displacement observed at the Earth’s surface (faults, volcanoes, etc.).

10.2.1. Maximum of similarity

The basic objective when searching for the maximum of similarity is to find the position of a point or a pattern in an image. We saw in the previous chapter that this is the basis of optical and radar stereo vision. The difference in the present context lies in the fact that the search area is less constrained. In stereo vision, working in epipolar geometry, we saw that the homologous points were situated on the same line, thus limiting the search to a single dimension (section 9.1.2). We also saw in Chapter 2 that searching for a maximum of similarity is the basis of registration (section 2.4). In displacement information extraction, the application differs slightly from registration, but the equations used remain the same. We do not seek to identify a global transformation of one image into another, but rather to measure the displacement of the points between the two images. For the sake of simplicity and in order to reduce the size of the search areas, we suppose that the two images used have already undergone a global registration (see section 2.4), carried out by relying, for example, on fixed areas or on the knowledge of the acquisition geometry and the topography.


10.2.1.1. General case

Once the images are registered, we need a local similarity search, denser and more complete than the one used for registration, in order to extract the displacement information. For this, let us consider a master image I1, which is generally the one with the earlier date, and a slave image I2. We thus calculate the displacement in the slave image with respect to the master image. The main principle is to define, in the master image, a local window, called the master window hereafter, that acts as a pattern, and then to look for this pattern in a search area defined in the slave image. Afterward, the master window and the search area are shifted by a given step, and the search begins again for the new pattern. This process is repeated for all the regions of the image where the displacement information needs to be extracted. Figure 10.1 illustrates this principle.

Figure 10.1. Illustration of the principle of the search for the maximum of similarity; on the first level, the master image and the master window; on the second level, the slave image containing the search area, which is larger than the master window, and the slave window of the same size as the master window; on the third level, the results of the similarity function for all displacements (p, q)


To determine the position of the master window in the search area, a similarity criterion must be chosen. There are many such criteria in the literature, as we have already seen in previous chapters. Some of them are dedicated to optical images, but they can easily be applied to SAR amplitude images. Others are dedicated to SAR images and their characteristics [DEL 10a].

First of all, let us consider the general case of a similarity function D̃(I1,I2)(k, l) calculated locally at the point (k, l) between the images I1 and I2, that is between the master window of I1 centered at (k, l) and the slave window of I2 also centered at (k, l). Because we search for a displacement between I1 and I2, we apply several displacements (p, q) to the slave image I2. Let T(p,q)(I2) denote the image I2 translated by (p, q). We then estimate the similarity D̃(I1,T(p,q)(I2))(k, l) and look for its maximum value in order to find the right displacement. Let Ω(k,l) denote the local neighborhood centered at (k, l) defining the master window:

\[
\Omega_{(k,l)} = \Omega_k \times \Omega_l = \left[\, k - \frac{M_l}{2}\,;\; k + \frac{M_l}{2} \,\right] \times \left[\, l - \frac{M_c}{2}\,;\; l + \frac{M_c}{2} \,\right].
\]

It is quite natural to choose Ml and Mc as odd in order to center the window on (k, l).

With the master window defined, the search area Wr is determined through the knowledge of the maximum displacement pmax in line and qmax in column at the point (k, l). Thus, the search window (Rl × Rc) has the size Rl = Ml + 2pmax and Rc = Mc + 2qmax. As for the master window, the search window is generally centered on (k, l) when a global pre-registration has been carried out (see Figure 10.1).

With the problem thus established, the objective is now to find the values p̂ and q̂ that maximize the similarity function D̃(I1,T(p,q)(I2))(k, l):

\[
(\hat{p}, \hat{q}) = \arg\max_{(p,q)} \tilde{D}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) \qquad [10.1]
\]

with the values p̂ and q̂ lying in the intervals [−pmax, pmax] and [−qmax, qmax], respectively. We thus obtain the best correspondence between the master window and the slave window in the search area. The choice of the similarity measurement is therefore crucial.
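As a concrete illustration of this search, the following minimal sketch (Python/NumPy; the function name and the generic `similarity` callable, standing in for any of the criteria described below, are ours) scans the search area and returns the integer offset maximizing equation [10.1], using the window sizes Ml, Mc and bounds pmax, qmax defined above.

```python
# Minimal offset-tracking sketch (assumptions: NumPy arrays; a generic
# `similarity(master, slave)` callable; odd window sizes Ml, Mc; the search
# stays inside the image bounds).
import numpy as np

def track_offset(I1, I2, k, l, Ml, Mc, pmax, qmax, similarity):
    """Return (p_hat, q_hat, best score) maximizing similarity at pixel (k, l)."""
    hm, hc = Ml // 2, Mc // 2
    master = I1[k - hm:k + hm + 1, l - hc:l + hc + 1]      # master window
    best, p_hat, q_hat = -np.inf, 0, 0
    for p in range(-pmax, pmax + 1):                        # scan the search area
        for q in range(-qmax, qmax + 1):
            slave = I2[k - hm + p:k + hm + p + 1, l - hc + q:l + hc + q + 1]
            s = similarity(master, slave)
            if s > best:
                best, p_hat, q_hat = s, p, q
    return p_hat, q_hat, best
```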

10.2.1.2. Similarity functions in optical images

One of the most frequently used similarity functions is the cross-correlation, hereafter noted CC (see section 2.4.2.1). It applies equally well to optical images and to SAR amplitude images. The CC is given by:

\[
\tilde{D}_{CC}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) = \frac{1}{M_l M_c} \sum_{i \in \Omega_k} \sum_{j \in \Omega_l} I_1(i,j)\, I_2(i+p,\, j+q). \qquad [10.2]
\]


This similarity function1 has certain advantages: it is directly related to the standard L2 norm of the difference between the master and slave images. It is maximum in case of equality, where it equals the energy of these images. It can be calculated in the frequency domain using the fast Fourier transform (FFT), which speeds up the calculation when the window size becomes significant. It is also possible to measure the offset between the two images directly in the frequency domain, from the slope of the phase of the FFT of the CC.
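As an illustration of this frequency-domain computation, here is a minimal sketch (assuming NumPy; the function name `cc_fft` and the zero-padding choice are ours) that produces the full cross-correlation surface of a master window against a search area via the FFT; the location of its peak gives the integer offset.

```python
# Cross-correlation via FFT (sketch): zero-pad both windows to a common size
# so that the product of spectra corresponds to a linear (non-circular)
# correlation, then invert the FFT to obtain the correlation surface.
import numpy as np

def cc_fft(master, search):
    """Cross-correlation surface of `master` against `search`."""
    shape = tuple(m + s for m, s in zip(master.shape, search.shape))
    F1 = np.fft.rfft2(master, shape)
    F2 = np.fft.rfft2(search, shape)
    return np.fft.irfft2(np.conj(F1) * F2, shape)   # argmax -> integer offset
```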

The major disadvantage of this similarity function is that its value depends on the intensity of the images. It favors high-intensity pixels and does not allow the comparison between the different maxima found. For example, two low-intensity patterns that are perfectly correlated can have a correlation level lower than two high-intensity patterns that are not as well correlated. Therefore, we generally prefer to use the normalized cross-correlation (NCC):

\[
\tilde{D}_{NCC}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) = \frac{\displaystyle\sum_{(i,j) \in \Omega_{(k,l)}} I_1(i,j)\, I_2(i+p,\, j+q)}{\sqrt{\displaystyle\sum_{(i,j) \in \Omega_{(k,l)}} |I_1(i,j)|^2 \sum_{(i,j) \in \Omega_{(k,l)}} |I_2(i+p,\, j+q)|^2}}. \qquad [10.3]
\]

This function solves the CC issue because, through normalization, the similarity value lies in the interval [0, 1].

Finally, it can be interesting, under certain conditions, to center the values and thus to use a zero-mean normalized CC, usually noted ZNCC (Zero-mean NCC). The authors of [FAU 93] show, for example, that the ZNCC function is more efficient when the gray-level distributions (histograms) of the images are visibly different:

\[
\tilde{D}_{ZNCC}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) = \frac{\displaystyle\sum_{(i,j) \in \Omega_{(k,l)}} \big(I_1(i,j) - \bar{I}_1\big)\,\big(I_2(i+p,\, j+q) - \bar{I}_2\big)}{\sqrt{\displaystyle\sum_{(i,j) \in \Omega_{(k,l)}} \big|I_1(i,j) - \bar{I}_1\big|^2 \sum_{(i,j) \in \Omega_{(k,l)}} \big|I_2(i+p,\, j+q) - \bar{I}_2\big|^2}}, \qquad [10.4]
\]

where Ī1 and Ī2 are, respectively, the means of the master and slave windows:

\[
\bar{I}_1 = \frac{1}{M_l M_c} \sum_{(i,j) \in \Omega_{(k,l)}} I_1(i,j), \qquad \bar{I}_2 = \frac{1}{M_l M_c} \sum_{(i,j) \in \Omega_{(k,l)}} I_2(i+p,\, j+q).
\]

1. This function corresponds to the similarity D̃prod(I1,I2) of section 2.4.2.1, equation [2.6].


Note that the denominator corresponds to the product of the empirical local standard deviations. The centering and the normalization thus give a similarity measurement2 lying in the interval [−1, 1].
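A minimal sketch of the ZNCC of equation [10.4] follows (assuming NumPy arrays of identical size for the master window and the translated slave window; the function name is ours).

```python
# ZNCC sketch: subtract the window means, then normalize by the product of
# the local standard deviations so that the result lies in [-1, 1].
import numpy as np

def zncc(master, slave):
    m = master - master.mean()
    s = slave - slave.mean()
    denom = np.sqrt((m ** 2).sum() * (s ** 2).sum())
    return (m * s).sum() / denom if denom > 0 else 0.0
```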

There is a large number of similarity functions. Several of them, used in the context of computer vision, are presented in [CHA 03]. We can equally refer to Chapter 8.

10.2.1.3. Similarity functions in SAR images

The previous functions can also be applied to SAR amplitude images, but they do not take into account certain specificities of these data, particularly the speckle effect, the shape of the impulse response or the possibility of using the phase information.

When only detected images (amplitude or intensity) are used, we can try to introduce the characteristics of the speckle effect, as seen in section 4.3, and use a maximum likelihood approach. Depending on whether the speckle is correlated between the two images or not (which corresponds to the coherence loss in interferometry), this approach leads to two different similarity measurements. In the case where the speckle is not correlated, by considering the logarithm of the images in order to reinforce the contribution of the low-intensity pixels, we obtain the function noted uncorrelated maximum likelihood (UML) [ERT 09]:

\[
\tilde{D}_{UML}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) = \sum_{(i,j) \in \Omega_{(k,l)}} \left[ \log \frac{I_1(i,j)}{I_2(i+p,\, j+q)} - 2 \log\!\left(1 + \frac{I_1(i,j)}{I_2(i+p,\, j+q)}\right) \right].
\]

In the case where the speckle is correlated with a correlation level ρk,l ∈ [0, 1], we obtain the function noted correlated maximum likelihood (CML) [ERT 09]:

\[
\begin{aligned}
\tilde{D}_{CML}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) = \sum_{(i,j) \in \Omega_{(k,l)}} \Bigg[ & \log \frac{I_1(i,j)}{I_2(i+p,\, j+q)} - 2 \log\!\left(1 + \frac{I_1(i,j)}{I_2(i+p,\, j+q)}\right) \\
& - \left(1 + \frac{1}{2N}\right) \log\!\left(1 - \frac{4\,\rho_{k,l}\, \dfrac{I_1(i,j)}{I_2(i+p,\, j+q)}}{\left(1 + \dfrac{I_1(i,j)}{I_2(i+p,\, j+q)}\right)^{2}}\right) \Bigg] \qquad [10.5]
\end{aligned}
\]

We can also refer to the measurements presented in Chapter 8.
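As an illustration, here is a minimal sketch of the UML criterion as written above (assuming NumPy, strictly positive intensity windows of identical size; the function name is ours).

```python
# UML similarity sketch (uncorrelated speckle, after [ERT 09]): log-likelihood
# of the per-pixel intensity ratio, summed over the estimation window.
import numpy as np

def uml(master, slave):
    r = master / slave                       # per-pixel intensity ratio
    return np.sum(np.log(r) - 2.0 * np.log1p(r))
```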

2. This function corresponds to the similarity D̃corr(I1,I2) of section 2.4.2.1, equation [2.7].


The use of one or the other of these functions can be guided by a priori knowledge of the acquisition conditions and of the imaged surfaces (temporal decorrelation, significant baseline, forest areas, etc.) or by an estimation of the correlation level on previously registered areas. In both cases, these functions are not normalized. A criterion such as the relative height of the similarity peak must therefore be chosen in order to have a confidence indicator associated with the measured displacement and to be able to remove aberrant results.

The previous criteria correspond to the displacement measured on distributed targets, whose response is affected by the speckle effect. However, if we are interested in the displacement of point targets, whose response is dominated by a single scatterer, we can seek to correlate these points with the impulse response that results from SAR image formation. Once this 2D cardinal sine response is known, we can finely localize the response of these targets in each image, and directly deduce their subpixel displacement [SER 06]. This approach, noted “SINC”, has notably been tested for measuring the displacement of corner reflectors that follow the motion of a glacier [FAL 11]. TerraSAR-X stripmap images (2 m resolution) have been used to assess the three approaches ZNCC, UML and SINC. The conclusions tend to show that the gain in performance of the specific methods, that is UML on distributed targets and SINC on point targets, is not decisive compared with the correlation method, which has the advantage of being applicable to different types of targets and of providing a well-calibrated confidence measurement.

When complex data are available, that is Single Look Complex (SLC) images, we can benefit from all the information by searching for the maximum of similarity using the modulus of the complex normalized correlation function:

\[
\tilde{D}_{coh}\big(I_1, T_{(p,q)}(I_2)\big)(k,l) = \frac{\left|\displaystyle\sum_{(i,j) \in \Omega_{(k,l)}} I_1(i,j)\, I_2^*(i+p,\, j+q)\right|}{\sqrt{\displaystyle\sum_{(i,j) \in \Omega_{(k,l)}} |I_1(i,j)|^2 \sum_{(i,j) \in \Omega_{(k,l)}} |I_2(i+p,\, j+q)|^2}}
\]

where I∗2 refers to the conjugate of I2. We recognize the coherence used in radar interferometry (formula [7.25]) for the estimation of the phase difference, with a spatial averaging that reduces the noise and provides an indicator of the phase stability (see section 9.3). This approach, noted coherence optimization procedure (COP) [GRA 01, STR 02], is much more precise because the coherence drops as soon as the pixels I1(i, j) and I2(i+p, j+q) do not cover the same resolution cell on the ground. However, the use of this similarity function requires several precautions:

– The interferometric conditions must be fulfilled (small baseline and absence of temporal evolution) so that the “proper phases” related to the speckle effect are the same in the two acquisitions and cancel out in the product I1.I∗2 (see section 9.3.3.3).


– We must oversample the complex data (for example using zero-padding in the Fourier domain) so as to search for a subpixel displacement, which reduces the decorrelation due to non-overlapping resolution cells.

– The choice of the size of the window Ω results from a compromise between the need to average a sufficient number of samples in order to reduce the variance of the coherence estimation, and the phase rotation within the estimation window, which depends on the size of the fringes.

Finally, when polarimetric SAR data are available, several strategies are possible to benefit from the richness of the polarimetric information in the displacement measurement. The first consists of using a similarity function that considers the different polarimetric channels (HH, HV, VV, or after transformation into the Pauli basis, see Chapter 4) or the coherency matrix. Several similarity measurements have been proposed, in the context of change detection [DIE 02], filtering [DEL 10b] or displacement measurement [ERT 13]. Another strategy consists of decomposing the polarimetric coherency matrix using a spherically invariant random vector model in order to obtain a scalar texture variable that is then tracked between the images. Supposing that this variable follows a Fisher distribution, a similarity function is then deduced from the maximum likelihood [HAR 11].

10.2.1.4. Sub pixel displacement

Searching for the similarity using functions such as the correlation, or one of its variants, gives a displacement (p̂, q̂) in integer pixel values. We can, however, refine this measurement in order to extract subpixel displacement information. Traditionally, an interpolation of the similarity function in the neighborhood of the maximum allows us to obtain this information. The simplest methods are the interpolation by two parabolas, one in x and the other in y:

\[
D(x) = a x^2 + b x + c, \qquad D(y) = d y^2 + e y + f,
\]

or the interpolation via a paraboloid:

\[
D(x, y) = a x^2 + b y^2 + c x y + d x + e y + f.
\]


Thus, the subpixel displacement corresponds to the coordinates of the maximum of the interpolated function in the studied neighborhood. In the case of a paraboloid, the function obtained by interpolation may turn out to be of hyperbolic type and therefore provide an aberrant result. A conformity test of the solution (the maximum must belong to the interpolation area) allows us to rule out these values.
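As an illustration of the paraboloid variant, here is a minimal sketch (assuming NumPy; it fits the six coefficients on the 3 × 3 neighborhood of the integer peak by least squares and rejects hyperbolic fits, as discussed above; the function name is ours).

```python
# Subpixel refinement sketch: fit D(x, y) = ax^2 + by^2 + cxy + dx + ey + f
# on the 3x3 neighborhood of the integer similarity peak, then locate its
# extremum analytically. The caller should also check that the result stays
# inside the neighborhood (conformity test).
import numpy as np

def subpixel_peak(D):
    """D: 3x3 array of similarity values centred on the integer maximum."""
    x, y = np.meshgrid([-1, 0, 1], [-1, 0, 1], indexing="ij")
    A = np.column_stack([x.ravel()**2, y.ravel()**2, (x * y).ravel(),
                         x.ravel(), y.ravel(), np.ones(9)])
    a, b, c, d, e, f = np.linalg.lstsq(A, D.ravel(), rcond=None)[0]
    det = 4 * a * b - c**2                  # non-positive -> hyperbolic fit
    if det <= 0:
        return 0.0, 0.0                     # aberrant result rejected
    dx = (c * e - 2 * b * d) / det          # offsets to add to (p_hat, q_hat)
    dy = (c * d - 2 * a * e) / det
    return dx, dy
```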

The implementation of the displacement calculation via a maximum of similarity is illustrated in Figure 10.2. The modulus and the orientation of the measured displacement correspond to the motion of the Argentière glacier (Mont-Blanc massif) at the level of the Lognan serac fall. The height of the similarity peak allows us to detect the areas where the displacement cannot be measured because of the fall of ice blocks between the two dates. The points where the subpixel interpolation did not work are mainly situated in these areas of poor correlation.

10.2.2. Differential interferometry

Differential radar interferometry (D-InSAR) is today widely used to measure the displacements of the Earth’s surface, whether of seismic, volcanic or gravitational origin [MAS 98]. Commercial software such as DIAPASON [MAS 97] and Gamma [WEG 05], or free software such as ROI-PAC [ROS 04], NEST from the European Space Agency (ESA) or the EFIDIR_Tools developed by research laboratories, allows us to obtain displacement measurements between two dates or to build series of interferograms in order to monitor smaller and smaller deformations.

The first steps of the D-InSAR processing chain can be grouped in two parts: on the one hand, the SAR synthesis, the registration of the slave image on the master image and the generation of the differential interferogram; on the other hand, the phase filtering and the phase unwrapping. These steps are generally followed by corrections of geometric or atmospheric artifacts, presented in section 10.2.3. These corrections may also require looping back to the previous steps.

10.2.2.1. Generation of differential interferograms

The SAR synthesis, the registration of the slave image on the master image and the generation of the interferogram, introduced in section 9.3, are related: they use the same “Doppler centroid” or, on the contrary, delete the disjoint parts of the spectrum [GAT 94]. Once the SLC master and slave images have the same geometry, we may obtain a first estimation of the coherence using spatial averaging (complex multilooking) in order to reduce the noise [GOL 88]. The number of lines and columns of the averaging window can be chosen so as to obtain approximately square pixels on the ground, for example 5 × 1 or 10 × 2 for ERS satellite images, whose SLC data have a resolution of approximately 4 m in azimuth and 20 m in ground range.


Figure 10.2. Displacement calculation by maximum of similarity. a) Lognan serac fall (Argentière glacier) on May 29 and 30, 2009 (photograph by Luc Moreau); b) 2D displacement vector transformed into modulus and orientation; c) height of the cross-correlation peak (NCC) and subpixel interpolation. For a color version of this figure, see www.iste.co.uk/tupin/RSImagery.zip


The phase difference φorb/topo due to the increase in distance in range (so-called “orbital” fringes) and to the relief (so-called “topographical” fringes) can be calculated from orbital data and a digital elevation model (DEM). Unlike in the previous chapter, we do not seek to determine the elevation of the imaged points; instead, we use this auxiliary information to calculate the path difference ΔR between the two orbits and the positions of points assumed to be fixed on the ground. This difference is then transformed into a phase via the wavelength λ: φorb/topo = 4πΔR/λ, and it can be subtracted during the averaging step, in order to reduce the number of fringes and to avoid aliasing. For a pixel P associated with a window Ω at the initial resolution, the phase φ(P) and the coherence Coh(P) are then estimated by (see equation [7.25]):

\[
\mathrm{Coh}(P)\, e^{i\phi(P)} = \frac{\displaystyle\sum_{(k,l) \in \Omega} z_1(k,l)\, z_2^*(k,l)\, e^{-i\phi_{orb/topo}(k,l)}}{\sqrt{\displaystyle\sum_{(k,l) \in \Omega} |z_1(k,l)|^2 \sum_{(k,l) \in \Omega} |z_2(k,l)|^2}} \qquad [10.6]
\]

where z1 and z2 designate, respectively, the master image and the previously registered slave image. After this step, we have an interferogram that should only measure the displacement between the two dates. The result of this initial averaging and of the subtraction of the orbital and topographic fringes is illustrated in Figure 10.3.
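As an illustration of equation [10.6], here is a minimal complex multilooking sketch (assuming NumPy; z1, z2 are the co-registered SLC images, phi_orb_topo the simulated orbital/topographic phase, and the look factors, e.g. 5 × 1, are chosen by the user; variable names are ours).

```python
# Complex multilooking sketch: average the compensated Hermitian product over
# boxes of `looks` pixels, then normalize to obtain phase and coherence.
import numpy as np

def differential_interferogram(z1, z2, phi_orb_topo, looks=(5, 1)):
    prod = z1 * np.conj(z2) * np.exp(-1j * phi_orb_topo)
    La, Lr = looks
    na = (prod.shape[0] // La) * La
    nr = (prod.shape[1] // Lr) * Lr
    def avg(a):                              # box average over La x Lr cells
        return a[:na, :nr].reshape(na // La, La, nr // Lr, Lr).mean(axis=(1, 3))
    num = avg(prod)
    den = np.sqrt(avg(np.abs(z1) ** 2) * avg(np.abs(z2) ** 2))
    return np.angle(num), np.abs(num) / den   # wrapped phase, coherence
```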

In practice, because of the imprecision of the auxiliary orbital information, a residual orbital contribution can remain in the interferometric phase. This residual contribution is usually corrected by adjusting a parametric model, such as a plane, over the whole interferogram [HAN 01, LOP 09]. Similarly, in certain cases, DEM errors also cause a residual phase term that is proportional to the perpendicular baseline. Approaches that use a series of interferograms, such as the small baseline subset (SBAS) approach and the permanent scatterers (PS) approach, allow us to correct this residual term (see section 10.3.1).

10.2.2.2. Phase filtering and phase unwrapping

The noise that perturbs the interferometric phase and hinders phase unwrapping is essentially induced by decorrelation (temporal evolution, distributed targets seen from slightly different angles, etc.). To reduce this noise, three categories of filters have been introduced in the literature (before, during and after the construction of the interferogram). The filters applied before the construction of the interferogram consist of separating the signal from the noise in the spectral domain [HAN 01]. The filtering during the construction of the interferogram corresponds to the complex multilooking technique given by equation [10.6]. Increasing the number of looks strongly reduces the variance of the phase and coherence estimates (see section 7.3, Figure 7.8), but at the cost of a decrease in the sampling frequency, which can turn out to be problematic in areas with a strong


deformation gradient. To further increase the number of averaged samples without reducing the scale, it is necessary to apply, after the construction of the interferogram, an “averaging”-type filter [ROD 92, MAS 93]. For this, it is necessary to respect as much as possible the hypotheses of stationarity and ergodicity that allow the Hermitian product z1(P)z∗2(P) to be estimated by relying on samples from the neighborhood of the point P. To ensure these hypotheses, two approaches are preferred:

– using adaptive neighborhoods that search for a subwindow or a set of pixels belonging to the same statistical population as the filtered pixel [LEE 94, GOL 98, VAS 04, DEL 11b];

– compensating the local fringe pattern in order to “flatten” the phase in the filtering window. This compensation can be done by estimating a first-order model of the fringes given by 2D local frequencies on a rectangular neighborhood [TRO 96] or an adaptive neighborhood [VAS 08].

Figure 10.3. ERS tandem interferometric data, 1995/12/31 – 1996/01/01, Mer-de-Glace glacier, initial averaging 5 × 1; a)-b) phase before and after subtraction of the topographic fringes; c) initial coherence; d) amplitude of one of the two images


The filtering then consists of re-estimating the wrapped phase as well as the coherence, by applying the averaging of equation [10.6] on the selected window and compensating the rotation of the phase using the estimated local frequencies. This step allows us to obtain a “de-noised” interferogram in order to address the phase unwrapping by propagation, and a coherence image that is much more discriminating for identifying the areas that cannot be unwrapped correctly. The local frequencies can also be used directly for phase unwrapping using the least squares method [TRO 98]. Figure 10.4 illustrates the filtering and unwrapping steps on a co-seismic interferogram that measures the deformation across the fault during the Kashmir earthquake in 2005.

Figure 10.4. ENVISAT interferogram (2004/11/06 – 2005/11/26) of the Kashmir earthquake (2005). a) Original phase estimated with an averaging of 10 × 2; b) phase filtered through a multiscale approach with an estimation of local frequencies [YAN 13]; c) phase unwrapped using least squares

The interferometric phase, which measures the difference in the round-trip path of the wave sent by the radar, is only known modulo 2π, so the interferogram only measures the corresponding displacement modulo λ/2. To remove this ambiguity, phase unwrapping is necessary. In each pixel, we seek the right multiple of 2π to be added to the value of the main phase φ(P) given by the


interferogram in order to obtain the exact value of the phase ϕ(P) = φ(P) + 2πk(P), with k(P) a relative integer [CHE 00, CHA 04a]. The fundamental hypothesis for phase unwrapping is to consider that the surface to rebuild is relatively regular and the unwrapped phase is continuous; in other words, that no noise is present and the Nyquist criterion is respected during the sampling. These conditions imply that the phase varies by less than π between two adjacent pixels. In the literature, there are two main families of methods for phase unwrapping: local methods and global methods. The local methods are based on a propagation of the phase value, pixel by pixel. In these methods, each pixel is processed individually along paths, which requires the continuity of the coherent area. The branch-cut method [GOL 88] and the minimum cost flow (MCF) method [CHE 00] belong to this category. Unlike the local methods, global methods seek a solution over the whole image that minimizes the deviation between the phase gradients measured in the interferogram and those of the result. Some methods are based on image processing techniques such as segmentation, cellular automata and Markovian models. A comprehensive treatment of this specific problem can be found in [GHI 98]. In the geophysical community, the tool most extensively used for phase unwrapping is SNAPHU3 [CHE 02], which is based on the MCF algorithm.
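As a one-dimensional illustration of the propagation principle (assuming NumPy and a noise-free, Nyquist-sampled phase; for real 2D interferograms the dedicated tools cited above should be used), the following sketch compensates every wrapped jump larger than π by the appropriate multiple of 2π, similar in spirit to numpy.unwrap.

```python
# 1D phase unwrapping by propagation (sketch): successive differences are
# assumed to lie within ]-pi, pi]; each apparent jump of about 2*pi*k is
# removed from the remainder of the profile.
import numpy as np

def unwrap_1d(phi):
    out = np.array(phi, dtype=float)
    for i in range(1, out.size):
        d = out[i] - out[i - 1]
        k = np.round(d / (2 * np.pi))        # integer number of cycles to remove
        out[i:] -= 2 * np.pi * k
    return out
```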

Phase unwrapping remains a crucial step in differential interferometry that conditions the success of its application. The choice of the method depends on the nature of the interferograms to be processed. There have been several attempts to develop methods that avoid the problem of phase unwrapping [FEI 09], but to date this problem remains a delicate subject, as no method seems fully operational. The problems currently encountered in phase unwrapping are the discontinuity of the coherent areas and the strong displacement gradients that cause aliasing, which corresponds to the appearance of false fringes in insufficiently sampled areas. For the latter, a priori models of the displacement, or measurements from other sources of information, are sometimes used in order to reduce the number of fringes and make the phase unwrapping easier [SCH 05]. However, this solution cannot always be applied, because of errors in the model, the imprecision of the measurements issued from other sources, or simply a lack of information, especially for an event that has just taken place.

10.2.3. Corrections

The measurements from the amplitude correlation provide a “distance” or a “difference in position” between two pixels corresponding to the same area on the ground. As the acquisition conditions of the two images are slightly different, this “distance” is the sum of several contributions:

3 Statistical-Cost, Network Flow Algorithm for Phase Unwrapping.


– a topographically induced stereoscopic effect;

– a distortion induced by the lack of parallelism of the orbits and a potential difference of the “Doppler centroid”;

– a displacement of the pixel on the ground between two successive acquisitions.

To deduce the surface displacement, we must carry out corrections in order to eliminate the first two contributions [MIC 99]. In practice, the measurement in range is corrected using a DEM whose resolution is equivalent to that of the radar image, and the measurement in azimuth is corrected by removing a ramp estimated outside of the deformation area.

Regarding the influence of the topography, this is due to the stereoscopic effect (see Chapter 9) and, depending on whether we correct an offset or a phase difference, it can be expressed in the following form (equation [9.7]):

\[
\delta R_{topo} = \frac{B_{orth}\, H}{R \sin\theta}, \qquad \phi_{topo} = \frac{4\pi\, B_{orth}\, H}{\lambda\, R \sin\theta}, \qquad [10.7]
\]

with Borth the perpendicular baseline characterizing the distance between the two trajectories (defined by equation [9.6]), H the elevation of the given point, R the target–satellite distance and θ the incidence angle. Potential DEM errors have less impact in the case of smaller perpendicular baselines.
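As an illustration of equation [10.7], here is a minimal sketch (assuming NumPy; variable names are ours, with angles in radians and distances in meters).

```python
# Topographic contribution sketch (eq. 10.7): stereoscopic range offset and
# the corresponding interferometric phase term.
import numpy as np

def topo_terms(B_orth, H, R, theta, lam):
    dR_topo = B_orth * H / (R * np.sin(theta))                       # meters
    phi_topo = 4 * np.pi * B_orth * H / (lam * R * np.sin(theta))    # radians
    return dR_topo, phi_topo
```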

Regarding the measurement from differential interferometry, besides the topographic and orbital corrections carried out using the DEM and the auxiliary orbital information (see section 10.2.2), an atmospheric correction usually turns out to be necessary in order to reduce this source of uncertainty and to be able to quantify displacements of the order of centimeters, or even millimeters. The variations of the physical properties of the atmosphere between two radar acquisitions cause a variation of the propagation speed of the electromagnetic waves. This induces a phase difference of atmospheric origin that can be wrongly interpreted as ground displacement. This effect can induce several fringes in an interferogram, equivalent to a ground displacement of several tens of centimeters. It is therefore the main limitation to the use of InSAR for ground displacement measurement [ZEB 97, HAN 01].

The air refractivity is sensitive to the pressure of dry air, the partial pressure of water vapor, the temperature and the quantity of water in the form of clouds. It can be expressed as follows:

\[
N = k_1 \frac{P_d}{T} + k_2 \frac{e}{T} + k_3 \frac{e}{T^2} + k_4 W_c + k_5 \frac{n_e}{f^2} \qquad [10.8]
\]


where Pd is the dry air partial pressure (Pa), e is the water vapor pressure (Pa), T is the temperature (K), Wc is the cloud water content (kg.m−3), ne is the electron density in the ionosphere, f is the electromagnetic wave frequency, k1 = 0.776 K.Pa−1, k2 = 0.716 K.Pa−1, k3 = 3.75 × 10³ K².Pa−1, k4 = 1.45 × 10³ m³.kg−1 and k5 = −4.03 × 10⁷ m³.s−2 [SMI 53, PUY 07, DOI 09].
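As an illustration of equation [10.8] with the constants above (plain Python; the function name is ours):

```python
# Refractivity sketch (eq. 10.8), SI units as given in the text.
def refractivity(Pd, e, T, Wc, ne, f):
    k1, k2, k3 = 0.776, 0.716, 3.75e3        # K/Pa, K/Pa, K^2/Pa
    k4, k5 = 1.45e3, -4.03e7                  # m^3/kg, m^3 s^-2
    return k1 * Pd / T + k2 * e / T + k3 * e / T**2 + k4 * Wc + k5 * ne / f**2
```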

Any change in the temperature, humidity or pressure along the path of the radar wave can therefore cause a phase variation. The so-called “atmospheric” phase that results from this can be decomposed into a turbulent component and a stratified component. The turbulent component comes from the dynamics of the atmosphere, which are too difficult to model to allow accurate corrections. In most cases, it can be considered as random for each acquisition date at the scale of the SAR scene. It can, however, be strongly reduced, or even eliminated, by stacking interferograms or by filtering [SCH 03, HOO 07].

Figure 10.5. Stratified tropospheric artifact in the volcanic region of Colima, Mexico. a) Interferogram corrected for topographic effects, in ground geometry, from ascending ENVISAT data with a perpendicular baseline of 5 m and a temporal baseline of 385 days; b) DEM (SRTM) for the same geographic area (elevation scale from 100 to 4,500 m). The interferometric fringes are strongly correlated with the topography. For a color version of this figure, see www.iste.co.uk/tupin/RSImagery.zip

The stratified part is induced by the variation of the pressure, temperature and water vapor content profiles in the troposphere between the different radar acquisitions. It corresponds to atmospheric fringes that are, to first order, correlated with the topography (see Figure 10.5). In the absence of complementary information, this effect can be empirically reduced by using the correlation between the phase and the elevation observed on the images in non-deforming areas [REM 03, CAV 07]. This method is based on the assumption of an absence of lateral variations of the stratification at the scale of the image. It presents a major drawback: a potential partial elimination of the displacement signal, particularly on volcanoes, where the ground displacement induced by a magma reservoir is usually strongly


correlated with the topography. Alternatively, when a sufficiently dense GPS network exists over the studied area, it can be used successfully to estimate the stratified atmospheric delay and the turbulent delay [WEB 02, JAN 04, LI 06b]. Locally acquired meteorological data can also be taken into account in order to estimate the tropospheric delay [DEL 98]. Some authors also propose to use multispectral data acquired at the same time as the SAR data in order to estimate the water vapor content and to correct the tropospheric effects [LI 06a]. Finally, another interesting approach consists of using the temperature, pressure and water vapor content profiles provided by global meteorological models in order to calculate the tropospheric delay [DOI 09, JOL 11]. These models have a temporal resolution of several hours and a spatial resolution of several tens of kilometers. Figure 10.6 shows the variation of the delay versus elevation ratio, calculated on the basis of global meteorological models in the area of Colima volcano in Mexico. There is a seasonal variation with an amplitude of 8.8 rad/km. Concretely, this means that between an ENVISAT image acquired during the dry season and another one acquired during the wet season, over the area of the volcano that extends from 100 to 4,460 m altitude, we can observe 5 fringes that could be misinterpreted as a 15-cm displacement; whereas, on this area, once this artifact has been corrected, a much smaller displacement of approximately 1.5 cm per year has been identified [PIN 11].
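As an illustration of the empirical phase/elevation correction mentioned above, here is a minimal sketch (assuming NumPy; `phase` and `dem` are co-registered unwrapped-phase and elevation arrays, and `stable` is a user-supplied mask of assumed non-deforming pixels; names are ours).

```python
# Empirical stratified-delay correction sketch: fit a linear phase/elevation
# relation on pixels assumed free of deformation, then remove it everywhere.
import numpy as np

def remove_stratified(phase, dem, stable):
    a, b = np.polyfit(dem[stable], phase[stable], 1)   # rad per m, offset
    return phase - (a * dem + b)
```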

10.3. Combination of displacement measurements

10.3.1. Analysis of time series

SAR data are now available over a period of more than 20 years. In particular, for the C band, the most used band, the two ERS satellites are compatible for SAR interferometry, and under extreme configurations we can even obtain an interferogram between an ERS and an ENVISAT image [PEP 05, PER 06]4. Furthermore, the two satellites ERS-2 and ENVISAT having been operational at the same time, it is possible to put together displacement time series independently acquired with ERS and ENVISAT when the common time span is sufficient. For some sites, several hundred images are available, so that, in the past few years, methods seeking to exploit the time series in order to reduce the errors and increase the accuracy of interferometry have been developed, thus favoring its ability to detect and quantify displacements of small amplitude (of the order of mm/year). Such an accuracy is required for the study of phenomena characterized by a small deformation rate, such as the inter-seismic displacement along an active fault or the current isostatic readjustment. The first processing of series of images consisted of averaging different interferograms covering the same event [SIM 07]. Later, methodological

4. ERS and ENVISAT not having exactly the same wavelength, it is not possible to build a standard interferogram between an ERS image and an ENVISAT image.


developments have allowed us to go beyond this elementary stage by developing, in parallel, two different approaches: the PS approach and the SBAS approach. Hooper et al. [HOO 12a] propose a review of these techniques.

Figure 10.6. Temporal evolution of the delay/elevation ratio (in rad/km) induced by a stratified troposphere in the vicinity of Colima volcano, Mexico. Ratios represent an average over the 100–4,460 m elevation range corresponding to the topography of the volcano. The squares are delay values calculated from the ERA-Interim global meteorological model at the time of the ALOS acquisitions. The triangles are the values estimated using the NARR meteorological model (according to [PIN 11]) at the time of the ENVISAT acquisitions. The values are calculated at the time of the image acquisition during the descending pass. The sinusoidal curve corresponds to the best adjustment of the seasonal variations obtained by considering all the daily meteorological data (amplitude of 8.8 rad/km)

10.3.1.1. Permanent scatterers (PS) approach

The first method is based on the detection and monitoring of point targets: the response of the pixel is dominated by a particular scatterer whose phase presents great temporal stability and remains only slightly sensitive to variations in the acquisition geometry, thus allowing the use of large baselines that cannot usually be exploited on distributed targets. This is the permanent scatterers, or persistent scatterers (PS), method [FER 01]. In urban environments, these scatterers are generally the roofs of buildings oriented so that they backscatter a maximum of energy in the direction of the satellite, like a mirror, or the result of a reflection on the ground and then on a perpendicular structure. In rural areas, PS are generally less common; they are, for example, rocks with a specific orientation. To make use of these PS, the processing is done at the highest possible resolution, so that neither filtering nor spatial averaging is applied. A common master image is chosen from the available images, and it is combined with each of the other images in the series to produce interferograms. The chosen master image usually corresponds to the one that minimizes the spatial


baseline and the temporal baseline of the interferograms produced (see Figure 10.7(a)). However, the baselines can be very large for certain interferograms, which limits their coherence. The key is then to select and focus on the pixels whose phase remains stable.

Figure 10.7. Spatio-temporal distribution (perpendicular baseline in meters versus time in years) of interferograms calculated by two different methods of time series processing. The nodes and the arcs represent, respectively, the 38 images used and the interferograms calculated. a) The case of the PS approach, in which a master image is chosen and only 37 interferograms are produced. b) The case of the small baseline approach (SBAS), for which 71 interferograms are produced, all characterized by small spatial and temporal baselines. The images in this example correspond to the 38 ENVISAT images acquired from November 2002 to March 2007 over Mexico City (Mexico) and used for quantifying the urban subsidence (according to [YAN 12a])

In the classical approach [FER 01], the phase is modeled as a function of the spatial baseline and the temporal baseline, and the displacement is considered to be linear in time. This phase is integrated temporally and then spatially. Several variants [WEG 05, ADA 09, ZHA 11] of the PS technique have been implemented, modifying or combining one or several of the previous steps. A priori information on the displacement is often necessary, from which a deformation model is established. In this model, the average displacement rate and the DEM error constitute the two main parameters. The estimation of these two parameters is carried out between neighboring pixels. This approach works in areas where the density of the selected pixels is significant and the displacement is effectively linear in time, provided a sufficient number of images is used (around 30 images). The PS approach has been successfully applied in urban areas, where a strong density of stable and bright targets exists [LIU 09, PAR 09, OSM 11, YAN 12a]. Figure 10.8 shows an example of displacement rate obtained using the PS technique, namely the subsidence rate over Mexico City. In natural areas, the application of this technique still remains a great challenge because of the limited PS coverage. However, Hooper et al. [HOO 04] have succeeded in quantifying the deformation in a volcanic region by using a variant of


the PS method that does not need any a priori information on the temporal behavior of the displacement [HOO 04].

10.3.1.2. SBAS approach (small baseline subset)

The second approach gathers the methods referred to as SBAS, which are based on the minimization of the spatio-temporal decorrelation through the combination of interferograms characterized by small spatial and temporal baselines [BER 02, SCH 03, USA 03]. This approach is optimal for the pixels whose response is not dominated by a unique scatterer, but by a distribution of scatterers. The small temporal and spatial baselines maximize the coherence, which facilitates the extraction of a reliable phase over a time series. The selected interferograms form a redundant network that connects the images in space and time simultaneously (see Figure 10.7(b)). The decorrelation noise is reduced by spatial filtering, which causes a loss in spatial resolution. As a result, the high-frequency displacement is usually eliminated during the filtering. After the spatial phase unwrapping, an inversion allows us to obtain the displacement per date. The SBAS technique has been successfully applied in numerous contexts [SCH 03, CAV 07, LOP 09]. Since its first implementation, this technique has been modified and adapted to the specificities of the phenomena studied. Lopez-Quiroz et al. [LOP 09] applied specific processing techniques in order to adapt to the measurement of the subsidence of Mexico City, characterized by a large deformation area and a significant rate and gradient of deformation. The version developed within the ANR EFIDIR project by M.-P. Doin, called NSBAS, moreover allows us to incorporate corrections for the stratified part of the tropospheric artifacts [DOI 11] (the turbulent part is classically eliminated by filtering in the time series). The spatial coverage is generally larger than in the case of the PS method, which makes the correction of orbital effects much easier. However, this method does not provide pointwise displacement. Recently, the selection of the interferograms according to a small baseline criterion has been applied to a series of amplitude correlation measurements. Casu et al. [CAS 11] have shown that it is possible to benefit from the information redundancy and the high quality of the offsets of the amplitude images for image pairs with small spatial baselines, in order to obtain information on the displacement in range and azimuth from the offsets of a series of amplitude images, with a precision of the order of a 30th of a pixel (compared with a 10th of a pixel with only two images). This is a remarkable variant of the SBAS technique, and it allows us to monitor events with large displacement.
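As an illustration of the inversion step common to SBAS-type methods, here is a toy sketch for a single pixel (assuming NumPy; the network description, the reference-date convention and the function name are ours, and real implementations add temporal constraints and the corrections discussed above).

```python
# Toy SBAS-type inversion: each interferogram j links an acquisition pair
# (m_j, s_j) and carries the unwrapped phase d_phi[j] = phi(s_j) - phi(m_j).
# The per-date phases are recovered by least squares, taking the first date
# as reference (phi = 0); a connected network is assumed.
import numpy as np

def sbas_invert(pairs, d_phi, n_dates):
    A = np.zeros((len(pairs), n_dates - 1))
    for j, (m, s) in enumerate(pairs):
        if s > 0:
            A[j, s - 1] = 1.0
        if m > 0:
            A[j, m - 1] = -1.0
    phi, *_ = np.linalg.lstsq(A, np.asarray(d_phi), rcond=None)
    return np.concatenate([[0.0], phi])       # phase (hence displacement) per date
```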

The PS and SBAS techniques allow us to achieve an accuracy of a few millimeters per year in displacement rate measurement [HOO 12a]. The SBAS technique provides a continuous description of the deformation structure and its temporal behavior without any a priori hypothesis on the displacement, whereas the PS technique enables a quantitative discussion of the disparity of small-scale deformation. These two methods thus provide information on the displacement at different but complementary scales. The accuracy of these two techniques strongly depends on the


number of images available, the resolution of the images, as well as that of the DEM. The constraint on the quantity of images is currently becoming less and less important due to the successive launches of satellites equipped with high-resolution sensors. Given the advantages and disadvantages of the SBAS and PS techniques, the combination of these two methods opens up new prospects. Several attempts have already been implemented. Hooper [HOO 08] combined the SBAS and PS techniques in order to monitor the time evolution of the displacement associated with volcanic intrusions. The spatial coverage of PS points was improved thanks to this combination, which facilitates the application of PS techniques in natural areas. In Yan et al. [YAN 12a], the subsidence of Mexico City was analyzed by using the SBAS and PS techniques simultaneously, in order to measure at the same time the global subsidence and the pointwise subsidence associated with isolated objects whose behavior differs from that of their neighbors. In Liu et al. [LIU 13], the PS technique was applied to a series of SAR images whose perpendicular baselines are small in order to measure the subsidence of the city of Tianjin (China). Thanks to the use of small baseline images, the analysis between neighboring points with the PS technique can be done without a DEM, because the DEM error is very small between two neighboring points. Thus, the impact of a DEM whose resolution is lower than that of the SAR images can be avoided. Given the increasing availability of high-resolution time series, the combination of these two techniques should be able to provide very precise displacement measurements over large areas.

Figure 10.8. Subsidence rate of Mexico City estimated using the PS technique. The rate is superimposed on the amplitude image. A cycle represents 15 cm per year [YAN 12a]. For a color version of this figure, see www.iste.co.uk/tupin/RSImagery.zip

Data mining techniques are also being developed with the objective of summarizing time series of satellite images, via the extraction of spatio-temporal features, and of encouraging knowledge exploration. The Grouped Frequent Sequential patterns proposed and defined in Julea et al. [JUL 11] allow us, for example, to search for groups of pixels that are sufficiently connected to each other and cover a minimum surface. These connectivity and surface constraints allow the extraction of such patterns under reasonable conditions in terms of execution time and memory requirements; these operational aspects are detailed in Julea et al. [JUL 12]. Besides their technical interest, these constraints also allow the user to extract evolutions that are meaningful with regard to the application. For example, in Meger et al. [MEG 11], experiments carried out on interferograms formed using the SBAS approach have shown that it is possible to characterize a seismic fault by highlighting the creep phenomenon along the fault.
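
As an illustration only (not the actual algorithm of [JUL 11], which enumerates frequent patterns efficiently rather than testing a single given one), the following Python sketch shows the underlying idea: pixels whose symbolic evolution contains a given sequential pattern are grouped into connected components, and only groups covering a minimum surface are kept. All names and the data layout are hypothetical.

import numpy as np
from collections import deque

def grouped_pattern_pixels(labels, pattern, min_surface):
    """Illustration of the grouped-pattern idea: find groups of connected
    pixels whose symbolic evolution contains a given sequential pattern,
    keeping only groups covering a minimum surface (in pixels).

    labels  : (nrows, ncols, ndates) integer array of quantized evolutions.
    pattern : tuple of symbols that must occur in chronological order.
    """
    nrows, ncols, _ = labels.shape

    def contains(seq, pat):
        # True if pat occurs in seq as a (possibly non-contiguous) subsequence.
        it = iter(seq)
        return all(any(s == p for s in it) for p in pat)

    support = np.zeros((nrows, ncols), dtype=bool)
    for i in range(nrows):
        for j in range(ncols):
            support[i, j] = contains(labels[i, j, :], pattern)

    # Group supporting pixels into 4-connected components with a BFS and
    # keep only the components that satisfy the surface constraint.
    visited = np.zeros_like(support)
    groups = []
    for i in range(nrows):
        for j in range(ncols):
            if support[i, j] and not visited[i, j]:
                comp, queue = [], deque([(i, j)])
                visited[i, j] = True
                while queue:
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < nrows and 0 <= cc < ncols
                                and support[rr, cc] and not visited[rr, cc]):
                            visited[rr, cc] = True
                            queue.append((rr, cc))
                if len(comp) >= min_surface:
                    groups.append(comp)
    return groups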

In the case of differential interferometry, even after all the corrections detailed in section 10.2.3 and the noise reduction obtained by processing the time series, only a relative displacement value can be obtained from the phase; that is to say, a constant shift from the absolute value remains. To obtain the absolute displacement, a reference is required, such as a point in an area whose displacement is known. In practice, the displacement in the far-field is usually considered to be null, which raises a problem for displacements over large areas (larger than 100 km). Yan et al. [YAN 13] propose another solution, using the displacement measurement issued from amplitude correlation in order to “register” the displacement obtained by differential interferometry. Similarly, Manconi et al. [MAN 12] have combined time series from amplitude correlation and differential interferometry by using the SBAS approach, in order to benefit from the advantages of each type of measurement. Other types of geodetic measurement, such as GPS data, can also be integrated in order to correct the orbital and atmospheric effects and to provide punctual complementary information in 3D, with a better sampling frequency [WEB 02, GOU 10]. The integration of these different sources of displacement measurement is often performed during the reconstruction of the 3D displacement field.
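
A very crude way of “registering” the interferometric measurement to the amplitude correlation offsets, in the spirit of (but not identical to) the approach of [YAN 13], is sketched below: since both maps measure the same range component, a robust estimate of the unknown constant is the median of their difference over the pixels where both are available. The function name and data layout are hypothetical.

import numpy as np

def register_insar_to_offsets(insar_los, offset_range, valid):
    """Tie a relative InSAR LOS displacement map to range offsets from
    amplitude correlation by removing a constant shift.

    insar_los    : 2D relative LOS displacement (differential interferometry).
    offset_range : 2D range displacement from amplitude correlation,
                   resampled on the same grid.
    valid        : 2D boolean mask of pixels where both measurements exist.
    """
    # Both maps measure the same range component of the displacement, so a
    # robust estimate of the unknown constant is the median of their
    # difference over the common pixels.
    shift = np.nanmedian(offset_range[valid] - insar_los[valid])
    return insar_los + shift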

10.3.2. Reconstruction of 3D displacement field

The 3D displacement at the Earth’s surface, classically projected on the terrestrial coordinate system axes east, north and vertical (E, N, Up), is commonly used for characterizing the surface displacement induced by earthquakes [WRI 04, PAT 06, YAN 13] or by volcanic activity [WRI 06, GRA 09]. The knowledge of this vector facilitates, in particular, the interpretation of the displacement field in terms of sources, and allows us to calculate maps of deformation rates, as well as subsidence or uplift volumes, by integrating the vertical component (Up) of the displacement.

In SAR imagery, the displacements are measured in the direction of the line of sight (LOS or range) and/or in the direction of the satellite movement (azimuth) for each acquisition. These measurements correspond to the projections of the 3D displacement in the directions of each acquisition (Figure 10.9).

Figure 10.9. Geometric illustration of the 3D displacement (components Ue, Un, Uup on the east, north and up axes) and of the displacement in the LOS and in the direction of the radar movement (azimuth) during SAR acquisition, with the satellite trajectory projected on the ground (according to [FAL 12])

We may therefore write:

R = PU,

where R corresponds to the vector of displacements measured by amplitude correlation or differential interferometry, P corresponds to the projection matrix, and U denotes the 3D displacement vector to be estimated, with three components E, N and Up.

To reconstruct the 3D displacement using the displacement measurements obtained from SAR images, at least three different projections are necessary. Diverse acquisition geometries, including variable incidence angles (for ENVISAT, typically, the incidence angles vary from 19° to 44°), different orbital directions (descending and ascending) and different displacement directions (range and azimuth), make this reconstruction possible. Outside of the high-latitude areas, the azimuth directions of the ascending and descending passes are nearly collinear; in practice, they are often considered as the same projection. To benefit from the complementary information brought by each of the projections, the common procedure involves combining all the available projections simultaneously and calculating the optimal solution by means of the least squares approach [YAN 12c]. Other combination strategies, which consist of either selecting the highest quality projections before the inversion or fusing the inversion results obtained with independent subgroups of projections, are proposed in Yan [YAN 11].
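
To fix ideas, a possible way of building the rows of the projection matrix P from the acquisition geometry is sketched below in Python. It assumes a right-looking sensor and LOS displacements counted positive toward the satellite; sign conventions differ between processors, and the angles used in the example are illustrative only.

import numpy as np

def los_unit_vector(incidence_deg, heading_deg):
    """Unit vector (E, N, Up) pointing from the ground toward a
    right-looking satellite, for a local incidence angle and a platform
    heading measured clockwise from north. With this convention, a LOS
    displacement counted positive toward the satellite is the scalar
    product of this vector with the 3D displacement (E, N, Up)."""
    inc = np.radians(incidence_deg)
    az = np.radians(heading_deg - 90.0)   # ground-to-satellite azimuth
    return np.array([np.sin(inc) * np.sin(az),
                     np.sin(inc) * np.cos(az),
                     np.cos(inc)])

def azimuth_unit_vector(heading_deg):
    """Unit vector (E, N, Up) along the satellite track projected on the
    ground (direction of azimuth offsets)."""
    h = np.radians(heading_deg)
    return np.array([np.sin(h), np.cos(h), 0.0])

# Illustrative geometries only (incidence angles within the ENVISAT range,
# headings roughly typical of ascending/descending passes):
P = np.vstack([
    los_unit_vector(23.0, -12.0),    # ascending LOS
    los_unit_vector(41.0, 192.0),    # descending LOS, different incidence
    azimuth_unit_vector(-12.0),      # ascending azimuth offsets
])
# The rows of P relate the measurement vector R to the 3D displacement U
# through R = P U, as in the text.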

In most studies, the errors present in the measurements are considered, to a first approximation, as random and characterized by a Gaussian distribution, which justifies the common use of the least squares approach for the estimation of the 3D displacement. Denoting by Σ_R the error covariance matrix of the measurements R, the 3D displacement estimated by the least squares approach is then given by:

U = (P^t \Sigma_R^{-1} P)^{-1} P^t \Sigma_R^{-1} R.   [10.9]

In this approach, the uncertainty associated with the 3D displacement is given by the covariance matrix Σ_U:

\Sigma_U = (P^t \Sigma_R^{-1} P)^{-1}.   [10.10]

In many cases, the errors are considered to be independent from one measurement to another, which reduces the covariance matrix Σ_R to a diagonal matrix.
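
Equations [10.9] and [10.10] translate directly into a few lines of code. The sketch below (with hypothetical names) assumes independent errors, so that Σ_R is built from the per-measurement standard deviations.

import numpy as np

def reconstruct_3d(P, R, sigma):
    """Weighted least squares 3D displacement, equations [10.9] and [10.10].

    P     : (n, 3) projection matrix (one row per LOS/azimuth measurement).
    R     : (n,)   measured displacements.
    sigma : (n,)   standard deviations of the measurement errors, assumed
                   independent, so that Sigma_R is diagonal.
    """
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)   # Sigma_R^{-1}
    cov_U = np.linalg.inv(P.T @ W @ P)                       # equation [10.10]
    U = cov_U @ P.T @ W @ np.asarray(R, dtype=float)         # equation [10.9]
    return U, cov_U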

However, the sources of error in SAR imagery are very complex: they come from different perturbations occurring along the radar wave propagation path and at the backscattering surface, as well as from the noise generated in the electronic processing. Moreover, the imperfect corrections (geometric and atmospheric, see section 10.2.3) carried out in the processing chain also induce epistemic errors that remain constant or vary predictably in repeated measurements [YAN 11, YAN 12b]. These diverse sources result in errors with very different characteristics and distributions, making the hypothesis of random, independent, Gaussian errors questionable. To take random errors and epistemic errors into account at the same time, a fuzzy approach based on possibility theory [ZAD 78] is preferable. This approach was applied to displacement measurements by satellite imagery for the first time in Yan [YAN 11] and Yan et al. [YAN 12b]. The 3D displacement is then estimated by fuzzy arithmetic based on the extension principle of Zadeh [ZAD 78], according to the equation:

U = (P^t \Sigma_R^{-1} P)^{-1} P^t \Sigma_R^{-1} \otimes R   [10.11]

where the components of U and R are no longer scalar values but fuzzy sets, and ⊗ refers to the matrix operator of fuzzy multiplication, in which the sum and the conventional scalar product are replaced by the corresponding fuzzy operations.

This fuzzy approach differs from the probabilistic approach in the manner of representing and propagating the measurement errors in the estimation of the 3D displacement. In this approach, the errors are modeled by possibility distributions, more precisely, here, symmetrical triangular distributions, which encode a family of probability distributions [MAU 01, DUB 04]. With possibility distributions, the random and epistemic errors are integrated in a unified model. According to Yan et al. [YAN 12b], the 3D displacement errors obtained with the probabilistic approach provide a lower envelope, whereas the errors obtained with the possibilistic approach provide an upper envelope (see Figure 10.10). The real errors should be situated between these two extremes. The more justified the hypothesis of independent random errors, the closer the real errors are to the probabilistic errors; conversely, the less justified this hypothesis, the closer the real errors are to the possibilistic errors.
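
For symmetric triangular possibility distributions and non-interactive inputs, the extension principle applied to the linear estimator of equation [10.11] reduces to a simple rule: the mode of the result is obtained with the crisp coefficients and the half-width with their absolute values. The following sketch (hypothetical names) illustrates this special case; the general fuzzy matrix multiplication of [10.11] is more involved.

import numpy as np

def propagate_triangular(P, r_mode, r_halfwidth, sigma):
    """Propagate symmetric triangular possibility distributions through the
    linear estimator of equation [10.11], assuming non-interactive inputs.
    For a linear combination with crisp coefficients, the extension
    principle gives a symmetric triangular result: its mode is the linear
    combination of the input modes, and its support half-width is the same
    combination taken with absolute coefficients."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    M = np.linalg.inv(P.T @ W @ P) @ P.T @ W        # crisp coefficient matrix
    u_mode = M @ np.asarray(r_mode, dtype=float)
    u_halfwidth = np.abs(M) @ np.asarray(r_halfwidth, dtype=float)
    return u_mode, u_halfwidth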

Figure 10.10. Comparison of possibility distributions resulting from the propagation of uncertainties using the probabilistic approach and the possibilistic approach. For the probabilistic approach, the variance is calculated from equation [10.10] and the possibility distribution is built through equivalence with the Gaussian law. For the possibilistic approach, the input uncertainties were modeled by triangular possibility distributions and propagated according to equation [10.11]. The possibilistic approach gives a more significant uncertainty than the probabilistic approach. The width at half maximum is traditionally used as an uncertainty parameter in the possibilistic approach

The combination of measurements resulting from amplitude correlation and from differential interferometry constitutes an interesting subject in 3D displacement estimation. The measurement in the range direction resulting from amplitude correlation is obtained in the same direction as the measurement resulting from differential interferometry; these two measurements therefore correspond exactly to the same displacement. In practice, however, these two types of measurement are considered to be complementary. The measurements resulting from amplitude correlation are often reliable in the near-field, where large displacements are observed, but they are not very precise in the far-field, where the amplitude of the displacement is small. On the contrary, the measurements resulting from differential interferometry are often available in the far-field, where the displacement is small, but they are much rarer in the near-field for large earthquakes, because of the coherence loss and the aliasing problem induced by strong displacement gradients. This is illustrated in Figure 10.11, taking the Kashmir earthquake (2005) as an example. The spatial distribution of the displacement measurements issued from amplitude correlation and differential interferometry for this earthquake is presented in Figure 10.12. These measurements cover both the near-field and the far-field of the fault, over approximately 400 km in the North-South direction and 250 km in the East-West direction. In the near-field of the fault, the displacement measurements are obtained from amplitude correlation. In the far-field, the measurements come mainly from differential interferometry (see Figure 10.4), except in the NW part, where no interferometric measurement is available. The coverage in the far-field is important, as it allows us to constrain the modeling of the deformation source at depth, because the surface displacement in the far-field is strongly related to the slip on the fault plane at depth.

The 3D displacement at the Earth’s surface, built from these measurements, is illustrated in Figure 10.13. Its spatial coverage is smaller than that of the measurements in the range and azimuth directions (see Figure 10.12), because of the lack of different projections in certain areas of the far-field. In Figure 10.13, the deformation area, which occupies a strip approximately 90 km long oriented NW-SE, is highlighted. The maximum displacement is found on the NE compartment, which overlaps the SW compartment. A rupture takes place on the trace of the fault, which separates the fault into two main segments. On the south segment, the displacement is relatively small, lower than 2 m in horizontal and 4 m in vertical. On the north segment, the maximum displacement reaches 5 m and 3 m in vertical and horizontal, respectively.

The 3D displacement fields at the Earth’s surface are not, however, the information ultimately sought by geophysicists, who are interested either in the rheology of the Earth’s upper layers or in the origin of this displacement at depth: the geometry and slip distribution of the fault, the geometry and pressure variation of a magma intrusion, etc. In general, all this information is obtained through the inversion of a physical model using the surface displacement measurements resulting from amplitude correlation and differential interferometry. The 3D displacement is not directly used as a source of measurements in the inversion of the model, because of its low spatial coverage and the potential errors introduced during its estimation. However, the 3D displacement can also be obtained from the prediction of the physical model, as long as the geometrical and mechanical parameters of the model are available. Confronting the 3D displacement estimated from the surface measurements with the prediction of the physical model therefore allows for a cross-validation between the estimated 3D displacement field and the parameters of the physical model obtained from the surface displacement measurements. In Figure 10.14, the vertical component of the 3D displacement of the Kashmir earthquake (2005), as estimated from the surface measurements and as predicted by the physical model, is shown, together with the associated uncertainties according to the probabilistic and possibilistic approaches [YAN 12b]. A general agreement between the displacement values obtained from the measurements and from the model is observed, despite the significant differences between the two procedures. In the far-field of the fault, there is a very good superposition between the two, although the fluctuation of the displacement values estimated from the measurements is significant. In the near-field of the fault, there is a discrepancy due to the limitations of the global model used: the maximum displacement near the fault is underestimated by the model. This comparison allows us to validate both the 3D displacement estimation procedure by the least squares approach and the inversion procedure of the physical model. This type of comparison is very useful when another source of measurement, or ground truth, is not available.

Figure 10.11. Availability of different types of co-seismic measurement for the Kashmir earthquake (2005) [YAN 11]

Figure 10.12. Spatial distribution of the co-seismic displacement induced by the Kashmir earthquake [YAN 13]. For a color version of this figure, see www.iste.co.uk/tupin/RSImagery.zip

Figure 10.13. 3D displacement at the Earth’s surface estimated by the least squares approach, using the measurements resulting from amplitude correlation and differential interferometry [YAN 13]. For a color version of this figure, see www.iste.co.uk/tupin/RSImagery.zip

10.4. Conclusion

In this chapter, we have presented the main techniques for displacement measurement using remote sensing sources, as well as their applications, in particular the use of SAR images, which provide geometric information in two ways: through the sampling of the image and through the phase of the complex data. The arrival of spatial imagery has caused a true revolution in geodesy by significantly improving our ability to measure ground displacements and their temporal evolution with great precision over large areas. Spectacular results have been obtained in numerous fields, including the study of subsidence in urban environments, of co-seismic, inter-seismic and post-seismic motion, of glacier flows, of volcanic deformation, etc. The displacement measurements by remote sensing cover nearly the whole world, with an accuracy within millimeters, which would have been impossible with traditional tools, and they nowadays provide the predominant sources for the study of terrestrial deformation. Thus, for the latest large earthquakes (since the Wenchuan earthquake in 2008), it is indeed thanks to SAR and GPS measurements made available to the public that the fault could be modeled, the deformation mechanism understood and the area of the aftershocks predicted so efficiently.

Figure 10.14. Comparison of 3D displacements at the Earth’s surface, obtained from the least squares approach and from the prediction of the physical model in the case of the Kashmir earthquake (2005) [YAN 12b]

SAR image processing techniques have improved rapidly over the past 20 years, leading to processing chains that go from the calculation of the offsets to the reconstruction of the 3D displacement, from the formation of the interferograms to the inversion of a series of images in order to obtain the temporal evolution of the displacement, or even to PS detection in order to correct the elevation of targets and measure their displacements.

Today, these techniques are still being improved in order to take as much advantage as possible of the high spatial resolution and the increasing frequency of data acquisition for terrestrial displacement measurement. At the same time, efforts to combine different techniques, for example amplitude correlation and differential interferometry, or the PS and SBAS techniques, seek to make the best use of the information contained in the data by exploiting the complementarity of the different approaches.

Currently, displacement measurement by remote sensing is still mainly applied to past events; time series are used in particular to study the temporal evolution of events that took place before the data processing. With the upcoming launch by ESA of the two Sentinel-1 satellites, SAR data will be acquired nearly everywhere on the Earth at least every 6 days, with a resolution similar to that of ENVISAT. By adding the data issued from other satellites (RADARSAT-2 and the future three-satellite RADARSAT constellation, TerraSAR-X and TanDEM-X, the four COSMO-SkyMed satellites, ALOS-2, etc.), real-time monitoring by time series will become possible. The availability of a considerable amount of data, resulting from the high spatial resolution and the strong repetitiveness of the acquisitions, constitutes a true technological challenge. If we do not wish to be quickly limited by storage capacity and computation time, we will have to adapt our ways of working by relying on data mining and by adapting our tools and means of computation. It will also be important to modify our time series processing algorithms so that they can integrate new data gradually, without having to restart from the beginning. Given this challenge, the combination of real-time remote sensing techniques and physical models (extensively used in meteorology) becomes possible. It should allow us to predict the evolution of an event such as the refilling of a magma reservoir located beneath an active volcano or the rupture of a serac. Data assimilation, today extensively exploited in atmospheric science and oceanography, will then open up new prospects for the observation and prevention of natural hazards and will lead us toward a new age of remote sensing data usage in displacement measurement.