
Antennas and Radar - Ch. 11 (David Lee Hysell)


Notes on antennas and radar by David Lee Hysell, a professor in the Earth and Atmospheric Sciences department at Cornell University. This packet is used in the course ECE/EAS 4870.


  • Chapter 11

    Radar applications

    In this final chapter, we review more recent advances in radar remote sensing approaches and algorithms. Additional information about our environment can be extracted from radar signals using spatial diversity, transmitters of opportunity, synthetic apertures, multiple antenna pointing positions, and even the near field. Growth in the radar art in the near term is expected to take place mainly in these areas.

    11.1 Radar interferometry

    Radar interferometry utilizes multiple spaced receivers to derive information about the spatial organization of the radar targets in the direction(s) transverse to the radar line of sight. We assume that the radar target is sufficiently far away to be in the Fraunhofer zone of the entire interferometry array, i.e., so that the rays from all the antennas to the target are parallel. There are two classes of interferometry: additive and multiplicative. Additive interferometry, where signals from different antennas are added prior to detection, is really just another term for adaptive beamforming, which has already been addressed. The discussion here concerns multiplicative interferometry, where the outputs from spaced antennas are multiplied.

    Figure 11.1 (left) shows the geometry of an interferometry experiment using a single pair of antennas. The antennas are separated by a baseline distance d. The only coordinate of importance is the angle ψ, the angle between the interferometry baseline and the rays to the target. The path length difference from the target to the two antennas is just d cos ψ. We can determine this angle for backscatter from a single target by beating or multiplying the signals from the two antennas:

    v1 = v exp[j(k·x1 − ωt)]

    v2 = v exp[j(k·x2 − ωt)]

    where x1,2 are the displacements of the two antennas from a fixed reference, and

    v1* v2 = |v|² exp[jk·(x2 − x1)] = |v|² exp[jk·d21] = |v|² exp[jkd cos ψ] = |v|² exp(jφ),   φ = (2π/λ) d cos ψ

    The cosine of the elevation angle ψ is the sine of the zenith angle θ, and for targets nearly overhead, sin θ ≈ θ. In any event, the phase of the interferometric cross-correlation gives the target bearing.
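    This phase-to-angle relationship is easy to verify numerically. The following sketch (all parameters hypothetical) synthesizes the signals at two antennas separated by half a wavelength and recovers the zenith angle from the phase of the averaged cross product ⟨v1* v2⟩:

```python
import numpy as np

lam = 1.0                         # wavelength (m), hypothetical
d = 0.5                           # baseline = lambda/2, so no ambiguity
k = 2 * np.pi / lam
theta = np.deg2rad(10.0)          # true zenith angle
psi = np.pi / 2 - theta           # baseline-to-ray angle: cos(psi) = sin(theta)

t = np.arange(1000)
v = np.exp(1j * 0.01 * t)                      # common complex signal from the target
v1 = v                                         # reference antenna
v2 = v * np.exp(1j * k * d * np.cos(psi))      # extra path-length phase k d cos(psi)

phase = np.angle(np.mean(np.conj(v1) * v2))    # phase of <v1* v2>
theta_est = np.rad2deg(np.arcsin(phase / (k * d)))
```

    With d = λ/2 the measured phase maps uniquely to a zenith angle, so the estimate matches the true 10 degrees.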


  • Figure 11.1: Geometry for radar interferometry in two (left) and three (right) dimensions.

    Determining the bearing of a target in three dimensions is accomplished using interferometry with multiple baselines, as illustrated in Figure 11.1 (right).

    v1* v2 = |v|² exp[jk dx cos ψx]

    v1* v3 = |v|² exp[jk dy cos ψy]

    Thus, the interferometer provides complete range and bearing (and Doppler frequency) information for a single target. Note that information from the third baseline formed by antennas 2 and 3 is also available as a consistency check, for example.

    11.1.1 Ambiguity

    As with range and Doppler processing, inherent ambiguity exists in interferometry. The root cause is the periodicity of phase angles, i.e.

    φ = (2π/λ) d cos ψ + 2πn

    where n is any integer. Solving for the direction cosine gives

    cos ψ = (λ/d)(φ/(2π) − n)

    If d ≤ λ/2, it is impossible to solve this equation with a real-valued ψ with any value of n other than zero, regardless of the measured phase angle φ. Ambiguity arises when baseline lengths are longer than λ/2. In such cases, however, ambiguity can be resolved by using multiple baselines, each one satisfying

    cos ψ = (λ/dj)(φj/(2π) − nj)

    for the given index j. One need only determine the nj that point to a consistent angle ψ. Ambiguity can also be avoided or mitigated by transmitting with a narrow-beam antenna, thereby limiting the possible values of ψ.
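    A short numerical sketch (wavelength and baselines hypothetical) illustrates the resolution procedure: each baseline longer than λ/2 yields several candidate direction cosines, but only one candidate is common to both lists. (The candidates are enumerated with +n rather than −n; since n runs over a symmetric range the set is the same.)

```python
import numpy as np

lam = 1.0
k = 2 * np.pi / lam
baselines = np.array([1.5, 2.0])     # both longer than lambda/2, so ambiguous alone
psi_true = np.deg2rad(75.0)

# Wrapped phases phi_j measured on each baseline
phi = np.angle(np.exp(1j * k * baselines * np.cos(psi_true)))

def candidates(d, phi_j):
    """All cos(psi) = (lam/d)(phi_j/(2 pi) + n) that are physically real."""
    n = np.arange(-10, 11)
    c = (lam / d) * (phi_j / (2 * np.pi) + n)
    return c[np.abs(c) <= 1]

c1 = candidates(baselines[0], phi[0])
c2 = candidates(baselines[1], phi[1])

# The pair of candidates that agree resolves the ambiguity
i, j = np.unravel_index(np.argmin(np.abs(c1[:, None] - c2[None, :])),
                        (len(c1), len(c2)))
cos_psi = 0.5 * (c1[i] + c2[j])
```

    Each list alone contains several admissible angles, but only the consistent pair recovers cos 75°.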


  • 11.1.2 Volume scatter

    Next, suppose interferometry is applied when multiple, discrete targets exist in the scattering volume. The received voltages and their product in that case are

    v1 = Σ_{i=1}^{m} vi exp[j(ki·x1 − ωt)]

    v2 = Σ_{j=1}^{m} vj exp[j(kj·x2 − ωt)]

    v1* v2 = Σ_{i=1}^{m} Σ_{j=1}^{m} vi* vj exp[j(kj·x2 − ki·x1)]

    The voltages and their product act like random variables when the number of targets in the volume is large. In that case, it is the expectation of the product that is of interest. We can usually take the components of the received voltages from different targets to be statistically uncorrelated. Then

    ⟨v1* v2⟩ = Σ_{i=1}^{m} ⟨|vi|² exp[j ki·d21]⟩

             = m ⟨|vi|²⟩ ⟨exp[jkd cos ψi]⟩

             = m ⟨|vi|²⟩ ⟨exp[jkd θi]⟩

    As before, the angle brackets imply the expected value. The last step was performed with the assumption that the scatterer amplitudes and positions are also uncorrelated. The direction cosine was replaced with the zenith angle, θi, for simplicity.

    The interferometric cross-correlation evidently depends on the number of targets in the volume and on the power scattered by each. It also depends on the expectation of the phasor exp(jkdθ), which would be unity in the case of collocated antennas. Let us decompose the zenith angle into a mean angle θ0 and a deviation from the mean, δθ = θ − θ0. Then

    ⟨exp(jkdθ)⟩ = exp(jkdθ0) ⟨exp(jkdδθ)⟩

                = exp(jkdθ0) (1 − (k²d²/2)⟨δθ²⟩)

    where we have expanded the exponential in a Taylor series and made use of the fact that ⟨δθ⟩ = 0. The expansion is legitimate so long as the scattering volume is sufficiently narrow, either because the antenna beams are narrow or the targets are confined spatially for some reason. In practice, this approximation is usually, but not always, valid, since kd may be large.

    Properly normalized, the interferometry cross product has the form

    ⟨v1* v2⟩ / √[(⟨|v1|²⟩ − N1)(⟨|v2|²⟩ − N2)] = exp(jkdθ0) (1 − (k²d²/2)⟨δθ²⟩)    (11.1)

    where N1 and N2 are noise power estimators for the two channels and where the noise signals from the two channels are uncorrelated. (This assumption could be violated if some of the noise in question is actually interference.) Equation (11.1) shows that the phase angle of the normalized cross product is associated with the mean arrival angle of the backscatter, the scattering center of gravity. The magnitude or coherence, meanwhile, is associated with the mean square deviation or spread of arrival angles. Similar remarks hold in three dimensions. Interferometry with two or more non-collinear baselines therefore completely determines the first three moments of the spatial distribution of scatterers (the total power, bearing, and spread).
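    Equation (11.1) can be exercised with a toy simulation (all parameters hypothetical, noise-free so N1 = N2 = 0). Echoes from many uncorrelated scatterers drawn from a narrow distribution of zenith angles are summed using the small-angle phase model exp(jkdθ) from above; the phase of the normalized cross product recovers kdθ0, and the coherence stays near unity for a narrow angular spread:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, d = 1.0, 0.25
k = 2 * np.pi / lam
m = 1000                     # scatterers per realization
theta0, sigma = 0.2, 0.02    # mean zenith angle and angular spread (radians)

acc12, acc1, acc2 = 0.0, 0.0, 0.0
for _ in range(2000):        # independent realizations stand in for the expectation
    theta = rng.normal(theta0, sigma, m)
    amp = rng.normal(size=m) + 1j * rng.normal(size=m)  # uncorrelated amplitudes
    v1 = np.sum(amp)                                    # antenna at the reference point
    v2 = np.sum(amp * np.exp(1j * k * d * theta))       # extra phase k d theta_i
    acc12 += np.conj(v1) * v2
    acc1 += abs(v1) ** 2
    acc2 += abs(v2) ** 2

rho = acc12 / np.sqrt(acc1 * acc2)   # normalized cross product
```

    The phase of rho approaches kdθ0 and |rho| approaches 1 − (kd)²σ²/2, illustrating the bearing/spread interpretation.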

    More insight into the problem comes from considering the formal definition of the normalized cross product

    ⟨exp(jkdθ)⟩ ≡ ∫ F(θ) exp(jkdθ) dθ / ∫ F(θ) dθ


  • where F is the backscatter or brightness distribution introduced earlier in the discussion of adaptive arrays. Interferometry with a large number of nonredundant baselines can specify the normalized cross product or visibility in detail, albeit incompletely. This function is in turn related to the brightness by Fourier transform. Various strategies exist for inverting the transformation and recovering the brightness on the basis of available data. This is the field of aperture synthesis radar imaging.

    11.2 Acoustic arrays and non-invasive medicine

    The preceding discussion concerned constructing an adaptive array for the purpose of optimizing the signal-to-noise or signal-to-interference ratio of a received signal. The technique has imaging overtones, since the ratio can be optimized one bearing at a time in the process of mapping the sky. A related problem exists in medicine, where physicians make images of targets within living tissue. Here, acoustic pulses replace radio signals, and piezoelectric transducers replace antennas. Inhomogeneous tissue replaces free space, so that the acoustic waves travel neither along straight lines nor at constant speed. Scattering and absorption of the signal also can occur along its propagation path, further complicating things. The problem of identifying the target from the signals it scatters is nevertheless similar to the adaptive beamforming problem.

    The medical problem does not stop at imaging diagnostics, however. An array of transducers is also capable of transmitting acoustic signals of sufficient strength to modify the tissue (e.g. shatter a kidney stone) if they can be made to arrive at the target simultaneously and in phase. The problem then becomes one of finding the signals that must be transmitted through each of the transducers to most efficiently focus the acoustic power on the target on the basis of signals collected with those same sensors. Such focusing not only leads to finer imagery but can also be used for non-invasive surgery. One kind of array that performs this feat is known as a time reverse mirror (TRM).

    Time reverse focusing is a space-time generalization of the matched filter theorem, which we saw in chapter 5 governs the optimal response of a linear filter to an input signal in time. The output of a filter with an impulse response function h(t) is maximized by a signal of the form h(T − t), since the output h(t) ∗ h(T − t), which is the autocorrelation function, is necessarily a maximum at t = T. The output of any linear filter is therefore maximized by a signal that is the time reverse of the filter response to an impulse. TRM generalizes this notion to filtering in space and time, where the spatial domain is the volume enclosed by the sensor array and containing the target.

    In the notation of TRM, we define the usual temporal impulse response function for the ith transducer in the array, hi(t). We also define a forward spatio-temporal impulse response function, h^f_i(ri, t). This represents the acoustic signal present at the ith transducer at time t and position ri in response to an impulsive acoustic pulse launched by the target, which resides at r = 0, t = 0. A reverse spatio-temporal impulse response function h^r_i(ri, t) can also be defined as the signal present at the target at time t due to an impulse at the ith transducer at time 0. According to the reciprocity theorem, however, h^f_i(ri, t) = h^r_i(ri, t), which is crucial. Henceforth, the superscripts will not be written. Note also that reciprocity holds for inhomogeneous as well as homogeneous media.

    The first step in the TRM process involves illuminating the inhomogeneous medium with the signal from one transducer. The signal will scatter off the target (and perhaps from other targets), which behaves like an impulsive acoustic source. Different signals will be received by the transducers at different times, reflecting different propagation paths. Each transducer will consequently record

    hi(t) ∗ hi(ri, t)

    All together, the recordings can be regarded as the output of a spatio-temporal filter to an impulsive input injected at the origin. In the next step of the process, and as a generalization of matched filtering, each transducer is driven by an electrical signal equal to the time reverse of the one it recorded:

    hi(T − t) ∗ hi(ri, T − t)

    Each signal must pass back through the transducer and then through the medium on its way to the target. This implies additional convolutions with the temporal and space-time impulse response functions. The acoustic pulse arriving back at the target due to the ith transducer will therefore be:

    hi(T − t) ∗ hi(ri, T − t) ∗ hi(ri, t) ∗ hi(t)
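    The matched-filter argument underlying TRM can be demonstrated in one dimension. In this sketch, a hypothetical multipath response h(t) is recorded, time reversed, and re-propagated (a second convolution with h); the result is the autocorrelation of h, which peaks at t = T no matter how scrambled the arrivals are:

```python
import numpy as np

# Hypothetical multipath medium response between the target and one transducer:
# three delayed, scaled arrivals
h = np.zeros(200)
h[[20, 57, 130]] = [1.0, -0.6, 0.3]

T = len(h) - 1
recorded = h                           # step 1: the transducer records h(t)
emitted = recorded[::-1]               # step 2: re-emit the time reverse, h(T - t)
at_target = np.convolve(emitted, h)    # propagation back convolves with h again

# h(T - t) * h(t) is the autocorrelation of h, maximized at t = T
```

    All three arrivals add in phase only at t = T, which is the essence of time-reverse focusing.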


  • where ξ(r, t) is the scattering amplitude of the target, including the appropriate distance scaling. This function is a random variable, and meaningful information about it is contained in its autocorrelation function

    R(r, τ) ≡ ⟨ξ(r, t) ξ*(r, t − τ)⟩

    It is this autocorrelation function, or its Fourier transform (the Doppler spectrum), that we seek to extract from the data. Consider the estimator

    Q(r, τ) ≡ (1/T) ∫_T y(t) x*(t − 2r/c) y*(t − τ) x(t − 2r/c − τ) dt

    where T is the incoherent integration interval. We can evaluate the meaning of the estimator by substituting y(t) from above. In so doing, we replace the dummy variable r with p and q in the two instances where y(t) appears.

    Q(r, τ) = (1/T) ∫∫∫ dp dq dt x1(t − 2p/c) ξ1(p, t − p/c) x2*(t − 2r/c) x3*(t − 2q/c − τ) ξ2*(q, t − q/c − τ) x4(t − 2r/c − τ)

    where subscripts have been added to help keep track of the various terms. This is obviously a very complicated expression. As in the case of pulse coding, however, we rely on the properties of the transmitted signal to simplify things, permitting the estimator to revert to something like the desired autocorrelation function. Symbolically, we have

    Q = ∫∫ dp dq ⟨ξ1 ξ2*⟩ ⟨x1 x2* x3* x4⟩

    where we regard the time average as a proxy for the expectation and note that the scattering amplitude and transmitted signal are statistically uncorrelated. Concentrate first on the term involving the scattering amplitudes, which is itself nearly the desired ACF:

    ⟨ξ1 ξ2*⟩ = R(p, τ) δ(p − q)

    The delta function arises from the assumption that the scatterers in different volumes (ranges) are statistically uncorrelated. This is a common assumption for volume-filling targets but should probably be examined in the case of hard targets.

    The remaining term can be handled with the fourth moment theorem for Gaussian random variables, which is how we regard the transmitted waveform (see chapter 1):

    ⟨x1 x2* x3* x4⟩ = ⟨x1 x2*⟩⟨x3* x4⟩ + ⟨x1 x3*⟩⟨x2* x4⟩ + ⟨x1 x4⟩⟨x3* x2*⟩

                    = Rx(2(r − p)/c) Rx*(2(r − q)/c) + Rx(2(q − p)/c + τ) Rx*(τ)

    The last of the three terms on the first line is obviously zero since the random phase terms involved do not cancel. The remaining two can be expressed in terms of the autocorrelation function of the transmitted waveform. Exactly how Rx evaluates will be different for different kinds of broadcast services. Some are better suited for passive radar than others. To see this, we substitute back into the equation for Q and integrate, making use of the Dirac delta.

    Q(r, τ) = ∫ dp R(p, τ) (|Rx(2(r − p)/c)|² + |Rx(τ)|²)

            = ∫ dp R(p, τ) |Rx(2(r − p)/c)|²    (peaked near r ≈ p)

            + ∫ dp R(p, τ) |Rx(τ)|²    (nonzero only near τ ≈ 0)

    A suitable broadcast signal is one with a single, sharply peaked autocorrelation function, much as was the case for radar pulse codes. An example of a particularly bad signal is standard broadcast (NTSC) television, which has regular sync pulses that lead to a multiply-peaked autocorrelation function. (FM radio and HDTV are much better.) To the extent Rx can be approximated by a Dirac delta function, the estimator becomes

    Q(r, τ) ≈ R(r, τ) + const × δ(τ)


  • where the term with the constant is an artifact that contaminates the estimator in the zero lag only. This really represents a kind of transmitter-induced clutter that has to be dealt with in some way. The zero lag of the ACF translates to the noise floor of the spectrum in the frequency domain. The artifact may be easy to distinguish, estimate, and remove, leaving behind a faithful representation of the backscatter spectrum/ACF.

    Passive radar analysts must still contend with range and Doppler aliasing. Although there is no interpulse period as such, the operator imposes an effective IPP with the choice of the length of the lags of the ACF that are actually computed. Since this can be decided after the data are collected, the analyst has considerable flexibility and should be able to strike an optimal balance.

    The main difficulty posed by passive radar is neither clutter nor aliasing but rather practical implementation. The data rates and data transfer issues involved are daunting. Furthermore, the correlations involved in the estimate are computationally expensive. Fast Fourier transforms can be incorporated to reduce computational cost, as can significant coherent integration. Coarse quantization and specialized hardware improve the performance of practical systems further. Nevertheless, passive radar is not for the timid!
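    The lag-product estimator can be exercised in discrete time. The sketch below (hypothetical numbers, with the delay 2r/c replaced by an integer sample delay r) builds a noise-like "broadcast" waveform, delays and Doppler-shifts it to simulate a single hard target, and forms Q(r, τ); the phase progression of Q across τ recovers the Doppler, while the zero-lag clutter term appears at every range:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
t = np.arange(N)
x = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # noise-like broadcast

# One hard target: integer sample delay n0 plus a slow Doppler phase progression
n0, fd = 40, 1e-4                     # delay (samples), Doppler (cycles/sample)
y = np.roll(x, n0) * np.exp(2j * np.pi * fd * t)

def Q(r, tau):
    """Lag-product estimator Q(r, tau), delays in sample units."""
    yp = y * np.conj(np.roll(x, r))   # y(t) x*(t - r)
    if tau == 0:
        return np.mean(np.abs(yp) ** 2)
    return np.mean(yp[tau:] * np.conj(yp[:-tau]))

fd_est = np.angle(Q(n0, 100)) / (2 * np.pi * 100)  # Doppler from the phase of Q
```

    At the wrong range Q vanishes for nonzero lags but not at τ = 0, which is precisely the const × δ(τ) artifact above.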

    11.4 Inverse problems in radar signal processing

    The idea of matched filtering was introduced back in chapter 4 as a way of maximizing the signal-to-noise ratio of a radar signal at some instant in time. Matched filtering was subsequently used for processing frequency and phase coded pulses. The result was generally a filter output with the desired signal-to-noise ratio optimization but with some amount of clutter in the form of range sidelobes. We can understand passive radar signal processing too as a form of matched filtering. Here, the received signal was multiplied by the anticipated (transmitted) signal prior to detection. The fourth-moment theorem and the sharply-peaked autocorrelation function of the transmitted waveform caused the output to resemble the desired quantity R, albeit with some artifacts.

    Just as the radiation pattern of an antenna can be designed for minimum sidelobes rather than for maximum gain, so too can the impulse response function of the filter be designed to reduce or eliminate clutter rather than to maximize the signal-to-noise ratio. The clutter in question could be a byproduct either of the modulation of the radar waveform or of the scattering geometry, such as is encountered in planetary and synthetic aperture radar. Minimizing the clutter is potentially as powerful a design principle as matched filtering, a principle around which complex signal processing schemes can be designed and organized.

    Very often, the measured radar signal has the general form of (11.2), viz., an integral transformation involving the desired unknown (R in the passive radar case) and some other known function, the kernel. The process of extracting the desired quantity from the measured one is an example of a linear inverse problem. Inverse problems such as this appear in all manner of science and engineering problems and are of central importance to remote sensing. A variety of methodologies exist for approaching the problem using both discrete and continuous mathematics.

    One of the most direct methodologies involves defining another linear transformation that, when applied to the measured quantity, returns a close approximation of the desired unknown. This is known as the method of Backus and Gilbert and is essentially just the clutter-minimizing filter strategy suggested above. In some cases it reduces to matched filtering, although this is not generally true.

    As an example, consider finding a filter impulse response function h(t) which, when convolved with a binary coded pulse signal, produces an ideal pulse compression response with a narrow peak and no sidelobes. The shape of h(t) would not in general resemble the transmitted pulse, be confined to binary levels, or even have the same length as the pulse. The optimal shape could be found either through analysis or with a computational search. Since it will not be a matched filter in general, it will succeed in clutter reduction at the expense of degraded signal-to-noise performance. This is not an academic problem; radars usually transmit imperfectly formed pulses, and tailoring of the filter shape to compensate for the imperfections must often be performed along these lines.
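    A least-squares version of this search is a few lines of linear algebra. The sketch below (the code choice and the three-times-code filter length are arbitrary) builds the convolution matrix of a 13-bit Barker code and solves for the filter whose output is closest to an ideal spike; the resulting mismatched filter trades a little peak gain for sidelobes below the matched filter's 1/13 level:

```python
import numpy as np

code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)  # Barker-13
L = 3 * len(code)                      # the filter may be longer than the code

# Convolution matrix A such that A @ h equals the convolution of code with h
A = np.zeros((len(code) + L - 1, L))
for i in range(L):
    A[i:i + len(code), i] = code

target = np.zeros(len(code) + L - 1)
target[(len(target) - 1) // 2] = 1.0   # ideal response: one central spike

h, *_ = np.linalg.lstsq(A, target, rcond=None)
out = np.convolve(code, h)             # compressed response of the designed filter
```

    The solution h is neither binary nor code-length, exactly as the text anticipates, yet it compresses the pulse with much lower range sidelobes than the matched filter.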

    Another example of this kind of inverse methodology appears in computed axial tomography (CAT) scans, which are not really radar experiments but are close enough to warrant investigation here. CAT scans utilize differential X-ray absorption to make volumetric images of different kinds of targets including living tissue. The geometry for a CAT


  • Figure 11.3: Geometry for a CAT scan. The line intercepting the screen at the point s is parametrized by the equation x cos θ + y sin θ = s.

    scan is shown in Figure 11.3. Here, a body is illuminated by X-rays which undergo straight-line propagation from the source to a screen on the opposite side of the body, where they are detected at position s on the screen. The X-rays are absorbed as they travel through the body. The log of the relative X-ray intensity on the screen is then proportional to the path-integrated absorption. The entire apparatus can be pivoted about the central axis, and so the path-integrated absorption can be measured in two-dimensional (s, θ) space.

    The Radon transform, named for Johann Radon, relates the absorption coefficient within the body, f(x, y), to the X-ray measurements on the screen:

    Rθ(s) = ∫∫ f(x, y) δ(x cos θ + y sin θ − s) dx dy    (11.3)

    where the integration is over the entire area containing the body but where the Dirac delta function picks out only the contributions to the absorption along the line that intersects the screen at the point s for a given orientation θ.

    The quantity Rθ(s) is sometimes called a sinogram because a point target in f(x, y) produces a single sine wave in a plot of Rθ versus θ. Suppose that f(x, y) = δ(x − x0, y − y0). In that case, it is easy to see that Rθ(s) = δ(A sin(θ + φ) − s), where x0 = A sin φ and y0 = A cos φ. The amplitude and phase of the sine wave therefore reflect the radial and angular positions of the point target, respectively. A filled volume consequently produces a conglomeration of sine waves of different amplitude and phase in the final sinogram. Inferring the shape of the absorbing body from the sinogram by eye would pose quite a challenge, however.
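    The sinusoidal trace is easy to confirm numerically. For a point target at a hypothetical (x0, y0), the line parameter s(θ) = x0 cos θ + y0 sin θ traced by the target is exactly A sin(θ + φ):

```python
import numpy as np

x0, y0 = 0.3, 0.4
A = np.hypot(x0, y0)           # radial position of the point target
phi = np.arctan2(x0, y0)       # phase, chosen so x0 = A sin(phi), y0 = A cos(phi)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
s_trace = x0 * np.cos(theta) + y0 * np.sin(theta)   # where the sinogram is nonzero
```

    Expanding A sin(θ + φ) = A sin θ cos φ + A cos θ sin φ = y0 sin θ + x0 cos θ reproduces the trace term by term.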

    If we knew f(x, y), we could easily calculate Rθ(s). How do we estimate the former knowing the latter? What is needed is another transformation that converts Rθ(s) back to f(x, y). To find one, begin by considering the so-called Fourier slice theorem. Consider the two-dimensional Fourier transform pair:

    f(w) = ∫ f(x) exp(−ix·w) dx    (11.4)

    f(x) = (1/(2π))² ∫ f(w) exp(ix·w) dw    (11.5)

    Next, consider the one-dimensional Fourier transform of the sinogram:

    Rθ(w) = ∫ Rθ(s) exp(−iws) ds    (11.6)

          = ∫∫ f(x, y) exp[−iw(x cos θ + y sin θ)] dx dy    (11.7)

          = f(wn)    (11.8)

    178

  • where w plays the role of a spatial frequency. Equation (11.6) refers to the operation of taking a one-dimensional Fourier transform of the sinogram at some particular angle θ. Performing the ds integral leads directly to (11.7). In that equation, the exponential has the form −iw n·x, where n is the normal vector in Figure 11.3. According to (11.4), this equation is then the two-dimensional Fourier transform to the space wn, as shown in (11.8). The consequence of all this is that the Fourier transform of a sinogram measured at some angle θ is a slice through the two-dimensional Fourier transform of f(x, y) taken along a cut in the normal (n) direction.

    The Fourier slice theorem suggests a means of inverting sinograms. We could measure them along a number of different angles, take the required 1-D Fourier transforms, assemble the appropriate cuts, and then calculate the inverse 2-D Fourier transform of f(w) according to (11.5) to arrive at f(x, y). While this is certainly possible, the cylindrical symmetry of the problem prevents the exploitation of a Fast Fourier transform algorithm, and it is not apparent how to perform the operations involved efficiently.

    The Radon transform R can be viewed as an operator that maps f(x, y) to Rf(x, y) = g(θ, s). There exists an adjoint operator R# that satisfies (Rf, g) = (f, R#g). This is called the back-projection operator. Its definition is

    (R#g)(x) = ∫₀^{2π} g(θ, n(θ)·x) dθ    (11.9)

    where g now represents the sinogram, which is a function of s = n(θ)·x. Formally, (11.9) is R#g = R#Rf. Strict backprojection is equivalent to matched filtering. If the back projection operator is constructed such that R#Rf = f, then it will transform the sinogram back to the desired absorption function, and the inversion problem is solved. In general, however, backprojection/matched filtering does not produce the truest images, although it does produce the image with the maximum SNR.

    In fact, (11.9) yields a recognizable but imperfect reproduction of f(x, y) only. In order to improve the inversion, the sinograms must first be filtered. The appropriate filter is one that emphasizes the high-frequency components over low frequencies in w-space:

    H(w) = |w| h(w)    (11.10)

    The necessary filter is simply a ramp function. Finally, the desired absorption function is estimated as

    f(x) = (1/(4π)) R#(Hg)    (11.11)

    showing how the sinograms can be transformed using a simple linear transformation that, in this case, differs from the transformation that produced them. The prescription is to filter all the sinograms, one angle at a time, and then integrate over the entire collection using the back projection operator in (11.9). All that is involved numerically is a number of discrete Fourier transforms and other one-dimensional integrals.

    That (11.11) actually returns the underlying absorption function f(x, y) is easily shown. We can express (11.11) as:

    f(x) ?= (1/(4π)) ∫₀^{2π} Hg(θ, s) dθ

         ?= (1/(4π)) (1/(2π)) ∫₀^{2π} ∫ |w| Rθ(w) exp(iws) dw dθ

         ?= (1/(2π))² ∫₀^{2π} ∫₀^∞ w Rθ(w) exp[iw(x cos θ + y sin θ)] dw dθ

         = (1/(2π))² ∫₀^{2π} ∫₀^∞ w f(wn) exp(iwn·x) dw dθ

    where the last line is just the definition of the inverse Fourier transform in two dimensions and in cylindrical coordinates. Evidently, the ramp function serves as the differential component w that appears in cylindrical coordinates. Functionally, it serves to emphasize high-frequency components that are otherwise underemphasized by sampling a two-dimensional function at fixed angular increments. While the effect of the ramp filter function is to preferentially


  • Figure 11.4: Demonstration of the CAT scan problem for an image resembling the international radiation symbol. The image is discretized on a grid of 64 screen points s by 120 angles. The left column shows sample sinograms. The middle column is the backprojection or matched-filter solution. The right column is the filtered backprojection solution. The top and bottom rows reflect calculations without and with added noise, respectively. Whereas the matched-filter solution has the highest SNR, the filtered backprojection solution gives the truest image.

    amplify noise, the integral implicit in the sinogram tends to attenuate noise, and the overall CAT scan analysis turns out to be well conditioned.

    An example CAT scan inversion is shown in Figure 11.4. This figure compares the performance of simple backprojection and filtered backprojection in the absence and presence of added noise. Equation (11.9), the backprojection solution, represents simple matched filtering of the sinogram. The backprojected image is therefore the one with the maximum signal-to-noise ratio. Equation (11.11) is meanwhile a linear integral transformation that reverses in some sense the linear transformation in (11.3), with the objective of recovering the most accurate representation of the absorption function f(x, y) possible rather than maximizing the signal-to-noise ratio as in matched filtering. The form of (11.11) essentially compensates for the effects of the cylindrical geometry of the CAT scan. The effect is to recover more detail at the expense of SNR. The tradeoff is an acceptable one so long as the system is well-posed, i.e., stable in the presence of noise. Similar transformations and tradeoffs are often made with other problems in remote sensing.
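    The whole pipeline, Radon transform, ramp filter, and backprojection, can be sketched on a coarse grid. This minimal implementation (disk phantom, nearest-bin projections, and grid sizes all arbitrary) reconstructs a recognizable image whose peak falls at the phantom location:

```python
import numpy as np

n = 64
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="xy")
f = ((X - 0.3) ** 2 + (Y + 0.2) ** 2 < 0.15 ** 2).astype(float)  # disk phantom

thetas = np.linspace(0, np.pi, 120, endpoint=False)  # 180 degrees suffices

def project_index(th):
    """Nearest screen bin for s = x cos(theta) + y sin(theta) at every pixel."""
    s = X * np.cos(th) + Y * np.sin(th)
    return np.clip(np.round((s + 1) / 2 * (n - 1)).astype(int), 0, n - 1)

# Forward Radon transform: accumulate pixel values into screen bins
sino = np.zeros((len(thetas), n))
for i, th in enumerate(thetas):
    np.add.at(sino[i], project_index(th).ravel(), f.ravel())

# Ramp filter |w| applied to each projection in the Fourier domain
w = np.abs(np.fft.fftfreq(n))
filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * w, axis=1))

# Backprojection of the filtered sinogram
recon = np.zeros_like(f)
for i, th in enumerate(thetas):
    recon += filt[i][project_index(th)]

peak = np.unravel_index(np.argmax(recon), recon.shape)
```

    Dropping the ramp-filter step yields the blurry matched-filter (plain backprojection) image that Figure 11.4 contrasts with filtered backprojection.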

    An important example arises in synthetic aperture radar, where a digital filter must also be designed to compensate for the geometry inherent in that problem. The transformation will be discussed below in the context of range migration.

    11.5 Synthetic Aperture Radar (SAR)

    We already discussed some of the basic principles behind synthetic aperture radars in the context of virtual antenna arrays. Here, we delve into the details of SAR data processing. Like passive radar, the details are complicated, and the data processing burden can be enormous. The impact on planetary exploration, terrestrial resource management, and earthquake studies in particular justifies the effort.

    SAR is similar to planetary radar in that the two-dimensional locations of objects in the radar field of view are


  • Figure 11.5: Geometry of a SAR experiment.

    determined through a combination of range gating and Doppler analysis. In planetary radar, the Doppler shift is due to the motion of the target, whereas in synthetic aperture radar, it is due to the motion of the vehicle carrying the radar. Objects in the field of view must be stationary in either case for straightforward analysis of the radar echoes to be effective. Range and Doppler ambiguities must be avoided through the proper choices of radar beam shapes, frequencies, and interpulse periods, as usual. Objects to the left and right of a SAR will exhibit the same Doppler shifts, just like objects in the northern and southern hemispheres do in the case of planetary radar, but this potential ambiguity can be avoided in the former case by using side-looking radars.

    The radar range resolution in SAR can be enhanced through pulse compression, which most often takes the form of chirp sounding in practice. Image resolution in the direction of vehicle motion, meanwhile, is enhanced by incorporating long time series in the determination of the echo Doppler shifts, since long time series imply fine spectral (and therefore spatial) resolution. In this way, broad-beam antennas can be advantageous, since they permit targets on the ground to be observed for long periods of time as the vehicle moves. Specialists in the field refer to beamwidth compression, which is analogous to pulse compression in its attempt to make a small antenna perform like a large one. The price to be paid for using small antennas comes in the form of reduced sensitivity, and as usual, an optimal balance between resolution and sensitivity must be struck.

Signal processing can take several forms, depending on the sophistication of the imaging algorithm. Generally, it begins with matched filter decoding of the chirped transmitter waveform. This produces the standard data matrix encountered back in chapter 7, with each row representing data from a different pulse. The range resolution is determined by the compression ratio as before. For the moment, we can regard the data as being analyzed one range gate (column) at a time, although this will be reconsidered in the discussion of range migration below.

The processing which follows can be regarded as a generalization of coherent integration. In chapter 7, we saw that conventional spectral analysis is performed by coherently integrating (summing) the signal voltages in a given range bin, compensating for the phase progression of the signals in different Doppler frequency bins by multiplying by the appropriate factor of exp(−jφ(t)), where φ(t) = ωt. The procedure is carried out for each frequency of interest. Detection (squaring) of the results produces spectrograms. The same processing occurs here, except with φ(t) being the anticipated phase history of a target at a given spatial location of interest. During coherent integration, only signals arising from that location will add constructively. Images are formed by considering every candidate location separately. Data processing then amounts to correlating an appropriate complex function exp(jφ(t)) with the rows of the data matrix. The correlation can be performed efficiently using discrete Fourier transforms.
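The idea can be demonstrated with a toy calculation: only when the hypothesized phase history matches the target's actual phase history does the compensated sum add constructively. The quadratic phase model follows the side-looking derivation later in this section; the wavelength, speed, and ranges are assumptions for the sketch.

```python
import numpy as np

# Coherent integration against a hypothesized phase history (illustrative
# numbers). phi(t) = 2*pi*v**2*t**2/(lam*R) for a side-looking geometry.
lam, v = 0.03, 100.0
t = np.linspace(-0.5, 0.5, 501)

def phi(R):
    """Anticipated phase history for a target at closest range R."""
    return 2 * np.pi * v**2 * t**2 / (lam * R)

R_true = 5000.0
signal = np.exp(1j * phi(R_true))       # echoes from one target, one range gate

def focus(R_hyp):
    """Coherently integrate after compensating with exp(-j*phi(t))."""
    return abs(np.sum(signal * np.exp(-1j * phi(R_hyp))))

matched = focus(5000.0)     # phases cancel: all 501 samples add constructively
mismatched = focus(4000.0)  # residual quadratic phase: partial cancellation
```

The matched hypothesis integrates to the full sample count, while the mismatched one is strongly suppressed; an image is built by evaluating such sums over all candidate locations.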

The primary new task is the calculation of φ(t), which will be different for different range gates. That calculation is described below. The results can be tabulated and stored rather than calculated on the fly for improved computational speed.

The geometry of the experiment in question is shown in Figure 11.5. The radar is carried on a vehicle traveling at a velocity v and an altitude h. A particular point of interest on the ground P in its field of view has coordinates (x, 0, 0). The radar antenna in use is directional, but not too directional. Say its main beam is pointed in the direction of P at time t = 0, which will correspond to the midpoint of the observations. The coordinates of the vehicle at this time are (0, y, h). This makes the pointing angle (sometimes called the squint angle) of the directional antenna with respect to the vehicle velocity ψ = cos⁻¹(y/R), where R = (x² + y² + h²)^(1/2). The total length over which observations will be made is L. Presumably, L is limited by the finite beamwidth of the radar antenna along with the squint angle. The corresponding observing time is T = L/v.

Figure 11.6: (left) Doppler-shift profiles for targets at two different ranges of closest approach. The time of closest approach is t = 0 for both cases. (right) Corresponding phase history profile for one of the targets. Note that the phase is periodic in the interval [0, 2π].

During the time interval T, the range to the target P will change according to

    R′ = [x² + (y − vt)² + h²]^(1/2),  −T/2 < t < T/2
       = [R² − 2Rvt cos ψ + v²t²]^(1/2),  −T/2 < t < T/2

where we have used y = R cos ψ. Assuming L ≪ R, we can expand this in a binomial expansion as

    R′ ≈ R − vt cos ψ + (v²t²/2R) sin²ψ + (v³t³/2R²) cos ψ sin²ψ + ⋯

Let us consider the simplest case of a side-looking radar with a squint angle ψ = π/2 so that y = 0. In that case,

    R′ ≈ R + v²t²/2R − v⁴t⁴/8R³ + ⋯
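The truncated expansion above can be checked numerically. The speed and range below are illustrative (spaceborne) values assumed for the check.

```python
import math

# Numerical check of the truncated range expansion for a side-looking radar
# (squint angle pi/2, y = 0); the numbers are illustrative only.
v, R = 7000.0, 1000e3        # vehicle speed (m/s), range at t = 0 (m)
t = 1.0                      # s

exact = math.sqrt(R**2 + (v * t)**2)
approx = R + v**2 * t**2 / (2 * R) - v**4 * t**4 / (8 * R**3)

error = abs(exact - approx)  # of order R*(v*t/R)**6: sub-micron here
```

Even a full second from closest approach, the two-term correction reproduces the exact range to well under a wavelength, which is what matters for phase prediction.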

More important than the range to the target is the phase delay, which for a two-way radar path is just φ = 2R′(2π/λ):

    φ = φ₀ + 2πv²t²/(λR) − πv⁴t⁴/(2λR³) + ⋯

and the Doppler frequency, f_D = −(dφ/dt)/2π = −(2/λ)dR′/dt:

    f_D = −2v²t/(λR) + v⁴t³/(λR³) + ⋯

Figure 11.6 shows the time history of the Doppler shifts of two targets with different ranges of closest approach, R = Rm. The time of closest approach is t = 0 for both cases. Near that time, the curves are approximately linear. Cubic and higher order dependencies on time become more evident with increasing |t|. Associated with these curves are equivalent phase history curves reflecting φ(t). These are used in coherent integration to construct the actual images. Note that the phase wraps so as to be periodic on the interval [0, 2π].
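The phase and Doppler histories behind Figure 11.6 can be generated directly from the expressions above. The wavelength, speed, and range below are assumptions chosen for illustration.

```python
import numpy as np

# Phase and Doppler histories for a side-looking SAR near closest approach,
# wrapped onto [0, 2*pi) as in Figure 11.6; all numbers are illustrative.
lam, v, R = 0.24, 7000.0, 1000e3
t = np.linspace(-1.0, 1.0, 2001)

phi = 2 * np.pi * v**2 * t**2 / (lam * R)   # quadratic phase history
phi_wrapped = np.mod(phi, 2 * np.pi)        # periodic on [0, 2*pi)
fd = -2 * v**2 * t / (lam * R)              # leading-order Doppler shift, Hz
```

The Doppler shift passes linearly through zero at closest approach (positive while approaching, negative while receding), while the wrapped quadratic phase produces the characteristic accelerating fringe pattern of the figure.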


11.5.1 Image resolution

The level of detail in a SAR image is limited by the granularity of the time-Doppler map. Given a total observing time T = L/v, the width of the Doppler bins will be δf = 1/T = v/L. The longer the observing time, the longer the synthetic antenna array, and the finer the frequency resolution. Near the time of closest approach, we have

    δf ≈ 2v²δt/(λR)

Equating δf with the Doppler bin width v/L, the corresponding along-track displacement is

    δy = vδt = λR/(2L)

which is the same estimate obtained earlier on the basis of synthetic aperture size arguments. Note that the present treatment has been made in the limit R ≫ L, although a more general analysis can be performed. Note also that this is the resolution of the image in the direction parallel to the vehicle motion. The transverse resolution is defined by the pulse width and the compression ratio as in a conventional radar experiment.
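The along-track resolution estimate is a one-line calculation. The wavelength, range, and aperture length below are assumed, illustrative spaceborne L-band values.

```python
# Along-track image resolution dy = lam*R/(2*L); illustrative L-band numbers
lam = 0.24      # wavelength, m (assumed)
R = 1000e3      # range of closest approach, m
L = 20e3        # synthetic aperture length, m

dy = lam * R / (2 * L)   # along-track resolution, m
```

For these numbers the resolution is 6 m, vastly finer than the real-aperture beam footprint at a 1000 km range.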

    11.5.2 Range delay due to chirp

We have seen that pulse compression using linear FM chirps introduces a systematic time delay due to a skewing of the ambiguity function. The magnitude of the delay is given by |δτ| = |f_D t_p/Δf|, where f_D is the Doppler shift, t_p is the pulse length, and Δf is the frequency excursion of the chirp. This can also be written as |δτ| = |f_D t_p t_eff|, where t_eff = 1/Δf is the effective pulse length after compression. Since it is t_eff that determines the transverse range resolution of the images, it is important that the artificial range delay |δτ| be small by comparison to t_eff.

The Doppler shift of a target on the ground will be f_D = (2v/λ) cos ψ, where v is the vehicle speed and ψ is the angle the radar ray makes with the vehicle velocity vector. Combining these factors, the range delay introduced by chirping can be neglected if the following ratio is small compared to unity:

    |δτ|/t_eff = (2v/λ) t_p cos ψ

As an example, consider an L-band radar operating at 1.07 GHz with a 4 μs pulse length on a spacecraft traveling at 7 km/s observing a target at an angle of 60°. The ratio in this case would be approximately 0.1 and small enough for the artificial delay to be neglected.
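The example can be reproduced directly from the ratio above, using the numbers given in the text:

```python
import math

# The numerical example from the text: L-band, 1.07 GHz, 4 us pulse,
# 7 km/s spacecraft, 60 degree angle between ray and velocity
c = 3e8
lam = c / 1.07e9              # ~0.28 m
v = 7000.0                    # m/s
tp = 4e-6                     # s
psi = math.radians(60.0)

ratio = (2 * v / lam) * tp * math.cos(psi)   # ~0.1: chirp delay negligible
```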

    11.5.3 Range migration

The preceding discussion assumed that individual scatterers in the radar field of view remain within their respective range gates as the vehicle traverses its path. Of course, the finite Doppler shift of the echoes implies a time rate of change of range. There is not necessarily a contradiction, as the range of a target need only undergo a change of a fraction of a wavelength for there to be a discernible Doppler shift. Since the range bins are likely to be a large number of wavelengths in dimension, Doppler-shifted targets may remain within a range gate for a considerable period of time. For large path lengths L, however, targets could easily be expected to migrate from one range bin to another. This behavior defeats the effects of coherent and incoherent integration and has the potential to limit the resolution of a SAR image.

Consider a SAR with a squint angle of 90°. Take the distance of closest approach to be R. At times t = ±L/2v, the times corresponding to the beginning and end of data acquisition, the range will be at its greatest:

    R′ ≈ R + L²/8R

making the maximum range excursion L²/8R. In a spaceborne SAR experiment with L = 20 km and R = 1000 km, the range migration will be 50 m, which could well span several range bins.
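The spaceborne example works out as follows, using the figures quoted in the text:

```python
# Maximum range excursion L**2/(8*R) for the spaceborne example in the text
L = 20e3      # synthetic aperture length, m
R = 1000e3    # range of closest approach, m

excursion = L**2 / (8 * R)   # 50 m: likely spans several range bins
```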


Strategies for coping with range migration, called range migration algorithms or RMAs, occupy center stage in contemporary SAR methodology research. An important aspect of the algorithms is that the radar data are no longer segregated into range bins and are instead regarded as part of a continuous time series. Since the phase history of echoes from a target depends uniquely on its two-dimensional location, range binning is not essential and only complicates matters in the event range migration is occurring.

In RMA methodologies, data are regarded as being functions of two independent variables: time and satellite position. It is supposed that the image, which is a function of two spatial coordinates, is related to the data by some linear (integral) transformation. The objective is to find the optimal transformation and then apply it expediently. Note that the same strategy has been used many times throughout this text. Matched filtering is one example, where the desired integral transformation is a convolution with the impulse response function. Pulse compression is moreover nothing more than matched filtering. In the analysis of passive radar, the desired linear transformation is the one given by Q. In constructing images of the radar brightness from measured interferometric visibilities, the required transformation is just a Fourier transform. Note further that, in each case, the transformation needed to invert the data is similar in form to the transformation that describes how the data came about in the first place (the forward transformation).

What is the relationship between the image and the data in SAR? We assume that the data have been processed for pulse compression but not sorted into rows and columns. In general, we can write formally

    d(s, t) = ∫ e^{iω(t − 2R_{s,x}/c)} A(ω, s, x) V(x) dω d²x    (11.12)

Here, d(s, t) represents the data or the signal registered by the radar receiver at time t and position s along the vehicle trajectory. The surface reflectivity in the plane is represented by V(x). The exponential in the integral reflects the phase that the received signal will have in view of its Doppler frequency ω and the distance R_{s,x} between x and the vehicle location. Finally, A(ω, s, x) is a factor that gives the signal amplitude as a function of geometry. Its role is mainly to specify at which Doppler frequency the signal power will be concentrated. The effects of frequency chirps (and a great many other things) must be accounted for here. The received signal, then, comes from the integral over all locations in the plane and all possible Doppler frequencies. This is the forward transformation.

We can propose that an estimate of the surface reflectivity can be formed from a similar linear transformation operating on the data:

    V(x) = ∫ e^{−iω(t − 2R_{s,x}/c)} Q(ω, s, x) d(s, t) dω ds dt    (11.13)

where Q(ω, s, x) is a function that must be determined. To proceed, substitute (11.12) into (11.13), distinguishing the two frequency variables as ω and ω′. The time integral can be performed immediately, yielding a factor of δ(ω − ω′). After performing the ω′ integral, we are left with:

    V(x) = ∫ K(x, x′) V(x′) d²x′,  K(x, x′) ≡ ∫ e^{i2k(R_{s,x} − R_{s,x′})} Q A dω ds

where k = ω/c. Now, if the kernel function K can be made to approximate a Dirac delta function, δ(x − x′), through the right choice of Q, then the estimate of the surface reflectivity will be a good one. In fact, it is nearly in the correct form already. This can be appreciated with the application of the method of stationary phase. The exponential term in the kernel is highly oscillatory, and the greatest contribution to the integral will come from regions in ω–s space where its argument is small. This will be where the two ranges are nearly the same, which is also where x and x′ are nearly the same. Taylor expanding the argument in this region gives

    2k(R_{s,x} − R_{s,x′}) ≈ 2k(x − x′) · ∇R ≡ Ξ · (x − x′)

where Ξ ≡ 2k∇R is a new, two-dimensional auxiliary variable. Finally, transform from ω–s space to Ξ space with the aid of the Jacobian of the transformation:

    K(x, x′) = ∫ e^{iΞ·(x − x′)} Q A |∂(ω, s)/∂Ξ| d²Ξ


Clearly, making Q⁻¹ = A|∂(ω, s)/∂Ξ| yields a kernel K which performs like a Dirac delta function. Combined with (11.13), this specifies the transformation for converting raw SAR data into surface imagery without ever having to perform range binning and without the distorting effects of range migration.
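The spirit of this inversion can be illustrated with a deliberately simplified toy: a time-domain backprojection that correlates the raw data against the expected two-way phase of each candidate pixel, with no range binning at any stage. This is only a sketch; it assumes a single frequency, a 1-D aperture, and takes A and the Jacobian factor as constants, and every number in it is an assumption.

```python
import numpy as np

# Toy imaging without range binning, in the spirit of the Q-transformation:
# backprojection over candidate pixel positions (heavily simplified).
lam = 0.03
k = 2 * np.pi / lam
s = np.linspace(-100.0, 100.0, 401)   # platform positions along track, m
h = 5000.0                            # standoff range, m

def two_way_phase(x):
    """Phase 2*k*R(s, x) for a scatterer at along-track position x."""
    return 2 * k * np.sqrt(h**2 + (s - x)**2)

x_true = 12.0
data = np.exp(-1j * two_way_phase(x_true))   # idealized point-target echoes

# Correlate the data against the expected phase history of each pixel
x_grid = np.linspace(-30.0, 30.0, 121)
image = np.array([abs(np.sum(data * np.exp(1j * two_way_phase(x))))
                  for x in x_grid])
x_hat = float(x_grid[image.argmax()])        # peaks at the true position
```

Only at the true target position do all the platform positions contribute in phase, so the image peaks there; practical RMAs accomplish the same inversion far more efficiently in the transform domain.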

    11.5.4 InSAR

We have already discussed the issues underlying interferometry. InSAR adapts interferometric methods to synthetic aperture imaging. This can be done either by flying two radar units in close proximity, as has been done with the space shuttle, or by treating SAR data from successive passes (orbits) as the two components of the interferometer. The latter approach requires precise timing and navigation and only works when the medium under study remains fixed in space between passes. Low coherence in InSAR interferograms may be indicative of massive disruptions in surface topography, as occurs during earthquakes.

    11.6 Weather radar

    11.7 GPR

    11.8 References

    11.9 Problems
