SURE 440 ANALYTICAL PHOTOGRAMMETRY
LECTURE NOTES
Robert Burtch, Professor
August 2000
SURE 440 – Analytical Photogrammetry Lecture Notes

TABLE OF CONTENTS

Coordinate Transformations
   Basic Principles
   General Affine Transformation
   Orthogonal Affine Transformation
   Isogonal Affine Transformation
   Example of an Isogonal Affine Transformation
   Rigid Body Transformation
   Polynomial Transformations
   Projective Transformation
   Transformations in Three Dimensions
Corrections to Photo Coordinates
   Analytical Photogrammetry Instrumentation
   Ground Targets
   Abbe's Comparator Principle
   Basic Analytical Photogrammetric Theory
   Interior Orientation
   Film Deformation
   Lens Distortion
      Seidel Aberration Distortion
      Decentering Distortion
   Atmospheric Refraction
   Earth Curvature
   Example
Projective Equations
   Introduction
   Direction Cosines
   Sequential Rotations
   Derivation of the Gimbal Angles
   Linearization of the Collinearity Equation
Numerical Resection and Orientation
   Introduction
   Case I
   Example Single Photo Resection – Case I
   Case II
   Case III
Principles of Airborne GPS
   Introduction
   Advantages of Airborne GPS
   Error Sources
   Camera Calibration
   GPS Signal Measurements
   Flight Planning for Airborne GPS
   Antenna Placement
   Determining the Exposure Station Coordinates
   Determination of Integer Ambiguity
   GPS-Aided Navigation
   Processing Airborne GPS Observations
   Strip Airborne GPS
   Combined INS and GPS Surveying
   Texas DOT Accuracy Assessment Project
   Economics of Airborne-GPS
References
COORDINATE TRANSFORMATIONS
Basic Principles
A coordinate transformation is a mathematical process whereby coordinate values expressed in one system are converted (transformed) into coordinate values for a second coordinate system. This does not change the physical location of the point. An example is when the field surveyor sets up an arbitrary coordinate system, such as orienting the axes along two perpendicular roads. Later, the office may want to place this survey onto the state plane coordinate system. This can be done by a simple transformation. The geometry is shown in figure 1.
Figure 1. Geometry of simple linear transformation in two dimensions.

In figure 1, a point P can be expressed in a U, V coordinate system as U_P and V_P. Likewise, in the X, Y coordinate system, the point is defined as X_P and Y_P. Assume for the moment that both systems share the same origin. The X-axis is oriented at some angle α from the U-axis, and the same applies for the Y and V axes. The X-coordinate of the point can be shown to be
   X_P = de + eP

From triangle feP,

   cos α = U_P/eP   ⟹   eP = U_P/cos α

and, from the geometry,

   tan α = de/Y_P   ⟹   de = Y_P tan α

The X_P coordinate can therefore be written as

   X_P = Y_P tan α + U_P/cos α

Multiplying through by cos α,

   X_P cos α = Y_P sin α + U_P

Then,

   U_P = X_P cos α − Y_P sin α
In a similar fashion, the VP coordinate transformation can also be developed.
   V_P = ab + bP

But, from the geometry,

   tan α = bc/Y_P   ⟹   bc = Y_P tan α

   sin α = ab/(X_P − bc)   ⟹   ab = (X_P − bc) sin α

Therefore, ab becomes

   ab = (X_P − Y_P tan α) sin α = X_P sin α − Y_P sin²α/cos α

and bP can be shown to be

   cos α = Y_P/bP   ⟹   bP = Y_P/cos α

The V_P coordinate then becomes

   V_P = X_P sin α − Y_P sin²α/cos α + Y_P/cos α
       = X_P sin α + Y_P (1 − sin²α)/cos α
       = X_P sin α + Y_P cos α
In these derivations, the angle of rotation (α) is a rotation to the right. It can easily be shown that the conversion from U, V to X, Y takes a similar form. Since the angle is in the opposite direction, one can insert a negative value for α and arrive at the following form of the transformation (recognizing that sin(−α) = −sin α):
   X_P = U_P cos α + V_P sin α
   Y_P = −U_P sin α + V_P cos α
This can be shown in matrix form as:
   | X_P |   |  cos α   sin α | | U_P |
   | Y_P | = | −sin α   cos α | | V_P |
Next, we will take these transformation equations and expand them into different forms all related to what is called the affine transformation.
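The rotation equations above are easy to exercise numerically. A minimal sketch (the point and angle below are arbitrary illustration values):

```python
import math

def uv_to_xy(u, v, alpha):
    """Rotate (U, V) coordinates into the (X, Y) system; alpha in radians."""
    x = u * math.cos(alpha) + v * math.sin(alpha)
    y = -u * math.sin(alpha) + v * math.cos(alpha)
    return x, y

def xy_to_uv(x, y, alpha):
    """Inverse conversion: U = X cos(a) - Y sin(a), V = X sin(a) + Y cos(a)."""
    u = x * math.cos(alpha) - y * math.sin(alpha)
    v = x * math.sin(alpha) + y * math.cos(alpha)
    return u, v

# Round trip: transforming and converting back must recover the point, and a
# pure rotation preserves the distance from the origin.
alpha = math.radians(30.0)
u0, v0 = 100.0, 50.0
x, y = uv_to_xy(u0, v0, alpha)
u1, v1 = xy_to_uv(x, y, alpha)
```

The round trip recovering the original point is exactly the statement that the two conversions differ only in the sign of α.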
General Affine Transformation

The general affine transformation is normally shown as

   x' = a₁x + b₁y + c₁
   y' = a₂x + b₂y + c₂        (1)

which provides a unique solution if a₁b₂ − a₂b₁ ≠ 0.
This is a two-dimensional linear transformation that is used in photogrammetry for the following:
a) Transforming comparator coordinates to photo coordinates and correcting for film distortion.
b) Connecting stereo models.
c) Transforming model coordinates to survey coordinates.
The property of the affine transformation is that it will carry parallel lines into parallel lines. In other words, two lines that are parallel to each other prior to the transformation will remain parallel after the transformation. It will not preserve orthogonality.
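This parallel-line property can be verified numerically. A small sketch with arbitrary (hypothetical) affine coefficients:

```python
# An arbitrary, invertible 2-D affine transformation (illustrative values).
def affine(x, y):
    return (1.2 * x + 0.3 * y + 5.0,
            -0.1 * x + 0.9 * y - 2.0)

def direction_after(px, py, dx, dy):
    """Direction vector of a line through (px, py) with direction (dx, dy)
    after the affine transformation (the translation cancels in the
    difference)."""
    x0, y0 = affine(px, py)
    x1, y1 = affine(px + dx, py + dy)
    return (x1 - x0, y1 - y0)

# Two parallel lines: same direction vector, different base points.
d1 = direction_after(0.0, 0.0, 2.0, 1.0)
d2 = direction_after(3.0, 7.0, 2.0, 1.0)

# Zero cross product <=> the transformed lines are still parallel.
cross = d1[0] * d2[1] - d1[1] * d2[0]

# Orthogonality, however, is generally lost: the images of the two unit axes
# are no longer perpendicular (non-zero dot product).
e1 = direction_after(0.0, 0.0, 1.0, 0.0)
e2 = direction_after(0.0, 0.0, 0.0, 1.0)
dot = e1[0] * e2[0] + e1[1] * e2[1]
```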
Figure 2. Physical interpretation of the affine transformation.

Figure 2 shows the physical interpretation involved in this transformation. The x and y axes represent the original axis system, while x', y' represent the newly transformed coordinate system. The transformation can then be written as

   x' = C_x x cos α + C_y y sin α + Δx'
   y' = −C_x x sin(α + ε) + C_y y cos(α + ε) + Δy'        (2)

where: Δx', Δy' are the translation elements in moving from the center of the original coordinate system to the center of the transformed coordinate system; C_x, C_y are scale factors in the x and y directions; α is the angle of rotation; and ε is the angle of non-orthogonality between the axes of the transformed coordinate system.

Note that there are six parameters in this transformation (C_x, C_y, α, ε, Δx', and Δy'). When comparing equations (1) and (2), one can see the correspondences collected in equation (3) below.
   a₁ = C_x cos α          b₁ = C_y sin α          c₁ = Δx'
   a₂ = −C_x sin(α + ε)    b₂ = C_y cos(α + ε)     c₂ = Δy'        (3)

Orthogonal Affine Transformation

To the general affine case, one can impose the condition of orthogonality, i.e., ε → 0. This results in a five-parameter transformation (C_x, C_y, α, Δx', and Δy'). This transformation is useful when one takes into account the difference in the magnitude of film shrinkage along the length of the film versus its width. The transformation is shown as:

   x' = C_x x cos α + C_y y sin α + Δx'
   y' = −C_x x sin α + C_y y cos α + Δy'        (4)

Note that this transformation is non-linear in its parameters.

Isogonal Affine Transformation

To the general case of the affine transformation one can impose two conditions: orthogonality (ε → 0) and uniform scale (C = C_x = C_y). The isogonal affine transformation is also called the Helmert transformation, similarity transformation, Euclidean transformation, and conformal transformation. It is shown as:

   x' = C x cos α + C y sin α + Δx'
   y' = −C x sin α + C y cos α + Δy'        (5)

If one recalls the equalities expressed in equation (3), we can see that, for the isogonal case,

   C cos α = a₁ = b₂
   C sin α = b₁ = −a₂

Therefore, equation (5) can be expressed as

   x' = a₁x + b₁y + c₁
   y' = −b₁x + a₁y + c₂

or, as normally shown:

   x' = ax + by + c
   y' = −bx + ay + d        (6)

In this form (6), the transformation is linear. The back solution can be shown to be:

   x = [a(x' − c) − b(y' − d)] / (a² + b²)
   y = [b(x' − c) + a(y' − d)] / (a² + b²)        (7)

Example of an Isogonal Affine Transformation

The following are the measured comparator values:

   x_UL = 70.057 mm      y_UL = −40.014 mm
   x_LR = 80.067 mm      y_LR = −50.026 mm
   x_PT = 76.0985 mm     y_PT = −41.9810 mm

The "true" photo coordinates of the reseau are:

   x'_UL = 70.107 mm     y'_UL = −39.843 mm
   x'_LR = 80.133 mm     y'_LR = −49.820 mm

Recall the transformation formulas given in equation (6).
Written for the two reseau corners, the transformation formulas are:

   x'_UL = a x_UL + b y_UL + c
   y'_UL = −b x_UL + a y_UL + d
   x'_LR = a x_LR + b y_LR + c
   y'_LR = −b x_LR + a y_LR + d

The unknowns are a, b, c, and d, while the measured values are x_UL, y_UL, x_LR, and y_LR. The "true" values are x'_UL, y'_UL, x'_LR, and y'_LR. Differentiating the transformation formulas with respect to the parameters gives, for example,

   ∂x'_UL/∂a = x_UL     ∂x'_UL/∂b = y_UL     ∂x'_UL/∂c = 1     ∂x'_UL/∂d = 0
The design matrix (B) is shown as

       |  x_UL    y_UL   1   0 |   |  70.057   −40.014   1   0 |
   B = |  y_UL   −x_UL   0   1 | = | −40.014   −70.057   0   1 |
       |  x_LR    y_LR   1   0 |   |  80.067   −50.026   1   0 |
       |  y_LR   −x_LR   0   1 |   | −50.026   −80.067   0   1 |

The discrepancy vector (f) and the vector containing the parameters (Δ) are shown as:

   f = [ x'_UL, y'_UL, x'_LR, y'_LR ]ᵀ = [ 70.107, −39.843, 80.133, −49.820 ]ᵀ

   Δ = [ a, b, c, d ]ᵀ

The normal equations are found using the following relationship, where N is the normal coefficient matrix:

   N = BᵀB
The normal coefficient matrix and its inverse are:

       | 15422.429       0       150.124   −90.040 |
   N = |     0       15422.429   −90.040  −150.124 |
       |  150.124    −90.040        2         0    |
       |  −90.040   −150.124        0         2    |

          |  0.009978      0       −0.748971   0.449211 |
   N⁻¹ =  |     0       0.009978    0.449211   0.748971 |
          | −0.748971   0.449211   76.942775      0     |
          |  0.449211   0.748971       0      76.942775 |

The constant vector (t) and the solution (Δ) are computed as:

   t = Bᵀf = [ 15414.068, −33.776, 150.240, −89.663 ]ᵀ

   Δ = N⁻¹t = [ a, b, c, d ]ᵀ = [ 0.999051, −0.002547, 0.014579, −0.045424 ]ᵀ

The transformed coordinates of the point are then computed as:

   x'_PT = a x_PT + b y_PT + c
         = (0.999051)(76.0985) + (−0.002547)(−41.9810) + 0.014579 = 76.148 mm

   y'_PT = −b x_PT + a y_PT + d
         = (0.002547)(76.0985) + (0.999051)(−41.9810) − 0.045424 = −41.793 mm

Rigid Body Transformation

To the general affine transformation, one further set of conditions can be imposed: orthogonality together with unit scale (C_x = C_y = 1). In this case the transformation can be shown with only three parameters (α, Δx', and Δy') as:
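The example above can be reproduced in a few lines of linear algebra; a sketch using NumPy, with all numbers taken directly from the example:

```python
import numpy as np

# Design matrix B and observation vector f for the isogonal (Helmert) model
# x' = ax + by + c,  y' = -bx + ay + d, built from the two reseau corners.
B = np.array([
    [70.057, -40.014, 1.0, 0.0],
    [-40.014, -70.057, 0.0, 1.0],
    [80.067, -50.026, 1.0, 0.0],
    [-50.026, -80.067, 0.0, 1.0],
])
f = np.array([70.107, -39.843, 80.133, -49.820])

# Normal-equation solution N delta = B^T f (here 4 equations in 4 unknowns,
# so the least squares solution is exact and the residuals are zero).
N = B.T @ B
delta = np.linalg.solve(N, B.T @ f)
a, b, c, d = delta

# Apply the fitted transformation to the measured point.
x_pt, y_pt = 76.0985, -41.9810
x_t = a * x_pt + b * y_pt + c
y_t = -b * x_pt + a * y_pt + d
```

With more than two control points the same code applies unchanged, except that `np.linalg.lstsq(B, f, rcond=None)` would be the natural solver and the residuals would no longer vanish.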
   x' = x cos α + y sin α + Δx'
   y' = −x sin α + y cos α + Δy'        (8)
Polynomial Transformations

A polynomial can also be used to perform a transformation. This is given as

   x' = a₀ + a₁x + a₂y + a₃x² + a₄y² + a₅xy + ⋯
   y' = b₀ + b₁x + b₂y + b₃x² + b₄y² + b₅xy + ⋯
An alternative from Mikhail [Ghosh, 1979] can also be used.
   x' = A₀ + A₁x + A₂y + A₃(x² − y²) + A₄(2xy) + ⋯
   y' = B₀ − A₂x + A₁y − A₄(x² − y²) + A₃(2xy) + ⋯
Projective Transformation

The projective equations are frequently used in photogrammetry. Shown here, without derivation, is the form of the 2-D projective transformation.

   x' = (a₁x + a₂y + a₃) / (d₁x + d₂y + 1)
   y' = (b₁x + b₂y + b₃) / (d₁x + d₂y + 1)
Transformations in Three Dimensions The developments that have already been presented represent the transformation in 2-D space. Surveying measurements are increasingly being performed in a 3-D mode. The approach is basically the same as above, except for the addition of one more axis about which the transformation takes place. Discussion on the use of the projective equations will be given in a later section.
Instead of using the projective equations, polynomials may be used to perform the 3-D transformation. Ghosh [1979] gives the general form of this type of transformation:

   x' = a₀ + a₁x + a₂y + a₃z + a₄x² + a₅y² + a₆z² + a₇xy + a₈yz + a₉zx + ⋯
   y' = b₀ + b₁x + b₂y + b₃z + b₄x² + b₅y² + b₆z² + b₇xy + b₈yz + b₉zx + ⋯
   z' = c₀ + c₁x + c₂y + c₃z + c₄x² + c₅y² + c₆z² + c₇xy + c₈yz + c₉zx + ⋯
This transformation is not conformal; therefore, it should only be used where the rotation angles are very small. Mikhail presents another form of the 3-D polynomial, which is conformal in the three planes. This is given as [Ghosh, 1979]:
   x' = A₀ + A₁x + A₂y + A₃z + A₅(x² − y² − z²) + 0·yz + 2A₇zx + 2A₆xy + ⋯
   y' = B₀ − A₂x + A₁y + A₄z + A₆(−x² + y² − z²) + 2A₇yz + 0·zx + 2A₅xy + ⋯
   z' = C₀ − A₃x − A₄y + A₁z + A₇(−x² − y² + z²) + 2A₆yz + 2A₅zx + 0·xy + ⋯
The 0's here indicate that the coefficients of the terms yz in x', zx in y', and xy in z' are zero. A polynomial projective transformation can be shown, without derivation, as [Ghosh, 1979]:
   x' = (a₁x + a₂y + a₃z + a₄) / (d₁x + d₂y + d₃z + 1)
   y' = (b₁x + b₂y + b₃z + b₄) / (d₁x + d₂y + d₃z + 1)
   z' = (c₁x + c₂y + c₃z + c₄) / (d₁x + d₂y + d₃z + 1)

A solution is possible provided that

       | a₁  a₂  a₃  a₄ |
   det | b₁  b₂  b₃  b₄ | ≠ 0
       | c₁  c₂  c₃  c₄ |
       | d₁  d₂  d₃  d₄ |
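This solvability condition can be tested before attempting a solution; a minimal sketch with hypothetical coefficient values:

```python
import numpy as np

# Hypothetical coefficient matrix [a1..a4; b1..b4; c1..c4; d1..d4] of a 3-D
# projective transformation. A solution exists only when the determinant of
# this matrix is non-zero.
coeffs = np.array([
    [1.0, 0.1, 0.0, 5.0],
    [0.0, 1.1, 0.2, -3.0],
    [0.1, 0.0, 0.9, 2.0],
    [1e-6, 2e-6, 1e-6, 1.0],
])
solvable = abs(np.linalg.det(coeffs)) > 1e-12

# A degenerate case for contrast: two identical rows force the determinant
# to zero, and the transformation cannot be solved.
degenerate = coeffs.copy()
degenerate[1] = degenerate[0]
det_degenerate = np.linalg.det(degenerate)
```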
CORRECTIONS TO PHOTO COORDINATES
Analytical Photogrammetry Instrumentation

Analytical photogrammetry is performed on specialized instruments that have a very high cost due to the fact that there is a limited market. With the onset of digital photogrammetry, the instrumentation is cheaper (being, essentially, a computer), but the software still remains expensive for these specialized applications. The design characteristics of analytical instrumentation include [Merchant, 1979]:
• High accuracy • High reliability • High measuring efficiency • Low first cost • Low cost of maintenance
In addition, operational efficiency becomes an important consideration. This factor involves the training required for the operator of the equipment. If the instrument requires an individual with a basic theoretical background in photogrammetry along with experience, then there will be a limited pool from which one can draw operators. Operational efficiency also involves the comfort of the operator when operating the equipment. One of the advantages of digital photogrammetry is that it has the capability, at least theoretically, to completely automate the whole process, so that an individual with no basic understanding of photogrammetric principles can carry it out.

There are various kinds of instrumentation that can be used in analytical photogrammetry. At the low end, precision analog, or semi-analytical (computer-aided), stereoplotters can be used in either a monoscopic or stereoscopic mode. When used on a stereoplotter, it is important to put all of the elements in their zero positions (ω' = ω" = ϕ' = ϕ" = κ' = κ" = by' = by" = bz' = bz" = 0) [Ghosh, 1979]. The base (bx), scale, and Z-column readings should be at some realistic value. Analytical plotters can also be used for analytical photogrammetric measurements. These instruments are generally linked to analytical photogrammetry software that helps the operator complete the photo measurements. Comparators are designed specifically for precise photo measurements for analytical photogrammetry. Comparators can be either monoscopic or stereoscopic. The photographs are placed on the stages and all points that are imaged on the photo are measured. The last type of instrument is the digital or softcopy plotter. Photos are scanned (or captured directly in digital form) and points are measured. With autocorrelation techniques, the whole process of aerotriangulation can be automated, with the solution containing more points than could be measured manually.
To achieve the high accuracy demanded by many analytical photogrammetric applications, it is important that the instrument upon which the measurements are made is well calibrated and maintained. There are many systematic error sources associated with the comparator. They are:

"a) Errors of the instrument system,
    - scaling and periodic errors (of the x, y measuring systems involving scales, spindles, coordinate counter, etc.);
    - affinity errors (being the scale difference between the x and y directions);
    - errors of rectilinearity (bending) of the guide rails;
    - lack of orthogonality between the x and y axes (also known as 'rectangularity error').
b) Backlash and tracking errors.
c) Dynamic errors (e.g., microscope velocity does not drop to zero at points to be approached during the operation).
d) Errors of automation in the system,
    - digital resolution (smallest incremental interval);
    - errors due to deviation of the direction, because the control system may not provide for a continuously variable scanning direction." [Ghosh, 1979, p.30]
One could determine the corrections to each of these error sources, although from a practical perspective these errors are accounted for by transforming the photo measurements to the “true” photo system, which is based on calibration.
Ground Targets

Ground targets can be one of three different types. Signalized points are targeted on the ground prior to the flight; several different target designs are used in photogrammetry. Detail points are well-defined physical features that are imaged on the photography. These can be things like the intersections of roads (for small-scale mapping), intersections of sidewalks, manholes, etc. The last type of control point is the artificial point that is added to the photography after the film is processed. Using a point transfer instrument, such as the PUG by Wild, points are marked on the emulsion of the film.
Abbe's Comparator Principle

Abbe's comparator principle states that the object that is to be measured and the measuring instrument must be in contact or lie in the same plane. The design is based on the following requirements:

"i) To exclusively base the measurement in all cases on a longitudinal graduation with which the distance to be measured is directly compared; and
ii) To always design the measuring apparatus in such a way that the distance to be measured will be the rectilinear extension of the graduation used as a scale." [Manual of Photogrammetry, ASP, in Ghosh, 1979, p.7]
Basic Analytical Photogrammetric Theory
Analytical photogrammetry can be broken down into three fundamental categories: First Order Theory, Second Order Theory, and Third Order Theory. First Order Theory is the basic collinearity concept, where the light rays from the object space pass through the atmosphere and the camera lens to the film in a straight line. Second Order Theory corrects for the most significant errors that are not accounted for in First Order Theory. The items normally covered include lens distortion, atmospheric refraction, film deformation, and earth curvature. Third Order Theory consists of all the other sources of error in the imposition of the collinearity condition that are not included in Second Order Theory. These errors are usually not accounted for except under special circumstances. They include platen unflatness, transient thermal gradients across the lens cone, etc.
Figure 3. Examples of Abbe's comparator principle with simple measurement systems.

Interior Orientation

The first phase of analytical photogrammetric processing is the determination of the interior orientation of the photography. The photogrammetric coordinate system is shown in figure 4. The point, p, is imaged on the photograph with coordinates x_p, y_p, 0. The principal point is determined through camera calibration, and it is generally reported with respect to the center of the photograph as defined by the intersection of opposite fiducial marks (the indicated principal point). It has coordinates x_o, y_o, 0. The perspective center is the location of the lens elements, and it has coordinates x_o, y_o, f. The vector from the perspective center to the position on the photo is given as

   a = [ x_p − x_o,  y_p − y_o,  −f ]ᵀ
Interior orientation involves the determination of film deformation, lens distortion, atmospheric refraction, and earth curvature. The purpose is to correct the image rays such that the line from the object space to the image space is a straight line, thereby fulfilling the basic assumption used in the collinearity condition.
Figure 4. Photographic coordinate system.
Film Deformation
When film is processed and used, it is susceptible to dimensional change due to the tension applied as it is wound during both the picture-taking and processing stages. In addition, the introduction of water-based chemicals to the emulsion during processing, and the subsequent drying of the film, may cause the emulsion to change dimensionally. Therefore, these effects need to be compensated. The simplest approach is to use an appropriate transformation model from the previous section. One of the problems with this approach is that unmodelled distortion can still be present when only four (or fewer) fiducial marks are employed. To overcome this problem, reseau photography is commonly employed for applications requiring a higher degree of accuracy. A reseau consists of a grid of targets that are fixed within the camera and imaged on the film. One simple approach is to put a piece of glass in front of the film with the targets etched on its surface. The reseau grid is calibrated so that the positions of the targets are accurately known. By observing the reseau targets that surround the imaged points and using one of the transformation models discussed earlier, the results should more accurately depict the dimensional changes that occur due to film deformation. For example, the isogonal affine model can be used. It will have the following form, taking into consideration the coordinates of the principal point (x_o, y_o):
   | x |   | Δx |   |  cos α   sin α | | x' |   | x_o |
   | y | = | Δy | + | −sin α   cos α | | y' | − | y_o |
In its linear form it looks like:
   | x |   |  a   b | | x' |   | c |   | x_o |
   | y | = | −b   a | | y' | + | d | − | y_o |
Using 4 fiducials, an 8-parameter projective transformation can be used. Its advantage is that linear scale changes can be found in any direction. The correction for film deformation is given as
   x = (a₁x' + a₂y' + a₃) / (c₁x' + c₂y' + 1) − x_o
   y = (b₁x' + b₂y' + b₃) / (c₁x' + c₂y' + 1) − y_o
Measurement of the four fiducials yields 8 observations. Therefore, this model provides a unique solution. Another approach to compensating for film deformation is to use a polynomial. One model, used by the U.S. Coast and Geodetic Survey (now National Geodetic Survey) when four fiducials are used, is shown as:
   Δx = x − x' = a₀ + a₁x + a₂y + a₃xy
   Δy = y − y' = b₀ + b₁x + b₂y + b₃xy
This model can be expanded to an eight fiducial observational scheme as:
   Δx = x − x' = a₀ + a₁x + a₂y + a₃xy + a₄x² + a₅y² + a₆x²y + a₇xy²
   Δy = y − y' = b₀ + b₁x + b₂y + b₃xy + b₄x² + b₅y² + b₆x²y + b₇xy²
Lens Distortion

The effect of lens distortion is to move the image from its theoretically correct location to its actual position. There are two components of lens distortion: radial distortion (Seidel aberration) and decentering distortion. Radial lens distortion is caused by faulty grinding of the lens. With today's computer-controlled lens manufacturing processes, this distortion is almost negligible, at least to the accuracy of the camera calibration itself. Decentering distortion is caused by faulty placement of the individual lens elements in the camera cone and other manufacturing defects. The effects are small with today's lens systems. The values for lens distortion are determined from camera calibration. These values are generally reported either in a table or in terms of a polynomial (see the example at the end of this section).
Seidel Aberration Distortion
Seidel identified five lens aberrations: spherical aberration, coma, astigmatism, curvature of field, and distortion. (Chromatic aberration, sometimes broken into lateral and longitudinal components, is treated separately.) An aberration is the "failure of an optical system to bring all light rays received from a point object to a single image point or to a prescribed geometric position" [ASPRS, 1980]. It is caused by faulty grinding of the lens. Generally, aberrations do not affect the geometry of the image but instead affect image quality. The exception is Seidel's fifth aberration, distortion. Here the geometric position of the image point is moved in image space, and this change in position must be accounted for in analytical photogrammetry. The effect of this distortion is radial from the principal point. Conrady's intuitive development for handling this radial distortion is expressed in the following polynomial form:

   δr = k₀r + k₁r³ + k₂r⁵ + k₃r⁷ + k₄r⁹ + ⋯

This is based on three general hypotheses:

   "a. The axial ray passes the lens undeviated;
    b. The distortion can be represented by a continuous function; and
    c. The sense of the distortions should be positive for all outward displacement of the image." [Ghosh, 1979, p.88]

Figure 5. Radial Lens Distortion Geometry.

From figure 5, recall that

   r² = x² + y²

By similar triangles, the following relationship can be shown:

   δx/x = δy/y = δr/r

The x and y Cartesian coordinate components of the effects of this distortion are thus found by:

   δx = (δr/r) x = (k₀ + k₁r² + k₂r⁴ + ⋯) x
   δy = (δr/r) y = (k₀ + k₁r² + k₂r⁴ + ⋯) y

The corrected photo coordinates can then be computed using the form:

   x_c = x − δx = x(1 − δr/r) = x(1 − k₀ − k₁r² − k₂r⁴ − ⋯)
   y_c = y − δy = y(1 − δr/r) = y(1 − k₀ − k₁r² − k₂r⁴ − ⋯)

Decentering Distortion

Decentering lens distortion is asymmetric about the principal point of autocollimation. Along one radial line the tangential component vanishes and the radial line remains straight; this is called the axis of zero tangential distortion (see figures 6 and 7).
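The bookkeeping above can be coded directly from the polynomial. A sketch, using the illustrative calibration coefficients from the worked example later in these notes (δr in micrometres when r is in millimetres):

```python
import math

# Illustrative radial-distortion calibration polynomial (example values from
# these notes): delta_r in micrometres when r is in millimetres.
K1, K3, K5 = 0.286, -5.794e-5, 2.223e-9

def radial_distortion_um(r_mm):
    """Radial distortion delta_r (micrometres) at radial distance r (mm)."""
    return K1 * r_mm + K3 * r_mm**3 + K5 * r_mm**5

def correct_radial(x_mm, y_mm):
    """Remove radial lens distortion: x_c = x(1 - dr/r), y_c = y(1 - dr/r)."""
    r = math.hypot(x_mm, y_mm)
    dr_mm = radial_distortion_um(r) / 1000.0  # micrometres -> millimetres
    scale = 1.0 - dr_mm / r
    return x_mm * scale, y_mm * scale

# Point coordinates from the worked example later in these notes.
xc, yc = correct_radial(95.553, -84.646)
```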
Figure 6. Geometry of tangential distortion showing the tangential profile.
Figure 7. Effects of decentering distortion. Duane Brown, using the developments by Washer, designed the corrections for the lens distortion due to decentering. Brown called this the "Thin Prism Model" and it is shown as:
   δx = −(J₁r² + J₂r⁴) sin ϕ₀
   δy = (J₁r² + J₂r⁴) cos ϕ₀

where: J₁, J₂ are the coefficients of the profile function of the decentering distortion, and ϕ₀ is the angle subtended by the axis of maximum tangential distortion with the photo x-axis.

The concept of the thin prism was found to be inadequate to fully describe the effects of decentering distortion. Therefore, the Conrady-Brown model was developed to express the effects of decentering on the x, y coordinates:

   δx = −(J₁r² + J₂r⁴)[(1 + 2x²/r²) sin ϕ₀ − (2xy/r²) cos ϕ₀]
   δy = (J₁r² + J₂r⁴)[(1 + 2y²/r²) cos ϕ₀ − (2xy/r²) sin ϕ₀]

A revised Conrady-Brown model made further refinements to the computation of decentering distortion, and this model is shown to be:

   δx = [P₁(r² + 2x²) + 2P₂xy][1 + P₃r² + P₄r⁴ + ⋯]
   δy = [P₂(r² + 2y²) + 2P₁xy][1 + P₃r² + P₄r⁴ + ⋯]

where:

   P₁ = −J₁ sin ϕ₀
   P₂ = J₁ cos ϕ₀
   P₃ = J₂/J₁
   P₄ = J₃/J₁

The P's define the tangential profile function, i.e., the tangential distortion along the axis of maximum tangential distortion. The corrected photo coordinates due to the effects of decentering distortion can then be found by subtracting the errors computed in the previous equations. The corrected photo coordinates become:
   x_c = x − δx
   y_c = y − δy
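The Conrady-Brown form in J₁, J₂, ϕ₀ and the revised form in P₁, P₂, P₃ describe the same profile, which can be checked numerically. A sketch (the J values are borrowed from the example later in these notes; the P₄ term is dropped, since no J₃ value is given):

```python
import math

# Illustrative profile coefficients (example values from these notes).
J1, J2 = 8.10e-4, -1.40e-8
phi0 = math.radians(108.0)

# Revised-model coefficients derived from the profile function.
P1 = -J1 * math.sin(phi0)
P2 = J1 * math.cos(phi0)
P3 = J2 / J1

def decentering_J(x, y):
    """Conrady-Brown model expressed with J1, J2 and phi0."""
    r2 = x * x + y * y
    prof = J1 * r2 + J2 * r2 * r2
    s, c = math.sin(phi0), math.cos(phi0)
    dx = -prof * ((1 + 2 * x * x / r2) * s - (2 * x * y / r2) * c)
    dy = prof * ((1 + 2 * y * y / r2) * c - (2 * x * y / r2) * s)
    return dx, dy

def decentering_P(x, y):
    """Revised Conrady-Brown model expressed with P1, P2, P3."""
    r2 = x * x + y * y
    factor = 1 + P3 * r2
    dx = (P1 * (r2 + 2 * x * x) + 2 * P2 * x * y) * factor
    dy = (P2 * (r2 + 2 * y * y) + 2 * P1 * x * y) * factor
    return dx, dy

# The two formulations agree at any image position.
samples = [(95.553, -84.646), (10.0, 5.0), (-50.0, 120.0)]
results = [(decentering_J(x, y), decentering_P(x, y)) for x, y in samples]
```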
Atmospheric Refraction
Figure 8. Effects of atmospheric refraction on the object space light ray.

Light rays bend due to refraction. The amount of refraction is a function of the refractive index of the air along the path of the light ray. This index depends upon the temperature, pressure, and composition of the atmosphere, including humidity, dust, carbon dioxide, etc. The light rays from object space to image space must pass through layers of differing density, and the ray is bent at the layer boundaries along its path. From Snell's law we can express the law of refraction as

   (n_i + dn_i) sin θ_i = n_i sin(θ_i + dα)

where:  n = refractive index,
        dn = difference in refractive index between the two media,
        θ = angle of incidence, and
        θ + dα = angle of refraction.

Generalizing and simplifying yields

   dα = (dn/n) tan θ

Integrating,

   α = ∫ dα = tan θ ∫ dn/n = tan θ · ln(n_P/n_L)

where ln indicates the natural logarithm and the subscripts L and P refer to the camera station and the ground point, respectively. Generalizing,

   α = 2K tan θ     so that     dθ = α/2 = K tan θ

where K is the atmospheric refraction constant. For vertical photography, dθ can be expressed with respect to r. Since r = f tan θ,

   dr = f sec²θ dθ = f(1 + tan²θ) dθ = f(1 + r²/f²) dθ

   ∴ dr = (f + r²/f) dθ

δr can also be expressed as a function of K using:

   dr = (f + r²/f) K tan θ = (f + r²/f) K (r/f)

   ∴ dr = K(r + r³/f²)

The radial component can also be expressed using a simplified power series:

   dr = k₁r + k₂r³ + k₃r⁵ + ⋯

where the k's are constants. The Cartesian components of atmospheric refraction are

   δx = (dr/r) x = K(1 + r²/f²) x
   δy = (dr/r) y = K(1 + r²/f²) y

K is a constant determined from some model atmosphere. For example, the 1959 ARDC (Air Research and Development Command) model developed by Bertram is shown as (flying height H and ground elevation h in kilometers):

   K = [ 2410H/(H² − 6H + 250) − 2410h/(h² − 6h + 250) · (h/H) ] · 10⁻⁶

The atmospheric model developed by Saastamoinen for altitudes of up to eleven kilometers is given by

   K = (1225/H)[(1 − 0.022576h)^5.256 − (1 − 0.02257H)^5.256 − 0.277(1 − 0.02257H)^4.6245] · 10⁻⁶

For altitudes up to nine kilometers, this equation can be simplified as

   K = 13{(H − h)[1 − 0.02(2H + h)]} · 10⁻⁶
There are several other atmospheric models. Ghosh [1979] also identifies the US Standard Atmosphere and the ICAO Standard Atmosphere. He also states that, up to about 20 km, these models are almost the same. Table 1 shows the amount of distortion using a focal length of 153 mm and the ICAO Standard Atmosphere [from Ghosh, 1979, p.95]. The tabulated values, dr, are in micrometers.

   Flying       For radial distance r of the image point                Coefficients
   height (m)   from the photo center, in mm
                 12   24   50   63   78   94  111  131  153     k1·10⁻²  k2·10⁻⁶

   For ground elevation 0 m above sea level:
    3000        0.4  0.9  1.9  2.6  3.4  4.5  5.9  7.9 10.7       3.4     1.53
    6000        0.7  1.5  3.3  4.4  5.9  7.7 10.1 13.5 18.3       6.1     2.50
    9000        0.9  1.9  4.2  5.7  7.5  9.9 13.0 17.3 23.4       7.7     2.23

   For ground elevation 500 m above sea level:
    3000        0.3  0.7  1.6  2.1  2.8  3.7  4.9  6.4  8.8       2.8     1.25
    6000        0.7  1.3  3.0  4.0  5.3  6.9  9.1 12.2 15.4       5.4     2.3
    9000        0.9  1.8  3.9  5.3  7.0  9.2 12.0 16.0 21.7       7.2     2.99

   For ground elevation 1000 m above sea level:
    3000        0.3  0.6  1.3  1.7  2.2  2.9  3.9  5.1  6.9       2.2     0.99
    6000        0.6  1.2  2.7  3.6  4.8  6.3  8.2 10.9 14.5       4.8     2.08
    9000        0.8  1.6  3.6  4.9  6.5  8.5 11.2 14.9 20.1       6.7     2.76

   For ground elevation 1500 m above sea level:
    3000        0.2  0.4  0.8  1.2  1.6  2.2  2.8  3.8  5.1       1.6     0.74
    6000        0.5  1.1  2.4  3.2  4.2  5.5  7.3  9.7 13.1       4.2     1.87
    9000        0.7  1.5  3.4  4.5  6.0  7.8 10.3 13.8 18.6       6.1     2.59

Table 1. Radial image distortion due to atmospheric refraction.
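The simplified constant, combined with dr = K(r + r³/f²), can be compared against Table 1. A sketch (heights in kilometres above sea level; the table is based on the ICAO atmosphere, so only rough agreement with the simplified Saastamoinen model is expected):

```python
def refraction_K(H_km, h_km):
    """Simplified (up to 9 km) Saastamoinen refraction constant, in radians."""
    return 13.0 * (H_km - h_km) * (1.0 - 0.02 * (2.0 * H_km + h_km)) * 1.0e-6

def refraction_dr_mm(r_mm, f_mm, H_km, h_km):
    """Radial image displacement dr = K (r + r^3 / f^2), in millimetres."""
    K = refraction_K(H_km, h_km)
    return K * (r_mm + r_mm**3 / f_mm**2)

# Table 1 comparison point: f = 153 mm, H = 3000 m, ground at sea level,
# r = 153 mm -> the table lists about 10.7 micrometres.
dr_um = refraction_dr_mm(153.0, 153.0, 3.0, 0.0) * 1000.0
```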
Earth Curvature
Earth curvature causes a displacement of a point due to the curvature of the earth. The point, when projected onto a plane tangent to the ground nadir point, will occupy a position on that plane at a distance of ΔH from the earth's surface. The image displacement, as shown in figure 9, is always radially inward towards the principal point.

Figure 9. Earth Curvature Correction.

From the geometry, we can see that

   θ = D'/R ≈ D/R

   ∴ cos θ ≈ 1 − D²/(2R²) + ⋯

so that

   ΔH = R − R cos θ = R(1 − cos θ) = R[1 − (1 − D²/(2R²) + ⋯)]

   ∴ ΔH ≈ D²/(2R)

From which we can write

   dE/f = ΔD/H'

But

   ΔD ≈ (D/H') ΔH

Therefore,

   dE = (f/H')(D/H')(D²/2R) = f D³/(2R H'²)

But

   D ≈ (H'/f) r

Yielding

   dE = H' r³/(2R f²)

Since H'/(2Rf²) is constant for any photograph,

   dE = K r³

where:

   K = H'/(2R f²)

The effects of earth curvature are shown in Table 2 with respect to the flying height (H) and the radial distance from the nadir point [Ghosh, 1979; Doyle, 1981]. From the formula and the geometry of the figure, one can see that the effect grows rapidly with higher flying heights and with increasing distance from the nadir point.
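The closed form dE = Kr³ can be checked against Table 2 in a few lines. A sketch, assuming a mean earth radius of 6370 km:

```python
R_MM = 6370.0e6  # assumed mean earth radius, in millimetres

def earth_curvature_dE_mm(r_mm, f_mm, H_mm):
    """Image displacement dE = H' r^3 / (2 R f^2), everything in millimetres."""
    return H_mm * r_mm**3 / (2.0 * R_MM * f_mm**2)

# Table 2 entry: f = 150 mm, H = 10 km, r = 160 mm -> about 142.9 micrometres.
dE_um = earth_curvature_dE_mm(160.0, 150.0, 10.0e6) * 1000.0
```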
   r (mm)   H = 0.5     1      2      4      6      8     10   (km)
     10        0.0     0.0    0.0    0.0    0.0    0.0    0.0
     20        0.0     0.0    0.1    0.1    0.2    0.2    0.3
     40        0.1     0.2    0.4    0.9    1.3    1.8    2.2
     60        0.4     0.8    1.5    3.0    4.5    6.0    7.6
     80        0.9     1.8    3.6    7.2   10.8   14.3   17.9
    100        1.8     3.5    7.0   14.0   21.0   28.0   35.0
    120        3.1     6.0   12.1   24.2   36.3   48.4   60.5
    140        4.9     9.6   19.2   38.4   57.6   76.8   96.0
    160        7.1    14.3   28.6   57.2   85.7  114.3  142.9

Table 2. Amount of earth curvature (in µm) for vertical photography assuming a focal length of 150 mm [from Ghosh, 1979, p.98].

EXAMPLE

A vertical aerial photograph is taken with an aerial camera having the following calibration data:

   Calibrated focal length = 152.212 mm

The fiducial mark and principal point coordinates are shown in figure 10, and the radial lens distortion is shown in the distortion curve of figure 11.

Figure 10. Example showing calibration values for fiducials and principal point.
Figure 11. Camera calibration graph of distortion using both polynomials and radial distortion.
   Radial distance (mm)    20    40    60    80   100    120   140   160
   Distortion (µm)         +6    +9    +6    −1    −7     −9    −1   −13
   Polynomial (µm)       +5.3  +7.9  +5.2  +0.5  −7.1  −10.5  +0.6  +41.5

Table 3. Radial lens distortion for the camera in the example.

The decentering lens distortion values are:

   J₁ = 8.10×10⁻⁴     J₂ = −1.40×10⁻⁸     ϕ₀ = 108° 00'

The flying height is 38,000' above mean sea level. The average height of the terrain is 400' above mean sea level. The photograph is placed in the comparator and the following image coordinates are measured:

   Point     x (mm)      y (mm)
     1       28.202      13.032
     2      240.341      16.260
     3      237.068     228.432
     4       24.980     225.160
   Pt. p    228.640      36.426

Questions:
1. What are the image coordinates of point p corrected for film deformation and reduced to put the origin at the principal point? Use a 6-parameter general affine transformation and compute the residuals.

2. What are the image coordinates of p corrected additionally for radial and decentering lens distortion?

3. What are the image coordinate corrections at p for atmospheric refraction and earth curvature?
4. What are the final corrected image coordinates of p?

SOLUTION

1. The observed photo coordinates of point p are:

x = 228.640 mm
y = 36.426 mm

The design matrix (B) is formed from the measured fiducial coordinates, and the discrepancy vector (f) from the calibrated fiducial coordinates. Solving the normal equations gives the six affine parameters:

a1 =  0.99923     a2 = −0.00428
b1 =  0.00441     b2 =  0.99917
c1 = −1.96656     c2 =  1.75196

The residuals on the fiducial coordinates are about ±0.0029 mm in x and ±0.0043 mm in y. The transformed coordinates of p are:

x = 226.657 mm
y = 37.168 mm

Translated so that the origin is at the principal point, the photo coordinates become:

x = 226.657 mm − 131.104 mm = 95.553 mm
y = 37.168 mm − 121.814 mm = −84.646 mm

2. Lens distortions are computed as follows. The Seidel radial distortion, in terms of the rectangular coordinate values, is:
r = (x² + y²)^1/2 = [(95.553)² + (−84.646)²]^1/2 = 127.653 mm

∆r = 0.286·r − 5.794×10⁻⁵·r³ + 2.223×10⁻⁹·r⁵ = −8.663 µm

The coordinates corrected for radial distortion become:

x_c = x(1 − ∆r/r) = 95.553[1 − (−0.00866/127.653)] = 95.559 mm
y_c = y(1 − ∆r/r) = −84.646[1 − (−0.00866/127.653)] = −84.652 mm

The decentering distortion using the revised Conrady-Brown model is shown as follows:

P1 = −J1·sin ϕ0 = −8.10×10⁻⁴·sin 108° = −0.00077
P2 =  J1·cos ϕ0 =  8.10×10⁻⁴·cos 108° = −0.00025
P3 =  J2/J1 = (−1.40×10⁻⁸)/(8.10×10⁻⁴) = −0.000017

δx = [P1(r² + 2x²) + 2P2·x·y][1 + P3·r²]
   = [−0.00077((127.653)² + 2(95.559)²) + 2(−0.00025)(95.559)(−84.652)][1 − 0.000017(127.653)²]
   = −0.016 mm

δy = [P2(r² + 2y²) + 2P1·x·y][1 + P3·r²]
   = [−0.00025((127.653)² + 2(−84.652)²) + 2(−0.00077)(95.559)(−84.652)][1 − 0.000017(127.653)²]
   = 0.003 mm

The coordinates corrected for decentering distortion then become:

x_c = x − δx = 95.559 − (−0.016) = 95.576 mm
y_c = y − δy = −84.652 − 0.003 = −84.655 mm

3. Using the 1959 ARDC model, the flying height and terrain height are first converted to kilometers:

H = 38,000' × (1200 m/3937') × (1 km/1000 m) = 11.58 km
h = 400' × (1200 m/3937') × (1 km/1000 m) = 0.12 km

K = [2410·H/(H² − 6H + 250)]×10⁻⁶ = [2410(11.58)/((11.58)² − 6(11.58) + 250)]×10⁻⁶ = 0.0000887

The refraction corrections are:

δx = K·x(1 + r²/f²) = 0.0000887(95.576)[1 + (127.653)²/(152.212)²] = 0.014 mm
δy = K·y(1 + r²/f²) = 0.0000887(−84.655)[1 + (127.653)²/(152.212)²] = −0.013 mm

The effect of earth curvature is:

dE = r³·H'/(2f²R) = (127.653)³(38,000 − 400)/[2(152.212)²(20,906,000)] = 0.0807 mm

4. The photo coordinates corrected for the effects of refraction are:

x_c = x − δx = 95.576 − 0.014 = 95.561 mm
y_c = y − δy = −84.655 − (−0.013) = −84.642 mm

The coordinates corrected for earth curvature become:

x_c = x(1 + dE/r) = 95.561[1 + 0.0807/127.653] = 95.622 mm
y_c = y(1 + dE/r) = −84.642[1 + 0.0807/127.653] = −84.696 mm

The final corrected photo coordinates are, thus,

x = 95.622 mm
y = −84.696 mm
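The entire correction chain of the example can be replayed numerically. The sketch below uses the same constants as the text and, as in the worked solution, holds r at its initial value of 127.653 mm through the distortion and curvature steps:

```python
import math

f = 152.212                      # calibrated focal length (mm)
x, y = 95.553, -84.646           # coordinates of p after the affine step (mm)
r = math.hypot(x, y)             # 127.653 mm, held fixed as in the text

# -- Seidel radial distortion (result in micrometres; r in mm) --
dr = 0.286*r - 5.794e-5*r**3 + 2.223e-9*r**5          # about -8.66 um
x *= 1 - (dr/1000.0)/r
y *= 1 - (dr/1000.0)/r

# -- Decentering distortion (revised Conrady-Brown; result in micrometres) --
J1, J2, phi0 = 8.10e-4, -1.40e-8, math.radians(108.0)
P1, P2, P3 = -J1*math.sin(phi0), J1*math.cos(phi0), J2/J1
dx = (P1*(r**2 + 2*x**2) + 2*P2*x*y) * (1 + P3*r**2)
dy = (P2*(r**2 + 2*y**2) + 2*P1*x*y) * (1 + P3*r**2)
x -= dx/1000.0
y -= dy/1000.0

# -- Atmospheric refraction (1959 ARDC model, terrain term neglected) --
H = 38000.0 * 1200.0/3937.0 / 1000.0                  # 11.58 km
K = 2410.0*H/(H**2 - 6.0*H + 250.0) * 1.0e-6          # 0.0000887
x -= K*x*(1 + r**2/f**2)
y -= K*y*(1 + r**2/f**2)

# -- Earth curvature (heights in feet cancel against R = 20,906,000 ft) --
dE = r**3*(38000.0 - 400.0)/(2*f**2*20906000.0)       # 0.0807 mm
x *= 1 + dE/r
y *= 1 + dE/r
```

Running the chain reproduces the final coordinates of the example to within rounding of the hand computation.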
Projective Equations Page 34
PROJECTIVE EQUATIONS
Introduction
In the first section we were introduced to coordinate transformations. The numerical resection problem involves the transformation (rotation and translation) of the ground coordinates to photo coordinates for comparison purposes in the least squares adjustment. Before we begin this process, let's derive the rotation matrix that will be used to form the collinearity condition. In photogrammetry, the coordinates of the points imaged on the photograph are determined through observations. The next procedure is to compare these photo coordinates with the ground coordinates. On the photograph, the positive x-axis is taken in the direction of flight. For any number of reasons, this will most probably never coincide with the ground X-axis. The origin of the photographic coordinates is at the principal point, which can be expressed as
| X' |   | x − xo |
| Y' | = | y − yo |
| Z' |   |   −f   |
o
where: x, y are the photo coordinates of the imaged point with reference to the
intersection of the fiducial axes xo, yo are the coordinates from the intersection of the fiducial axes to the principal
point f is the focal length Since the origin of the ground coordinates does not coincide with the origin of the photographic coordinate system, a translation is necessary. We can write this as
| X1 |   | X − XL |
| Y1 | = | Y − YL |
| Z1 |   | Z − ZL |
where: X, Y, Z are the ground coordinates of the point XL, YL, ZL are the ground coordinates of the ground nadir point Thus, in the comparison, both ground coordinates and photo coordinates are referenced to the same origin separated only by the flying height. Note that the ground nadir coordinates would correspond to the principal point coordinates in X and Y if the photograph was truly vertical.
Direction Cosines

If we look at figure 12, we can see that point P has coordinates XP, YP, ZP. The length of the vector (distance) can be defined as
Figure 12. Vector OP in 3-D space.
OP = [XP² + YP² + ZP²]^1/2
The direction of the vector can be written with respect to the 3 axes as:
cos α = XP/OP
cos β = YP/OP
cos γ = ZP/OP
These cosines are called the direction cosines of the vector from O to P. This concept can be extended to any line in space. For example, figure 13 shows the line PQ. Here we can readily see that the vector PQ can be defined as:

      | XQ − XP |
PQ =  | YQ − YP |  =  Q − P
      | ZQ − ZP |
The length of the vector becomes

PQ = [(XQ − XP)² + (YQ − YP)² + (ZQ − ZP)²]^1/2

and the direction cosines are
Figure 13. Line vector PQ in space.
cos α = (XQ − XP)/PQ
cos β = (YQ − YP)/PQ
cos γ = (ZQ − ZP)/PQ
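The direction-cosine computation for a line PQ takes only a few lines; the two points below are arbitrary illustration values chosen so the length works out to a whole number:

```python
import math

def direction_cosines(P, Q):
    """Direction cosines (cos a, cos b, cos g) of the line from P to Q."""
    dX, dY, dZ = (Q[i] - P[i] for i in range(3))
    L = math.sqrt(dX*dX + dY*dY + dZ*dZ)   # length of vector PQ
    return dX/L, dY/L, dZ/L

# PQ = (3, 4, 12), so the length is 13 and the cosines are 3/13, 4/13, 12/13
ca, cb, cg = direction_cosines((1.0, 2.0, 3.0), (4.0, 6.0, 15.0))
```

The squares of the direction cosines of any line sum to one, which is a convenient sanity check.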
If we look at the unit vectors as shown in figure 14, one can see that the vector from O to P can be defined as

OP = x·i + y·j + z·k
Figure 14. Unit vectors.
and the point P has coordinates (x, y, z)ᵀ. Given a second set of coordinate axes (I, J, K), one can write similar relationships for the same point P. Each coordinate axis has an angular relationship to each of the i, j, k coordinate axes. For example, figure 15 shows the relationship between J and i. The angle between the axes is defined as (xY). Since i has similar angles to the other two axes, one can write the unit vector in terms of the direction cosines as:

    | i·I |   | cos(xX) |
i = | i·J | = | cos(xY) |
    | i·K |   | cos(xZ) |

Similarly, we have for j and k:

    | cos(yX) |        | cos(zX) |
j = | cos(yY) |    k = | cos(zY) |
    | cos(yZ) |        | cos(zZ) |
Figure 15. Rotation between Y and x axes.
Then, the vector from O to P can be written as

       | cos(xX) |     | cos(yX) |     | cos(zX) |
OP = x | cos(xY) | + y | cos(yY) | + z | cos(zY) |
       | cos(xZ) |     | cos(yZ) |     | cos(zZ) |

so that

| X |   | cos(xX)  cos(yX)  cos(zX) | | x |
| Y | = | cos(xY)  cos(yY)  cos(zY) | | y |
| Z |   | cos(xZ)  cos(yZ)  cos(zZ) | | z |

This can be written more generally as

X = R·x

To solve these unknowns using only three angles, 6 orthogonality conditions must be applied to the rotation matrix, R. All vectors must have a length of 1 and any combination of two must be orthogonal [Novak, 1993]. Thus, designating R as three column vectors [R = (r1 r2 r3)], we have

r1ᵀr1 = r2ᵀr2 = r3ᵀr3 = 1
r1ᵀr2 = r1ᵀr3 = r2ᵀr3 = 0
Sequential Rotations
Combination                                   Axes of Rotation
1) Roll (ω) – Pitch (ϕ) – Yaw (κ)             x – y – z
2) Pitch (ϕ) – Roll (ω) – Yaw (κ)             y – x – z
3) Heading (H) – Roll (ω) – Pitch (ϕ)         z – x – y
4) Heading (H) – Pitch (ϕ) – Roll (ω)         z – y – x
5) Azimuth (α) – Tilt (t) – Swing (s)         z – x – z
6) Azimuth (α) – Elevation (h) – Swing (s)    z – x – z
Applying three sequential rotations about three different axes forms the rotation matrix. Doyle [1981] identifies a series of different combinations. These are shown in Table 1 and they all presume a local space coordinate system. Roll (ω) is a rotation about the x-axis where a positive rotation moves the +y-axis in the direction of the +z-axis. Pitch (ϕ) is a rotation about the y-axis. When the +z-axis is moved
Table 1. Rotation combinations.
towards the +x-axis, the rotation is positive. A rotation about the z-axis is called yaw (κ), with a positive rotation occurring when the +x-axis is rotated towards the +y-axis. All of these angles have a range from -180° to +180°. Heading (H) is a clockwise rotation about the Z-axis from the +Y-axis to the +X-axis. Azimuth (α) is a clockwise rotation about the Z-axis from the +Y-axis to the principal plane. Tilt (t) is a rotation about the x-axis and is defined as the angle between the camera axis and the nadir or Z-axis. This rotation is positive when the +x-axis is moved towards the +z-axis. Swing is a clockwise angle in the plane of the photograph measured about the z-axis from the +y-axis to the nadir side of the principal line. Heading, azimuth, and swing have a range from 0° to 360° while the tilt angle varies between 0° and 180°. Finally, elevation (h) is a rotation in the vertical plane about the x-axis from the X-Y plane to the camera axis. The rotation is positive when the camera axis is above the X-Y plane. Combinations (1) and (2) are frequently used in stereoplotters while (3) and (4) are common in navigation. Professor Earl Church developed (5) in his photogrammetric research, whereas ballistic cameras often used the 6th combination.
Derivation of the Gimbal Angles

For a physical interpretation of the rotation matrix written in terms of the direction cosines, we can look at the planar rotations of the axes in sequence. In the first section we saw that the coordinate transformation can be written in the following form:
| XP |   |  cos α   sin α | | UP |
| YP | = | −sin α   cos α | | VP |
In the photogrammetric approach, we rotate the ground coordinates to a photo-parallel system. This involves three rotations: ω – primary, ϕ – secondary, and κ – tertiary. If we look at the ω rotation about the X1 axis, we should realize that the X-coordinate does not change but the Y and Z coordinates do change (figure 16). Moreover, the new values for Y and Z are not affected by the X-coordinate. Thus, one can write
X2 = X1
Y2 = Y1·cos ω + Z1·sin ω
Z2 = −Y1·sin ω + Z1·cos ω

or in matrix form

| X2 |   | 1     0       0    | | X1 |
| Y2 | = | 0   cos ω   sin ω  | | Y1 |
| Z2 |   | 0  −sin ω   cos ω  | | Z1 |

or more concisely,

C2 = Mω·C1
The next rotation is a ϕ- rotation about the once rotated Y2-axis. One can write
X3 = X2·cos ϕ − Z2·sin ϕ
Y3 = Y2
Z3 = X2·sin ϕ + Z2·cos ϕ

or in matrix form:

| X3 |   | cos ϕ   0  −sin ϕ | | X2 |
| Y3 | = |   0     1     0   | | Y2 |
| Z3 |   | sin ϕ   0   cos ϕ | | Z2 |
Figure 16. Rotation angles in photogrammetry.
or more concisely,

C3 = Mϕ·C2

Finally, we have the κ-rotation about the twice-rotated Z3-axis (see figure 16). This becomes

X' = X3·cos κ + Y3·sin κ
Y' = −X3·sin κ + Y3·cos κ
Z' = Z3

which in matrix form is

| X' |   |  cos κ   sin κ   0 | | X3 |
| Y' | = | −sin κ   cos κ   0 | | Y3 |
| Z' |   |    0       0     1 | | Z3 |
or more concisely as

C' = Mκ·C3

Thus, the transformation from the survey parallel (X1, Y1, Z1) system is shown as

| X' |               | X1 |        | X1 |
| Y' | = Mκ·Mϕ·Mω    | Y1 | = MG   | Y1 |
| Z' |               | Z1 |        | Z1 |

Performing the multiplication, the elements of MG are shown as:
     |  cos ϕ cos κ    cos ω sin κ + sin ω sin ϕ cos κ    sin ω sin κ − cos ω sin ϕ cos κ |
MG = | −cos ϕ sin κ    cos ω cos κ − sin ω sin ϕ sin κ    sin ω cos κ + cos ω sin ϕ sin κ |
     |  sin ϕ         −sin ω cos ϕ                         cos ω cos ϕ                     |
If the rotation matrix is known, then the angles (κ, ϕ, ω) can be computed as [Doyle, 1981]
tan ω = −m32/m33
sin ϕ = m31
tan κ = −m21/m11
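The matrix MG and the angle-recovery formulas can be verified together: compose MG from a set of test angles, then recover ω, ϕ, κ from the elements m31, m32, m33, m21, m11. A sketch (the test angles are arbitrary):

```python
import math

def Mg(omega, phi, kappa):
    """Rotation matrix M_G = M_kappa * M_phi * M_omega (angles in radians)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi),   math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    Mo = [[1, 0, 0], [0, co, so], [0, -so, co]]       # omega about x
    Mp = [[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]]       # phi about y
    Mk = [[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]]       # kappa about z
    mul = lambda A, B: [[sum(A[i][t]*B[t][j] for t in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(Mk, mul(Mp, Mo))

m = Mg(math.radians(3.0), math.radians(-2.0), math.radians(95.0))
omega = math.atan2(-m[2][1], m[2][2])     # tan(omega) = -m32/m33
phi   = math.asin(m[2][0])                # sin(phi)   =  m31
kappa = math.atan2(-m[1][0], m[0][0])     # tan(kappa) = -m21/m11
```

Using atan2 rather than a bare arctangent keeps the recovered angles in the correct quadrant even when κ exceeds 90°, as in the test values above.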
If the so-called Church angles (t, s, α) are being used, then the rotation matrix can be derived in a similar fashion. The values for M are:
    | −cos α cos s − sin α cos t sin s     sin α cos s − cos α cos t sin s    −sin t sin s |
M = |  cos α sin s − sin α cos t cos s    −sin α sin s − cos α cos t cos s    −sin t cos s |
    | −sin α sin t                        −cos α sin t                         cos t       |
If the rotation matrix is known then the Church angles can be found using the following relationships [Doyle, 1981]:
tan α = m31/m32
cos t = m33    or    sin t = (m13² + m23²)^1/2
tan s = m13/m23
The collinearity concept means that the line from object space to the perspective center is the same line as that from the perspective center to the image point (figure 17). The only difference is a scale factor. Since the comparison is performed in image space, the object space coordinates are rotated into a parallel coordinate system. This relationship can be written as

a = k·M·A
Recall that we wrote two basic equations relating the location of a point in the photo coordinate system and the ground nadir position:

| X' |   | x − xo |          | X1 |   | X − XL |
| Y' | = | y − yo |   and    | Y1 | = | Y − YL |
| Z' |   |   −f   |          | Z1 |   | Z − ZL |
Figure 17. Collinearity condition. Then,
| x − xo |       | m11  m12  m13 | | X − XL |
| y − yo | = k   | m21  m22  m23 | | Y − YL |
|   −f   |       | m31  m32  m33 | | Z − ZL |
where k is the scale factor. This equation takes the ground coordinates and translates them to the ground nadir position. The rotation matrix (MG) takes those translated coordinates and rotates them into a system that is parallel to the photograph. Finally, these coordinates are scaled to the photograph. The result is the predicted photo coordinates of the ground points given the exposure station coordinates (XL, YL, ZL) and the tilt that exists in the photography (κ, ϕ, ω). If we express this last equation algebraically, then we have
x − xo = k[m11(X − XL) + m12(Y − YL) + m13(Z − ZL)]
y − yo = k[m21(X − XL) + m22(Y − YL) + m23(Z − ZL)]
   −f  = k[m31(X − XL) + m32(Y − YL) + m33(Z − ZL)]
To eliminate the unknown scale factor, divide the first two equations by the third. Thus,

x − xo = −f · [m11(X − XL) + m12(Y − YL) + m13(Z − ZL)] / [m31(X − XL) + m32(Y − YL) + m33(Z − ZL)]

y − yo = −f · [m21(X − XL) + m22(Y − YL) + m23(Z − ZL)] / [m31(X − XL) + m32(Y − YL) + m33(Z − ZL)]
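The collinearity equations are short to implement. A sketch, with a vertical-photo check against the scale relation f/(H − h) from basic photogrammetry (all numbers are illustrative):

```python
def project(M, f, XL, YL, ZL, X, Y, Z, x0=0.0, y0=0.0):
    """Collinearity equations: photo coordinates of ground point (X, Y, Z)."""
    dX, dY, dZ = X - XL, Y - YL, Z - ZL
    U = M[0][0]*dX + M[0][1]*dY + M[0][2]*dZ
    V = M[1][0]*dX + M[1][1]*dY + M[1][2]*dZ
    W = M[2][0]*dX + M[2][1]*dY + M[2][2]*dZ
    return x0 - f*U/W, y0 - f*V/W

# Truly vertical photo: M is the identity, so x/X = y/Y = f/(H - h)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
x, y = project(I3, 150.0, 0.0, 0.0, 1500.0, 300.0, 450.0, 0.0)
```

With f = 150 mm, H = 1500 m, and h = 0, the scale is 0.1, so the ground point (300, 450) should image at (30, 45) mm.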
Otto von Gruber first introduced this equation in 1930. This equation must satisfy two conditions [Novak, 1993]:
m11·m12 + m21·m22 + m31·m32 = 0

m11² + m21² + m31² = m12² + m22² + m32²
If we look at the equation for MG above, let's see if the first condition is met.

m11·m12 + m21·m22 + m31·m32
  = cos ϕ cos κ (cos ω sin κ + sin ω sin ϕ cos κ) − cos ϕ sin κ (cos ω cos κ − sin ω sin ϕ sin κ) − sin ϕ sin ω cos ϕ
  = cos ω sin κ cos ϕ cos κ + sin ω sin ϕ cos ϕ cos²κ − cos ω cos κ cos ϕ sin κ + sin ω sin ϕ cos ϕ sin²κ − sin ω sin ϕ cos ϕ
  = sin ω sin ϕ cos ϕ (cos²κ + sin²κ) − sin ω sin ϕ cos ϕ
  = 0

Thus, the first condition is met. For the second constraint, let's first look at the left-hand side of the equation.
m11² + m21² + m31² = cos²ϕ cos²κ + cos²ϕ sin²κ + sin²ϕ = cos²ϕ (cos²κ + sin²κ) + sin²ϕ = 1
The right side of the equation becomes
m12² + m22² + m32²
  = (cos ω sin κ + sin ω sin ϕ cos κ)² + (cos ω cos κ − sin ω sin ϕ sin κ)² + sin²ω cos²ϕ
  = cos²ω sin²κ + 2 sin ω cos ω sin ϕ sin κ cos κ + sin²ω sin²ϕ cos²κ
    + cos²ω cos²κ − 2 sin ω cos ω sin ϕ sin κ cos κ + sin²ω sin²ϕ sin²κ + sin²ω cos²ϕ
  = cos²ω + sin²ω sin²ϕ + sin²ω cos²ϕ
  = cos²ω + sin²ω = 1
Thus, both sides of the equation are equal to one and to each other. Since (X − XL), (Y − YL), and (Z − ZL) are proportional to the direction cosines of A, these equations can also be presented as [Doyle, 1981]:
x − xo = −f · (m11 cos α + m12 cos β + m13 cos γ) / (m31 cos α + m32 cos β + m33 cos γ)

y − yo = −f · (m21 cos α + m22 cos β + m23 cos γ) / (m31 cos α + m32 cos β + m33 cos γ)
Here, cos α, cos β, and cos γ are the direction cosines of A. The inverse relationship is
X − XL = (Z − ZL) · [m11(x − xo) + m21(y − yo) + m31(−f)] / [m13(x − xo) + m23(y − yo) + m33(−f)]

Y − YL = (Z − ZL) · [m12(x − xo) + m22(y − yo) + m32(−f)] / [m13(x − xo) + m23(y − yo) + m33(−f)]
These equations are referred to as the collinearity equations. It would be interesting to see how these equations stand up to the basic principles learned in basic photogrammetry. Recall that for a truly vertical photograph the scale at a point can be written using

S = f/(H − h) = x/X = y/Y
Here we assumed that the principal point coincided with the indicated principal point and that the X and Y ground coordinates were related to the origin, being at the nadir point with the X-axis coinciding with the line from opposite fiducials in the flight direction. If we look at the collinearity equations, the rotation matrix for a truly vertical photo would be the identity matrix. Thus,
        | 1  0  0 |
MVert = | 0  1  0 |
        | 0  0  1 |
Then, the projective equations become
( ) ( )( ) ( )
( )L
Lo
Lo
ZZkfYYkyyXXkxx
−=−−=−−=−
If we further assume that the principal point is located at the intersection of opposite fiducials and if we substitute H for ZL and h for Z, then

 x = k(X − XL)
 y = k(Y − YL)
−f = k(h − H)
Dividing the first two equations by the third and manipulating the equation yields the identical scale relationships given in basic photogrammetry:

f/(H − h) = x/(X − XL) = y/(Y − YL)
LINEARIZATION OF THE COLLINEARITY EQUATION

The linearization of the collinearity equations is given in a number of different textbooks. The development presented here follows that outlined by Doyle [1981]. For simplicity, let's define the projective equations in the following form.
F1 = (x − xo) + f(U/W) = 0
F2 = (y − yo) + f(V/W) = 0
where U and V are the numerators in the projective equations given earlier and W is the denominator. From adjustments, we know that the general form of the condition equations can be written as
0FBAV =+∆+ The deign matrix (B) is found by taking the partial derivative of the projective equations with respect to the parameters. Thus, it will appear as:
B = | ∂F1/∂xo  ∂F1/∂yo  ∂F1/∂f   ∂F1/∂ω  ∂F1/∂ϕ  ∂F1/∂κ   ∂F1/∂XL  ∂F1/∂YL  ∂F1/∂ZL   ∂F1/∂Xi  ∂F1/∂Yi  ∂F1/∂Zi |
    | ∂F2/∂xo  ∂F2/∂yo  ∂F2/∂f   ∂F2/∂ω  ∂F2/∂ϕ  ∂F2/∂κ   ∂F2/∂XL  ∂F2/∂YL  ∂F2/∂ZL   ∂F2/∂Xi  ∂F2/∂Yi  ∂F2/∂Zi |
The first section contains the partial derivatives with respect to the interior orientation, the second group are the partials with respect to the exterior orientation, and the third group are
the partials with respect to the ground coordinates. The partial derivatives of the interior orientation (xo, yo, and f only) are very basic:

∂F1/∂xo = −1     ∂F1/∂yo = 0      ∂F1/∂f = U/W
∂F2/∂xo = 0      ∂F2/∂yo = −1     ∂F2/∂f = V/W
For the partial derivatives taken with respect to the exposure station coordinates, we will use the following general differentiation formulas:
∂F1/∂P = (f/W²)[W(∂U/∂P) − U(∂W/∂P)] = (f/W)[(∂U/∂P) − (U/W)(∂W/∂P)]

∂F2/∂P = (f/W²)[W(∂V/∂P) − V(∂W/∂P)] = (f/W)[(∂V/∂P) − (V/W)(∂W/∂P)]

where P are the parameters. For the exposure station coordinates (XL, YL, ZL), the partial derivatives of the functions U, V, and W become:

∂U/∂XL = −m11     ∂U/∂YL = −m12     ∂U/∂ZL = −m13
∂V/∂XL = −m21     ∂V/∂YL = −m22     ∂V/∂ZL = −m23
∂W/∂XL = −m31     ∂W/∂YL = −m32     ∂W/∂ZL = −m33
Then the partial derivatives of the functions F1 and F2 can be shown to be
∂F1/∂XL = −(f/W)[m11 − (U/W)m31]     ∂F2/∂XL = −(f/W)[m21 − (V/W)m31]
∂F1/∂YL = −(f/W)[m12 − (U/W)m32]     ∂F2/∂YL = −(f/W)[m22 − (V/W)m32]
∂F1/∂ZL = −(f/W)[m13 − (U/W)m33]     ∂F2/∂ZL = −(f/W)[m23 − (V/W)m33]
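These analytic partials are easy to validate against a central finite difference. A sketch for ∂F1/∂XL; the geometry values are arbitrary, and the Mg helper simply repeats the rotation composition used earlier:

```python
import math

def Mg(omega, phi, kappa):
    """M_G = M_kappa * M_phi * M_omega (angles in radians)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi),   math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    mul = lambda A, B: [[sum(A[i][t]*B[t][j] for t in range(3))
                         for j in range(3)] for i in range(3)]
    return mul([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]],
               mul([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]],
                   [[1, 0, 0], [0, co, so], [0, -so, co]]))

f, M = 150.0, Mg(0.02, -0.01, 0.05)
XL, YL, ZL, X, Y, Z = 10.0, 20.0, 1500.0, 400.0, 300.0, 50.0

def UVW(XL):
    dX, dY, dZ = X - XL, Y - YL, Z - ZL
    return tuple(M[i][0]*dX + M[i][1]*dY + M[i][2]*dZ for i in range(3))

U, V, W = UVW(XL)
analytic = -(f/W)*(M[0][0] - (U/W)*M[2][0])     # dF1/dXL from the text
h = 1e-6                                        # central finite difference
numeric = (f*UVW(XL+h)[0]/UVW(XL+h)[2]
           - f*UVW(XL-h)[0]/UVW(XL-h)[2]) / (2*h)
```

Since F1 depends on XL only through f·U/W, differencing that quotient directly checks the bracketed expression above.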
Recall that the rotation matrix is given in the sequential form as
MG = Mκ·Mϕ·Mω

then the partial derivatives of the orientation matrix with respect to the angles can be shown to be

∂MG/∂ω = Mκ·Mϕ·(∂Mω/∂ω) = MG | 0   0   0 |
                              | 0   0   1 |
                              | 0  −1   0 |

∂MG/∂ϕ = Mκ·(∂Mϕ/∂ϕ)·Mω = MG |    0      sin ω   −cos ω |
                              | −sin ω      0        0   |
                              |  cos ω      0        0   |

∂MG/∂κ = (∂Mκ/∂κ)·Mϕ·Mω = |  0   1   0 |
                           | −1   0   0 | MG
                           |  0   0   0 |
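The skew-symmetric form of ∂MG/∂ω can likewise be checked numerically against (MG(ω+h) − MG(ω−h))/2h; the test angles below are arbitrary:

```python
import math

def Mg(omega, phi, kappa):
    """M_G = M_kappa * M_phi * M_omega (angles in radians)."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi),   math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    mul = lambda A, B: [[sum(A[i][t]*B[t][j] for t in range(3))
                         for j in range(3)] for i in range(3)]
    return mul([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]],
               mul([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]],
                   [[1, 0, 0], [0, co, so], [0, -so, co]]))

mul = lambda A, B: [[sum(A[i][t]*B[t][j] for t in range(3))
                     for j in range(3)] for i in range(3)]
w, p, k, h = 0.03, -0.02, 0.6, 1e-6
# analytic: M_G times the constant skew-symmetric matrix from the text
analytic = mul(Mg(w, p, k), [[0, 0, 0], [0, 0, 1], [0, -1, 0]])
Mp_, Mm_ = Mg(w + h, p, k), Mg(w - h, p, k)
numeric = [[(Mp_[i][j] - Mm_[i][j])/(2*h) for j in range(3)] for i in range(3)]
err = max(abs(analytic[i][j] - numeric[i][j])
          for i in range(3) for j in range(3))
```

The same pattern, with the other two constant matrices, verifies the ϕ and κ derivatives.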
Then the partial derivatives of the functions U, V, and W taken with respect to the orientation angles follow from

| U |        | Xi − XL |
| V | = MG   | Yi − YL |
| W |        | Zi − ZL |

yielding

| ∂U/∂ω |               | Xi − XL |
| ∂V/∂ω | = (∂MG/∂ω)    | Yi − YL |
| ∂W/∂ω |               | Zi − ZL |

| ∂U/∂ϕ |               | Xi − XL |
| ∂V/∂ϕ | = (∂MG/∂ϕ)    | Yi − YL |
| ∂W/∂ϕ |               | Zi − ZL |

| ∂U/∂κ |               | Xi − XL |
| ∂V/∂κ | = (∂MG/∂κ)    | Yi − YL |
| ∂W/∂κ |               | Zi − ZL |
Now one can evaluate the partial derivatives of F1 and F2 with respect to the orientation angles.
∂F1/∂ω = (f/W)[(∂U/∂ω) − (U/W)(∂W/∂ω)]
∂F1/∂ϕ = (f/W)[(∂U/∂ϕ) − (U/W)(∂W/∂ϕ)]
∂F1/∂κ = (f/W)[(∂U/∂κ) − (U/W)(∂W/∂κ)]

∂F2/∂ω = (f/W)[(∂V/∂ω) − (V/W)(∂W/∂ω)]
∂F2/∂ϕ = (f/W)[(∂V/∂ϕ) − (V/W)(∂W/∂ϕ)]
∂F2/∂κ = (f/W)[(∂V/∂κ) − (V/W)(∂W/∂κ)]
The partial derivatives of the functions F1 and F2 with respect to the survey points are shown to be:
∂F1/∂Xi = −∂F1/∂XL = (f/W)[m11 − (U/W)m31]     ∂F2/∂Xi = −∂F2/∂XL = (f/W)[m21 − (V/W)m31]
∂F1/∂Yi = −∂F1/∂YL = (f/W)[m12 − (U/W)m32]     ∂F2/∂Yi = −∂F2/∂YL = (f/W)[m22 − (V/W)m32]
∂F1/∂Zi = −∂F1/∂ZL = (f/W)[m13 − (U/W)m33]     ∂F2/∂Zi = −∂F2/∂ZL = (f/W)[m23 − (V/W)m33]
Numerical Resection and Orientation Page 51
NUMERICAL RESECTION AND ORIENTATION
Introduction
Numerical resection and orientation involves the determination of the coordinates of the exposure station and the orientation of the photograph in space. Merchant [1973] has identified four different cases, in order of increasing complexity.
Case I: Compute the elements of exterior orientation (κ, ϕ, ω, XL, YL, and ZL) by observing the photo coordinates (xi, yi) and treating the survey control coordinates (Xi, Yi, and Zi) as known.
Case II: This is an extension of Case I with the addition that the elements of exterior orientation are also observed quantities. This can easily be visualized by the use of the global positioning system (GPS) on board the aircraft.

Case III: This approach is an extension of Case II. Here the observations include photo coordinates, exterior orientation, and survey coordinates (to unknown points). The survey control (coordinates of known points) is given. The solution is to find the adjusted exterior orientation parameters and the survey coordinates.

Case IV: Case IV is a further refinement of Case III except that the elements of interior orientation are observed in addition to the photo coordinates, exterior orientation, and survey coordinates. The adjustment will result in adjusted exterior and interior orientation and survey coordinates.
The general notation for the mathematical model is given as

F = F(obs, X, Y) = 0

where: obs = the observed quantities, and X, Y = the parameters for the condition function. A Taylor's series evaluation is done to linearize the equation and this is shown as

F = F00 + (∂F/∂X)00·∆ + (∂F/∂obs)0·V = 0     (1)
The subscript "0" indicates an observed parameter value whereas "00" means the current estimate of the value. This series is evaluated by comparing the observations to the current estimates of what those values need to be. Evaluation of this function results in the observation equation
F00 + B∆ + AV = 0

or in a more general form:

AV + B∆ + f = 0

where:
V = the residuals on the observations,
∆ = the alteration vector to the parameters,
f = the discrepancy vector found by comparing the mathematical model using the current estimate of the parameters with the observed values.
Case I

Case I is the simplest form of the space resection problem. The observed values are the photo coordinates (xi, yi). The elements of interior orientation along with the survey coordinate control are taken as error free. The observational variance-covariance matrix (Σoo) is estimated, and the exterior orientation elements (κ, ϕ, ω, XL, YL, and ZL) and the variance-covariance matrix on the adjusted parameters (ΣX̂) are computed. The math model employs the central projective equations as the conditional function. It is shown in general form as:

F = | F(x) |
    | F(y) |

where the central projective equations are, for x and y:

F(x) = x − xo − c(∆X/∆Z) = 0
F(y) = y − yo − c(∆Y/∆Z) = 0

The observation equations are written as

AV + B∆e + fe = 0
where:

A = (∂F/∂obs)j = | ∂F(x)/∂xj   ∂F(x)/∂yj | = | 1  0 | = I
                 | ∂F(y)/∂xj   ∂F(y)/∂yj |   | 0  1 |

B = (∂F/∂Parameters)j = | ∂F(x)/∂(κ, ϕ, ω, XL, YL, ZL) |
                        | ∂F(y)/∂(κ, ϕ, ω, XL, YL, ZL) |

Vj = | vx |      fj = | F(x) |   evaluated at the observations and current estimates
     | vy |j          | F(y) |j

∆e = [δκ  δϕ  δω  δXL  δYL  δZL]ᵀ

Thus, the general form for n points is

V + B∆e + fe = 0     (2)

If the number of photo points is larger than three, a least squares adjustment is performed. The function to be minimized is expressed as

F = VᵀWV − 2λᵀ(V + B∆e + fe)

where: λ = the Lagrangian multiplier (vector of correlates), and W = the weight matrix for the photo observations, which is defined as

W = Σoo⁻¹

The weight matrix is usually assumed to be a diagonal matrix derived from the a priori estimates of the observational variance-covariance matrix. This is usually sufficient for a two-axis comparator, but the correlation cannot be neglected for polar comparators. Differentiation of the function yields
∂F/∂V = 2VᵀW − 2λᵀ = 0     (3)
∂F/∂∆e = −2λᵀB = 0         (4)

There are (4n + m) unknowns: 2n in V, 2n in λ, and m in ∆e. Collecting the observation equation and the differentiated function gives

| W   −I   0  | | V  |   | 0  |
| I    0   B  | | λ  | + | fe | = 0     (5)
| 0   Bᵀ   0  | | ∆e |   | 0  |

Eliminating V and λ by substituting V from (3) into (2) yields

W⁻¹λ + B∆e + fe = 0

or

λ = −WB∆e − Wfe

Substituting λ into (4) results in the normal equations

(BᵀWB)∆e + BᵀWfe = 0

or

N∆e + t = 0

where: N = BᵀWB is the normal coefficient matrix and t = BᵀWfe is the constant vector. The solution becomes

∆e = −N⁻¹t

The adjusted parameters are found by adding the corrections to those parameters:

Xa = Xoo + ∆e
In the least squares adjustment, the process is iterated until the alteration vector reaches some predefined value. The process of updating the parameter values before undergoing another adjustment is commonly referred to as the "Newton-Raphson" method. The residuals are computed from

V + B∆e + fe = 0

Therefore

V = −F(o, a)

The unit variance is expressed as

σo² = VᵀWV/(2n − 6)

with the variance-covariance matrix relating the adjusted parameters

ΣX̂ = σo²·N⁻¹

Example Single Photo Resection – Case I

Following is an example of a single photo resection and orientation, Case I problem. The following data are entered into the program. Survey control is treated as error free; the photo observations are measured quantities already corrected for atmospheric refraction, lens distortion, and earth curvature. The exterior orientation is estimated. The weight matrix for the photo observations was based on a standard error of 10 µm.
SINGLE PHOTO RESECTION AND ORIENTATION – CASE I

Photo Number 1

Photo observations:

Point No.      x          y
 1          61.982     79.018
 2         -73.147     78.240
 3         -54.934     65.899
 4         -26.046    -29.449
 5         -34.893    -71.287
 6         -23.980    -31.889
 7         -11.783     88.922
 8         -85.047    105.836
 9         -26.468     -6.082
10         -12.523     79.026
11          27.972     85.027
12          12.094    -69.861
13         -80.458    -70.012

Survey Control Points:

Point No.       X              Y            Z
 1         44646.75000   111295.53700   273.86600
 2         45527.20300   109932.63000   275.53100
 3         45536.70500   110193.01300   275.10100
 4         46322.43000   111086.31900   254.99000
 5         46797.22300   111261.00100   263.21400
 6         46334.26800   111122.89000   254.85000
 7         45019.89000   110475.18200   262.84500
 8         45328.04500   109650.87600   291.36500
 9         46087.13500   110933.34300   255.65500
10         45126.21800   110531.17400   261.97300
11         44815.80000   110910.16300   288.32000
12         46489.27900   111729.17600   266.85200
13         47061.42300   110795.42700   268.63900

Exterior Orientation Elements (Estimated)

XL            YL            ZL         Kappa    Phi      Omega
45900.0000    111150.0000   2090.0000  2.1500   0.0000   0.0000
The design matrix (B), the discrepancy vector (f), and the normal coefficient matrix (N = BᵀWB) are evaluated at these estimated values of the exterior orientation.
The constant vector is computed as t = BᵀWf. The following data represent the values of the alteration vector for each iteration.

Iteration No. 1 – Alteration Vector (Delta):
-8.15331   -3.94869   -0.15855   -0.02176    0.01941    0.00958

Iteration No. 2 – Alteration Vector (Delta):
 0.61417    0.72201    0.70268   -0.00013    0.00012    0.00022
Iteration No. 3 – Alteration Vector (Delta):
 0.0015    -0.0015     0.00033    0.00000    0.00000    0.00000

Iteration No. 4 – Alteration Vector (Delta):
 0.00000    0.00000    0.00000    0.00000    0.00000    0.00000

Exterior Orientation Elements (Adjusted)

XL            YL            ZL         Kappa    Phi      Omega
45892.4624    111146.7719   2090.5445  2.1281   0.0195   0.0098

Residuals on Photo Observations:

Point No.     x        y
 1         -0.002   -0.009
 2          0.004    0.007
 3         -0.002    0.002
 4         -0.001   -0.002
 5          0.002   -0.004
 6         -0.000   -0.000
 7          0.006    0.011
 8          0.006    0.001
 9         -0.011   -0.000
10         -0.007    0.001
11          0.002    0.006
12         -0.001    0.007
13          0.004   -0.006
The A Posteriori Unit Variance is .3471294
The Variance-Covariance Matrix of Adjusted Parameters is:

 .0233948622   .0011026685  -.0020985439  -.0000016307   .0000104961  -.0000002099
 .0011026685   .0154028192  -.0034834200  -.0000000932   .0000001678  -.0000075937
-.0020985439  -.0034834200   .0025329779   .0000001566  -.0000009114   .0000018958
-.0000016307  -.0000000932   .0000001566   .0000000005  -.0000000007   .0000000000
 .0000104961   .0000001678  -.0000009114  -.0000000007   .0000000048   .0000000001
-.0000002099  -.0000075937   .0000018958   .0000000000   .0000000001   .0000000039
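The whole Case I procedure — predict photo coordinates from the collinearity equations, form the normal equations, solve for the alteration vector, iterate — can be sketched end-to-end on synthetic data. The block below uses a finite-difference Jacobian instead of the analytic partials to stay short, and the geometry and "true" orientation are invented purely for the test:

```python
import math

def Mg(w, p, k):
    sw, cw = math.sin(w), math.cos(w); sp, cp = math.sin(p), math.cos(p)
    sk, ck = math.sin(k), math.cos(k)
    mul = lambda A, B: [[sum(A[i][t]*B[t][j] for t in range(3))
                         for j in range(3)] for i in range(3)]
    return mul([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]],
               mul([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]],
                   [[1, 0, 0], [0, cw, sw], [0, -sw, cw]]))

f = 150.0
ground = [(100.0, 200.0, 10.0), (900.0, 150.0, 40.0), (850.0, 950.0, 25.0),
          (120.0, 880.0, 5.0), (500.0, 500.0, 60.0)]
true_p = [0.02, -0.015, 0.05, 500.0, 480.0, 1500.0]   # w, p, k, XL, YL, ZL

def model(prm):
    w, ph, k, XL, YL, ZL = prm
    M = Mg(w, ph, k)
    out = []
    for X, Y, Z in ground:
        d = (X - XL, Y - YL, Z - ZL)
        U, V, W = (sum(M[i][j]*d[j] for j in range(3)) for i in range(3))
        out += [-f*U/W, -f*V/W]
    return out

obs = model(true_p)                        # error-free synthetic observations
est = [0.0, 0.0, 0.0, 450.0, 520.0, 1400.0]   # rough initial estimate

for _ in range(10):   # Gauss-Newton with a finite-difference Jacobian
    r0 = [o - m for o, m in zip(obs, model(est))]
    J = []
    for j in range(6):
        h = 1e-6
        pp = est[:]; pp[j] += h
        J.append([(a - b)/h for a, b in zip(model(pp), model(est))])
    # normal equations N d = t (J stored column-wise)
    N = [[sum(J[i][r]*J[j][r] for r in range(len(r0))) for j in range(6)]
         for i in range(6)]
    t = [sum(J[i][r]*r0[r] for r in range(len(r0))) for i in range(6)]
    # solve the 6x6 system by Gaussian elimination with partial pivoting
    for c in range(6):
        piv = max(range(c, 6), key=lambda r: abs(N[r][c]))
        N[c], N[piv] = N[piv], N[c]; t[c], t[piv] = t[piv], t[c]
        for r in range(c + 1, 6):
            m_ = N[r][c]/N[c][c]
            for cc in range(c, 6):
                N[r][cc] -= m_*N[c][cc]
            t[r] -= m_*t[c]
    d = [0.0]*6
    for r in range(5, -1, -1):
        d[r] = (t[r] - sum(N[r][cc]*d[cc] for cc in range(r + 1, 6)))/N[r][r]
    est = [e + dd for e, dd in zip(est, d)]
```

With perfect observations the iteration recovers the true exterior orientation, which mirrors the rapidly shrinking alteration vectors seen in the printed example.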
Case II With Case II, we introduce direct observations on the parameters. A growing example of this situation is the use of airborne GPS where the receiver is used to determine the exposure station of the camera at the instant of exposure. Although this will only provide the exposure station coordinates, integrated systems such as GPS with inertial navigation can yield the rotational elements also. This new resection application adds a new math model to the adjustment. This is, for all of the exterior orientation elements:
Fκ = (κo − κa) = 0
Fϕ = (ϕo − ϕa) = 0
Fω = (ωo − ωa) = 0
FXL = (XLo − XLa) = 0
FYL = (YLo − YLa) = 0
FZL = (ZLo − ZLa) = 0

Since the observations have residuals, the adjusted parameters can only be estimated initially. Thus,

κo + vκ = κoo + δκ
ϕo + vϕ = ϕoo + δϕ
ωo + vω = ωoo + δω
XLo + vXL = XLoo + δXL
YLo + vYL = YLoo + δYL
ZLo + vZL = ZLoo + δZL
Rearranging, we have

vκ + (κo − κoo) − δκ = 0
vϕ + (ϕo − ϕoo) − δϕ = 0
vω + (ωo − ωoo) − δω = 0
vXL + (XLo − XLoo) − δXL = 0
vYL + (YLo − YLoo) − δYL = 0
vZL + (ZLo − ZLoo) − δZL = 0

The observation equations are

Ve − ∆e + fe = 0

Grouped with the observation equations developed for the photo coordinates, we have

V + B∆e + f = 0
Ve − ∆e + fe = 0

or

V̄ + B̄∆e + f̄ = 0

where:

V̄ = | V  |     B̄ = |  B |     f̄ = | f  |
    | Ve |          | −I |         | fe |

The function to be minimized is

F = V̄ᵀW̄V̄ − 2λᵀ(V̄ + B̄∆e + f̄)

where the weight matrix consists of

W̄ = | W   0  |
    | 0   We |
The normal equations are then written as

(B̄ᵀW̄B̄)∆e + B̄ᵀW̄f̄ = 0

which in an expanded form looks like

[Bᵀ  −I] · | W   0  | · |  B | · ∆e  +  [Bᵀ  −I] · | W   0  | · | f  |  =  0
           | 0   We |   | −I |                     | 0   We |   | fe |

resulting in, after performing the multiplication,

(BᵀWB + We)∆e + BᵀWf − Wefe = 0

or generally shown as

N∆e + t = 0

On the first cycle in the adjustment the estimates of the parameters are the same as the observed values,

Xaoo = Xao

Therefore, the discrepancy vector becomes

fe = F(oo, a) = 0

Looking at the normal equations, one can see that as the weight matrix for the observed exterior orientation goes to zero, the normal equations reduce to Case I. As before, the solution is expressed as

∆e = −N⁻¹t

The adjusted parameters are computed by adding the alteration vector to the current estimate of the parameters.
Numerical Resection and Orientation Page 63
The adjustment is iterated by making these adjusted parameters the current estimates. The cycling continues until the solution reaches some acceptable level. The residuals are then found by evaluating the function using the observed and final adjusted values.
The unit variance is computed as
Finally, the a posteriori variance-covariance matrix is found by multiplying the unit variance by N-1.
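To make the Case II machinery concrete, the following Python sketch solves a small invented linearized system; the matrices B, f, f_e and the weights are hypothetical numbers chosen only for illustration, not values from these notes:

```python
import numpy as np

# Toy linearized system: V + B*delta + f = 0 (photo-coordinate observations)
# plus direct observations on the parameters: V_e - I*delta + f_e = 0.
B = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, -0.4]])            # hypothetical design matrix (3 obs, 2 params)
f = np.array([0.02, -0.01, 0.03])      # discrepancy vector, photo observations
f_e = np.array([0.005, -0.002])        # discrepancies of the observed parameters
W = np.eye(3)                          # photo-observation weights
W_e = 4.0 * np.eye(2)                  # parameter-observation weights

# Normal equations: (B^T W B + W_e) delta + (B^T W f - W_e f_e) = 0
N = B.T @ W @ B + W_e
t = B.T @ W @ f - W_e @ f_e
delta = -np.linalg.solve(N, t)

# As W_e -> 0 the system reduces to the Case I normal equations:
delta_case1 = -np.linalg.solve(B.T @ W @ B, B.T @ W @ f)
```

Note how the parameter weights W_e simply add to the normal matrix and W_e f_e to the constant vector, which is exactly the reduction to Case I shown above when W_e goes to zero.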
Case III

Case III is an extension of Case II in that we now introduce the spatial (ground) coordinates as observed quantities, thereby constraining the parameters. The math models are:

• For collinearity:

$$\begin{aligned}
F_x &= x - x_o - c\,\frac{\Delta X}{\Delta Z} = 0 \\
F_y &= y - y_o - c\,\frac{\Delta Y}{\Delta Z} = 0
\end{aligned}$$

• For the exterior orientation:

$$\begin{aligned}
F_\omega &= \omega^o - \omega^a = 0 \\
F_\varphi &= \varphi^o - \varphi^a = 0 \\
F_\kappa &= \kappa^o - \kappa^a = 0 \\
F_{X_L} &= X_L^o - X_L^a = 0 \\
F_{Y_L} &= Y_L^o - Y_L^a = 0 \\
F_{Z_L} &= Z_L^o - Z_L^a = 0
\end{aligned}$$
• For the ground control:

$$\begin{aligned}
F_{X_j} &= X_j^o - X_j^a = 0 \\
F_{Y_j} &= Y_j^o - Y_j^a = 0 \\
F_{Z_j} &= Z_j^o - Z_j^a = 0
\end{aligned}$$

The observation equations then become

$$\begin{aligned}
V + B_e\Delta_e + B_s\Delta_s + f &= 0 \\
V_e - I\Delta_e + f_e &= 0 \\
V_s - I\Delta_s + f_s &= 0
\end{aligned}$$

where the observational residuals on the exterior orientation (V_e), the survey coordinates (V_s), and the photo coordinates (V) are defined as:

$$V_e = \begin{bmatrix} v_\omega & v_\varphi & v_\kappa & v_{X_L} & v_{Y_L} & v_{Z_L} \end{bmatrix}^T$$

$$V_s = \begin{bmatrix} v_{X_1} & v_{Y_1} & v_{Z_1} & \cdots & v_{X_n} & v_{Y_n} & v_{Z_n} \end{bmatrix}^T$$

$$V = \begin{bmatrix} v_{x_1} & v_{y_1} & \cdots & v_{x_n} & v_{y_n} \end{bmatrix}^T$$

The discrepancy vectors f, f_e, and f_s are computed by evaluating the functions using the current estimates of the unknown parameters and the original observations:

$$f = \begin{bmatrix} F_{x_1} & F_{y_1} & \cdots & F_{x_n} & F_{y_n} \end{bmatrix}^T$$

$$f_e = \begin{bmatrix} F_\omega & F_\varphi & F_\kappa & F_{X_L} & F_{Y_L} & F_{Z_L} \end{bmatrix}^T$$

$$f_s = \begin{bmatrix} F_{X_1} & F_{Y_1} & F_{Z_1} & \cdots & F_{X_n} & F_{Y_n} & F_{Z_n} \end{bmatrix}^T$$

The alterations to the current assumed values are shown as:

$$\Delta_e = \begin{bmatrix} \delta\omega & \delta\varphi & \delta\kappa & \delta X_L & \delta Y_L & \delta Z_L \end{bmatrix}^T, \qquad
\Delta_s = \begin{bmatrix} \delta X_1 & \delta Y_1 & \delta Z_1 & \cdots & \delta X_n & \delta Y_n & \delta Z_n \end{bmatrix}^T$$

The design matrices, B_e and B_s, contain the partial derivatives of the collinearity equations with respect to the exterior orientation elements and the ground coordinates, respectively:

$$B_e = \left[\frac{\partial F}{\partial\left(\omega, \varphi, \kappa, X_L, Y_L, Z_L\right)}\right]_{2n \times 6}, \qquad
B_s = \left[\frac{\partial F}{\partial\left(X_1, Y_1, Z_1, \ldots, X_n, Y_n, Z_n\right)}\right]_{2n \times 3n}$$
Collecting the observations,

$$\begin{bmatrix} V \\ V_e \\ V_s \end{bmatrix} + \begin{bmatrix} B_e & B_s \\ -I & 0 \\ 0 & -I \end{bmatrix}\begin{bmatrix} \Delta_e \\ \Delta_s \end{bmatrix} + \begin{bmatrix} f \\ f_e \\ f_s \end{bmatrix} = 0$$

or

$$\bar{V} + \bar{B}\bar{\Delta} + \bar{f} = 0$$

The function to be minimized is

$$F = \bar{V}^T\bar{W}\bar{V} - 2\lambda^T\left(\bar{V} + \bar{B}\bar{\Delta} + \bar{f}\right)$$

This leads to the normal equations

$$\left(\bar{B}^T\bar{W}\bar{B}\right)\bar{\Delta} + \bar{B}^T\bar{W}\bar{f} = 0$$

where the weight matrix is assumed to be free of any correlation and takes the form:

$$\bar{W} = \begin{bmatrix} W & 0 & 0 \\ 0 & W_e & 0 \\ 0 & 0 & W_s \end{bmatrix}$$

The normal equations in expanded form are

$$\begin{bmatrix} B_e^TWB_e + W_e & B_e^TWB_s \\ B_s^TWB_e & B_s^TWB_s + W_s \end{bmatrix}\begin{bmatrix} \Delta_e \\ \Delta_s \end{bmatrix} + \begin{bmatrix} B_e^TWf - W_ef_e \\ B_s^TWf - W_sf_s \end{bmatrix} = 0$$

or as

$$N\bar{\Delta} + t = 0$$

The solution is

$$\bar{\Delta} = -N^{-1}t$$

Then the process is cycled until an acceptable solution is obtained.
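As a sketch of how the Case III block normal matrix is assembled, the following Python fragment builds the expanded normal equations for a toy system; the partial derivatives are randomly generated placeholders (only the 2n × 6 and 2n × 3n block sizes follow the structure above, the numbers themselves are hypothetical):

```python
import numpy as np

# Toy Case III system: n image points -> 2n photo observations,
# 6 exterior orientation parameters, and 3n ground coordinates.
n = 4
rng = np.random.default_rng(0)
B_e = rng.normal(size=(2 * n, 6))        # hypothetical partials w.r.t. EO elements
B_s = rng.normal(size=(2 * n, 3 * n))    # hypothetical partials w.r.t. ground coords
f = rng.normal(scale=0.01, size=2 * n)   # photo-coordinate discrepancies
f_e = np.zeros(6)                        # first cycle: estimates equal observations
f_s = np.zeros(3 * n)
W = np.eye(2 * n)                        # photo-observation weights
W_e = 10.0 * np.eye(6)                   # weights of the observed EO parameters
W_s = 100.0 * np.eye(3 * n)              # weights of the observed ground control

# Expanded normal equations, assembled block by block
N = np.block([[B_e.T @ W @ B_e + W_e, B_e.T @ W @ B_s],
              [B_s.T @ W @ B_e,       B_s.T @ W @ B_s + W_s]])
t = np.concatenate([B_e.T @ W @ f - W_e @ f_e,
                    B_s.T @ W @ f - W_s @ f_s])
delta = -np.linalg.solve(N, t)           # corrections [delta_e; delta_s]
```

The parameter weights W_e and W_s appear only on the diagonal blocks, so tightening or loosening either set of observations constrains the corresponding unknowns without changing the photo-observation contribution.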
PRINCIPLES OF AIRBORNE GPS
INTRODUCTION
The utilization of the global positioning system (GPS) in photogrammetric mapping began almost from the inception of this technology. Initially, GPS offered a major improvement in the control needed for mapping. It provided coordinate values that were of higher quality and more reliable than those using conventional field surveying techniques. At the same time the cost and labor required for that control were lower than conventional surveying. Experiences from using GPS-control showed several improvements [Salsig and Grissim, 1995]:
a) There was a better fit between the control and the aerotriangulation results, particularly for large-area projects.

b) Surveyors were not concerned with issues like intervisibility between control points; therefore, the photogrammetrist often received control points in locations advantageous to them instead of locations dictated by the execution of a conventional field survey.

c) Visibility of the ground control point to the aerial camera is always important. Fortunately, points that are "visible" to the GPS receivers are also free of the major obstructions that would prevent their images from appearing in the photography. This led to a better recovery rate for the control.
Unfortunately, the window during which GPS observations could be made was not always at the most desirable time of day. This changed as the satellite constellation approached its current operational status, and with these widening windows came the idea of placing a GPS receiver within the mapping aircraft. Airborne GPS is now a practical and operational technology that can be used to enhance the efficiency of photogrammetry, although Abdullah et al [2000] report that only about 30% of photogrammetry companies are using the technology at this time. This does, however, account for about 40% of the projects undertaken by photogrammetric firms; these figures are based on anecdotal information. Airborne GPS can be used for:

• precise navigation during the photo flight
• centered or pin-point photography
• determination of the coordinates of the nodal point for aerial triangulation
To achieve the first two applications the user requires real-time differential GPS positioning [Habib and Novak, 1994]. Because the accuracy of position for navigation and centered photography ranges from one to five meters, C/A-code or P-code pseudorange is all that is required. The important capability is the real-time processing. For aerotriangulation, a higher accuracy is needed which means observing pseudorange and phase. Here, real-time processing is not as important in terms of functionality.
Airborne GPS is used to measure the location of the camera at the instant of exposure, giving the photogrammetrist X_L, Y_L, and Z_L. GPS can also be used to derive the orientation angles by using multiple antennas. Unfortunately, the derived angular relationships only have a precision of about 1' of arc, while photogrammetrists need these values to better than 10" of arc. To compute the position of the camera during the project, two dual-frequency geodetic GPS receivers are commonly employed: one is placed over a point whose location is known and the other is mounted on the aircraft. Carrier phase data are collected by both receivers during the flight, with sampling rates generally at either 0.5 or 1 second. The integer ambiguity must be taken into account; this will be discussed later. Generally, on-the-fly integer ambiguity resolution techniques are employed.
ADVANTAGES OF AIRBORNE GPS
The main limitation of photogrammetry is the need to obtain ground control to fix the exterior orientation elements. The necessity of obtaining ground control is costly and time-consuming, and there are many instances where gathering control is not feasible. Corbett and Short [1995] identify situations where this exists:

a) Time. Because phenomena change with time, it is possible that the subject of the mapping has either changed or disappeared by the time the control has been collected. Another limitation occurs when the results of the mapping must be completed in a very short time period.

b) Location. The physical location of the survey site may restrict access because of geography, or the logistics of completing a field survey may make the survey prohibitive.

c) Safety. The phenomena of interest may be hazardous, or the subject may be located in an area that is dangerous for field surveys.

d) Cost. Tied to the other problems is that of cost. The necessity of obtaining control under the conditions outlined above may make the cost of the project prohibitive, because control surveys are a labor-intensive activity. Even under normal conditions the charge for procuring control is high and, if too much is needed, could negate the economic advantages that photogrammetry offers.

GPS gives the photogrammetrist the opportunity to minimize (or even eliminate) the amount of ground control and still maintain the accuracy needed for a mapping project. Lapine [nd] points out that almost all National Oceanic and Atmospheric Administration (NOAA) aerial mapping projects utilize airborne GPS because of the efficiencies gained from a reduction in the amount of ground control required for their mapping.
While airborne GPS can be used to circumvent the necessity of ground control, it offers the photogrammetrist additional advantages. These include [Abdullah et al, 2000; Lucas, 1994]:
• It has a stabilizing effect on the geometry.
• The attainable accuracy meets most mapping standards.
• Substantial cost reductions for medium- and large-scale projects are possible.
• There is an increase in productivity from decreasing the amount of ground control necessary for a project.
• It reduces the hazards due to traffic, particularly for highway corridor mapping.
• Precise flight navigation and pin-point photography are possible with this technology.

It is now possible, at least theoretically, to use GPS aerotriangulation without any ground control. This, however, requires a near perfect system [Lucas, 1996], an unlikely scenario. Moreover, it would be extremely prudent to have control, if for no other reason than to check the results. While airborne GPS is operational and being used for more mapping projects, there are some concerns that must be addressed for a successful project. These include [Abdullah et al, 2000]:

• Risk is greater if the project is not properly planned and executed.
• There is less ground control.
• As the amount of ground control gets smaller, datum transformation problems become more important.
• There is some initial financial investment by the mapping organization.
• It requires non-traditional technical support.
ERROR SOURCES

The use of GPS in photogrammetry combines the error sources of two technologies and introduces additional errors inherent in their integration. For precise work, these errors need to be accounted for. Photogrammetric errors include the following:

a) Errors associated with the placement of targets. The Texas Department of Transportation has determined that an error of 1 cm can be expected in centering the target over the point [Bains, 1995]. This is based on a 10 cm wide cross target. The main problem is that the center of the target is not precisely defined.

b) Errors inherent in the pug device used to mark control on the diapositives. If the pug is not properly adjusted, then the point transfer may locate pass- and tie-points erroneously. Regardless, the process of marking control introduces another source of error into the photogrammetric process.
c) Camera calibration is crucial in determining the distortion parameters of the aerial camera used in photogrammetry. Bains [1995] has found that the current USGS calibration certificate does not provide the information needed for GPS-assisted photogrammetry. Merchant [1992] states that a system calibration is more important with airborne GPS.

d) The camera shutter can exhibit large random variability in the time the shutter is open. Most of the time this error source is not important, but if the irregularity is too great, contrast within the image could be lost. The major problem with this non-uniformity arises when trying to synchronize the time of exposure to the epoch at which the GPS receiver is collecting data.
Error sources for GPS are well identified. A loss or disruption of the GPS signal could cause problems in resolving the integer ambiguities and could result in erroneous positioning of the camera location thereby invalidating the project. The GPS error sources include:
a) Software problems can jeopardize a GPS mission, particularly in the kinematic mode. Some software cannot resolve cycle slips in a robust fashion, although newer on-the-fly ambiguity resolution software will help. There is also a limitation on the accuracy of different receivers used in kinematic surveys. Geodetic-quality receivers, with 1-2 cm relative accuracy, should be employed for projects where high precision is required.

b) Datum problems. The GPS position is determined in the WGS 84 system, whereas the survey coordinates are in some local coordinate system or in NAD 27 coordinates, for which there is no exact mathematical relationship between the systems.

c) Signal interruption. This is critical if continuous tracking is necessary in order to process the GPS signal. Interruption may occur during sharp banking turns in the flight.

d) Geometry of the satellite constellation.

e) Receiver clock drift. Although this error is relatively small, the drift should be accounted for in the processing of GPS observations.

f) Multipath. This is particularly problematic on surfaces such as the fuselage or the wings. This error is due to reception of a reflected signal, which represents a delay in the reception time.
Errors that can be found in the integration of GPS with the aerial camera and photogrammetry are [Bains, 1995; Merchant, 1992; Lapine, nd]:
a) The configuration of airborne GPS implies that the two data collectors are not physically in the same location. The GPS antenna must be located outside and on top of the aircraft to receive the satellite signals, while the aerial camera is situated inside, on the bottom of the craft. The separation between the antenna and the camera (the nodal point) needs to be accurately determined. This distance is found through a calibration process prior to the flight. This value can also be introduced in the adjustment by constraining the solution or by treating it in the stochastic process.

b) Prior to beginning a GPS photogrammetric mission, the height between the ground control point and the antenna needs to be measured. Experience has shown that there can be variability in this height based on the quantity of fuel in the aircraft. This problem occurs only when the airborne-GPS system is based on an initialization process when solving for the integer ambiguities.

c) The camera shutter can cause problems, as was identified above. The effect of this error creates a time bias. Of concern is the ability to trip the shutter on demand. In the worst case, Merchant [1992] points out that the delay from making the demand for an exposure to the midpoint of the actual exposure could be several seconds. For large-scale photography this could cause serious problems because of the turbulent air in the lower atmosphere and the interpolation from the GPS signal to the effective exposure time. Early experiments with the Wild RC10 with an external pulse generator showed wide variability in the time between maximum aperture and shutter release [van der Vegt, 1989]. The values ranged from 10-100 msec. For an aircraft traveling at 100 m/sec, positional errors of 1-10 m could therefore be expected.

d) The interpolation algorithm used to compute the position of the phase center of the antenna. Since the instant of exposure does not coincide with the sampling time of the GPS receiver, the position of the antenna at the instant of exposure must be interpolated. Different algorithms have varying characteristics, which could introduce error into the position. Related to this uncertainty is the sampling rate used to capture the GPS signal. Too high a rate will increase the processing, whereas too low a rate will degrade the accuracy of the interpolation model.

e) Radio frequency interference can cause problems, particularly onboard the airplane. A receiver that can filter out this noise should be used. One example is the Trimble 4000 SSI with Super-Trak signal processing, which has been used successfully in airborne GPS [Salsig and Grissim, 1995].
Camera Calibration
One of the weak links in airborne GPS involves the camera calibration. As was pointed out earlier, the traditional camera calibration may not provide the information needed when GPS is used to locate the exposure station. What should be considered is a system calibration whereby the whole process is calibrated and exercised under normal operating conditions [Lapine, 1991; Merchant, 1992]. Because of the complex nature of combining different measurement systems within airborne GPS, two important drawbacks are identified with the traditional component approach to camera calibration [Lapine, 1991]:
1. The environment is different. In the laboratory, calibration can be performed under ideal and controlled conditions, a situation that is not possible in practice. This leads to different atmospheric conditions and variations in the noise found in the photo measurements.

2. The effect of correlation between the different components of the total system is not considered.

Traditionally, survey control on the ground had the effect of compensating for residual systematic errors in the photogrammetric process [Lapine, 1991; Merchant, 1992]. This is due to the projective transformation in which ground control is transformed into the photo coordinate system. The exposure station coordinates are free parameters that are allowed to "float" during the adjustment, thereby enforcing the collinearity condition. With GPS-observed exposure coordinates, the space position of the nodal point of the camera is fixed and the ground coordinates become extrapolated variables. Because of this, calibration of the photogrammetric system under operating conditions becomes critical if high accuracy is to be maintained.
GPS Signal Measurements

There are several different methods of measuring with GPS: static, fast static, and kinematic. Static surveying requires leaving the antennas over the points for an hour or more; it is the most accurate method of obtaining GPS surveying data. Fast static is a newer approach that yields high accuracies while increasing productivity, since the roving antenna need only be left over a point for 10-15 minutes. The high accuracies are possible because the receiver revisits each point after an elapsed time of about an hour. Of course, neither of these approaches is possible in airborne GPS. Kinematic GPS measures the position of a point at the instant of the measurement; at the next epoch, the GPS antenna has moved, and continues to move. Because of this measurement process, baseline accuracies determined from kinematic GPS will be about 1 cm ± 2 ppm of the baseline distance from the base station to the receiver [Curry and Schuckman, 1993].
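The quoted kinematic accuracy figure translates directly into a planning rule of thumb. A tiny sketch (the 30 km base-to-rover separation is a hypothetical example):

```python
# Kinematic baseline accuracy rule quoted above:
# 1 cm plus 2 ppm of the base-to-rover distance.
def kinematic_accuracy_m(baseline_m):
    return 0.01 + 2e-6 * baseline_m

acc = kinematic_accuracy_m(30_000)   # 30 km separation -> about 0.07 m
```

This is one reason the base-station location matters in flight planning: the ppm term grows with the distance flown from the base receiver.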
Flight Planning for Airborne GPS

When planning an airborne GPS project, special consideration must be given to the addition of the GPS receivers that will be used to record the location of the camera. The first issue is the form of initialization of the receiver used to fix the integer ambiguities. Next, when planning the flight lines, the potential loss of lock on the satellites has to be accounted for. Depending on the location of the airborne receiver, wide banking turns by the pilot may result in a loss of the GPS signal. Banking angles of 25° or less are recommended, which results in longer flight lines [Abdullah et al, 2000]. The location of the base receiver must also be considered during planning. Will it be at the airport or near the job site? The longer the distance between the base receiver and the rover on the plane, the more uncertain the positioning results will be. It is assumed that the
relative positioning of the rover will be based upon similar atmospheric conditions; the longer the distance, the less valid this assumption is. Deploying at the site requires additional manpower, and assurance that the person occupying the base station is collecting data whenever the rover is collecting data.

When planning, try to find those times when the satellite coverage consists of 6 or more satellites with minimum change in coverage [Abdullah et al, 2000]. Also plan for a PDOP of less than 3 to ensure optimal geometry. Additionally, one might have to arrive at a compromise between favorable sun angle and favorable satellite availability. Make sure that the GPS receiver has enough memory to store the satellite data; this is particularly true when a static initialization is performed and satellite data are collected from the airport. There may also be some consideration of the amount of sidelap and overlap when the camera is locked down during the flight; this will be important when a combined GPS-INS system is used. Finally, a flight management system should be used to precalculate the exposure station locations during the flight.

The limitations attributed to the loss of lock on the satellites place additional demands on proper planning. These problems can be alleviated to some degree if additional drift parameters are used in the photogrammetric block adjustment.
Antenna Placement

To achieve acceptable results using airborne GPS, it is essential that the offset between the GPS antenna and the perspective center of the camera be accurately known in the image coordinate system (figure 18). The offset is measured by leveling the aircraft using jacks above the wheels; then, either conventional surveying or close-range photogrammetry can be used to determine the actual offset.
Figure 18. GPS offset.

For simplicity, the camera can be locked in place during the flight. This helps maintain the geometric relationship of the offset vector, but the effect is that tilt and crab in the aircraft could result in a loss of coverage on the ground unless more sidelap is accounted for in the planning. If the camera is to be leveled during the flight, then the amount of movement should be measured in order to achieve higher accuracy.

The location of the antenna on the aircraft should be carefully considered. Although any point on the top side of the plane could be considered a candidate site, two locations merit further study because of their advantages over other sites: on the fuselage directly above the camera, and at the tip of the vertical stabilizer.

The location on the fuselage over the camera has the advantage of aligning the phase center along the optical axis of the camera, making the measurement of the offset as well as the mathematical modeling easier [Curry and Schuckman, 1993]. Moreover, the crab angle is hardly affected and the tilt corrections are negligible for large image scales [Abdullah et al, 2000]. The disadvantages are as follows. First, the fuselage location increases the probability of multipath. Second, this location, coupled with the wing placement, may lead to a loss of signal because of shadowing. Antenna shadowing is the blockage of the GPS signal, which could occur during sharp banking turns. Finally, mounting on the fuselage may require special modification of the aircraft by certified airplane mechanics.

Placing the antenna on the vertical stabilizer will require more work in determining the offset vector between the antenna and the camera [Curry and Schuckman, 1993]. But once determined, it should not have to be remeasured unless some change would suggest a remeasurement be undertaken. The advantages are that both multipath and shadowing are less likely to occur. Moreover, the actual installation might be far simpler, since many aircraft already have a strobe light on the stabilizer that could easily be adapted to accommodate an antenna.
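The offset reduction itself can be sketched as follows, assuming the offset vector has been measured in the camera (image) coordinate system and using a sequential ω-φ-κ rotation matrix; all numeric values here are hypothetical illustrations, not calibration data:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M = R3(kappa) R2(phi) R1(omega)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R1 = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    R2 = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    R3 = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return R3 @ R2 @ R1

# Antenna phase-center position in the ground system (from GPS)
ant = np.array([5000.0, 7000.0, 2500.0])      # hypothetical coordinates
r_cam = np.array([0.05, 0.10, 1.85])          # hypothetical offset, camera frame
M = rotation_matrix(0.01, -0.02, 0.5)         # small tilts, some crab

# Perspective-center coordinates: rotate the measured offset into the
# ground frame and subtract it from the antenna position.
L = ant - M.T @ r_cam
```

If the camera is locked down, M is fixed by the mount calibration; if it is leveled in flight, the measured gimbal movement would have to feed into M at each exposure.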
Determining the Exposure Station Coordinates
The GPS receiver is preset to sample data at a certain rate, e.g., 1-second intervals. This sample time may not coincide with the actual exposure time; therefore, it is necessary to interpolate the position of the exposure station between GPS observations. An error in timing will result in an error in the coordinates of the exposure station. For example, if a plane is traveling at 200 km/hr (≈ 56 m/sec), then a one-millisecond timing difference will result in about 6 cm of coordinate error. With the rotary shutters used in aerial cameras, the time from when the shutter release signal is sent (see figure 19) to the mid-point of the exposure varies [Jacobsen, 1991]. Therefore, a sensor must be installed to record the time of exposure. Then, through a calibration process, the offset from the recorded time to the effective instant of exposure can be determined and taken into account. Without calibration, the photographer should not change the exposure settings during the flight, thereby maintaining a constant time offset that can be accounted for in the processing. This, though, can only be done approximately. Many of the cameras now in use for airborne GPS will send a signal to the receiver when the exposure is taken; the receiver then records the GPS time for this event marker within the data. Merchant [1993] points out that some cameras can determine the mid-exposure pulse time to 0.1 ms, whereas others use a TTL pulse that can be calibrated to accurately measure the mid-point of the exposure. Accuracies better than 1 msec have been reported for time intervals by using a light-sensitive device within the aerial camera [van der Vegt, 1989]. This device creates an electrical pulse when the shutter is at its maximum aperture. Prior to determining the exposure station coordinates, the location of the phase center of the antenna must be interpolated.
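The timing-error example above is just speed multiplied by clock error; as a quick check:

```python
# Positional error caused by a timing error: error = aircraft speed * dt.
speed = 200 / 3.6        # 200 km/hr expressed in m/sec (about 56 m/sec)
dt = 0.001               # one-millisecond timing error
err = speed * dt         # about 0.056 m, i.e. roughly 6 cm
```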
Since the receiver clock contains a small drift of about 1 µs/sec., Lapine [nd] suggests that the position of the antenna be time shifted so that the positions are equally spaced. Several different interpolation models can be employed to determine the trajectory of the aircraft. Some of them include the linear model, polynomial approach, spline function, and quadratic time-dependent polynomial. Some field results found very little difference between these methods [Forlani and Pinto, 1994]. This may have been because they used the GPS receiver PPS (pulse per second) signal to trip the shutter on the aerial camera. This meant that the effective instant of exposure was very close to the GPS time signal.
Figure 19. Shutter release diagram for rotary shutters [from Jacobsen, 1991].
One of the simplest interpolation models is the linear approach. The assumption is made that the change in trajectory from one epoch to the next is linear. Thus, one can write a simple ratio as:

$$\frac{i}{d_i} = \frac{\Delta(X,Y,Z)}{d(X,Y,Z)}$$

where: i = time interval between GPS epochs,
∆(X,Y,Z) = changes in GPS coordinates between the two epochs,
d_i = time difference between the exposure and the preceding epoch, and
d(X,Y,Z) = changes in GPS coordinates from the preceding epoch to the exposure time.

The advantage of this model is its simplicity. On the other hand, it assumes that the change in position is linear, which may not be true. Sudden changes in direction are very common at the lower altitudes where large-scale mapping missions are flown. For example, figure 20 shows a sudden change in the Z-direction during the flight. Assuming a linear change, the computed location of the receiver could be considerably different from the actual location at exposure. One alternative is to decrease the sample interval to, say, 0.5 seconds. This would reduce the effect of the error but increase the number of observations taken and the time to process those data.
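The linear model amounts to a simple ratio along each axis. A minimal sketch, with hypothetical epoch positions and a 1-second sampling interval:

```python
# Linear interpolation of the antenna position at exposure time,
# assuming positions at the two bracketing GPS epochs.
def interpolate_linear(p0, p1, t0, t1, t_exp):
    """Interpolate between epoch positions p0 (at t0) and p1 (at t1)."""
    ratio = (t_exp - t0) / (t1 - t0)            # d_i / i in the notes' notation
    return [a + ratio * (b - a) for a, b in zip(p0, p1)]

# Example: 1-second epochs, exposure 0.35 s after the first epoch
pos = interpolate_linear((1000.0, 2000.0, 1500.0),
                         (1056.0, 2003.0, 1501.0),
                         t0=0.0, t1=1.0, t_exp=0.35)
```

Any vertical jerk between the two epochs, as in figure 20, is invisible to this model, which is exactly the weakness discussed above.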
Figure 20. Effects of the linear interpolation model when the aircraft experiences sudden changes in its trajectory between GPS epochs.
Because of the non-linear nature of the aircraft motion, Jacobsen [1993] suggests that a least-squares polynomial fitting algorithm be used to determine the space position of the perspective center. By varying the degree of the polynomial and the number of neighbors included in the interpolation process, a more realistic trajectory should be obtained. The degree and the number of points will depend on the time interval between GPS epochs. An added advantage of this method is that if a cycle slip is experienced, it can estimate the exposure station coordinates better than a linear model can.

A second-order polynomial is used by Lapine [nd] to determine the position offset, velocity, and acceleration of the aircraft in all three axes. This is done by fitting a curve to a five-epoch period around the exposure time. The effect of this polynomial is to smooth the trajectory of the aircraft over the five epochs. The following model is used:

$$\begin{aligned}
X_1 &= a_X + b_X\left(t_1 - t_3\right) + c_X\left(t_1 - t_3\right)^2 \\
X_2 &= a_X + b_X\left(t_2 - t_3\right) + c_X\left(t_2 - t_3\right)^2 \\
X_3 &= a_X + b_X\left(t_3 - t_3\right) + c_X\left(t_3 - t_3\right)^2 \\
X_4 &= a_X + b_X\left(t_4 - t_3\right) + c_X\left(t_4 - t_3\right)^2 \\
X_5 &= a_X + b_X\left(t_5 - t_3\right) + c_X\left(t_5 - t_3\right)^2
\end{aligned}$$

Similar equations can be generated for Y and Z. Thus the three models look, in a general form, like:

$$X = a_X + b_Xt + c_Xt^2, \qquad Y = a_Y + b_Yt + c_Yt^2, \qquad Z = a_Z + b_Zt + c_Zt^2$$

where: t = t_i − t_3 and i = 1, 2, ..., 5,
a = distance from the origin,
b = velocity, and
c = one-half the acceleration.

From this the observation equations can be written as

$$v_X = a_X + b_Xt + c_Xt^2 - X, \qquad v_Y = a_Y + b_Yt + c_Yt^2 - Y, \qquad v_Z = a_Z + b_Zt + c_Zt^2 - Z$$

The design or coefficient matrix is found by differentiating the model with respect to the unknown parameters. All three models have the same coefficient matrix:

$$B = \begin{bmatrix}
1 & \left(t_1 - t_3\right) & \left(t_1 - t_3\right)^2 \\
1 & \left(t_2 - t_3\right) & \left(t_2 - t_3\right)^2 \\
1 & \left(t_3 - t_3\right) & \left(t_3 - t_3\right)^2 \\
1 & \left(t_4 - t_3\right) & \left(t_4 - t_3\right)^2 \\
1 & \left(t_5 - t_3\right) & \left(t_5 - t_3\right)^2
\end{bmatrix}$$

The observation vectors (f) are:

$$f_X = -\begin{bmatrix} X_1 \\ X_2 \\ X_3 \\ X_4 \\ X_5 \end{bmatrix}, \qquad
f_Y = -\begin{bmatrix} Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \\ Y_5 \end{bmatrix}, \qquad
f_Z = -\begin{bmatrix} Z_1 \\ Z_2 \\ Z_3 \\ Z_4 \\ Z_5 \end{bmatrix}$$

The observation equations can then be expressed in matrix form as

$$v_X = B\Delta_X + f_X, \qquad v_Y = B\Delta_Y + f_Y, \qquad v_Z = B\Delta_Z + f_Z$$

where ∆ represents the parameters (∆ = [a  b  c]ᵀ). The solution becomes

$$\Delta_X = -\left(B^TWB\right)^{-1}B^TWf_X, \qquad
\Delta_Y = -\left(B^TWB\right)^{-1}B^TWf_Y, \qquad
\Delta_Z = -\left(B^TWB\right)^{-1}B^TWf_Z$$
where W is the weight matrix. Assuming unit weights, the weight matrix becomes the identity matrix and

$$B^TWB = B^TB = \begin{bmatrix}
5 & \sum t_i & \sum t_i^2 \\
\sum t_i & \sum t_i^2 & \sum t_i^3 \\
\sum t_i^2 & \sum t_i^3 & \sum t_i^4
\end{bmatrix}$$

For the X observed values, as an example,

$$B^TWf_X = B^Tf_X = -\begin{bmatrix} \sum X_i \\ \sum X_it_i \\ \sum X_it_i^2 \end{bmatrix}$$

(with t_i here denoting t_i − t_3). The weighting scheme is important in the adjustment because an inappropriate choice of weights may bias or unduly influence the results. Lapine considered assigning equal weights, but this choice was rejected because the trajectory of the aircraft may be non-uniform. The final weighting scheme used a binomial expansion technique whereby times further from the central time epoch (t_3) were weighted less than those closest to the middle. Using a variance of 1.0 cm² for the central time epoch, the variance scheme looks like

$$\Sigma = \mathrm{diag}\left(4\ \mathrm{cm}^2,\ 2\ \mathrm{cm}^2,\ 1\ \mathrm{cm}^2,\ 2\ \mathrm{cm}^2,\ 4\ \mathrm{cm}^2\right)$$
where the off-diagonal values are all zero. A basic assumption made in Lapine's study was that the observations are independent; therefore there is no covariance. Once the coefficients are solved for, the position of the antenna phase center at the instant of exposure can be computed using the following expressions:

$$\begin{aligned}
X_{exp} &= a_X + b_X\left(t_{exp} - t_3\right) + c_X\left(t_{exp} - t_3\right)^2 \\
Y_{exp} &= a_Y + b_Y\left(t_{exp} - t_3\right) + c_Y\left(t_{exp} - t_3\right)^2 \\
Z_{exp} &= a_Z + b_Z\left(t_{exp} - t_3\right) + c_Z\left(t_{exp} - t_3\right)^2
\end{aligned}$$
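The five-epoch quadratic fit can be sketched in a few lines of Python; the antenna X coordinates below are hypothetical values at 1-second epochs, and the weights follow the doubling-variance scheme described above:

```python
import numpy as np

# Five epochs centered on t_3; t holds t_i - t_3.
t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
# Hypothetical antenna X coordinates at the five epochs (meters)
X = np.array([889.2, 944.4, 1000.0, 1056.0, 1112.4])

B = np.column_stack([np.ones_like(t), t, t**2])   # columns [1, t, t^2]
# Weights inversely proportional to the variances (4, 2, 1, 2, 4) cm^2:
W = np.diag([0.25, 0.5, 1.0, 0.5, 0.25])

# Weighted least-squares solution for [a, b, c]
coeffs = np.linalg.solve(B.T @ W @ B, B.T @ W @ X)
a, b, c = coeffs

# Antenna X position at the exposure time, e.g. 0.35 s past the central epoch
t_exp = 0.35
X_exp = a + b * t_exp + c * t_exp**2
```

With these data the fit recovers a position of about 1000 m, a velocity near 56 m/sec, and a small acceleration term; the same B and W serve the Y and Z fits as well.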
Determination of Integer Ambiguity The important error concern in airborne-GPS is the determination of the integer ambiguity. Unlike ground-based measurements, the whole photogrammetric mission could be lost if a cycle slip occurs and the receiver cannot resolve the ambiguity problem. There are two principal methods of solving for this integer ambiguity: static initialization over a know reference point or using a dual-frequency receiver with on-the-fly ambiguity resolution techniques [Habib and Novak, 1994]. Static initialization can be performed in two basic modes [Abdullah et al, 2000]. The first method of resolving the integer ambiguities is to place the aircraft over a point on a baseline with know coordinates. Only a few observations are required because the vector from the reference receiver to the aircraft is known. The accuracy of the baseline must be better than 6-7 cm. The second approach is a static determination of the vector over a know baseline or from the reference station to the antenna on the aircraft. The integer ambiguities are solved for in a conventional static solution. This method may require a longer time period to complete, varying from 5 minutes to one hour, due to the length of the vector, type of GPS receiver, post-processing software, satellite geometry, and ionospheric stability. When static initialization is performed it does require that the receiver on-board the aircraft maintain a constant lock on at least 4 and preferable 5 GPS satellites. Abdullah et al [2000] identify several weaknesses to static initialization:
• The methods add time to the project and are cumbersome to perform.
• GPS data collection begins at the airport during this initialization.
• Since the data are collected for so long, large amounts of data are collected and need to be processed – about 7 Mbytes per hour.
• The receiver is susceptible to cycle slips or loss of lock.
• It is possible that the initial solution of the integers was incorrect, thereby invalidating the entire photo mission.

The use of on-the-fly (OTF) integer ambiguity resolution makes the process much easier. The newer GPS receivers and post-processing software are much more robust and easy to use while the receiver is in flight. OTF requires P-code receivers where carrier phase data are
X_exp = a_X + b_X(t_exp - t_3) + c_X(t_exp - t_3)²
Y_exp = a_Y + b_Y(t_exp - t_3) + c_Y(t_exp - t_3)²
Z_exp = a_Z + b_Z(t_exp - t_3) + c_Z(t_exp - t_3)²
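Lapine's interpolation of the antenna phase center can be sketched as a per-axis second-order polynomial fit evaluated at the exposure time. The epoch spacing, reference epoch, and coordinate values below are illustrative assumptions, not data from the study:

```python
# Sketch of the phase-center interpolation: fit a second-order polynomial
# to the GPS antenna positions at the epochs bracketing the exposure,
# then evaluate it at the exposure time.
import numpy as np

def interpolate_position(epochs, coords, t_exp):
    """Fit X(t) = a + b*(t - t3) + c*(t - t3)^2 per axis; return position at t_exp."""
    t3 = epochs[len(epochs) // 2]            # reference epoch (middle epoch assumed)
    dt = np.asarray(epochs) - t3
    out = []
    for axis in np.asarray(coords).T:        # loop over the X, Y, Z columns
        c2, c1, c0 = np.polyfit(dt, axis, 2)  # highest power first
        d = t_exp - t3
        out.append(c0 + c1 * d + c2 * d ** 2)
    return np.array(out)

# Hypothetical 1 Hz epochs and a smoothly accelerating trajectory:
epochs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
coords = np.column_stack([100 + 60 * epochs + 0.5 * epochs ** 2,
                          200 + 55 * epochs,
                          1500 + 0.1 * epochs ** 2])
pos = interpolate_position(epochs, coords, t_exp=2.4)
```

Because the synthetic trajectory is itself quadratic, the fit reproduces it exactly; with real 1 Hz data the polynomial smooths receiver noise around the exposure epoch.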
collected using both the L1 and L2 frequencies. The solution requires about 10-15 minutes of measurements before entering the project area. Component integration can also create problems. For example, a test conducted by the National Land Survey, Sweden, experienced cycle slips when using the aircraft communication transmitter [Jonsson and Jivall, 1990]. Receiving information was not a problem, just transmissions. This test involved pre-flight initialization with the goal of re-observation over the reference station at the end of the mission. This was not possible.
GPS-Aided Navigation

One of the exciting applications of airborne GPS is its use for in-flight navigation. The ability to precisely locate the exposure station and activate the shutter at a predetermined interval along the flight line is beneficial for centering the photography over a geographic region, such as in quad-centered photography for orthophoto production. An early test by the Swedish National Land Survey [Jonsson and Jivall, 1990] showed early progress in this endeavor. The system configuration is shown in figure 21. Two personal computers (PCs) were used in the early test - one for navigation and the other for determination of the exposure time.
Figure 21. Configuration of navigation-mode GPS equipment [from Jonsson and Jivall, 1990].
The test consisted of orienting the receiver on the plane over a ground reference mark prior to the mission. This initialization is performed to solve for the integer ambiguity.
This method of fixing the ambiguity requires no loss of lock during the flight, thus necessitating long, flat banking turns, which adds to the amount of data collected. A flight plan was computed with the location of each exposure station identified. The PC used for navigation sent a pulse to the aerial camera to trip the shutter. The test showed that this approach introduced about a 0.5 second delay; thus, the exposure station locations were 20-40 meters too late. An accuracy of about 6 meters was found at the preselected positions along the strip. When compared to the photogrammetrically derived exposure station coordinates, the relative carrier phase measurements agreed to within about 0.15 meters.

The Texas Department of Transportation (TDOT) had a different problem [Bains, 1992]. Using airborne GPS gave TDOT the ability to reduce the amount of ground control for their design mapping. With GPS, one paneled control point was placed at the beginning of the project and a second at the end. If the site was greater than 10 km in length, then a third paneled control point was placed near the center. For their low altitude flights (photo scale of 1 cm = 30 m), the desire was to control the side-lap to 50 m. Using real-time differential GPS, accuracies of better than 10 m were, at that time, realistic. A 10 m error would cause a variation in side-lap of only 7%; TDOT uses 60% side-lap for their large scale mapping. For the high altitude mapping (photo scale of 1 cm = 300 m) and 30% side-lap, it was determined that the 50 m tolerance was not really necessary, since a 50 m error would cause a variation of only about 2%.
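The 20-40 m figure from the Swedish test is consistent with a 0.5 s shutter-activation delay at typical survey-aircraft ground speeds; the speeds below are assumed for illustration, not reported values:

```python
# Along-track exposure offset caused by a shutter-activation delay:
# offset = delay * ground speed.
def exposure_offset(delay_s, speed_ms):
    """Distance flown during the activation delay, in meters."""
    return delay_s * speed_ms

# Assumed ground speeds of 40-80 m/s bracket the reported 20-40 m offsets:
low = exposure_offset(0.5, 40.0)    # 20 m
high = exposure_offset(0.5, 80.0)   # 40 m
```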
PROCESSING AIRBORNE GPS OBSERVATIONS

The mathematical model utilized in analytical photogrammetry is the collinearity model, which simply states that the ray from a point in object space through the lens to the negative plane is a straight line. The functional representation of this model is shown as:
F_x = (x_ij - x_o) - c (ΔX_i / ΔZ_i) = 0
F_y = (y_ij - y_o) - c (ΔY_i / ΔZ_i) = 0

where:
	x_ij, y_ij are the observed photo coordinates of point i on photo j
	x_o, y_o are the coordinates of the principal point
	c is the camera constant
	ΔX_i, ΔY_i, ΔZ_i are the transformed ground coordinates

This mathematical model is often presented in the following form:
x_ij + v_x_ij = x_o - c [m11(X_i - X_L) + m12(Y_i - Y_L) + m13(Z_i - Z_L)] / [m31(X_i - X_L) + m32(Y_i - Y_L) + m33(Z_i - Z_L)]

y_ij + v_y_ij = y_o - c [m21(X_i - X_L) + m22(Y_i - Y_L) + m23(Z_i - Z_L)] / [m31(X_i - X_L) + m32(Y_i - Y_L) + m33(Z_i - Z_L)]
where:
	v_x, v_y are the residuals in x and y, respectively, for point i on photo j
	X, Y, Z are the ground coordinates of point i
	X_L, Y_L, Z_L are the space rectangular coordinates of the exposure station for photo j
	m11 ... m33 are the elements of the 3x3 rotation matrix that transforms the ground coordinates to a photo-parallel system.

The model implies that the difference between the observed photo coordinates, corrected for the location of the principal point, should equal the predicted values of the photo coordinates based upon the current estimates of the parameters. These parameters include the location of the exposure station and the orientation of the photo at the instant of exposure. The former values could be observed quantities from onboard GPS. These central projective equations form the basis for the aerotriangulation. It is common to treat observations as stochastic variables. This is done by expanding the mathematical model. For example, Merchant [1973] gives the additional mathematical model when observations are made on the exterior orientation elements as:
F_ω = ω_o - ω_a = 0
F_φ = φ_o - φ_a = 0
F_κ = κ_o - κ_a = 0
F_XL = X_L,o - X_L,a = 0
F_YL = Y_L,o - Y_L,a = 0
F_ZL = Z_L,o - Z_L,a = 0

where the subscripts o and a denote the observed and adjusted values, respectively. The mathematical model for observations on survey control can be written similarly.
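The central projective equations above can be evaluated numerically. The following is a minimal sketch that assumes the common ω-φ-κ sequential rotation convention; the exposure geometry and camera constant are illustrative values, not data from the notes:

```python
# Project a ground point into photo coordinates with the collinearity
# equations: rotate the ground-minus-station vector into the photo-parallel
# system, then scale by the camera constant c.
import math

def rotation_matrix(omega, phi, kappa):
    """Sequential rotation matrix M (elements m11..m33), angles in radians."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  so * sp * ck + co * sk,  -co * sp * ck + so * sk],
        [-cp * sk, -so * sp * sk + co * ck,  co * sp * sk + so * ck],
        [sp,       -so * cp,                 co * cp],
    ]

def project(ground, station, angles, c, xo=0.0, yo=0.0):
    """Photo coordinates (meters) of a ground point for one exposure."""
    m = rotation_matrix(*angles)
    d = [g - s for g, s in zip(ground, station)]
    u, v, w = (sum(m[r][k] * d[k] for k in range(3)) for r in range(3))
    return xo - c * u / w, yo - c * v / w

# Vertical photo (all angles zero), point 100 m east of the nadir,
# flying height 1000 m, camera constant 152 mm:
x, y = project((100.0, 0.0, 0.0), (0.0, 0.0, 1000.0), (0.0, 0.0, 0.0), c=0.152)
```

For this vertical-photo case the point images 15.2 mm from the principal point along the x axis, as expected from similar triangles (100/1000 scaled by c).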
Figure 22. Position ambiguity for a single photo resection [from Lucas, 1996, p.125].
Using GPS to determine the exposure station coordinates without ground control is not applicable to all photogrammetric problems. Ground control is needed for a single photo resection and orientation [Lucas, 1996]. If the antenna coordinates are precisely known, then the only thing known is that the camera lies somewhere on a sphere with a radius equal to the offset distance from the GPS antenna to the camera's nodal point (figure 22). The antenna is located at the center of the sphere. All positions on the sphere are theoretically possible, but from a practical viewpoint one knows that the camera, being located below the aircraft and pointing to the ground, is below the antenna. The antenna, naturally, is located on top of the aircraft to receive the satellite signals.
Adding a second photo reduces some of the uncertainty. This is due to the additional constraint that the collinearity condition places on the rays from the control to the image positions. The collinearity theory will provide the relative orientation between the two photos [Lucas, 1996]. Without ground control, the camera is then free to rotate about a line that passes through the two antenna locations (see figure 23). Without ground control, or some other mechanism to constrain the roll angle, this situation could be found throughout a single strip of photography.
Figure 23. Ambiguity of the camera position for a pair of aerial photos [from Lucas, 1996, p.125].

While independent model triangulation continues to be employed in practice, the usual iterative adjustment cannot be used with the recommended 4 corner control points [Jacobsen, 1993]. Moreover, the 7-parameter solution to independent model triangulation results in a loss of accuracy in the solution. Determining the coordinates of the exposure stations can be easily visualized in the following model [Merchant, 1992]. Assume that the photo coordinate system (x, y, z) is aligned with the coordinate system (U, V, W). Further, assume that the survey control (X, Y, Z) is reported in the WGS 84 system. Then it remains to transform the offset between the receiver's phase center and the nodal point of the aerial camera (DU, DV, DW) into the corresponding survey coordinate system. This is shown as
[X_A, Y_A, Z_A]ᵀ = [X_L, Y_L, Z_L]ᵀ + M_E M_M [DU, DV, DW]ᵀ

where:
	DU, DV, DW are the offset distances
	M_M is the camera mount orientation matrix
	M_E is the exterior orientation matrix of the camera
The camera mount orientation is necessary to ensure that the camera is heading correctly down the flight path. In the normal acquisition of aerial photos, the camera is leveled prior to each exposure. This is done so that the photography can be nearly vertical at the instant of exposure even though the aircraft is experiencing pitch, roll and swing (crab or drift). When the coordinate offsets between the antenna and camera were surveyed, the orientation angles on the mount are leveled. A problem occurs if there is an offset between the location of the nodal point and the gimbals’ rotational center on the mount. When the camera is rotated, the relationship between the two points should be considered. The simplest way to ensure that the relationship between the receiver and the camera are consistent would be to forgo any rotation of the camera during the flight. With this rigid relationship fixed, the antenna coordinates can be rotated into a parallel system with respect to the ground by using the tilts experienced during the flight. Alternatively, Lapine [nd] points out that the transformation of the offsets to the local coordinate system can easily be performed using the standard gimbal form. In this situation, pitch and swing angles between the aircraft and the camera are measured. Then, one can simply algebraically sum the camera mount angles with the appropriate measured pitch and swing angles. Here, κ and swing are added to form one rotational element and φ and pitch are similarly combined. Since roll was not measured during the test, ω is treated independently. Using the Wild RC-10 camera mount, Lapine found that the optical axis of the camera coincided with the vertical axis of the mount. That meant that the combination of κ and swing would not produce any eccentricity. Testing revealed that the gimbal center was located approximately 27 mm from the nodal points. Thus, an eccentricity error could be introduced. 
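The along-track effect of neglecting such an eccentricity is simple trigonometry. A sketch using the 27 mm gimbal-to-nodal-point offset found by Lapine and an assumed small pitch angle:

```python
# Along-track displacement of the nodal point caused by rotating the mount
# when the gimbal center and nodal point do not coincide.
import math

def eccentricity_error(offset_m, angle_deg):
    """Displacement (m) of the nodal point for a small mount rotation."""
    return offset_m * math.sin(math.radians(angle_deg))

# 27 mm offset with an assumed 1.5 degree pitch (sub-millimeter effect):
err = eccentricity_error(0.027, 1.5)
```

At this magnitude (well under a millimeter) the effect is negligible against typical airborne-GPS positioning accuracy, which is why neglecting it is usually justified.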
During the flight, a 1.5° maximum pitch angle between the aircraft and the camera mount was found. Thus the error from neglecting this effect in the flight direction would be

	maximum pitch error = 0.027 m × sin 1.5° = 0.0007 meters

Experiences from tests in Europe [Jacobsen, 1991] indicate that the GPS positions of the projection centers differ from the coordinates obtained from a bundle adjustment. Moreover, many of the data sets have shown a time-dependent drift pattern in the GPS values. When this systematic tendency is accounted for in the adjustment, excellent results are possible: about 4 cm can be reached with relative positioning, whereas 60 cm is possible using absolute positioning.

A second approach to performing airborne GPS aerial triangulation is sometimes referred to as the Stuttgart method. In this technique, certain physical conditions are assumed or accepted [Ackermann, 1993]. First, it is accepted that loss of lock will occur. This means that the low banking angles required by those methods where a loss of lock means a thwarted mission are not needed. Because loss of lock is tolerated, it is also unnecessary to perform a stationary observation prior to take-off to resolve the integer ambiguities. These ambiguities are solved
on-the-fly and can be determined for each strip if loss of lock occurs during the banking turns (or at other times during the photo mission). Seldom, though, will loss of lock happen along a strip. Second, it is assumed that single frequency receivers will be used on the aircraft. Finally, the ground or base receiver will probably be located at a great distance from the photo mission. The solution of the integer ambiguities is performed using C/A-code pseudorange positioning. These positions can be affected by selective availability (SA); because of this, there will be bias in the solution. These drift errors, which can include other effects such as datum effects, are systematic in nature and consist of a constant and a linear, time-dependent component. The block adjustment is used to solve for these biases.

Early test results added confusion about the drift error biases. In a test by the Survey Department of Rijkswaterstaat, Netherlands, a systematic effect was not noticeable on all photo strips [van der Vegt, 1989]. Evaluation of the results indicated that this was probably due to the GPS processing of the cycle slips. The accuracy of the position in the differential mode is predicated on the accuracy of solving the integer ambiguities at both the base receiver and the rover. This test used a technique where the differences between the observed pseudoranges and the phase measurements were averaged. The accuracy of this approach depends upon the accuracy of the measurements, the satellite geometry, and how many uncorrelated observations are used in the averaging. If no loss of lock occurs during the photo mission, the aircraft trajectory will be continuous and, therefore, only one set of drift parameters need be carried in the bundle adjustment. Unfortunately, banking turns could have an adverse effect by blocking the signal from some of the satellites, causing cycle slips.
Høghlen [1993] states that, as an alternative to the strip-wise application of the biases, the block may be split into parts where the aircraft trajectory is continuous, thereby decreasing the number of unknown parameters within the adjustment. The advantage of modeling these drift parameters is that the ground receiver does not have to be situated near the site; it could be 500 km or farther away [Ackermann, 1993]. This is important because it can decrease the costs associated with photo missions. Logistical concerns include not only the deployment of the aircraft but also the ground personnel on the site to operate the base. When projects are located at great distances from the airplane's home base, uncertainty in the weather could mean field crews already on the site but the photo mission canceled. Distant reference stations are also an asset to flight planning, in that on-site GPS ground receivers require the flight lines to be fixed at least one day before the mission; during the flying season this could be a problem [Jacobsen, 1994]. In Germany, the problem is solved by the existence of permanent reference stations throughout the country that can be occupied by the ground receiver.

Using the mathematical model for additional stochastic observations within the adjustment as outlined earlier [Merchant, 1973], a new set of observations can be written for the perspective center coordinates as [Blankenberg, 1992]:
[X_L, Y_L, Z_L]ᵀ_GPS,i + [v_X, v_Y, v_Z]ᵀ_i = [X_L, Y_L, Z_L]ᵀ_i

where:
	(X_L, Y_L, Z_L)_GPS = the perspective center coordinates observed with GPS
	v_X, v_Y, v_Z = residuals on the observed perspective center coordinates
	X_L, Y_L, Z_L = the adjusted perspective center coordinates used within the bundle adjustment

As discussed earlier, the antenna does not occupy the same location as the camera nodal point. The geometry is shown in figure 24. Relating the antenna offset to the ground depends upon the rotation of the camera with respect to the aircraft and the orientation of the aircraft with respect to the ground. The bundle adjustment can be used to correct for the camera offset if the camera remains fixed to the aircraft during the photo mission. If this condition is met, then the orientation of the camera offset will depend only upon the orientation elements (κ, ϕ, ω). The additional observation equations for the collinearity model are given as [Ackermann, 1993; Høghlen, 1993]:

[X_A, Y_A, Z_A]ᵀ_GPS,i + [v_X, v_Y, v_Z]ᵀ_i = [X_L, Y_L, Z_L]ᵀ_i + R(ϕ, ω, κ)_i [x_APC, y_APC, z_APC]ᵀ + [a_X, a_Y, a_Z]ᵀ_j + dt [b_X, b_Y, b_Z]ᵀ_j

where:	(X_A, Y_A, Z_A)_GPS = ground coordinates of the GPS antenna for photo i
Figure 24. Geometry of the GPS antenna with respect to the aerial camera (xerox copy, source unknown)
	v_X, v_Y, v_Z = residuals for the GPS antenna coordinates (X_A, Y_A, Z_A)_GPS for photo i
	X_L, Y_L, Z_L = exposure station coordinates of photo i
	x_APC, y_APC, z_APC = eccentricity components to the GPS antenna
	a_X, a_Y, a_Z = GPS drift parameters for strip j representing the constant terms
	dt = difference between the exposure time for photo i and the time at the start of strip j
	b_X, b_Y, b_Z = GPS drift parameters for strip j representing the linear, time-dependent terms
	R(ϕ, ω, κ) = orthogonal rotation matrix.
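The antenna observation equation can be sketched directly from these definitions; the rotation, eccentricity, and drift values below are illustrative assumptions, not values from any of the cited tests:

```python
# Predicted GPS antenna coordinates for one exposure:
# antenna = exposure station + R * eccentricity + constant drift + dt * linear drift.
import numpy as np

def predicted_antenna(station, R, ecc, a, b, dt):
    """(X_A, Y_A, Z_A) predicted from the exposure station and strip drift model."""
    return np.asarray(station) + R @ np.asarray(ecc) + np.asarray(a) + dt * np.asarray(b)

R = np.eye(3)                          # near-vertical photo: rotation ~ identity
station = np.array([1000.0, 2000.0, 1500.0])   # exposure station (m)
ecc = np.array([0.0, 0.4, 1.6])        # hypothetical antenna eccentricity (m)
a = np.array([0.05, -0.02, 0.08])      # constant drift terms for the strip (m)
b = np.array([0.001, 0.002, -0.001])   # linear drift terms (m/s)
ant = predicted_antenna(station, R, ecc, a, b, dt=30.0)
# The residuals are then v = ant - observed antenna coordinates.
```

In the bundle adjustment the six drift terms per strip are unknowns, which is why the idealized control schemes below add height control or cross-strips to keep the block stable.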
It is recognized in analytical photogrammetry that adding parameters to the adjustment weakens the solution. To strengthen the solution, one can introduce more ground control, but this defeats one of the advantages of airborne GPS. Introducing the stepwise drift parameters and using four ground control points located at the corners of the project, there are three approaches to reducing the instability of the block [Ackermann, 1993]. These are shown in figure 25 and are:

i) using both 60% end-lap and 60% side-lap,
ii) using 60% end-lap and 20% side-lap and adding an additional vertical control point at both ends of each strip, and
iii) using the conventional amount of overlap as indicated in (ii) and flying at least two cross-strips of photography.

The block schemes shown in figure 25 are idealized depictions. The scheme in figure 25(i) can be used for airborne GPS when no drift parameters are employed in the block adjustment. It is important that the receiver maintain lock during the flight, which necessitates flat turns between flight lines. Maintaining lock ensures that the phase history is recorded from take-off to landing. Abdullah et al [2000] point out that this is the most accurate type of configuration
Figure 25. Idealized block schemes.
in a production environment. The same control scheme can also be used when block drift parameters are used in the bundle adjustment. If strip drift parameters are used, then a control configuration as shown in figure 25(ii) should be used. Here, drift parameters are developed for each flight line, which requires additional height control at the ends of each strip. The control configuration in figure 25(iii) incorporates two cross-strips of photography. This model strengthens the geometry and provides a check against any gross errors in the ground control. But it does add to the cost of the project because more photography must be taken and measured. For that reason, it is not frequently utilized in a production environment. More often the project area is not rectangular but rather irregular. In this situation it is advisable to add additional cross-strips or provide more ground control. Figure 26 is an example.
Figure 26. GPS block control configuration.

Theoretically, it is possible to perform the block adjustment without any ground control. This can easily be visualized if one considers supplanting the ground control by control located at
the exposure stations. Nonetheless, it is prudent to include control on every job, if for nothing more than providing a check on the aerotriangulation. Using the four control point scheme just presented has the advantage of using the GPS positions for interpolation only within the strip. As is known, conventional aerotriangulation requires ground control. For example, planimetric mapping requires control at an interval of approximately every seventh photo on the edge of the block. Topographic mapping requires vertical control within the block at about the same spacing. Using this background and simulated data, Lucas [1996] was able to develop error ellipses from a bundle adjustment showing the accumulation of error along the edges of the block (figure 27). This is commonly referred to as the edge effect and stems from a weakened geometric configuration that exists because of a loss in redundancy. Under normal circumstances, a point in the middle of a block should be visible on at least nine photos, but a point on the edge is photographed only from one side.
Figure 27. Error ellipses with ground points positioned by conventional aerotriangulation adjustment of a photo block [Lucas, 1994].
Using the same simulated data, Lucas [1996] also showed the error ellipses one would expect to find using 60% end- and side-lap photography along with airborne GPS and no control. The results show that for planimetry, the results are similar. Larger error ellipses were found at the control points but at every other point they were either smaller or nearly equivalent. Elevation errors were much different within the two simulations. Using just aerotriangulation without control, error ellipses grew larger towards the center of the block. Using kinematic GPS, on the other hand, kept the error from getting larger. Compared with the original simulation with vertical control within the block, each point had improvements, except the
control points that were fixed in the conventional adjustment. Lucas [1996] states that the reason for the improvement lies in the fact that each exposure station is now a control point and the distance between control points is less than one would find conventionally; it would not be practical to have the same density of control on the ground as one has in the air. These results are based on simulations and therefore reflect what is possible, not necessarily what one would find with real data.

Accuracy considerations are important in determining the viability of using GPS observations within a combined bundle adjustment. Results of projects conducted with a combined GPS bundle adjustment show that this approach is not only feasible but also desirable. In conventional aerotriangulation, ground control points helped suppress the effects of block deformation. GPS-observed perspective center coordinates stabilize the adjustment, thus negating the necessity for extensive control. In fact, their main function now becomes one of assisting in the datum transformation problem [Ackermann, 1993]. If the position of the exposure station can be ascertained to an accuracy of 10 cm or better, then the accuracy of the adjustment becomes primarily dependent upon the precision of the measurement of the photo coordinates [Ackermann, 1993]. Designating the standard error of the photo observations as σ₀, the projected value expressed in ground units is σ̄₀. Then, as long as σ_GPS ≤ σ̄₀, Ackermann indicates that the following rule could apply: the expected horizontal accuracy (X, Y) will be approximately 1.5 σ̄₀ and the vertical accuracy (Z) around 2.0 σ̄₀. This assumes using the six drift parameters for each strip, four control points, and cross-strips.
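Ackermann's rule of thumb can be sketched numerically. The photo-measurement precision and photo scale below are assumed example values, not figures from the text:

```python
# Expected ground accuracy from photo-measurement precision:
# project sigma_0 to ground units via the photo scale number, then apply
# the ~1.5x (horizontal) and ~2.0x (vertical) factors.
def expected_accuracy(sigma0_m, scale_number):
    """Return (horizontal, vertical) expected accuracy in meters."""
    sigma0_ground = sigma0_m * scale_number
    return 1.5 * sigma0_ground, 2.0 * sigma0_ground

# Assumed 5 micrometer measuring precision at an assumed 1:13000 photo scale:
horiz, vert = expected_accuracy(5e-6, 13000)
```

With these assumed inputs the rule predicts roughly 10 cm horizontally and 13 cm vertically, showing how directly the final accuracy tracks the photo-measurement precision once the exposure stations are well determined.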
Strip Airborne GPS

For route surveys, such as transportation systems, there is a problem with airborne GPS when the GPS measurements are used exclusively to control the flight. Theoretically, a solution is possible if the exposure stations are distributed over a block and are non-collinear. In the case of strip photography, the exposure station coordinates will lie nearly on a line, making the system ill-conditioned or singular. Therefore, some control needs to be provided on the ground to eliminate the weak solution that would otherwise exist. As an example, Lucas [1996] shows the error ellipses one would expect with only ground control and then with kinematic GPS. These are shown in figure 28 for horizontal values and figure 29 for vertical control.

Merchant [1994] states that to solve this adjustment problem, existing ground control could be utilized in the adjustment. Most transportation projects have monumented points throughout the project, and intervisible control should be reasonably expected. A test was performed to evaluate the idea of using control for strip photography [Merchant, 1994]. A strip of three photos was taken with a Wild RC-20 aerial camera in a Cessna Citation over the Transportation Research Center test site in Ohio. The aircraft was pressurized and the flying height above the ground was approximately 1800 m. A Trimble
SSE receiver was used with a distance to the ground-based receiver being approximately 35 km. The photography was acquired with 60% end-lap. Corrections applied to the measured photo coordinates included lens distortion compensation (both Seidel's aberration radial distortion and decentering distortion using the Brown/Conrady model), atmospheric refraction (also accounting for the refraction due to the pressurized cabin), and film deformation (USC&GS 8-parameter model).
Figure 28
Figure 29.

The middle photo had 30 targeted image points. For this test, only one or two were used as control while the remaining values were withheld. The results are shown in the following table. The full field method utilized all of the checkpoints within the photography. The corridor method used only a narrow band of points along the route, which is typical of the area of interest for many transportation departments [Merchant, 1994]. The results are expressed in terms of the root mean square error (rmse), defined as the measure of variability of the observed and "true" (or withheld) values for the checkpoints. The method is shown as:
rmse = √[ Σ (true - observed)² / n ]

where n is the number of test points.
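The rmse computation is straightforward; the checkpoint values below are hypothetical:

```python
# Root mean square error between withheld ("true") and observed values.
import math

def rmse(true_vals, observed):
    """Square root of the mean squared difference over the checkpoints."""
    n = len(true_vals)
    return math.sqrt(sum((t - o) ** 2 for t, o in zip(true_vals, observed)) / n)

# Four hypothetical checkpoint coordinates (meters):
r = rmse([10.00, 20.00, 30.00, 40.00], [10.02, 19.97, 30.01, 39.96])
```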
                                        rmse (meters)
Method        Number of Test Points     X        Y        Z

Using 2 targeted ground control points
Full Field             28               0.079    0.057    0.087
Corridor               10               0.031    0.026    0.073

Using 1 targeted ground control point
Full Field             29               0.084    0.050    0.086
Corridor               11               0.034    0.033    0.082
The results indicate that accuracies in elevation are better than 1:20,000 of the flying height, which is comparable to results found from conventional block adjustments. It should also be noted that the pass points were targeted; therefore, errors that may occur due to the marking of conjugate imagery are not present. Moreover, the adjustment also included calibration of the system. Nonetheless, good results can be expected by using ground control to alleviate the ill-conditioning of the normal equations. A minimum of one point is needed, with additional points being used as a check. Another approach, other than including additional control, would be to fly a cross-strip perpendicular to the strip of photography. This has the effect of anchoring the strip, thereby preventing it from accumulating large amounts of error. If the project consists of only a single strip, then it is recommended that a cross-strip be obtained at both ends of the strip [Lucas, 1996].
Combined INS and GPS Surveying

Combining an inertial navigation system (INS) with GPS gives the surveyor the ability to exploit the advantages of both systems. INS has a very high short-term accuracy, which can be used to eliminate multipath effects and aid in the solution of the ambiguity problem. The long-term accuracy of GPS can be used to correct for the time-dependent drift found within inertial systems. Used together, they give the surveyor not only good relative accuracies but good absolute accuracies as well. Moreover, within the bundle adjustment, only the shift parameters need to be included within the adjustment model [Jacobsen, 1993], thereby increasing, at least theoretically, the accuracy of the aerotriangulation.
Texas DOT Accuracy Assessment Project

The Texas Department of Transportation undertook a project to assess the accuracy level that is achievable using GPS and photogrammetry. Bains [1995] describes the project at length. Three considerations were addressed in this project: the system description, airborne GPS kinematic processing, and statistical analysis. The system description can be summarized as follows:
The site selected was an abandoned U.S. Air Force base located near Bryan, Texas. This site was selected because the targets could be permanently set and there would be minimal obstructions due to traffic. Because the facility was abandoned, expansion of the test facility was possible. In addition, the facility could handle the King Air airplane.
Target design is important for the aerial triangulation. A 60 x 60 cm cross target with a pin in the center was selected (based on a photo scale of 1:3000). The pin at the center of the target allowed for precise centering of the ground receiver over the point. In areas where there was no hard surface on which to paint the target, a prefabricated painted wafer-board target was employed.
All of the targets were measured using static GPS measurements. Each target was observed at least once. Using 8 receivers, two occupied master control points while the remaining six simultaneously observed the satellites over the photo control points. The goal was to achieve Order B accuracy in 3-D of 1:1,000,000. In addition, differential levels were run over all targets to test the accuracy of the GPS-derived heights. The offset between the antenna and the camera was measured four times and the mean values determined. Prior to the measurement, the aircraft was jacked up and leveled. The aerial camera was then leveled and locked into place. The offset distances were then measured.
The flight specifications were designed to optimize the accuracy of the test. They are:
	Photo Scale: 1:3000
	Flying Height: 500 meters
	Flight Direction: North-South
	Forward Overlap: 60% minimum
	Side-lap: 60%
	Number of Strips: 3
	Exposures per Strip: 12
	Focal Length: 152 mm
	Format: 230 x 230 mm
	Camera: Wild RC 20
	Film Type: Kodak Panatomic 2412 Black/White
	Sun Angle: 30° minimum
	Cloud Cover: None
	GDOP: ≤ 4
The mission began by measuring the height of the antenna when the aircraft was parked. The ground receiver was turned on and a sample rate of 1 second was used. The rover receiver in the aircraft was then turned on and tracked the satellites for five minutes with the same one-second sampling rate. Then the aircraft took off and flew its mission.
The processing steps involved the kinematic solution of the GPS observations. The PNAV software was used for on-the-fly ambiguity resolution. The software vendor recommended that the processing be done both forward and backward for better accuracy, but the test indicated that, at least for this project, there was no increase in accuracy from that kind of processing. The photogrammetry was processed using soft-copy photogrammetry with a 15 μm pixel size. The aerial triangulation was then performed with the GAPP software using only four ground control stations: two at the start and two at the end. The results were then statistically processed using SAS (the Statistical Analysis System). The results of this study showed that the accuracy achieved fell within specifications. In fact, the GPS results were either equal to or better than the accuracy of conventional positioning systems. The results also indicated a need for a reference point within the site to aid in the transformation to State Plane Coordinates. As an example, Table 1 shows the comparison between the GPS-derived control and the values from the ground truth. These results show that airborne GPS can meet the accuracy specifications for photogrammetric mapping.
Number of
Observations   Variable    Minimum   Maximum    Mean    Standard Deviation
---------------------------------------------------------------------------
    95         Easting     -0.100     0.057    -0.021         0.031
    95         Northing    -0.075     0.089    -0.003         0.026
    95         Elevation   -0.068     0.105    -0.008         0.027
Table 1. Comparison of airborne GPS assisted triangulation with ground truth on day 279, 1993 over a long strip [from Bains, 1995, p.40].
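Summary statistics of the kind reported in Table 1 are straightforward to compute from check-point residuals (GPS-derived coordinate minus ground truth). The sketch below uses invented residual values, not the actual project data; note that for a large sample the RMSE is approximately sqrt(mean² + std²), so an approximate RMSE can also be recovered from a published mean and standard deviation:

```python
import math

def residual_stats(residuals):
    """Minimum, maximum, mean, sample standard deviation, and RMSE
    of a list of check-point residuals."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / (n - 1)  # sample variance
    std = math.sqrt(var)
    rmse = math.sqrt(sum(r * r for r in residuals) / n)      # root mean square error
    return min(residuals), max(residuals), mean, std, rmse

# Hypothetical easting residuals in meters (illustrative only):
easting = [-0.021, 0.010, -0.045, 0.032, -0.050, -0.030, 0.005, -0.040]
mn, mx, mean, std, rmse = residual_stats(easting)
print(f"min {mn:.3f}  max {mx:.3f}  mean {mean:.3f}  std {std:.3f}  rmse {rmse:.3f}")
```

Applying the approximation to the easting row of Table 1 (mean -0.021 m, std 0.031 m) gives an RMSE of roughly 0.037 m, comfortably within typical mapping specifications at this scale.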
ECONOMICS OF AIRBORNE-GPS

While no studies have been conducted that describe the economic advantages of airborne-GPS, some general findings are available [Ackermann, 1993]. Utilization of airborne-GPS increases the aerotriangulation costs by about 25% over the conventional approach. This increase includes:
• flying additional cross-strips • film • GPS equipment • GPS base observations • processing the GPS data and computation of aircraft trajectories • aerotriangulation • point transfer and photo observations, and • combined block adjustment
The real savings accrue in the ground control, where the costs are 10% or less of those required using conventional aerotriangulation. The overall net savings will be about 40% of total project costs. If higher-order accuracy is required (Ackermann uses the example of cadastral photogrammetry, which needs 1-2 cm accuracy), the savings decrease because additional ground control is necessary.
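Ackermann's figures can be combined into a rough cost comparison. The cost shares assumed below are hypothetical and only illustrate the arithmetic; the 25% aerotriangulation increase and the 90% control-cost reduction are the figures from the text:

```python
# Conventional project cost shares (hypothetical, in arbitrary units):
conventional = {"ground_control": 50.0, "aerotriangulation": 20.0, "other": 30.0}

airborne_gps = {
    # control costs drop to 10% or less of the conventional figure
    "ground_control": conventional["ground_control"] * 0.10,
    # aerotriangulation costs rise by about 25%
    "aerotriangulation": conventional["aerotriangulation"] * 1.25,
    "other": conventional["other"],
}

total_conv = sum(conventional.values())
total_gps = sum(airborne_gps.values())
savings = 100.0 * (1.0 - total_gps / total_conv)
print(f"Net savings: {savings:.0f}%")  # 40% with these assumed shares
```

Whether the net savings actually reach Ackermann's 40% figure depends on how large a share ground control is of the conventional project budget.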
REFERENCES

Ackermann, F., 1993. "GPS for Photogrammetry", The Photogrammetric Journal of Finland, 13(2):7-15.

Bains, H.S., 1992. "Photogrammetric Surveying by GPS Navigation", Proceedings of the 6th International Geodetic Symposium on Satellite Positioning, Vol. II, Columbus, OH, March 17-20, pp 731-738.

Bains, H.S., 1995. "Airborne GPS Performance on a Texas Project", ACSM/ASPRS Annual Convention and Exposition Technical Papers, Vol. 2, February 27 - March 2, pp 31-42.

Corbett, S.J. and T.M. Short, 1995. "Development of an Airborne Positioning System", Photogrammetric Record, 15(85):3-15.

Curry, S. and K. Schuckman, 1993. "Practical Guidelines for the Use of GPS Photogrammetry", ACSM/ASPRS Annual Convention and Exposition Technical Papers, Vol. 3, New Orleans, LA, pp 79-88.

Forlani, G. and L. Pinto, 1994. "Experiences of Combined Block Adjustment with GPS Data", International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3, Munich, Germany, September 5-9, pp 219-226.

Ghosh, S.K., 1979. Analytical Photogrammetry, Pergamon Press, New York, 203p.

Habib, A. and K. Novak, 1994. "GPS Controlled Aerial Triangulation of Single Flight Lines", Proceedings of ASPRS/ACSM Annual Convention and Exposition, Vol. 1, Reno, NV, April 25-28, pp 225-235; also, International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 2, Ottawa, Canada, June 6-10, pp 203-210.

Høghlen, A., 1993. "GPS-Supported Aerotriangulation in Finland - The Eura Block", The Photogrammetric Journal of Finland, 13(2):68-77.

Jacobsen, K., 1991. "Trends in GPS Photogrammetry", Proceedings of ACSM-ASPRS Annual Convention, Vol. 5, Baltimore, MD, pp 208-217.

Jacobsen, K., 1993. "Correction of GPS Antenna Position for Combined Block Adjustment", ACSM/ASPRS Annual Convention and Exposition Technical Papers, Vol. 3, New Orleans, LA, pp 152-158.

Jacobsen, K., 1994. "Combined Block Adjustment with Precise Differential GPS-Data", International Archives of Photogrammetry and Remote Sensing, Vol. 30, Part 3, Munich, Germany, September 5-9, pp 422-426.

Jonsson, B. and A. Jivall, 1990. "Experiences from Kinematic GPS Measurements", Paper presented at the Nordic Geodetic Commission 11th General Meeting, Copenhagen, 12p.
Lapine, L.A., 1991. "Analytical Calibration of the Airborne Photogrammetric System Using A Priori Knowledge of the Exposure Station Obtained from Kinematic Global Positioning System Techniques", Department of Geodetic Science and Surveying Report No. 411, The Ohio State University, Columbus, OH, 188p.

Lapine, L.A., n.d. "Airborne Kinematic GPS Positioning for Photogrammetry - The Determination of the Camera Exposure Station", Xerox copy, source unknown.

Lucas, J.R., 1996. "Covariance Propagation in Kinematic GPS Photogrammetry", in Digital Photogrammetry: An Addendum to the Manual of Photogrammetry, ASPRS, pp 124-129.

Merchant, D.C., 1973. "Elements of Photogrammetry, Part II - Computational Photogrammetry", Department of Geodetic Science, Ohio State University, 75p.

Merchant, D.C., 1979. "Instrumentation for Analytical Photogrammetry", Paper presented at the IX Congresso Brasileiro de Cartografia, Brasil, February 4-9, 8p.

Merchant, D.C., 1992. "GPS-Controlled Aerial Photogrammetry", ASPRS/ACSM/RT92 Technical Papers, Vol. 2, Washington, D.C., August 3-8, pp 76-85.

Merchant, D.C., 1994. "Airborne GPS-Photogrammetry for Transportation Systems", Proceedings of ASPRS/ACSM Annual Convention and Exposition, Vol. 1, Reno, NV, April 25-28, pp 392-395.

Novak, K., 1993. Analytical Photogrammetry lecture notes, Department of Geodetic Science and Surveying, The Ohio State University, 133p.

Salsig, G. and T. Grissim, 1995. "GPS in Aerial Mapping", Proceedings of Trimble Surveying and Mapping Users Conference, Santa Clara, CA, August 9-11, pp 48-53.

van der Vegt, 1989. "Differential GPS: Efficient Tool in Photogrammetry", Surveying Engineering, 115(3):285-296.