
Elliptic Edge Polygons for Observational Coverage Planning

Michael Trowbridge, Russell Knight, Elly Shao
Jet Propulsion Laboratory, California Institute of Technology
{michael.a.trowbridge,russell.l.knight,elly.j.shao}@jpl.caltech.edu

Abstract

This paper presents a representation of the polygonal footprint of an Earth-observing 2D framing sensor (i.e. instruments on Rosetta, Planet Labs SkySat) for observation coverage planning that preserves curvature of the footprint edge at little additional memory cost compared to previously published techniques.

Binary operations on field-of-view, ellipsoid intersection edges are introduced, allowing them to serve as edges in a polygon. A computational experiment examines the error produced by using this method versus existing methods of camera footprint representation. Edge approximation error is most significant when the field of view footprint is large compared to the body being observed (small body exploration, fly-bys, or other distant observer scenarios), and negligible when it is small (low altitude Earth observers with narrow fields of view). Great Circles polygons are degenerate elliptic edge polygons, admitting them to the polygon and edge operations in this paper.

Introduction

Planning a mapping campaign from space with a 2D framing instrument implies reasoning about which parts of the target body have been observed. Two approaches to this reasoning are the Footprint Placement for Mosaic Imaging optimization problem (Mitchell et al. 2018) and the Area Coverage Planning problem for 3-axis steerable, 2D framing sensors (Shao et al. 2018). As planning problems, both formalizations may be described as searches in a decision space for observation footprint tuples (time, rtgt, θ)¹ producing a set of instrument footprints F = {f1, f2, . . . , fn} such that a target region of interest polygon P is contained in the union of all scheduled observation footprints. The goal state g(n) is

g(n) = P ⊆ ⋃_{i=1}^{n} f_i    (1)

Copyright © 2019, by the California Institute of Technology. United States Government Sponsorship acknowledged. Contact author: Michael Trowbridge

¹(Mitchell et al. 2018) simplified the problem by fixing time and θ, focusing their approach on optimizing rtgt.

with a heuristic distance to the goal state h(n) of

h(n) = area(P) − area(P − ⋃_{i=1}^{n} f_i)    (2)

How the planner computes the footprint fi is a lower level detail that can cause problems with evaluation of g(n) and h(n). (Shao et al. 2018) used 4-vertex great circles quadrilaterals to represent footprints fi for computational efficiency. In the post-talk Q&A session, Valicka noted that these quadrilaterals did not resemble the curved footprints he had seen in his research, and rightly questioned the reasonableness of the quadrilateral approximation (which Shao et al. did not justify). As figure 1 shows, ignoring this curvature can cause the plan to achieve insufficient coverage.

Figure 1: A mosaic mapping campaign has insufficient coverage because the distance to goal state heuristic ignored footprint curvature.

This paper is an excursion into the computational geometry that supports observational coverage planning with 2D framing instruments. First, we summarize existing ways to represent the edge of a footprint polygon fi. An alternative data structure that preserves the incidence angle of the field of view edge is introduced. A computational experiment explores a practical concern of when it is appropriate to use a quadrilateral approximation of the footprint polygon for various algorithms of drawing the edge, and when the planner should preserve the footprint's curvature.


Background

Special Needs of Space-based Planners

To plan a mapping campaign of a planet, moon, or other small body, a planner should handle:

• Geometry on an arbitrary triaxial ellipsoid

• Polygons that contain the North or South pole

• Polygon edges that cross the antimeridian

• Edges defined by the shape of the instrument

While the planets of our solar system are modeled as spheres or spheroids, many of the smaller moons are modeled as non-spherical, triaxial ellipsoids (Archinal et al. 2011). Comet 67P/Churyumov-Gerasimenko, which was later found to be very non-ellipsoidal (Jorda et al. 2016), was modelled as a triaxial ellipsoid with semi-axes 2.3, 1.9 and 1.7 km (Mysen 2004) before the Rosetta observation campaign. For small body exploration, an observational coverage planner shouldn't rely on spherical body assumptions.

The poles have been scientifically interesting areas of Mars (Bibring et al. 2004) and the Moon (Spudis et al. 2013). It's a safe assumption that future missions will study the poles of other celestial bodies. Studying the poles, or making any complete mapping campaign, requires handling observation polygons that cross the antimeridian (−π = π longitude).

One framing instrument (camera) model is a rectangular pyramid of infinite height, with the detector at the peak. Each side of the camera's field of view is a plane. The intersection of that plane and the target triaxial ellipsoid is a straight line from the camera's perspective, but it is being projected obliquely onto a curved surface. This can be difficult to represent as two points and a connecting edge.

Existing Spherical Polygon Edge Types

In planar Euclidean geometry, the edge that connects two vertices of a polygon is a straight line. The closest analog in spherical geometry is the Great Circle: the intersection between a plane through the sphere's center and the surface of the sphere. Great-circles polygons can be treated much like polygons on a Euclidean plane, despite being on a non-Euclidean surface (the sphere). One implementation, the Computational Geometry Algorithms Library (CGAL), replaces 2D Euclidean lines with oriented planes to partition half-spaces and points with 3D vectors (Hachenberger and Kettner 2019).

Two other alternatives are rhumb and equirectangular lines², which are conceptually related. They are both curved paths across the sphere that are straight in a 2D map projection (Mercator for rhumb, plate carrée for equirectangular).

Rhumb lines are convenient for navigators because they follow a path of constant compass heading. Rhumb lines have the disadvantages of requiring special handling when the line is nearly a line of constant longitude (Bennett 1996) or following a logarithmic spiral into the poles (Alexander 2004).

²"Lat-Lon lines" in (Chamberlain and Duquette 2007).

Figure 2: The Great Circle arc is straight when viewed from above a sphere

(Figure legend: Equirectangular (Plate Carrée), Rhumb, Great Circle, Ellipse section.)

Figure 3: The equirectangular line is straight in a plate carrée projection

Equirectangular plate carrée lines are convenient for programmers because latitude and longitude map directly onto Cartesian coordinates (Wikipedia contributors 2019). This admits simpler or more powerful methods, such as the 2D planar packing algorithms that Mitchell et al. applied to the frame instrument footprint optimization problem (Mitchell et al. 2018). While this is a compelling trait, it adds projection distortion, which is worse at extreme latitudes. The poles, which are points in 3D, are horizontal lines in the 2D projection. Either the prime meridian or antimeridian (0 = 2π or −π = π radians longitude) requires special handling when a polygon crosses it.

None of these methods are appropriate for long distances. Figure 4 shows that they can all deviate significantly from the true shape (beige Elliptic).

Figure 4: Large footprint with different edge types between 4 corners.

The obvious workaround is to interpolate along the edge of the footprint. Instead of four corner points, project look vectors at regular intervals along each edge of the observer's field of view. Consult a table³ to choose a point spacing close enough to keep the approximation error within a reasonable bound. These extra points add space and computational complexity to subsequent polygon operations. There is a cheaper alternative, which we define in the next two sections.
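For reference, the interpolation workaround amounts to blending the two corner look vectors and ray-tracing each blend onto the body. A minimal numpy sketch; the helper name and the linear blending rule are illustrative choices, not the spacing tables cited above:

    import numpy as np

    def interpolated_edge_looks(c1_obs, c2_obs, n_points=10):
        """Evenly spaced look vectors between two field-of-view corner
        directions (both expressed relative to the observer).  Each
        returned unit vector can then be ray-traced onto the target
        ellipsoid to yield an interpolated footprint edge point."""
        looks = []
        for i in range(n_points + 1):
            t = i / n_points
            v = (1.0 - t) * c1_obs + t * c2_obs   # linear blend of corner looks
            looks.append(v / np.linalg.norm(v))   # renormalize to a unit direction
        return looks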

Nomenclature

Unless otherwise specified (a /something subscript), assume that the position vector is relative to the origin of the target body in target-fixed (B-frame) coordinates.

ε      Working floating point precision of a math operation
d      The direction of some line
n      Surface normal of an ellipsoid-plane intersection
r̂obs   Normalized (unit) vector of robs
b      An argument (query) point to one of the polygon/edge operations
c      Position vector of a corner of the instrument's field of view
c1/obs Corner 1's position relative to the observer
d0     Position vector of a known point on some line
m      Position vector of an ellipse's center
robs   Observer's position vector relative to the target body's origin
x      A point on the surface of a target triaxial ellipsoid
B      The target body-fixed coordinate frame
C      The camera coordinate frame
E      Edge containment check frame
M      Ellipsoid-plane intersection coordinate frame
φ      A half-cone angle about some vector
θ      A clock angle within an ellipse
ε      Error
Mr     Vector r expressed in M-frame coordinates
MRB    Coordinate transformation matrix from B to M
A      x semi-axis of a 2D ellipse
B      y semi-axis of a 2D ellipse
e      A polygon edge
hi     Point-to-plane distance of point i
P      A polygon on a triaxial ellipsoid
t      Some parametric scalar (general)

³(Chamberlain and Duquette 2007) contains tables listing the maximum allowed edge length before error exceeds a critical bound.

Formulation

Handedness conventions

The operations in this paper take the approach that clockwise point ordering defines an enclosed region, with each arc's normal vector pointing inside. Reverse the order of the cross products if implementing in a framework that uses the opposite convention.

Intersection plane ellipse arcs as edges

The surface of a target triaxial ellipsoid is the set of points x = [x y z]^T such that

x²/a² + y²/b² + z²/c² = 1    (3)

The spacecraft's position vector relative to the target body is robs, which is computed by an ephemeris library. If the sensor field of view is w radians wide and h radians high, the four corners of its field of view, in C-frame (camera) coordinates and relative to the observer, are

Cci/obs = [ 1   ±tan(w/2)   ±tan(h/2) ]^T    (4)

Figure 5: Camera field of view, C-frame (red=x, green=y, blue=z) relative to the observer

The coordinate transformation from the C frame to the B frame is constructed by basis vectors as

BRC = [ Bu  Bv  Bw ]    (5)


setting

u = (rtgt − robs) / |rtgt − robs|    (6)

choosing the secondary alignment axis to achieve a desired target footprint alignment and using a cross product to compute the third.
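A sketch of equations 4 through 6 in numpy; the reference vector used for the secondary alignment axis is an assumed choice for illustration, not a mission requirement:

    import numpy as np

    def fov_corner_looks_body(r_obs, r_tgt, w, h, ref=np.array([0.0, 0.0, 1.0])):
        """Corner look directions of a w-by-h (radians) rectangular field of
        view, rotated from the camera frame C into the body-fixed frame B.
        `ref` is an assumed secondary-alignment vector (must not be parallel
        to the boresight)."""
        # Equation 4: corner directions in the camera frame (x = boresight).
        corners_c = np.array([[1.0,  np.tan(w / 2),  np.tan(h / 2)],
                              [1.0,  np.tan(w / 2), -np.tan(h / 2)],
                              [1.0, -np.tan(w / 2), -np.tan(h / 2)],
                              [1.0, -np.tan(w / 2),  np.tan(h / 2)]])
        # Equation 6: boresight basis vector u points from observer to target.
        u = (r_tgt - r_obs) / np.linalg.norm(r_tgt - r_obs)
        # Secondary axis: component of `ref` perpendicular to u; the third
        # axis completes the right-handed triad (one possible alignment).
        v = ref - (ref @ u) * u
        v = v / np.linalg.norm(v)
        w_vec = np.cross(u, v)
        B_R_C = np.column_stack((u, v, w_vec))      # equation 5
        return (B_R_C @ corners_c.T).T              # corner looks in B-frame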

If the four corners of the footprint intersect the target body, the intersection point will satisfy both equation 3 and

ci = robs + t ci/obs    (7)

for some value of scalar t. We use the CSPICE function surfpt_c() for this calculation (Acton 1996).
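Where CSPICE is not available, the same intercept can be computed directly from equations 3 and 7. A minimal numpy stand-in (a sketch, not the NAIF routine):

    import numpy as np

    def ray_ellipsoid_intersect(r_obs, look, a, b, c):
        """Nearest intersection of the ray r_obs + t*look (t >= 0) with the
        triaxial ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1.  Returns None
        when the ray misses the body."""
        S = np.diag([1.0 / a, 1.0 / b, 1.0 / c])   # scale to the unit sphere
        p, d = S @ r_obs, S @ look
        # Quadratic |p + t d|^2 = 1 in t.
        A = d @ d
        B = 2.0 * (p @ d)
        C = p @ p - 1.0
        disc = B * B - 4.0 * A * C
        if disc < 0.0:
            return None
        t = (-B - np.sqrt(disc)) / (2.0 * A)       # smaller root = near side
        if t < 0.0:
            return None
        return r_obs + t * look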

The observer's position and the surface intercepts of two consecutive imager field of view corner points c1, c2 form a plane that defines a field of view edge. This plane's surface normal n may be computed as

c1/obs = c1 − robs    (8)
c2/obs = c2 − robs    (9)
n = (c1/obs × c2/obs) / |c1/obs × c2/obs|    (10)

The position vector x = [x y z]^T of each point on the edge e between c1 and c2 satisfies the target's ellipsoid surface equation 3 and is entirely in the plane defined by c1 and n:

(x − c1) · n = 0    (11)

Figure 6: 3D view of the observer, target and FOV edge

The intersection of this plane and the ellipsoid is an ellipse with a solution of the form (Klein 2012)⁴

x = m + A cos θ r + B sin θ s    (12)

⁴Klein's paper has instructions for computing m, A, B, r and s. Van Wal and Scheeres present an abridged version (Van Wal and Scheeres 2016).

Figure 7: 2D view of the FOV intersection ellipse plane

The polygon edge e covers the domain θ1 ≤ θ ≤ θ2, where θ1 corresponds to c1 and θ2 corresponds to c2. We can compute θ1 and θ2 by defining a new coordinate frame M centered at m with basis vectors { r s n }. The coordinate transformation from target body-fixed frame B to M is

MRB = [ rT ; sT ; nT ]    (13)

To transform some arbitrary vector b to M-frame coordinates relative to the ellipse center,

Mb/m = [MRB] (b − m)    (14)

The arc edge e may be represented minimally as e =(c1, c2, n).
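A minimal container for this representation, with the plane normal built per equations 8 to 10; the Klein ellipse parameters (m, A, B, r, s) are assumed to come from a separate helper and are not reproduced here:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class EllipticEdge:
        """Minimal arc-edge representation e = (c1, c2, n)."""
        c1: np.ndarray   # first surface intercept (B-frame)
        c2: np.ndarray   # second surface intercept (B-frame)
        n: np.ndarray    # unit normal of the observer/corner plane (B-frame)

    def make_edge(r_obs, c1, c2):
        """Build the edge plane normal per equations 8-10."""
        c1_obs = c1 - r_obs
        c2_obs = c2 - r_obs
        n = np.cross(c1_obs, c2_obs)
        return EllipticEdge(c1=c1, c2=c2, n=n / np.linalg.norm(n))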

Operations on intersection ellipse arcs

This section defines some primitive operations on ellipse arc edges that are required by many polygon operation algorithms. They are listed constructively, building on the prior operations.

Operation 1 (Is point b ∈ e?). Is some point b = [bx by bz]^T on the arc e = (m, n, c1, c2)?

If b ∈ e, the relative position vector

b/m = b − m    (15)

must be within the intersection plane and in a sector bounded by lines (m, c1) and (m, c2)⁵. The in-plane condition can be checked with the dot product

|b/m · n| < ε    (16)

⁵If b is actually on the surface (not inside or outside of the target ellipsoid), b must also satisfy the triaxial ellipsoid surface equation. This check is deliberately omitted so that the operation handles sub-surface and irregular terrain points.


The sector bounds may be checked by computing the dot product of b/m with two line normal vectors: normal to (m, c1) and in the c2 direction (c1⊥) and its converse, normal to (m, c2) and in the c1 direction (c2⊥).

c1/2 = c1 − c2    (17)
c1⊥ = c1/2 − (c1/2 · c1) c1    (18)
c2⊥ = −c1/2 + (c1/2 · c2) c2    (19)

The vectors n, c1⊥ and c2⊥ may be computed once and memoized as a non-orthogonal frame basis E, where the coordinate transformation from B to E is

ERB = [ c1⊥  c2⊥  n ]^T    (20)

Transform b/m to E-frame coordinates:

Eb/m = [ b1/m  b2/m  b3/m ]E^T = [ERB] [Bb/m]    (21)

Point b ∈ e iff, in E-frame coordinates,

(b1/m ≥ 0) ∧ (b2/m ≥ 0) ∧ (|b3/m| < ε)    (22)
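A sketch of Operation 1 in numpy. The sector normals are formed directly from the corner vectors relative to m, which matches the intent of equations 17 to 19 for arcs that span less than half of the intersection ellipse:

    import numpy as np

    def point_on_arc(b, m, n, c1, c2, eps=1e-9):
        """Is point b on the arc e = (m, n, c1, c2)?"""
        b_m = b - m                                  # equation 15
        if abs(b_m @ n) >= eps:                      # equation 16: in-plane check
            return False
        v1, v2 = c1 - m, c2 - m
        u1 = v1 / np.linalg.norm(v1)
        u2 = v2 / np.linalg.norm(v2)
        c1_perp = v2 - (v2 @ u1) * u1                # normal to (m, c1), toward c2
        c2_perp = v1 - (v1 @ u2) * u2                # normal to (m, c2), toward c1
        # Equation 22: both sector coordinates must be non-negative.
        return (b_m @ c1_perp) >= -eps and (b_m @ c2_perp) >= -eps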

Operation 2 (Are arcs e1, e2 coplanar?).

Defining each arc as

e1 = (m1, n1, c11, c12)    (23)
e2 = (m2, n2, c21, c22)    (24)

Arc e1 is coplanar with e2 if

|(m2 − m1) · n1| < ε    (25)

and

|1 − |n1 · n2|| < ε    (26)
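Operation 2 is a pair of tolerance checks; a direct transcription:

    import numpy as np

    def arcs_coplanar(m1, n1, m2, n2, eps=1e-9):
        """Operation 2 sketch: equations 25 and 26."""
        in_plane = abs((m2 - m1) @ n1) < eps          # centers share the plane
        parallel = abs(1.0 - abs(n1 @ n2)) < eps      # normals (anti)parallel
        return in_plane and parallel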

Operation 3 (Find intersection of arcs e1, e2). Where, if anywhere, does arc e1 intersect arc e2? Possible outputs: ∅, p1 or (p1, p2).

If e1 intersects e2, conditions 3.1, 3.2 and 3.3 must be satisfied. Checking the conditions in that order reduces corner cases later.

Condition 3.1 (Planes are not parallel). Equation 26 must be false.

If the two planes are not parallel, then the planes will intersect at a line of the form

d0 + t d    (27)

This line will be orthogonal to both plane normal vectors, so its direction d can be computed from their cross product (McKinnon 2012):

d = (n1 × n2) / |n1 × n2|    (28)

There are an infinite number of correct d0, so choose a convenient form:

d0 = m1 + c r1    (29)

Because d0 must also be in e2's plane,

(m1 + c r1 − m2) · n2 = 0    (30)

yielding

c = ((m2 − m1) · n2) / (r1 · n2)    (31)

Equation 31 contains a singularity when r1 · n2 = 0. If this occurs, substitute s1 for r1 in equations 29 to 31.

Condition 3.2 (Intersection line crosses ellipses). The line of planar intersection d0 + t d must intersect both ellipses 1 and 2.

This condition is easier to evaluate in the M-frame of ellipse 1, relative to its center m1. Transform equation 27 as

M(d0 + t d) = [MRB] (d0 + t d − m1)

[x y 0]^T = [c 0 0]^T + t [δ1 δ2 0]^T    if r1 · n2 ≠ 0
[x y 0]^T = [0 c 0]^T + t [δ1 δ2 0]^T    otherwise

The first case corresponds to equations 29 to 31 as written. The second case applies when s1 must be used to avoid equation 31's singularity. Removing t, the first case reduces to

x = c + (δ1/δ2) y  →  y = (δ2/δ1) x − (δ2/δ1) c    (32)

or, if s1 were used instead of r1,

y = (δ2/δ1) x + c    (33)

Equations 32 and 33 have the form y = mx + b. If there is an intersection between this line and the ellipse, it will also satisfy the ellipse equation x²/A² + y²/B² = 1 for some real-valued x and y. Substituting for y,

x²/A² + (mx + b)²/B² = 1    (34)

arranging in quadratic form,

(A²m² + B²) x² + 2A²mb x + A²b² − A²B² = 0    (35)

defining

α = A²m² + B²    (36)
β = 2A²mb    (37)
κ = A²b² − A²B²    (38)

then applying the quadratic formula,

x = (−β ± √(β² − 4ακ)) / (2α)    (39)

If β² > 4ακ, then there are two real-valued solutions (two intersections). If β² = 4ακ, then there is a single intersection (a tangent point), and if β² < 4ακ, then the plane intersection line does not intersect the ellipse. Evaluate equation 39 to obtain xi, then either equation 32 or 33 (as appropriate) to obtain yi in M-frame coordinates. Reverse the transformation in equation 14 to construct the intersection point position vectors in body-fixed B-frame coordinates:

pi = [MRB]^T [Mxi  Myi  0]^T + m1    (40)

Condition 3.3 (Ellipse-line intersections within the arc). One or two of the line-ellipse intersections must be within each ellipse's angle bounds θi1 and θi2.

The points pi are intersections of the two ellipses. We must now check that they fall within the bounds of the arcs, which are subsets of the ellipses. Invoke operation 1 for p1 and p2 against arcs e1 and e2. Point pi is an arc intersection if

pi ∈ e1 ∧ pi ∈ e2    (41)
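The quadratic step of Condition 3.2 is compact enough to sketch. The candidate points returned here would then be mapped back to the B frame with equation 40 and filtered by Operation 1 per Condition 3.3:

    import numpy as np

    def line_ellipse_intersections(slope, intercept, A, B):
        """x-coordinates where the in-plane line y = slope*x + intercept
        meets the ellipse x^2/A^2 + y^2/B^2 = 1 (equations 34-39).
        Returns 0, 1, or 2 (x, y) pairs in M-frame coordinates."""
        alpha = A**2 * slope**2 + B**2             # equation 36
        beta = 2.0 * A**2 * slope * intercept      # equation 37
        kappa = A**2 * intercept**2 - A**2 * B**2  # equation 38
        disc = beta**2 - 4.0 * alpha * kappa
        if disc < 0.0:
            return []                              # line misses the ellipse
        roots = {(-beta + np.sqrt(disc)) / (2.0 * alpha),
                 (-beta - np.sqrt(disc)) / (2.0 * alpha)}
        # Equation 39 for x, then equation 32 or 33 for y.
        return [(x, slope * x + intercept) for x in sorted(roots)]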

Operation 4 (Split an arc). Given a point b ∈ e = (n, c1, c2), b ≠ c1 ∧ b ≠ c2, subdivide e into arcs e1 and e2 at b.

Trivial:

e1 = (n, c1, b)    (42)
e2 = (n, b, c2)    (43)

Operation 5 (Furthest point from cone center). Find the point xmax ∈ e that is the furthest from the center of a unit vector b drawn from the origin, where

d = 1 − cos φ = 1 − (x · b) / |x|    (44)

Equation 44 is continuous and differentiable because x(θ) is a linear sum of constants and continuous, differentiable functions. Its derivative with respect to θ is

∂d/∂θ = ∂x/∂θ · ( ((x · b)/|x|³) x − b/|x| )    (45)

and

∂x/∂θ = −A sin θ r + B cos θ s    (46)

Use a second derivative concavity check to determine whether d(θ) on e has a single local maximum or a single local minimum. If d(θ) is concave up on (θ1, θ2), then the local max of d will be at either θ1 or θ2. If d(θ) is concave down, gradient ascent or bisection may be used to locate the maximum.

After locating

θmax = argmax d, θ ∈ (θ1, θ2)    (47)

the most extreme point on the arc xmax is obtained from evaluating equation 12 for θ = θmax.
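A simplified sketch of Operation 5 that replaces the concavity check and gradient ascent with dense sampling of equation 12; adequate for illustration, not the paper's method:

    import numpy as np

    def furthest_point_on_arc(m, A, B, r, s, b_hat, theta1, theta2, samples=256):
        """Maximize d(theta) = 1 - cos(phi) over the arc (equation 44)
        by evaluating equation 12 on a grid of clock angles."""
        best_x, best_d = None, -np.inf
        for theta in np.linspace(theta1, theta2, samples):
            x = m + A * np.cos(theta) * r + B * np.sin(theta) * s   # equation 12
            d = 1.0 - (x @ b_hat) / np.linalg.norm(x)               # equation 44
            if d > best_d:
                best_x, best_d = x, d
        return best_x, best_d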

Operation 6 (Compute polygon bounding cone). Construct a bounding cone (b, dmax) enclosing polygon P, where b is a unit vector drawn from the center of the triaxial ellipsoid and dmax corresponds to a half-cone angle φ about b.

Start by using the optimal bounding cone algorithm for vectors in three dimensions (Barequet and Elber 2005), where each 3D vector to be bounded is a vertex of polygon P. Because the edges of P are not great circles arcs, we must expand the bounding cone to enclose the most extreme point on each arc:

dmax ← max(dmax, max(d(θmax(ei)), ei ∈ P))    (48)

The expansion in equation 48 doesn't consider alternate pointings for b, causing (b, dmax) to lose its guarantee of optimality. The cone retains the trait of completely enclosing P, however, which is sufficient for our purposes.
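The expansion step of equation 48, assuming an initial vertex cone (for example from Barequet and Elber's algorithm) is already available and reusing the Operation 5 helper sketched above:

    def expand_bounding_cone(b_hat, d_max, edges):
        """Grow an initial vertex bounding cone (b_hat, d_max) until it also
        covers the most extreme point of every elliptic edge.  `edges` is
        assumed to be a list of (m, A, B, r, s, theta1, theta2) tuples."""
        for (m, A, B, r, s, theta1, theta2) in edges:
            _, d_edge = furthest_point_on_arc(m, A, B, r, s, b_hat, theta1, theta2)
            d_max = max(d_max, d_edge)      # equation 48
        return b_hat, d_max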

Operation 7 (Point-in-convex-polygon check). Is some point b inside, outside, or on the boundary of convex polygon P = {(c1, c2, . . . , cn), (e1, e2, . . . , en)}?

Because our bounding planes are not required to pass through the ellipsoid's center, the volume that they enclose may intersect the ellipsoid on both sides. We use a bounding cone to separate the polygon of interest P from its antipodal polygon P′.

Condition 7.1 (Bounding cone containment). If point b is contained by polygon P, b must also be contained by the bounding cone (b, dmax) enclosing P.

Define xb as the position vector corresponding to point b. Compute the distance db between xb and b using equation 44. Point b is contained by the bounding cone iff db ≤ dmax.

Condition 7.2 (Convex bounding plane containment). To be contained in P, point b must also be on the inside of each edge's intersection ellipse plane.

Evaluate the signed point-to-plane distance hi for each edge ei:

hi = (b − ci) · ni    (49)

If any hi < −ε, b is outside. If all hi > ε, b is inside. Otherwise, point b is on the boundary of P.
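A sketch of Operation 7, assuming each edge is supplied as a point on its plane and an inward-pointing normal:

    import numpy as np

    def point_in_convex_polygon(b, cone_axis, d_max, edges, eps=1e-9):
        """`edges` is a list of (c_i, n_i) pairs; (cone_axis, d_max) is the
        bounding cone from Operation 6.  Returns 'outside', 'inside', or
        'boundary'."""
        # Condition 7.1: bounding cone containment, using the eq. 44 metric.
        d_b = 1.0 - (b @ cone_axis) / np.linalg.norm(b)
        if d_b > d_max:
            return 'outside'
        # Condition 7.2: signed distance to every edge plane (equation 49).
        h = [(b - c_i) @ n_i for (c_i, n_i) in edges]
        if any(h_i < -eps for h_i in h):
            return 'outside'
        if all(h_i > eps for h_i in h):
            return 'inside'
        return 'boundary'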

Theorem 1 (Interoperability with Great Circles). Polygons may be composed of any mix of Great Circles and ellipsoid-plane intersection arcs.

Lemma 1.1 (Great Circle degeneracy). Great Circle arcs are degenerate ellipsoid-plane intersection arcs.

Set m = 0, r = c1 and n = (c2 × c1) / |c2 × c1|. The center of the intersection ellipse will be at the origin (center of the target body). The intersection of a plane and a sphere is always a circle, which makes this particular elliptic edge also a Great Circle.

Corollary 1.1. Great Circle arcs may be converted to ellipsoid-plane intersection arcs.

The Great Circle arc (c1, c2) may be augmented with n = (c2 × c1) / |c2 × c1| to completely define e = (c1, c2, n).
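The conversion in Corollary 1.1 is a one-liner:

    import numpy as np

    def great_circle_to_elliptic_edge(c1, c2):
        """Augment a Great Circle arc (c1, c2) with the plane normal
        n = (c2 x c1)/|c2 x c1|; the plane passes through the origin."""
        n = np.cross(c2, c1)
        return (c1, c2, n / np.linalg.norm(n))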


Methodology

We hypothesize that the choice of polygon line is important for large, oblique footprints of distant observers, but not important for small footprints of closer observers. A computational experiment will test this hypothesis for two mission cases.

The first mission case (near/narrow) is modeled on the Earth-observing Planet Labs (formerly Skybox, Terra Bella) SkySat-1 in a Low Earth Orbit (LEO). Its field of view is derived from the nominal scene size at the reference orbit altitude in the Planet Imagery Product Specifications (Planet Labs, Inc. 2018). The nadir altitude is based on a propagation of the SkySat-1 ephemeris from the STK Data Federate (Analytic Graphics, Inc. 2019).

The second case examines a smaller, more distant body, with a larger observer field of view. The Mars Reconnaissance Orbiter (MRO) aerobraking phase (Semenov and You 2006) is used with the 6° MRO context camera (Malin Space Systems, Inc. 2005) and a nadir footprint that is 17% of the Mars ellipsoid's smallest semiaxis. This is similar to the closest approach in the Rosetta OSIRIS mapping campaign (Jorda et al. 2016), where a 9 km altitude image with the 2.2° square field of view NAC has a footprint approximately 16% of the comet's smallest approximating-ellipsoid⁶ semiaxis.

Table 1: Observer footprint configurations

                   Near/Narrow      Far/Wide
Spacecraft         SkySat-1         MRO
Body observed      Earth            Mars
Trajectory         LEO              Aerobraking
Nadir Altitude     578 km           8992 km
Field of view      0.37° × 0.15°    6° × 6°

Rhumb lines, equirectangular lines and great circles arcs are evaluated as approximations of the true shape (an ellipsoid-plane intersection arc). Error of this approximation is computed as a fractional disagreement ε between the truth polygon Pt and the approximation (rhumb/equirectangular/great circle) polygon Pa:

ε = (area(Pa ∪ Pt) − area(Pa ∩ Pt)) / area(Pt)    (50)

Large ε is a poor approximation and ε = 0 is a perfect approximation.
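Equation 50 as code; the area, union and intersection helpers are assumed to wrap the great-circles polygon routines described in the next paragraph and are not defined here:

    def approximation_error(poly_approx, poly_truth, area, union, intersection):
        """Fractional disagreement between an approximating polygon and the
        truth polygon (equation 50)."""
        numer = (area(union(poly_approx, poly_truth))
                 - area(intersection(poly_approx, poly_truth)))
        return numer / area(poly_truth)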

Polygon area is computed using the great circles polygon algorithm in (Chamberlain and Duquette 2007). All footprint polygons will be interpolated with 100 points per side according to the edge type under test and stored as great circles polygons. Polygon intersections and unions will use the Margalit and Knott algorithm (Margalit and Knott 1989), adapted for great circle arcs.

⁶(Jorda et al. 2016) provides an approximating triaxial ellipsoid, but notes that 67P's structure is more properly modeled as bilobate.

Results

Impact of the footprint polygon line type

Figure 8 shows that for a low altitude observer with a small footprint, the approximation error of the footprint is small (less than 1%). In general, error increases as off-nadir angle increases.

(Plot: error [% of elliptic] vs. boresight off-nadir angle [deg]; series: Equirectangular, Rhumb, Great circles.)

Figure 8: Error caused by non-elliptic edges for a low altitude observer with a small footprint (SkySat-1)

When the observer was further away and had a larger field of view, the error grew to almost 10% (figure 9).

(Plot: error [% of elliptic] vs. boresight off-nadir angle [deg]; series: Equirectangular, Rhumb, Great circles.)

Figure 9: Error caused by non-elliptic edges for a larger footprint further from the target body (MRO/CTX)

Discussion

Does the type of edge line matter?

Sometimes. For commercial imagery operators in Low Earth Orbit (close relative to the target body's radius) with small fields of view, choice of edge type doesn't matter. All of the edge types in the Earth LEO mission case had less than 1% approximation error, even up to 55° off-nadir angle.

When the observer is far from the target or has a large field of view relative to the target, yes, the choice of line can matter. The edge type was more significant as off-nadir angle increased. Even at 10% error, however, a planner could compensate by margining the field of view size or using more interpolation points.

Round trip error to/from ellipse angle

While testing against Earth, we found that repeatedly transforming a point between position vector and ellipse-frame clock angle θ via equation 12 had round trip error ranging from 0.01 km (single trip, nadir) to 44 km (10 round trips, 35° off-nadir). The error generally increased with off-nadir angle and accumulated linearly with each invocation. We did not identify the root cause, but did find a workaround.

Earth, which is near spherical, is an edge case in Klein's algorithm where D1r ≈ D1s. Klein recommends using ω = π/4 in this case (Klein 2012). ω = π/4 removed the compounding effect from the round trip error, but did not correct the first round trip error.

The best workaround we found was to first define the unit vector b pointing in the direction of θ, expressed in M-frame coordinates, as

Mb = [ cos θ  sin θ  0 ]^T    (51)

then transform it to body-fixed B-frame coordinates

Bb = [ r s n ] [Mb]    (52)

Projected to the surface of the triaxial ellipsoid, the output point b can be written as

b = [ bx by bz ]^T = m + t b    (53)

where only t is unknown and

bx²/a² + by²/b² + bz²/c² = 1    (54)

Evaluating these equations with CSPICE surfpt_c() had 10⁻¹² km round trip error (machine precision).
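The same projection can be done without CSPICE by substituting equation 53 into equation 54 and solving the quadratic in t directly. A numpy sketch, offered as an alternative to the surfpt_c() call rather than as the paper's implementation:

    import numpy as np

    def theta_to_surface_point(theta, m, r, s, n, a, b, c):
        """Map an ellipse clock angle theta to a surface point by projecting
        from the ellipse center m along the in-plane direction (eqs. 51-54)."""
        d_M = np.array([np.cos(theta), np.sin(theta), 0.0])   # equation 51
        d_B = np.column_stack((r, s, n)) @ d_M                # equation 52
        # Substitute m + t*d_B into the ellipsoid equation -> quadratic in t.
        inv = np.array([1.0 / a, 1.0 / b, 1.0 / c])
        p, d = inv * m, inv * d_B
        A_ = d @ d
        B_ = 2.0 * (p @ d)
        C_ = p @ p - 1.0
        t = (-B_ + np.sqrt(B_ * B_ - 4.0 * A_ * C_)) / (2.0 * A_)  # t > 0 root
        return m + t * d_B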

Generalizations and Applicability

No spherical approximations are used in the formulation of the elliptic edge polygon. Without modification, these polygons may be used for any triaxial ellipsoid where the field of view is small enough that it does not span more than 180° of the ellipsoid (defined by a cutting plane through the ellipsoid's origin).

Elliptic edge polygons treat the edge of the instrument footprint as a plane. Pushbroom sensor swath sides are solids of revolution formed by the sensor center line. Elliptic edge polygons are applicable to the ASPEN Eagle Eye scheduler (Knight, Donnellan, and Green 2013), but not the Compressed Large Activity Scheduler-Planner (CLASP) (Knight and Chien 2006) or AEOS strip selection problems (Lemaître et al. 2002).

Related work

Shao et al. showed that footprint size and skew change during an overflight, but used 4-corner Great Circles arc polygons for the footprints, losing the curvature of footprint edges (Shao et al. 2018). GeoPlace used a square camera footprint with more realistic curvature at the edges (Mitchell et al. 2018). The publicly released GeoPlace source code appears to use an oblique intersection of the square field of view and a sphere, treating the footprint as a 2D planar polygon thereafter. Edge curvature was preserved by adding intermediate vertices⁷. This paper presents a method of achieving footprint edge fidelity similar to (Mitchell et al. 2018), but at a lower space complexity and on a triaxial ellipsoid.

The Computational Geometry Algorithms Library (CGAL) implements Nef polygons on a sphere, where edges are segments of the intersection of a sphere and a plane through the origin (Hachenberger and Kettner 2019). This is sufficient for great circles polygons, but has no explicit provision for triaxial ellipsoids or half-spaces from planes that do not intersect the origin.

Recommendations for future work

The equation for the area of an arbitrary polygon on a sphere is supported by a proof that approximates a polyhedron with a sphere. It is not clear that this proof also covers polygons on ellipsoids whose arc edges do not pass through the body's center. Handedness checks and winding number point-in-polygon algorithms should be similarly scrutinized.

This paper is missing a critical operation: point-in-concave-polygon containment checks, required for polygon intersection, union and subtraction (Margalit and Knott 1989). Our next step would have been to decompose the concave polygon into convex polygons using the method outlined in (Li, Wang, and Wu 2007).

Conclusion

By storing an observer's camera footprint polygon as an elliptic edge polygon, the planner can preserve the curvature of the footprint without adding intermediate points along the edge. The experiment showed that curvature is only a concern when the observer's field of view is large compared to the observed body, more so when the observer obliquely views the target body. The elliptic edge polygon algorithms in this paper can be used more broadly to handle edge cases in existing Great Circles polygon/arc algorithms.

Acknowledgements

The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

Special thanks to Chris Valicka for prompting the closer look that led to this paper, and to Marco Ortega and Arthur Bohren for the discussions on geodesy and nonorthogonal transformations.

⁷See files Point.hpp, squareprojectfootprint cubit.jou and squareprojectfootprint.txt from the GitHub repository https://github.com/cgvalic/GeoPlace


References

Acton, C. 1996. Ancillary Data Services of NASA's Navigation and Ancillary Information Facility. Planetary and Space Science 44(1):65–70.

Alexander, J. 2004. Loxodromes: A rhumb way to go. Mathematics Magazine 77(5):349–356.

Analytic Graphics, Inc. 2019. Orbital ephemeris (AGI STK Data Federate) for SkySat-1 (39418), epoch 21 March 2019 17:42:20 UTC. Electronic. Accessed 21 March 2019.

Archinal, B. A.; A'Hearn, M. F.; Bowell, E.; Conrad, A.; Consolmagno, G. J.; Courtin, R.; Fukushima, T.; Hestroffer, D.; Hilton, J. L.; Krasinsky, G. A.; et al. 2011. Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009. Celestial Mechanics and Dynamical Astronomy 109(2):101–135.

Barequet, G., and Elber, G. 2005. Optimal bounding cones of vectors in three dimensions. Information Processing Letters 93(2):83–89.

Bennett, G. 1996. Practical rhumb line calculations on the spheroid. The Journal of Navigation 49(1):112–119.

Bibring, J.-P.; Langevin, Y.; Poulet, F.; Gendrin, A.; Gondet, B.; Berthé, M.; Soufflot, A.; Drossart, P.; Combes, M.; Bellucci, G.; et al. 2004. Perennial water ice identified in the south polar cap of Mars. Nature 428(6983):627.

Chamberlain, R. G., and Duquette, W. H. 2007. Some algorithms for polygons on a sphere. Technical report, Pasadena, CA: Jet Propulsion Laboratory.

Hachenberger, P., and Kettner, L. 2019. 2D Boolean operations on Nef polygons embedded on the sphere. In CGAL User and Reference Manual. CGAL Editorial Board, 4.14 edition.

Jorda, L.; Gaskell, R.; Capanna, C.; Hviid, S.; Lamy, P.; Durech, J.; Faury, G.; Groussin, O.; Gutierrez, P.; Jackman, C.; et al. 2016. The global shape, density and rotation of Comet 67P/Churyumov-Gerasimenko from preperihelion Rosetta/OSIRIS observations. Icarus 277:257–278.

Klein, P. P. 2012. On the ellipsoid and plane intersection equation. Applied Mathematics 3(11):1634–1640.

Knight, R., and Chien, S. 2006. Producing large observation campaigns using compressed problem representations. In International Workshop on Planning and Scheduling for Space (IWPSS-2006).

Knight, R.; Donnellan, A.; and Green, J. 2013. Mission design evaluation using automated planning for high resolution imaging of dynamic surface processes from the ISS. In International Workshop on Planning and Scheduling for Space (IWPSS 2013).

Lemaître, M.; Verfaillie, G.; Jouhaud, F.; Lachiver, J.-M.; and Bataille, N. 2002. Selecting and scheduling observations of agile satellites. Aerospace Science and Technology 6(5):367–381.

Li, J.; Wang, W.; and Wu, E. 2007. Point-in-polygon tests by convex decomposition. Computers & Graphics 31(4):636–648.

Malin Space Systems, Inc. 2005. Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) Instrument Description. Electronic (web). [Online; accessed 4 April 2019].

Margalit, A., and Knott, G. D. 1989. An algorithm for computing the union, intersection or difference of two polygons. Computers & Graphics 13(2):167–183.

McKinnon, J. 2012. Finding the line of intersection of two planes. Electronic (web).

Mitchell, S. A.; Valicka, C. G.; Rowe, S.; and Zou, S. X. 2018. Footprint placement for mosaic imaging by sampling and optimization. In Int. Conf. on Automated Planning and Scheduling, ICAPS, 8.

Mysen, E. 2004. Rotational dynamics of subsolar sublimating triaxial comets. Planetary and Space Science 52(10):897–907. Nonlinear Processes in Solar System Plasmas.

Planet Labs, Inc. 2018. Planet Imagery Product Specifications. Technical report, Planet Labs, Inc.

Semenov, B. V., and You, T.-H. 2006. Mars Reconnaissance Orbiter Aerobraking SPK file, MRO NAV Solution (mro_ab.bsp). Binary SPICE kernel.

Shao, E.; Byon, A.; Davies, C.; Davis, E.; Knight, R.; Lewellen, G.; Trowbridge, M.; and Chien, S. 2018. Area coverage planning with 3-axis steerable, 2D framing sensors. In Scheduling and Planning Applications Workshop, International Conference on Automated Planning and Scheduling (ICAPS SPARK 2018).

Spudis, P.; Bussey, D.; Baloga, S.; Cahill, J.; Glaze, L.; Patterson, G.; Raney, R.; Thompson, T.; Thomson, B.; and Ustinov, E. 2013. Evidence for water ice on the Moon: Results for anomalous polar craters from the LRO Mini-RF imaging radar. Journal of Geophysical Research: Planets 118(10):2016–2029.

Van Wal, S., and Scheeres, D. J. 2016. The lift-off velocity on the surface of an arbitrary body. Celestial Mechanics and Dynamical Astronomy 125(1):1–31.

Wikipedia contributors. 2019. Equirectangular projection — Wikipedia, the free encyclopedia. [Online; accessed 4 April 2019].