
1. Wheeler, R.E. Notes on View Camera Geometry. 2003, 55. www.bobwheeler.com/photo/ViewCam.pdf.
2. Wolf, P.R. and DeWitt, B.A. Elements of Photogrammetry (with Applications in GIS). McGraw-Hill Higher Education, 2000.
3. Vincenty, T. Direct and Inverse Solutions of Geodesics on the Ellipsoid with Application of Nested Equations. Survey Review 23, 176 (1975), 88-93.

Modelling Spatial Video as part of a GIS Video-Analysis Framework

Paul Lewis

Introduction

Research presented in this poster was funded by Science Foundation Ireland through a Strategic Research Cluster Grant (07/SRC/I1168) under the National Development Plan. The authors gratefully acknowledge this support.

Spatial Video Data Model

Results and Applications

References

Viewpoint Implementation

Figure 3: Software module for semi-supervised spatial video Viewpoint fitting.

It is now common for video, whether real-time or archived, mobile or static, to be georeferenced and stored in large archives that users of expert systems can access and interact with. In the ground-based terrestrial context, georeferenced video has also become more commonly collected and accessed in recent years. Collecting, storing and distributing these data is now more easily achieved through the development and affordability of dedicated systems such as Mobile Mapping Systems (MMS); the StratAG XP1 experimental platform is shown in figure 1. Such MMS are increasingly used for infrastructural monitoring and mapping because they can now generate high-accuracy geospatial data.

Camera Model (table 1)

The Circle of Confusion (COC) is the maximum permissible blur circle for an image and directly affects the Depth-of-Field (DOF) calculations.

The Angle-of-View (AOV) expresses the camera lens's field of view as an arc angle.

The DOF Hyperfocal Distance is a measurable distance in front of the camera lens such that, when the lens is focused there, the DOF extends from half that distance to infinity.

In figure 5 we show one result from the large system calibration testing, while in figure 6 we highlight one GIS analysis operation on a large spatial video data warehouse. In this case a building-footprint data set is used to generate higher-accuracy Viewpoints by intersecting the two GIS-constrained spatial data sets.
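In 2-D plan view, that footprint intersection can be sketched with a standard polygon-clipping routine. The code below is an illustrative assumption, not the warehouse implementation: it clips a Viewpoint polygon against a convex building footprint using Sutherland-Hodgman clipping, with made-up coordinates.

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip 'subject' against a convex,
    counter-clockwise 'clip' polygon. Vertices are (x, y) tuples."""
    def inside(p, a, b):
        # Point p is on the interior side of directed edge a->b (CCW polygon).
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(s, e, a, b):
        # Intersection of infinite lines through s-e and a-b (determinant form).
        dc = (a[0] - b[0], a[1] - b[1])
        dp = (s[0] - e[0], s[1] - e[1])
        n1 = a[0] * b[1] - a[1] * b[0]
        n2 = s[0] * e[1] - s[1] * e[0]
        n3 = 1.0 / (dc[0] * dp[1] - dc[1] * dp[0])
        return ((n1 * dp[0] - n2 * dc[0]) * n3, (n1 * dp[1] - n2 * dc[1]) * n3)

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output

# Example: a Viewpoint footprint overlapped by a building footprint.
viewpoint = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
building = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]
refined = clip_polygon(viewpoint, building)  # the overlap region
```

A production system would clip against arbitrary (possibly concave) footprints in projected coordinates; this sketch only conveys the intersection idea.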

Figure 1: StratAG Mobile Mapping System (MMS) XP1 experimental platform.

However, significant problems exist in accessing large volumes of archived georeferenced MMS data in a GIS context, particularly video, which is the subject of this poster. These include the semantics behind modelling its spatial content and the development of a computationally efficient query model that can isolate video sequences of geographical interest. This poster outlines a georeferenced-video framework in which a GIS-viewshed-oriented approach to modelling and querying terrestrial mobile imagery is detailed. Technically, this viewshed model implements a subtle and more flexible optimisation of the Open Geospatial Consortium's (OGC) Geo-Video Service (GVS) ViewCone data type; however, it defines some fundamentally different properties. This in turn has enabled a significantly different spatial context to be defined for the geographic space that terrestrial mobile video captures, optimising it in terms of areal coverage and perceptible depth.

A view frustum model, calculated from the camera parameters, forms the basis of the geographical extent representation of each spatial video frame (or static sequence). The OGC GVS implementation uses this principle in a restricted, two-dimensional ViewCone data structure. Our Viewpoint approach (figure 2) has led to a distinct change in the overall GVS data structure, from a single polygonal representation of space to a capture point plus a disconnected but associated image-space polygon set. The camera image-capture location is represented by a point, while the polygon defines the geographical space, as the optimised focus range, for the image space of each frame. Figure 3 shows a software tool for semi-supervised fitting of spatial video to a Viewpoint data view in a GIS. In this tool both the GIS space and the camera/image properties can be adjusted to achieve an accurate geographical space representation.
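As a minimal sketch of this two-part structure (the class and field names are assumptions, not the authors' schema):

```python
from dataclasses import dataclass, field

@dataclass
class Viewpoint:
    """A capture point plus a disconnected but associated
    image-space polygon, unlike the single-polygon GVS ViewCone."""
    lat: float   # camera image-capture location (WGS84)
    lon: float
    alt: float
    # Optimised focus-range footprint of the frame's image space,
    # stored as (lat, lon, alt) vertices; a full 3D Viewpoint would
    # carry the eight plane-intersection points of the frustum.
    footprint: list = field(default_factory=list)

# Hypothetical capture point near Maynooth with a four-vertex footprint.
vp = Viewpoint(53.38, -6.59, 85.0,
               footprint=[(53.3805, -6.5902, 85.0),
                          (53.3806, -6.5898, 85.0),
                          (53.3809, -6.5903, 95.0),
                          (53.3810, -6.5897, 95.0)])
```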

Figure 2: 3D Spatial Video Viewpoint.

Figure 4: Spatial video post-survey processing flowchart.

Flowchart stages (figure 4):
1. Spatial video survey data loaded onto PC.
2. Spatial data: decode GPS NMEA messages. Video data: converted to an easily indexed format; decode video frame index IDs and store video.
3. Spatial Video Viewpoints Model (figure 2).
4. Automatic database population process.
5. GIS query and analysis of Viewpoints DB; Spatial Video Player/Viewer.
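The NMEA-decoding stage of the flowchart might, for example, look like the following minimal sketch. It handles only the $GPGGA sentence, and the function name is an assumption rather than part of the described system:

```python
def parse_gga(sentence):
    """Decode latitude/longitude from a $GPGGA NMEA 0183 sentence.
    NMEA encodes position as degrees plus decimal minutes (ddmm.mmm)."""
    parts = sentence.split(',')

    def to_degrees(value, hemisphere, deg_digits):
        degrees = float(value[:deg_digits])
        minutes = float(value[deg_digits:])
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ('S', 'W') else decimal

    lat = to_degrees(parts[2], parts[3], 2)   # latitude: 2 degree digits
    lon = to_degrees(parts[4], parts[5], 3)   # longitude: 3 degree digits
    return lat, lon

# Widely quoted example sentence:
lat, lon = parse_gga(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
# lat ≈ 48.1173 N, lon ≈ 11.5167 E
```

A real decoder would also validate the checksum and handle the other sentence types (e.g. $GPRMC) the survey logs.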

Using standard photogrammetric methods from the camera calibration literature [1][2], applied in a general case such as this, we implement the following steps to generate a Viewpoint's extents:

Figure 6: Using building-footprint spatial data to generate higher accuracy Viewpoints.

Calculation Operation | Equation
Circle of Confusion (COC) | COC = sd / (ac · pc)
Angle-of-View (AOV) | A = 2 arctan(d / 2f)
DOF Hyperfocal Distance | Dh = f² / (N · COC) + f
Near Limit DOF | Dn = Dh / 2
Far Limit DOF | Near Limit DOF + View Limit

(f: focal length; N: lens f-number; d: sensor dimension)

Spatial Extrapolation Model

The automated process performed by the software shown in figure 3 is highlighted here in figure 4.
1. Adjust the GPS coordinates to be coincident with the principal point of the camera image plane.
2. Calculate an adjusted Hyperfocal Sharpness Distance to the eight Viewpoint plane intersection points (figure 3).
3. Calculate an adjusted azimuth.
4. Calculate adjusted altitudes of the Viewpoint plane intersection points.

5. Use these results to solve the 3D geodetic forward algorithm as defined by Vincenty [3].
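The geodetic forward computation in step 5 can be sketched as follows. This is the standard Vincenty direct algorithm on the WGS84 ellipsoid, written from [3] as an illustration; it is not the poster's own code, and the altitude handling of the full 3D solution is omitted:

```python
import math

def vincenty_direct(lat1, lon1, azimuth, distance,
                    a=6378137.0, f=1 / 298.257223563):
    """Given a start point (degrees), forward azimuth (degrees) and a
    distance (metres), return the destination (lat, lon) in degrees."""
    b = a * (1 - f)
    phi1 = math.radians(lat1)
    alpha1 = math.radians(azimuth)
    s = distance

    tan_u1 = (1 - f) * math.tan(phi1)          # reduced latitude
    u1 = math.atan(tan_u1)
    sigma1 = math.atan2(tan_u1, math.cos(alpha1))
    sin_alpha = math.cos(u1) * math.sin(alpha1)
    cos2_alpha = 1 - sin_alpha ** 2
    u2 = cos2_alpha * (a ** 2 - b ** 2) / b ** 2
    big_a = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    big_b = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))

    # Iterate the angular distance sigma (Vincenty's nested equations).
    sigma = s / (b * big_a)
    while True:
        cos_2sm = math.cos(2 * sigma1 + sigma)
        sin_s, cos_s = math.sin(sigma), math.cos(sigma)
        delta = big_b * sin_s * (cos_2sm + big_b / 4 *
                (cos_s * (-1 + 2 * cos_2sm ** 2) -
                 big_b / 6 * cos_2sm * (-3 + 4 * sin_s ** 2) *
                 (-3 + 4 * cos_2sm ** 2)))
        sigma_new = s / (b * big_a) + delta
        if abs(sigma_new - sigma) < 1e-12:
            sigma = sigma_new
            break
        sigma = sigma_new

    cos_2sm = math.cos(2 * sigma1 + sigma)
    sin_s, cos_s = math.sin(sigma), math.cos(sigma)
    tmp = math.sin(u1) * sin_s - math.cos(u1) * cos_s * math.cos(alpha1)
    phi2 = math.atan2(
        math.sin(u1) * cos_s + math.cos(u1) * sin_s * math.cos(alpha1),
        (1 - f) * math.sqrt(sin_alpha ** 2 + tmp ** 2))
    lam = math.atan2(sin_s * math.sin(alpha1),
                     math.cos(u1) * cos_s -
                     math.sin(u1) * sin_s * math.cos(alpha1))
    big_c = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
    big_l = lam - (1 - big_c) * f * sin_alpha * (
        sigma + big_c * sin_s * (cos_2sm + big_c * cos_s *
                                 (-1 + 2 * cos_2sm ** 2)))
    return math.degrees(phi2), lon1 + math.degrees(big_l)
```

In the Viewpoint model, each adjusted hyperfocal distance and azimuth from steps 1-4 would be fed through this forward solution to place the Viewpoint plane intersection points on the ellipsoid.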

Table 1. Implemented Camera Model Operations

Viewpoint accuracy results:

Line   | Point | Point-to-Line Distance (m) | Result
C to D | X     | 0.798                      | Inside
C to D | W     | 0.026                      | Inside
A to B | Y     | 1.066                      | Outside
A to B | Z     | 0.051                      | Inside

Figure 5: Plan view of Viewpoint fitting in a test scenario showing accuracy results.