
Using multi-angle WorldView-2 imagery to determine ocean depth near the island of Oahu, Hawaii

Krista R. Lee*, Richard C. Olsen, Fred A. Kruse

Department of Physics and Remote Sensing Center, Naval Postgraduate School, 833 Dyer Road, Monterey, California 93943, USA

ABSTRACT

Multispectral imaging (MSI) data acquired at different view angles provide an analyst with a unique view into shallow water. Observations from DigitalGlobe’s WorldView-2 (WV-2) sensor, acquired in 39 images in one orbital pass on 30 July 2011, are being analyzed to determine bathymetry along the windward side of the Oahu coastline. Satellite azimuth ranges from 18.8 to 185.8 degrees, and satellite elevation ranges from 24.9 degrees (forward-looking) to 24.9 degrees (backward-looking), with 90 degrees representing a nadir view. WV-2’s eight multispectral bands provide depth information (especially using the Blue, Green, and Yellow bands), as well as information about bottom type and surface glint (using the Red and NIR bands). Bathymetric analyses of the optical data will be compared to LiDAR-derived bathymetry in future work. This research shows the impact of varying view angle on inferred bathymetry and discusses the differences between angle acquisitions.

Keywords: WorldView-2, multispectral imagery, shallow water bathymetry, depth determination, multi-angle remote sensing

1. INTRODUCTION

The ability to use satellite and airborne imagery to “see” into shallow water is highly beneficial because it permits remote derivation of water depth, detection of underwater objects, and classification of bottom types. Previous authors have explored this topic and developed methods to estimate these quantities. The results are used for a variety of applications, for example, to locate navigable paths that are deep enough for a boat or ship to travel safely from the open ocean to shore. The aim of this research is to analyze 39 WorldView-2 (WV-2) images acquired at varying view angles and to report on the role that image acquisition angle plays in depth determination. Methods used to prepare multi-angle images for analysis, a selected depth determination algorithm, and initial analysis results for data acquired at four different acquisition angles are described.

2. BACKGROUND

2.1 WorldView-2 satellite

WorldView-2 (WV-2) is DigitalGlobe’s third operational satellite. It collects multispectral imagery at 1.84 m spatial resolution, covering a spectral range from 400 to 1040 nm (Figure 1). In addition to the Red, Green, Blue, and Near-Infrared (NIR) bands, WV-2 has four additional bands: “Coastal Blue” (a shorter-wavelength blue band), “Yellow,” “Red Edge” (on the edge of the vegetation IR plateau), and “NIR-2” (a second NIR band). Each band is dedicated to a particular part of the electromagnetic spectrum so that it is sensitive to a specific type of feature on land, above or beneath water bodies, and in the atmospheric column [1]. The Coastal Blue band (400 to 450 nm) is least absorbed by water and is useful in bathymetric studies. The Blue band (450 to 510 nm) also provides good penetration of water and is less affected by atmospheric scattering and absorption than the Coastal Blue band [1]. Lee et al. (2011) found that band combinations that use the Green (510 to 580 nm) and Yellow (585 to 625 nm) bands are beneficial when determining depth and bottom type [2].

* E-mail: [email protected], Phone: (831) 656-3330, www.nps.edu/rsc

Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, edited by Sylvia S. Shen, Paul E. Lewis, Proc. of SPIE Vol. 8390, 83901I · © 2012 SPIE

CCC code: 0277-786X/12/$18 · doi: 10.1117/12.918716



Figure 1. WV-2 Spectral Radiance Response [3]

2.2 Determination of bottom type and bathymetry using multispectral data

There are numerous advantages to using WV-2 to image coastlines, shallow water, and reefs. With an average revisit time of 1.1 days [4], WV-2 is well suited to monitoring dynamic, constantly changing environments like the ocean. WV-2 is capable of providing imagery of remote places and, if ground truth is available, has been shown to retrieve bathymetry to within one meter at depths down to 30 meters in clear coastal waters using supervised classification techniques [5]. Multispectral data have been used to analyze bathymetry by a number of authors. Philpot (1989) found that, with invariant water quality and atmospheric conditions throughout a scene, both depth and an effective attenuation coefficient of the water over several different bottom types may be retrieved from passive multispectral imagery [6]. Camacho (2006) analyzed depth near Midway Atoll using DigitalGlobe’s first operational satellite, QuickBird [7]. Ohlendorf et al. (2011) used WV-2, IKONOS, and QuickBird data to map bathymetry and spectral sea floor classes in a range of coastal areas [8]. Kerr (2011) modified a model for optically derived bathymetry to take advantage of WV-2’s increased spectral sensitivity, created 63 versions of this model, each with a different combination of input variables, and compared them using an information-theoretic approach. This comparison demonstrated that a model containing the full set of input variables (6-band ratios) provided the most reliable depth estimates [9].

2.3 WV-2 multi-angle capabilities

The camera control system aboard WV-2 is capable of rapid retargeting and high off-nadir imagery collection. This allows researchers to task a collection of several multi-angle images over a large target during a single overflight. Multi-angle collections increase the dimensionality of the available data for a single target. Longbotham et al. (2011) have made use of this type of dataset for urban applications [10]. Multi-angle capabilities offer several advantages over a single-acquisition dataset. Multi-angular data fusion has been shown to allow:

• The exploitation/investigation of the Bidirectional Reflectance Distribution Function (BRDF),
• The extraction of digital height maps,
• Atmospheric parameter retrieval,
• Classification improvement, and
• Many more capabilities [11].

The images used for this research were collected over approximately a six-minute period at ten-second intervals and have mean satellite elevation angles ranging from 24.9 degrees (most forward-looking) to 77.8 degrees (most nadir) to 24.9 degrees (most backward-looking). Examples of these images are shown in Section 3 (Methods).

2.4 Study Site

This study focuses on one location on the windward side of the Oahu coastline of Hawaii (Figure 2). Because cloud cover obscured much of the rest of the coastline, bathymetry near Kailua Bay was analyzed. Imagery was collected on 30 July 2011.


Figure 2. A Google Earth image of the Hawaiian Islands (left) focusing on Kailua Bay (circled in red) on the island of Oahu

2.4.1 Kailua Bay

Kailua Bay (Figure 2, circled in red above, and Figure 3, below) is a carbonate reef-dominated embayment located on southeast Oahu. Two categories of benthic substrate are found in this location: areas of carbonate sand and fossil reef hardgrounds, and reef habitats of coral and algae species. A sand-floored channel at the center of the bay cuts across the reef and connects the seaward and nearshore sand fields. Corals and algae grow on the plains [12]. Locations with sand and fossil reef appear as light-colored, highly reflective areas in the WV-2 imagery, while the coral and algae communities appear as dark-colored, low-reflectance areas (Figure 3) [12].

Figure 3. Kailua Bay, contrast enhanced to better show bathymetry and benthic substrate (WV-2 imagery)

3. METHODS

An integral part of the image analysis procedure is the pre-processing of the imagery. This involves such techniques as radiance conversion, spectral calibration, atmospheric correction, and glint removal. Upon completion of these processes, a band ratio method can be applied for bathymetry derivation. Once bathymetry is estimated, the effect of image acquisition angle can be analyzed.


3.1 Initial data preparation

A number of data preparation steps were required before depths could be determined. The data were mosaicked and analyzed for overall quality and cloud cover. Image data received in a latitude/longitude map projection were converted to UTM (WGS84). Scenes were ordered by mean satellite elevation angle, from most forward-looking, to most nadir, to most backward-looking. Figure 4 shows the images used for this research, along with the scene label (1010, 2010, 2100, and 3100), approximate location above the Earth, and acquisition time (GMT), displayed over a Satellite Tool Kit (STK) layout.

Figure 4. WV-2 scenes used for this research
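The reprojection from geographic coordinates to UTM can be illustrated for a single point. The snippet below is only a minimal sketch: the pyproj library and the sample coordinates near Kailua Bay are assumptions for illustration, and EPSG:32604 (UTM zone 4N, WGS84) is the zone covering Oahu.

```python
from pyproj import Transformer  # illustrative choice of library, not the tool used in the study

# Geographic coordinates (lat/lon, WGS84) to UTM zone 4N (WGS84), which covers Oahu.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32604", always_xy=True)

lon, lat = -157.73, 21.41  # approximate point near Kailua Bay (placeholder values)
easting, northing = to_utm.transform(lon, lat)
print(f"UTM 4N: {easting:.1f} m E, {northing:.1f} m N")
```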

3.2 Calibration to radiance

The WV-2 spectral radiance response is defined as the ratio of the number of photo-electrons measured by the system to the spectral radiance [W·m⁻²·sr⁻¹·μm⁻¹] at a particular wavelength present at the entrance to the telescope aperture. The spectral radiance response of each band is divided by its maximum value to arrive at a normalized relative spectral radiance response [13]. Relative radiometric calibration and correction are necessary: non-uniformity in the images, which can appear in the form of streaks and banding, is minimized by relative radiometric correction. Data from all WV-2 detectors are radiometrically corrected and used to generate WV-2 products [13].
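As described below, the delivered products are in relative radiance and were converted to absolute radiance using the per-band gains supplied in the image metadata. A minimal sketch of that band-wise conversion follows; the metadata field names (absCalFactor, effectiveBandwidth) come from the DigitalGlobe technical note [13], and the numerical values are placeholders rather than values from this dataset.

```python
import numpy as np

# Placeholder calibration values; in practice they are read from the delivered
# metadata (absCalFactor and effectiveBandwidth for each band).
abs_cal_factor = 9.3e-03        # [W m^-2 sr^-1 count^-1], example only
effective_bandwidth = 5.4e-02   # [um], example only

def counts_to_radiance(dn):
    """Convert relative radiance (digital numbers) to absolute spectral radiance.

    Output units are W m^-2 sr^-1 um^-1; multiplying by 0.1 expresses the same
    quantity in the uW/(cm^2 nm sr) units quoted in the text.
    """
    return abs_cal_factor * np.asarray(dn, dtype=np.float64) / effective_bandwidth
```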


Atmospheric effects observed across a multi-angle sequence vary because of the increasing optical path length at lower satellite elevations. Energy scattered toward the sensor by the atmosphere, or upwelling radiance, is a major contributor to the atmospheric spectral distortion effect. If this effect is ignored, increased radiance at shorter wavelengths and lower satellite elevations will be observed [10]. WorldView data are typically distributed in relative radiance, so these values were converted into absolute radiance in units of μW/(cm²·nm·sr) by running a process that applies the gains provided by DigitalGlobe in the metafile delivered with the data [14].

3.3 Calibration to apparent reflectance using Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH)

It is desirable to remove atmospheric effects caused by molecular and particulate scattering and absorption from the imagery. Atmospheric correction converts the data from spectral radiance to apparent spectral reflectance [15] and is essential for accounting for view-angle-dependent atmospheric effects [10]. The WV-2 images used for this research were transformed into reflectance using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) software [15]. The atmospheric correction method begins with the retrieval of atmospheric parameters, such as the aerosol description and column water amount. Current methods allow aerosol retrieval over water, and an average visibility is obtained. The radiative transfer equation for the given aerosol and column water vapor is solved, and the data are then transformed to reflectance [15].

3.4 Image registration and subsetting (chipping)

Data were subsetted (chipped) to focus on the water off Kailua Bay. The image chips were then registered using the most nadir image as the reference image (Figure 5).

Figure 5. The four registered image chips. 1010: mean satellite elevation angle of 24.9 degrees (most forward-looking); 2010: mean satellite elevation angle of 46.7 degrees (forward-looking); 2100: mean satellite elevation angle of 77.8 degrees (most nadir); 3100: mean satellite elevation angle of 43.7 degrees (backward-looking)
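These elevation angles correspond to substantially different atmospheric path lengths, which is why the view-angle-dependent corrections discussed above matter. A quick plane-parallel estimate (an approximation that ignores Earth curvature and refraction), in which the relative slant path scales as 1/sin(elevation), is sketched below for the four scenes.

```python
import math

# Relative atmospheric path length under a plane-parallel approximation.
for scene, elev_deg in [("1010", 24.9), ("2010", 46.7), ("2100", 77.8), ("3100", 43.7)]:
    air_mass = 1.0 / math.sin(math.radians(elev_deg))
    print(f"{scene}: elevation {elev_deg:4.1f} deg -> ~{air_mass:.2f}x the zenith path")
# Roughly 2.4x, 1.4x, 1.0x, and 1.4x the zenith path, respectively.
```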

Tie points were interactively selected for the same locations in the other images, and the data were then warped using a first-order polynomial [14]. Maximum pixel error for the registrations was 1.521 pixels, with an average error of about 1.346 pixels. Images were then re-chipped so that each image matched the exact geographic region of the others.

3.5 Land and cloud mask and glint removal

An Interactive Data Language (IDL) program was written for this research to create masks for land and clouds, to remove glint, and to apply a band ratio method of depth determination. The land and cloud masks were created with a rule-based classifier: regions of interest were analyzed, and the Red and Red Edge bands were used to separate land and water classes.
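The specific decision rules of the IDL classifier are not given here; the following is only a minimal numpy sketch of a rule-based land/cloud mask of this kind, with assumed reflectance thresholds on the Red and Red Edge bands (water is dark in both, while land, surf, and clouds are bright).

```python
import numpy as np

def land_cloud_mask(red, red_edge, red_thresh=0.05, red_edge_thresh=0.05):
    """Flag pixels as land/cloud (True) or water (False).

    red, red_edge: apparent-reflectance arrays for the WV-2 Red and Red Edge
    bands. The thresholds are placeholders; in practice they would be tuned
    from the regions of interest described in the text.
    """
    return (red > red_thresh) | (red_edge > red_edge_thresh)

# Example (assuming bands stored in the standard WV-2 order, so Red is index 4
# and Red Edge is index 5 of a rows x cols x 8 reflectance array):
# water = ~land_cloud_mask(refl[:, :, 4], refl[:, :, 5])
```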


The effects of specular reflection at the surface of the water, or sunglint, make it difficult for optical sensors to look below the surface [16]. The surface deglinting expressions used for this research are from Goodman et al. (2008) [16], derived from Lee et al. (1999) [17], and assume that reflectance at 750 nm should approach zero, but that conditions exist where this reflectance is greater than zero (e.g., shallow areas in clear water). The sunglint correction is calculated as a constant offset across all wavelengths such that reflectance at 750 nm is equal to a spectral constant, Δ. For raw remote sensing reflectance, Rrs_raw (sr⁻¹), as derived through atmospheric correction, an approximation of the surface remote sensing reflectance, Rrs (sr⁻¹), is determined by [16]

Rrs(λ) = Rrs_raw(λ) − Rrs_raw(750) + Δ ,   (1)

Δ = 0.000019 + 0.1 [Rrs_raw(640) − Rrs_raw(750)] .   (2)

3.6 Bathymetry derivation using a band ratio transform

A two-step process was used to derive bathymetry: relative bathymetry was determined from the imagery, and absolute bathymetry values were then obtained by regressing the relative bathymetry values against verified depth data. Camacho calculated relative bathymetry from the natural-log-transformed reflectance values of the deglinted reflectance image. The relative bathymetric values were extracted using the expression [7]

relative depth = ln[R(b1)] / ln[R(b2)] ,   (3)

where R(b1) and R(b2) are the deglinted reflectances in two bands. Based on research previously performed by this group [2], the band ratio method was used to calculate depth from the Green and Yellow bands; therefore, in Equation (3), b1 is the Green band and b2 is the Yellow band. This expression is based on Stumpf et al.’s (2003) ratio transform [18].
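A compact numpy sketch of the deglinting offset of Equations (1) and (2) followed by the Green/Yellow log ratio of Equation (3) is given below; the band-index mapping and the small epsilon guard are illustrative assumptions, not part of the published method.

```python
import numpy as np

def deglint(rrs_raw, idx_640, idx_750):
    """Constant-offset sunglint correction, Eqs. (1) and (2).

    rrs_raw: (rows, cols, bands) raw remote sensing reflectance [1/sr].
    idx_640, idx_750: band indices nearest 640 nm and 750 nm (assumed mapping).
    """
    r640 = rrs_raw[..., idx_640]
    r750 = rrs_raw[..., idx_750]
    delta = 0.000019 + 0.1 * (r640 - r750)                              # Eq. (2)
    return rrs_raw - r750[..., np.newaxis] + delta[..., np.newaxis]     # Eq. (1)

def relative_depth(rrs, green_idx, yellow_idx, eps=1e-6):
    """Unitless log-ratio relative depth of Eq. (3), Green over Yellow."""
    green = np.clip(rrs[..., green_idx], eps, None)
    yellow = np.clip(rrs[..., yellow_idx], eps, None)
    return np.log(green) / np.log(yellow)
```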

4. RESULTS

The goal of this research is to analyze the effects of collection geometry on water depth calculations. The band-ratio depth values calculated above were divided into shallow and deep water classes based on extreme differences in spectral response (light blue versus dark blue pixels). Estimated depth was determined, and the ratios of the depth estimates from the different images were compared. Initial results show differences in reflectance between the most nadir image and the other images (Figure 6). The first row of images in Figure 6 shows estimated depths derived with the band ratio method from WV-2’s Green and Yellow bands for the nadir and non-nadir images; red indicates deep water and blue indicates shallow water. The corresponding scatter plots are similar, but not one-to-one. The second row depicts the ratio of the depth estimates for the nadir and non-nadir images: a value of 1.00 is a perfect match, and any value above or below represents deviation between the two datasets. These color-scaled plots show distinct regions with different ratios, most likely reflecting the wind pattern on the surface of the water, and hence glint effects.


Figure 6. Comparisons between the nadir and non-nadir images
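The ratio maps in the second row of Figure 6 reduce to simple per-pixel arithmetic; a minimal sketch is shown below (the array names are placeholders, and the epsilon guard against division by near-zero values is an added assumption).

```python
import numpy as np

def depth_ratio(depth_nadir, depth_offnadir, water_mask, eps=1e-6):
    """Per-pixel ratio of two relative-depth maps; 1.00 indicates perfect agreement."""
    ratio = np.full(depth_nadir.shape, np.nan)
    valid = water_mask & (np.abs(depth_offnadir) > eps)
    ratio[valid] = depth_nadir[valid] / depth_offnadir[valid]
    return ratio
```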

5. CONCLUSIONS

A number of observations from WV-2, acquired at multiple angles in one orbital pass over Hawaii on 30 July 2011, have been analyzed. Data were calibrated to radiance and then to apparent reflectance. Data analyses focused on Kailua Bay, on the windward side of Oahu. Three non-nadir images were registered and compared to the most nadir image. The Green and Yellow bands were compared, which allowed differences in reflectance to be analyzed and a unitless depth estimate to be derived. Actual units will be determined once the depth estimate is calibrated against LiDAR bathymetry. Initial analysis results for depth determination using multi-angle WV-2 imagery show differences in reflectance between the nadir and non-nadir images. At this point, it has not been determined whether this is a result of BRDF, errors in the models, or another cause entirely. Camacho did, however, note that the Stumpf et al. ratio method did not perform well over shallow, high-albedo substrates, such as sand in waters less than two meters deep [7]. This may explain some of the variation seen in the imagery, given Kailua Bay’s sand channel.

6. FUTURE WORK

Next steps include application of the depth estimation approach to the full 39-image multi-angle dataset. The cloud and land masks, as well as the deglinting model, will be made more sensitive by training the model to treat pixel classes in a more discriminatory manner (land/surf zone, water with glint, and water without glint). The glint removal portion of the model will incorporate work by Abileah [19] to better separate pixels with and without glint. Units will be converted from natural log units into meters, and the results will then be compared to LiDAR bathymetry (SHOALS) data.
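The planned conversion from unitless log-ratio values to meters is a calibration against co-located known depths (here, LiDAR); under the two-step approach of Section 3.6 this can be sketched as a simple linear fit. The array names below are placeholders, and the linear form follows Stumpf et al.’s ratio transform [18].

```python
import numpy as np

def calibrate_depth(relative_depth_samples, lidar_depth_samples):
    """Fit depth_m ~= m1 * relative_depth + m0 at pixels with LiDAR truth."""
    m1, m0 = np.polyfit(relative_depth_samples, lidar_depth_samples, 1)
    return m1, m0

# Applying the fit to the full scene:
# m1, m0 = calibrate_depth(rel_depth[tie_pixels], lidar_depth[tie_pixels])
# depth_m = m1 * rel_depth + m0
```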

REFERENCES

[1] “White Paper – The Benefits of the 8 Spectral Bands of WorldView-2,” White Paper, DigitalGlobe, Inc., March 2010 (http://www.docstoc.com/docs/86111968/The-Benefits-of-the-8-Spectral-Bands-of-WorldView-2).
[2] Lee, K. R., Kim, A. M., Olsen, R. C., et al., “Using WorldView-2 to determine bottom-type and bathymetry,” Proc. SPIE Vol. 8030, 80300D (2011).
[3] “Spectral Response for DigitalGlobe WorldView 1 and WorldView 2 Earth Imaging Instruments,” DigitalGlobe, Inc., updated 2011 (http://www.digitalglobe.com/downloads/DigitalGlobe_Spectral_Response.pdf).
[4] WorldView-2 data sheet: http://worldview2.digitalglobe.com/about/.
[5] Gilbert, I., “8-Band Multispectral Imagery, Insight is in the Details,” (http://fluidbook.webtraders.nl/geoinformatics/07-2010/#18).
[6] Philpot, W. D., “Bathymetric mapping with passive multispectral imagery,” Applied Optics, 28(8), 1569-1578 (1989).
[7] Camacho, M. A., “Depth analysis of Midway Atoll using QuickBird multi-spectral imaging over variable substrates,” unpublished M.S. thesis, Naval Postgraduate School, Monterey, CA (2006).
[8] Ohlendorf, S., Müller, A., Heege, T., Cerdeira-Estrada, S., Kobryn, H. T., “Bathymetry mapping and sea floor classification using multispectral satellite data and standardized physics-based data processing,” Proc. SPIE 8175 (2011).
[9] Kerr, J. M., “WorldView-02 offers new capabilities for the monitoring of threatened coral reefs,” DigitalGlobe 8-Band Challenge Winner (2011) (http://dgl.us.neolane.net/res/img/e85427bdc6833386bf479015de07c9c6.pdf, http://dgl.us.neolane.net/res/dgl/survey/_8bandchallenge.jsp).
[10] Longbotham, N., Chaapel, C., Bleiler, C., Bleiler, L., Padwick, C., Emery, W. J., Pacifici, F., “Very High Resolution Multiangle Urban Classification Analysis,” IEEE Transactions on Geoscience and Remote Sensing, Vol. 99, 1-16 (2011).
[11] Pacifici, F., Chanussot, J., Du, Q., “2011 GRSS Data Fusion Contest: Exploiting WorldView-2 Multi-Angular Acquisitions,” IGARSS 2011 (2011) (http://www.grss-ieee.org/wp-content/uploads/lectures/igarss2011_2tu1_5_pacifici/start.html).
[12] Isoun, E., Fletcher, C. H., Frazer, N., Gradie, J., “Multi-spectral mapping of reef bathymetry and coral cover; Kailua Bay, Hawaii,” Coral Reefs, 22, 68-82 (2003).
[13] Updike, T., Comp, C., “Radiometric Use of WorldView-2 Imagery,” Technical Note, DigitalGlobe, Inc., Revision 1.0 (2010) (http://www.digitalglobe.com/downloads/Radiometric_Use_of_WorldView-2_Imagery.pdf).
[14] ITT Exelis, “ENVI User’s Guide, Version 4.8,” ITT Exelis, Boulder, Colorado, unpaginated (installation) CD-ROM (2010).
[15] Matthew, M. W., Adler-Golden, S. M., Berk, A., Felde, G., Anderson, G. P., Gorodetzky, D., Paswaters, S., Shippert, M., “Atmospheric correction of spectral imagery: evaluation of the FLAASH algorithm with AVIRIS data,” Proc. SPIE 5093, 474-482 (2003).
[16] Goodman, J. A., Lee, Z., Ustin, S. L., “Influence of atmospheric and sea-surface correction on retrieval of bottom depth and reflectance using a semi-analytical model: a case study in Kaneohe Bay, Hawaii,” Applied Optics, 47(28), F1 (2008).
[17] Lee, Z., Carder, K., Mobley, C. D., Steward, R., Patch, J., “Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization,” Applied Optics, 38, 3831-3843 (1999).
[18] Stumpf, R. P., Holderied, K., Sinclair, M., “Determination of water depth with high-resolution satellite imagery over variable bottom types,” Limnol. Oceanogr., 48(1, part 2), 547-556 (2003).
[19] Abileah, R., jOmegak, private communication, March 2012.
